View the favourite problems of jbbaehr22
| Tag | Problems |
| --- | --- |
| Likes this problem | 142 143 242 346 347 351 354 |
| Interested in collaborating | 142 143 351 354 |
| Currently working on this problem | 142 347 351 |
| This problem looks tractable | 242 351 |
| The results on this problem could be formalisable | 142 242 |
| I am working on formalising the results on this problem | 242 351 |
Showing the 5 most recent comments (out of 10 total).
I just made a correction to README.md regarding my interpretation of Woett's Comment 15.
I had incorrectly stated it as a conjecture about growth rates, when in fact Woett proved a theorem about strongly complete sequences.
Woett's actual result states that if a sequence A is "strongly complete"
(meaning P(A′) contains all large integers for every cofinite subsequence A′), then a_{n+1}/a_n must be < φ = (1+√5)/2 infinitely often.
Reference: https://github.com/Woett/Mathematical-shorts/blob/main/No%20sequence%20that%20grows%20at%20least%20as%20fast%20as%20the%20Fibonacci%20sequence%20is%20strongly%20complete.pdf
I now understand φ not as a barrier to growth rate (our constructions achieve average growth 2 > φ), but rather as a **frustration barrier for dimensional lifting**.
The key insight is that dips below φ (via the frustration parameter α) ensure proper contact between consecutive dimensional shells in the tower.
For example, when κ_n = 1 and α = 3/2, we get 2^1 - 3/2 = 0.5 < φ, which maintains connectivity across the dimensional structure. This resolves the apparent paradox: we achieve density 1 (via average growth 2) while satisfying Woett's condition (dipping below φ infinitely often via frustration).
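As a toy sanity check of that "paradox" (my own throwaway example, not the κ_n/α construction: I just alternate per-step growth factors 8 and 1/2, so the product over any two steps is 4, i.e. average growth 2, while every second ratio dips below φ):

```lean
-- Toy illustration: alternating ratios 8 and 1/2 give average growth 2,
-- yet the ratio drops below φ ≈ 1.618 at every odd step, as Woett's
-- condition demands for a strongly complete sequence.
def phi : Float := (1 + Float.sqrt 5) / 2

def ratio (n : Nat) : Float := if n % 2 == 0 then 8 else 0.5

#eval ratio 1 < phi          -- true: dips below φ at every odd n
#eval ratio 0 * ratio 1      -- 4.0: average growth 2 per step
```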
Apologies for the initial mischaracterization - the corrected interpretation actually makes much more sense.
I'm assuming your answer was the Math Wiki one ... and if so you make the most valuable point here:
But ultimately, mathematicians are not really after proofs, despite appearances; they are after understanding. This is discussed quite well in Thurston's article "On proof and progress in mathematics".
Does a 100k line LEAN proof add understanding? (It can, if the mechanics of the proof are understood and the purpose of the LEAN is to verify those mechanics.)
But AI proofs at scale - are they adding understanding, or black boxes that say "move on, nothing to see here"?
(It's been a slow afternoon and I'm trying to bridge an idea in LEAN ... so I'm procrastinating here instead ;-) )
Your "Proof" of FLT is precisely my point - this one is trivial - but embedded in 1.5M lines of Mathlib and 100k AI-generated LEAN proofs are similar dragons - they're just very hard to see at that scale.
This is not me dissing LEAN - or any formal system. They need to be our assistants: they prevent the peer-review "reject: the kerning is off" and force engagement with the meat of the argument - they don't replace it.
You don't need to be malicious, you just need subtle category errors - e.g. working in Q, drifting into R because "it's obvious", and then back to Q:
import Mathlib.Data.Real.Basic -- Brings in ℝ as completion of ℚ
import Mathlib.Data.Rat.Basic -- ℚ properties
-- Suppose we're working purely in ℚ, proving a lemma about bounded rational sequences
lemma bounded_rat_seq (s : ℕ → ℚ) (h_bdd : ∃ M : ℚ, ∀ n, |s n| ≤ M) :
    ∃ sup : ℚ, ∀ n, s n ≤ sup ∧ ∀ ε > 0, ∃ n, sup - ε < s n := by
  -- Intent: stay in ℚ, but we "meander" by using a tactic that assumes ℝ structure
  have h_real : ∃ sup_real : ℝ, ∀ n, (s n : ℝ) ≤ sup_real ∧ ... -- Subtle bridge: coerce to ℝ
  { -- Accidental invocation of the real supremum property (which ℚ lacks)
    apply Real.exists_isLUB_image; exact h_bdd; -- Looks right, but injects ℝ completeness
    -- This "bridge" exists via the embedding ℚ ↪ ℝ, but it's invalid for a pure ℚ proof
  }
  -- Now extract back to ℚ, but the sup might not be rational (e.g., if the sequence approaches √2)
  let sup : ℚ := ⟨sup_real⟩ -- Coercion hides the error; compiles but semantically wrong
  sorry -- Proof "works" superficially, but the bridge introduced irrationals implicitly
(This is a crude example - but it's the sort of tactic that can creep into very large proofs and go completely unnoticed)
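For a minimal, hedged illustration of how silently the ℚ → ℝ bridge engages (assuming a Mathlib import; the names `q` and `r` here are mine):

```lean
import Mathlib.Data.Real.Basic

-- No cast is written anywhere below, yet `q` crosses from ℚ into ℝ:
-- Lean elaborates `r` as `(↑q : ℝ)` via the rational-cast coercion.
def q : ℚ := 1 / 3
def r : ℝ := q   -- the "bridge" is invisible at the call site
```

Nothing here is wrong on its own - the embedding is injective - but in a large proof that silence is exactly what lets a ℝ-only property (like completeness) leak into a ℚ argument.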
There's a grand irony here too ... Kevin's compact FLT proof is now meat for the LLM training grinder:
unsafe def badNatUnsafe : ℕ := unsafeCast (-4294967296 : ℤ)
is now training data (whoops, I've just given it more gravitas by posting it on Erdosproblems) for the LLM meat grinder.
Does it? Or is it forced to reduce to the kernel ...
This is just intuition - but if ZFC underpins all of mathematics (and this is fairly uncontroversial!) then all branches of maths are equivalent under transform - they're just different ways of looking at the same problem - cofiniteness in discrete combinatorics is "differentiable everywhere" in analysis.
But maths verticals remain maths verticals - there is no universal translator to move from Algebraic Topology to Combinatorics to ...
That says that whilst we have axiomatic foundations for every branch, there is no direct transform from every branch back to ZFC.
If ZFC is a spherical manifold, inflating it should encompass all branches of maths. But right now there exist punctures in the manifold (unbridged axioms: Classical.choice, Decidable, sorry) that become genus-n holes that those branches slip through.
The intuition is that what Mathlib does is provide "Ricci flow" for those holes - topological surgery removing infinities by annealing the manifold. Sorry's, tactics, decidables and choice are stick-on plasters to ensure the manifold is smooth. BUT the patches aren't necessarily the correct ones - they couple the manifold in ways that mean parallel transport between branches and back again doesn't end up where you started: the inadvertent coupling, the 3-SAT clause not in the proof that should go to FALSE but goes unnoticed because we made it True.
Thanks for the response Terry - my intuition says we might be talking at cross purposes. Even though the LEAN kernel is small, the libraries are vast (Mathlib: 1.5M lines). Maths is siloed - Russell failed to self-bootstrap, and Gödel showed why. Are we certain that self-consistency under a branch's assumed axioms makes it true across mathematics?
Thought experiment: walk the Mathlib tree and organise it such that all axioms out to the boundary of the tree are self-consistent, both within their branches and all the way back to ZFC. Do all branches minimise to those rules, or do we have axioms that have to date resisted reduction?
Large proofs (and proof trees), including Mathlib, use tactics, sorry's and Decidables which end up being axiomatic - patching unknowns with what seems right.
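For what it's worth, Lean can at least report the axiom footprint of a single declaration (`#print axioms` is a real Lean 4 command; the theorem names below are mine):

```lean
-- A "patched" proof: the sorry is recorded as the axiom sorryAx.
theorem patched (n : Nat) : n + 0 = n := sorry

#print axioms patched   -- reports a dependency on sorryAx

-- The same statement proved honestly carries no axioms at all.
theorem clean (n : Nat) : n + 0 = n := Nat.add_zero n

#print axioms clean     -- reports no axiom dependencies
```

The thought experiment above would amount to running this audit transitively over every declaration in the Mathlib tree and seeing which axioms survive at the boundary.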
This is unit tests vs integration tests vs real-world tests - one assumption somewhere skews the representative manifold - the GPS that leads you through a river where it thought a bridge was...
I am old enough to remember VIPER - which retrospectively had holes, but which at the time was claimed to be a formally proven microprocessor. The holes were assumptions (axioms) about coupling between layers that hadn't been tested: the "unit tests" (the layers) all passed; their coupling hadn't been.