My number is bigger!
Re: My number is bigger!
Actually, I just realized that it's very easy to construct a Turing machine which provably halts, but whose running time is indeterminate, by a similar trick to before. If T* is ZFC, then define M as follows:
If the continuum hypothesis holds, then M is a 1-state Turing machine which halts after 1 step. Otherwise, M is a 2-state Turing machine which halts after 2 steps.
Then ZFC (and by extension, T) proves that M halts, but T does not determine how many steps it takes for M to halt.
Re: My number is bigger!
Sorry, the two stupid constructions I gave don't actually work. They are valid definitions of Turing machines in ZFC, but Yudkowsky's algorithm iterates through explicit codes for Turing machines, not all definitions of Turing machines. So Yudkowsky's algorithm doesn't trip up on these examples.
Nevertheless, here's an example of a Turing machine M that can be explicitly coded (I've been a bit vague about whether this coding takes place in T or T*, but it doesn't much matter, I think).
M iterates through all proofs in ZFC. M halts if it finds a proof that 0=1.
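The machine M can be sketched as a program. This is only an illustration: the alphabet and the checker `is_zfc_proof_of_contradiction` are stand-ins (here the checker is stubbed to always answer no, so the search never succeeds, and a demo-only cap is added so the sketch terminates).

```python
from itertools import count, product

# Toy symbol alphabet for candidate proof strings (an assumption; any
# fixed finite alphabet for ZFC's language would do).
ALPHABET = "()01=S&-vE"

def is_zfc_proof_of_contradiction(candidate: str) -> bool:
    """Stand-in for a mechanical ZFC proof checker asking: is `candidate`
    a valid ZFC proof of 0=1? A real checker is a finite, decidable
    procedure, which is all M needs; here we stub it as 'never'."""
    return False

def M(max_candidates=None):
    """Enumerate all strings (hence all candidate proofs) in order of
    length; halt as soon as one is a valid ZFC proof of 0=1."""
    seen = 0
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            if is_zfc_proof_of_contradiction("".join(chars)):
                return seen   # M halts: a ZFC proof of 0=1 was found
            seen += 1
            if max_candidates is not None and seen >= max_candidates:
                return None   # demo-only cap; the real M runs unboundedly
```

The point of the construction is exactly that whether the uncapped search halts is equivalent to the inconsistency of ZFC.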
This requires some additional level of coding to formalize, but I hope it's clear that this is not a problem. Because T has higher consistency strength than ZFC, T proves that M halts, and ZFC proves [T proves that M halts]. But ZFC doesn't prove that M halts (unless ZFC is inconsistent, by Gödel). So Phi(M) doesn't hold when we take our metatheory to be ZFC. If we take our metatheory to be ZFC, we can't prove that f is well-defined.
We can ask which metatheories might work. If T* proves that ZFC is consistent, then T* will satisfy Phi(M) for the above M, and if T* proves that T is consistent, then T* will satisfy Phi(M'), where M' is like M but iterating through proofs in T instead of ZFC. In fact, so long as T* proves that T is consistent, I don't know a way to cook up a Turing machine M where Phi(M) will not hold. So if you insist that your metatheory T* proves Con(T), and if in addition you're lucky, you might be able to prove that f is well-defined. But it's really not clear.
Re: My number is bigger!
Sorry, this example still doesn't work, because what actually happens is that T proves that M does _not_ halt while ZFC fails to prove that M does _not_ halt.
But I think my point stands: in order to show in our metatheory T* that Yudkowsky's function is well-defined, we need something like Phi(M) for each Turing machine M, i.e. we need to know that if T proves that M halts, then T* proves that M halts. And I don't see any way to establish this without basically assuming that T* proves everything that T proves. But this is a very strong hypothesis about our metatheory, and it seems to me that this hypothesis is too strong. I'd be willing to grant that the metatheory should prove that T is consistent, but not that it actually proves the same things that T proves.
 emlightened
Re: My number is bigger!
For the purposes of these posts, I'll assume that ZFC+I0 is consistent.
I disagree with the argument that follows from 'T* cannot be ZFC', in particular that just because your argument holds for ZFC, something weaker than T, it must hold for all reasonable T*. If we know that ZFC+I0 is consistent, and augment ZFC+I0 with that statement (so we then have the theory ZFC+I0+'ZFC+I0 is consistent', different from T = ZFC+I0+'T is consistent'), it is clear that we have made another consistent theory, even though the theory cannot prove itself to be consistent. It is also clear that this theory can model ZFC+I0 and prove that that model is consistent, and hence prove any statement provable in ZFC+I0.
As to your original point, although (*) may not be enough alone, the remainder of the proof does not depend on M. Choose T = ZFC+I0 and T* = T+'T is consistent'. (Remember that T is consistent by assumption; it is obvious that this implies the consistency of T*, although we cannot state this symbolically.) The statement '(T proves M halts) => M halts' is implied by 'T is consistent'. T* implies 'T is consistent', so for this choice of T and T*, we have Phi(M) for all M. The only restrictions on T* are that T* implies that T is consistent, T* can model T, and T* is consistent; the above choice of T* satisfies all of these conditions. In fact, the choice of T* = T+'T is consistent' is sufficient for any consistent theory T. (Note that although the consistency of T implies that T+'T is consistent' is consistent, we cannot prove this for all theories T at once, only for any given theory T.)
Lastly: Why must T* be complete? All it needs to do is prove everything that T can, plus T's consistency, since all we need to do is check through proofs in T. We will, inevitably, fail to count some Turing machines that halt, but this is precisely what makes the algorithm computable.
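The bounded proof-search construction being debated can be sketched as a program. The three helper functions are hypothetical stand-ins for the real proof enumerator, proof checker, and machine simulator; only the shape of the search is meant literally.

```python
def bounded_proof_search(n, enumerate_proofs, proves_halting, halting_time):
    """Sketch of the f(n) construction under discussion: scan every
    T-proof of length <= n; whenever one certifies 'machine M halts',
    record M's actual halting time; return the largest time found."""
    best = 0
    for proof in enumerate_proofs(n):
        machine = proves_halting(proof)   # the machine proved to halt, or None
        if machine is not None:
            # Safe to simulate to completion: T proved this machine halts
            # (this is exactly where the metatheory must trust T).
            best = max(best, halting_time(machine))
    return best

# Toy stand-ins for a demonstration run: "proofs" are the integers
# 1..n, the odd ones "prove" that machine p halts, and machine m
# "halts" after 10*m steps.
demo = bounded_proof_search(
    3,
    lambda n: range(1, n + 1),
    lambda p: p if p % 2 == 1 else None,
    lambda m: 10 * m,
)
```

The line marked "safe to simulate" is the crux of the whole disagreement above: the simulation is only guaranteed to terminate if the metatheory accepts T's halting proofs.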
Bleb. Idk when I'm returning.
Re: My number is bigger!
Kind of an off-topic question: What is the fundamental sequence of phi(3,w,7)?
Would this do:
phi(3,1,phi(3,w,6)+1)
phi(3,2,phi(3,w,6)+1)
phi(3,3,phi(3,w,6)+1)
phi(3,4,phi(3,w,6)+1)
...
?
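If the intended rule is the standard Veblen one, phi(3,w,7)[n] = phi(3,n,phi(3,w,6)+1), then the terms above can be generated mechanically. This treats the expressions as formal strings only; no ordinal arithmetic is performed.

```python
def veblen_fs_term(n: int) -> str:
    """n-th term of the proposed fundamental sequence for phi(3,w,7):
    replace the limit middle argument w by n and bump the last
    argument past phi(3,w,6). Purely symbolic."""
    return f"phi(3,{n},phi(3,w,6)+1)"

terms = [veblen_fs_term(n) for n in range(1, 5)]
```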
 emlightened
Re: My number is bigger!
Sorry, when I said "the fact that f is not well-defined for a reasonable choice of metatheory is already fatal, I think", I meant "the fact that there exists a reasonable choice of metatheory such that f is not well-defined is already fatal." But I now agree that ZFC is not a reasonable choice of metatheory because any reasonable metatheory should probably prove Con(T). However, I think there are reasonable philosophical grounds (which I'll get to in a minute) to say that we should not assume in our metatheory that a strong set-theoretic statement like I0 actually holds _in the metatheory_: at most we should be allowed to require that the metatheory proves the consistency of ZFC+I0, or perhaps of some stronger theory. Does this seem like a reasonable philosophical position to you? From this philosophical position, we ought to be able to prove that f is well-defined in T* = ZFC + Con(T) or something like that, rather than T* = T + Con(T).
If we take T* = T + Con(T), then I agree that T* will prove each statement Phi(M). (In fact, now I can't see why T* = T doesn't suffice, but I have a feeling that the undefinability of truth should get in the way somehow.)
But if we take T* = ZFC + Con(T), then things are not so clear. The schema Phi(M) does not hold, I think.
From a formalist perspective (like the one that I think you were advocating), the problem with proving f is well-defined in the metatheory T + Con(T) is that the metatheory has been "cooked up" to get the answer we want. It's not particularly interesting to say "there exists a metatheory, jerry-rigged for the job, in which the result we want is true". You should be able to say "in some reasonable class of metatheories, the result we want is true", or "in a metatheory I was already interested in, the result we want is true". For, maybe tomorrow you will be inspired (or forced) to work in a different metatheory; if you can't take this result that f is well-defined with you to that other metatheory, then it seems to be a pretty empty result.
To put a finer point on it, the original rules of the game here said nothing about what metatheory we're working in. So it seems weird to me to accept an answer that is well-defined in some metatheories, but not in other equally reasonable ones.
Maybe I should say something about what constitutes a "reasonable" metatheory. It seems to me that if two metatheories are equiconsistent, then it's hard to say that one is better from a formalist perspective. Since T + Con(T) and ZFC + Con(T) are (I think?) equiconsistent, it seems to me that they should be regarded as "equally reasonable".
Regarding your last comment: I wasn't saying that we should require T* be complete. I was just saying that something like that is the usual sort of fiction that mathematicians resort to when they don't want to explicitly worry about their metatheory. I didn't mean much by it.
Re: My number is bigger!
Maybe I can put it this way. There's nothing to stop someone else from running Yudkowsky's construction with T' = ZF + a Berkeley cardinal. Since T and T' are inconsistent with one another, how do we compare the results?
Re: My number is bigger!
It's taking a long time for my last two posts to get through moderation, but let me elaborate on the point I just brought up: one can construct a function f_T' similar to Yudkowsky's function f_T using a stronger theory T' (and so this function f_T' intuitively grows faster than Yudkowsky's function f_T), but it's difficult to find a metatheory where f_T and f_T' are both well-defined, so it's difficult to actually compare their growth rates.
T will continue to be ZFC + I0, the strongest extension of ZFC not known to be inconsistent. Let T' be ZF + R, where R is the existence of a Reinhardt cardinal (http://cantorsattic.info/Kunen_inconsistency#Reinhardt_cardinal). The discussion would work equally well with T' being ZF + SR (superReinhardt cardinal) or ZF + B (Berkeley cardinal), or (with a couple of changes) even ZF + AD (AD = the axiom of determinacy), but let's just stick to Reinhardt cardinals for definiteness. See http://cantorsattic.info/Upper_attic for a list of large cardinal axioms in order of consistency strength.
My understanding is that T' is known to have higher consistency strength than T, so that ZFC + Con(T') proves Con(T); in fact, I believe that T' = ZF + R actually proves ZF + I0. But crucially, T' + T = T' + AC (AC = the axiom of choice) = ZFC + R is known to be inconsistent: this is the Kunen inconsistency (see the Cantor's attic page I linked to).
We can do Yudkowsky's construction of a function starting from either T or T'; let's call the resulting functions f_T and f_T'. The natural question for this thread is, which one grows faster? (In particular, which is bigger, say, (f_T)^10(10) or (f_T')^10(10)?) In order to answer this question, we need to find a metatheory in which we can analyze both f_T and f_T' at the same time. But which one? We showed that f_T was well-defined in a metatheory like T* = T + Con(T), whereas we showed that f_T' is well-defined in a metatheory like T'* = T' + Con(T'). We can't just add these theories together, because T* + T'* = T + T' + Con(T) + Con(T') is known to be inconsistent (since T + T' is inconsistent by Kunen).
This would not be a problem if we could take T* = ZFC + Con(T) and T'* = ZFC + Con(T'). Then we would have T* + T'* = ZFC + Con(T) + Con(T') = ZFC + Con(T') = T'*, which is consistent so long as T and T' are consistent. If we could show that f_T and f_T' were well-defined in these T*, T'*, then we could compare them using this T'* for a metatheory. But f_T and f_T' are not well-defined in these metatheories. Maybe some more elaborate version of f_T and f_T' could be seen to be well-defined in these metatheories or similar ones, but it's not clear to me.
Here's an idea for how this could be done. The statement that M halts is an arithmetical statement (i.e. it only talks about natural numbers, using quantifiers that only range over natural numbers). In fact it is a Sigma^0_1 statement (an arithmetical Sigma_1 statement: an arithmetical statement with one unbounded existential quantifier and no more): it says that there exists a natural number X such that M halts after X steps. We want to use X to construct our function. The trouble is, although T thinks there is such an X, there's no reason for the metatheory to think that there is such an X, and we really need X to exist according to the metatheory (one way of saying this, if we're working with a model of T in our metatheory, is that X might be a _nonstandard_ natural number in our model of T). emlightened's approach was to just assume that the metatheory contained T itself, so that we could just rerun T's proof in the metatheory to obtain an X there. But we saw that this causes problems when we want to talk about f_T' in the same metatheory.
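The Sigma^0_1 shape is visible in code: "M halts within X steps" is a decidable property, checked by just running the simulation for X steps, and "M halts" adds one existential quantifier over X. A minimal sketch of the bounded check (the transition-table encoding is my own choice; the 2-state busy beaver used below is the standard one, which halts after exactly 6 steps):

```python
def halts_within(transitions, steps, start="A"):
    """Decide the bounded statement 'M halts within `steps` steps'.
    `transitions` maps (state, symbol) -> (write, move, next_state),
    where next_state None means halt."""
    tape, pos, state = {}, 0, start
    for _ in range(steps):
        write, move, state = transitions[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if state is None:
            return True            # halted within the bound
    return False                   # bound exhausted without halting

# The 2-state busy beaver, which halts after exactly 6 steps.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", None),
}
```

The metatheory trouble described above is precisely that T may prove the existential statement "some X works" without the metatheory being able to exhibit any standard X to feed into `halts_within`.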
My idea is to just assume that the metatheory proves all the _arithmetical statements_ that T proves, or maybe just the Sigma^0_1 statements. So we take our metatheory to be something like T* = ZFC + Arith(T) (just throw in all the arithmetical consequences of T), or T* = ZFC + Sigma^0_1(T) (throw in the Sigma^0_1 consequences of T). Similarly, we can describe f_T' in the metatheory T'* = ZFC + Arith(T'). Although T + T' is inconsistent, we might be lucky and have that Arith(T) + Arith(T') is not inconsistent. Then we could compare f_T to f_T' in the metatheory T* + T'* = ZFC + Arith(T) + Arith(T'). In fact, I suspect that maybe Arith(T') contains Arith(T), so we'd just have T* + T'* = T'* as our metatheory.
This makes more demands on the metatheory than I would have liked on philosophical grounds, but maybe I can live with that. If this approach works.
Re: My number is bigger!
I've actually been reading through the whole thread, and I'd like to raise a different concern regarding numbers submitted as some sort of proof enumeration in ZFC or stronger theories.
So far, as far as I can see, I've only seen handwaving regarding proof length. Note that proof length is crucial, since the bound on proof length is exactly what makes those entries computable rather than noncomputable.
I really have no intuition either way, but let me give arguments to consider:
* Can anyone prove any lower or upper bound on the proof lengths involved? Given the careful strictness that any operation in formal logic and set theory involves, this is absolutely not as trivial as it seems. Ignore, for instance, the Wikipedia articles on this: as far as I eyeballed them, none specify the *language* used, which is rather crucial. Typically, those are restricted to really basic things: the empty set, "element of", maybe a couple of other symbols such as set brackets and variables. This means that even naming finite numbers in the usual counting (i.e. successor) way will take loads and loads of symbols. Every time. Functions are far worse, especially if they are complex. This is usually handwaved over, since set theory really isn't concerned with "how big is number foo" in its own terms, but it plays a major role in how large proofs grow.
* Given the above, it seems entirely possible that even relatively low ordinals require a number of symbols on the order of their own fast-growing-hierarchy magnitude to prove them total.
* Worse, there is absolutely no way I can see how we can determine any sort of lower or upper bound on the proof length of totality of bounded busy-beaver-type functions. It seems, to me, entirely possible that for reasonably big n, the length of a proof of totality in any given logic system for an n-proof-bounded busy beaver is essentially that bounded busy beaver in length. Perhaps even more; I don't immediately see why it shouldn't be.
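The first bullet's point about naming numbers can be made concrete: in pure successor notation over a minimal language, the symbol count for n is linear in n, so even writing down a modest bound like 2^^1000 this way is hopeless. A tiny check (the concrete notation `S(...)` is my own choice of rendering):

```python
def numeral(n: int) -> str:
    """Name n in pure successor notation over the symbols {0, S, (, )}."""
    s = "0"
    for _ in range(n):
        s = "S(" + s + ")"   # each successor layer adds 3 symbols
    return s

# |numeral(n)| = 3n + 1, i.e. linear in n rather than logarithmic.
```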
Given all that, I think any of the ZFC+ defined numbers here require serious research and proofs to actually be called bigger than the things in the (comparatively, given the lack of fundamental sequences named) better-defined fast-growing hierarchy. It may well be that they outgrow them, but only at arguments that are themselves outgrown by the numbers named via the FGH here.
Finally, a number from Friedman (the guy who defined TREE(3), SCG(13) and other Ramsey/combinatorial numbers/functions that blow other things out of the water): In http://www.cs.nyu.edu/pipermail/fom/200 ... 10260.html we see that in a certain theory, the proof that TREE is a total function requires 2^^1000 symbols. Note that this is just a lower bound, so it says nothing about how long that proof actually would be. The current (as far as I know) lower bound for Graham's number (as it was originally intended as a Ramsey problem) is 11. Just to illustrate that bounds mean nothing unless they converge.
So, while I certainly agree that proof-theoretically defined functions of sufficiently big strength will eventually outgrow other functions, it is in no way clear to me that the actual numbers named here are bigger than those defined by the fast-growing hierarchy.
I will also note that I think the "short cuts" using ordinals while ignoring fundamental sequences are weak, but most of them are convoluted, spread over dozens of pages with dozens of edits, so it's hard to find something specific to criticize.
To illustrate the latter, I'll quickly define a pseudo-ordinal "somewhat" fast-growing hierarchy:
f_0(x) = x+1
f_{n+1}(x) = f_n^{f(n)}(x)
For limits, f_alpha(x) = f_x(x)
Grows fast, right? But it will get stuck at f_epsilon. At latest. Because it doesn't use fundamental sequences.
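Reading the superscript f(n) as "f_n(x) iterations" (one plausible reading; the notation in the post is ambiguous, so this interpretation is an assumption), the finite levels can be computed directly. Only tiny inputs are feasible, which is rather the point:

```python
def pseudo_fgh(n: int, x: int) -> int:
    """f_0(x) = x + 1; f_{n+1}(x) = f_n iterated f_n(x) times on x.
    (The iteration count f_n(x) is an assumed reading of the
    ambiguous 'f^f(n)_n' in the definition above.)"""
    if n == 0:
        return x + 1
    reps = pseudo_fgh(n - 1, x)        # how many times to iterate f_{n-1}
    for _ in range(reps):
        x = pseudo_fgh(n - 1, x)
    return x
```

For example f_1(x) works out to 2x+1, and f_2 is already exponential; but as the post says, without fundamental sequences there is no way to push the indexing past the first fixed points.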
Lastly, and off-topic: are the math tags broken somehow? I tried a couple of ways to put the last part into the intended format, but it didn't work.
 emlightened
Re: My number is bigger!
Yeah, the math tags have been broken since forever ago.
As far as fundamental sequences go, I don't know why people haven't constructed any properly. It's pretty simple to do:
I, personally, think that, with formalisation, the number should be fine. The problem is that this isn't formalised; while I think that the choice of metatheory likely doesn't matter, given that it is sufficiently strong for our needs, the proof length is a bigger problem, due to the lack of specification of a language. I wouldn't expect any reasonably simple statement to have a proof that takes more than 2^^2^^n symbols*, for some reasonably small n, but there is obviously no way to know this. If the language allows us to draw variables from a countably infinite set, that cuts down the proof lengths considerably; it is easy to show that the number of symbols needed to name n is then bounded by a linear function of n, though this still doesn't yield a very short length. I imagine that, with structures similar to the above, ridiculously large countable ordinals would be constructible in well under 10^100 symbols, but the main problem is proving that the proof length isn't ridiculously large. I expect, overall, this would depend greatly on the language (and, without a specification, or a proof that the choices are equivalent, it is likely enough that the number is not well-defined).
(*My expectations are based on the assumption that no single part of the proof increases the length by more than an exponential, and that the proof we would want would likely contain a subtetrational number of such parts.)
Bleb. Idk when I'm returning.
Re: My number is bigger!
emlightened wrote:(*My expectations are based on the assumption that no single part of the proof increases the length by more than an exponential, and that the proof we would want would likely contain a subtetrational number of such parts.)
That's just the point. I think it's not only possible, but actually likely, that a proof of "this TM halts" for a sufficiently complex TM will consist of essentially describing each step it runs.
That's not something I can prove, but the burden of proof is on those who claim that such proofs are much shorter than the actual output or the number of steps the corresponding machine runs. Merely "expecting" this, without actually demonstrating it in some way, is not sufficient.
Re: My number is bigger!
So, just to spice this up, I submit this number:
Define f(x) as x if x ≤ e, and x·f(ln(x)) for x > e.
Consider the series ∑_{n=1}^∞ 1/f(n).
This series is divergent (see Putnam 2008 A4).
That means that "g(n) = the lowest number of terms of that series needed to exceed n" is a total function on the natural numbers.
I claim that with h(n) = g^{g(n)}(n), we win this contest with h(xkcd).
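Assuming the definitions above, f and g can be sketched directly in Python (h(xkcd) is of course hopelessly out of reach, but g is computable for tiny inputs, since the recursion in f terminates once iterated logarithms fall below e):

```python
import math

def f(x):
    # x for x <= e; x * f(ln x) otherwise. Terminates because
    # iterated ln eventually drops below e.
    return x if x <= math.e else x * f(math.log(x))

def g(n):
    # Lowest number of terms of sum_{k>=1} 1/f(k) needed to exceed n.
    total, k = 0.0, 0
    while total <= n:
        k += 1
        total += 1.0 / f(k)
    return k
```

Because the series diverges like an iterated logarithm, g(1) and g(2) are tiny, g(3) is still computable, and g(4) is already astronomically far out of reach.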
Well, no, I actually don't claim that; I believe that higher numbers have been claimed (after all, I didn't even diagonalize once, other than possibly in the base definition). What's been totally neglected in this thread, as I was trying to point out above, is that you're required to be able to conclusively *prove* that you beat previous entries. Even if the proof is not rigorous or complete, I think reasonable, indisputable arguments need to be given. For those who claim to win, I don't see that right now.
So, why doesn't anyone currently claiming to be "winning" try to prove that they are? With the above, I submit what I would expect, and chose, to be a pretty low bar. We could work up from there. Feel challenged to prove (not "think" or "estimate" or "expect", but prove!) that your number is bigger.
Re: My number is bigger!
I don't think anyone claimed to be winning after Yudkowsky's entry.
Re: My number is bigger!
Desiato wrote:So, just to spice this up, I submit this number:
Define f(x) as x if x ≤ e, and x·f(ln(x)) for x > e.
Consider the series ∑_{n=1}^∞ 1/f(n).
This series is divergent (see Putnam 2008 A4).
That means that "g(n) = the lowest number of terms of that series needed to exceed n" is a total function on the natural numbers.
I claim that with h(n) = g^{g(n)}(n), we win this contest with h(xkcd).
I know you didn't make this claim seriously, but your function was so interesting that I just had to try and estimate its growth rate.
It seems to me that g(n) ~ e^^n. And if that's correct, then h(n) ~ e^^^e^^n ~ 10^^^10^^n = GFn for large n.
Re: My number is bigger!
First, I want to recommend this page by Robert Munafo to anyone interested in the subject. It provides a very understandable, yet far-reaching survey of the topic (even though it goes well into the uncomputable range).
Second, I find fascinating a briefly mentioned reference regarding Betti numbers. I'm not really all that familiar with the topic (algebraic topology and related group theory), but the article linked there (see the references; for a web view, I hope that my link works, otherwise google "view ps online" to convert the article) raises really reasonable questions about real mathematics that involve (assuming the article is correct) growth rates of Busy Beaver functions for oracle Turing machines. Fascinating!
Perhaps I should post that over in the math forum instead, but either way, I'm curious whether someone more familiar with the topic can provide insight on that article.
There are many other really interesting references on Robert Munafo's page, so again, I can only recommend reading it!
Re: My number is bigger!
Well, we have a few people who are knowledgeable about recursive googology (the kind that uses ordinals) who are regular participants on these forums (fora?).
But I don't know if there's anybody here who is an expert on Turing machines or Busy Beavers. We had a few of those back in the day, but I haven't seen any of them for quite a long time.
Re: My number is bigger!
Desiato wrote:I've actually been reading through the whole thread, and I submit a different concern regarding numbers submitted as some sort of proof enumeration in ZFC or stronger.
So far, as far as I can see, I've only seen handwaving regarding proof length. Note that proof length is crucial, since the bound on proof length is what keeps those entries computable rather than noncomputable.
I really have no intuition either way, but let me give arguments to consider:
* Can anyone prove any lower or upper bound on the proof lengths involved? Given the careful strictness that any operation in formal logic and set theory involves, this is absolutely not as trivial as it seems. Ignore, for instance, the Wikipedia articles on this; as far as I eyeballed them, none specify the *language* used, which is rather crucial. Typically, those are restricted to really basic things: the empty set, "element of", maybe a couple of other symbols such as set brackets and variables. This means that even naming finite numbers in the usual counting (i.e. successor) way will take loads and loads of symbols. Every time. Functions are far worse, especially if they are complex. This is usually handwaved over, since set theory really isn't concerned with "how big is number foo" in its own terms, but it plays a major role in how large proofs grow.
* Given the above, it seems absolutely possible that proving even relatively low ordinals well-founded requires a number of symbols on the order of the corresponding fast-growing-hierarchy values themselves.
* Worse, I see absolutely no way to determine any sort of lower or upper bound on the proof length of totality for bounded-busy-beaver-type functions. It seems entirely possible to me that, for reasonably big n, the length of a proof of totality in any given logic system for an n-proof-bounded busy beaver is essentially that bounded busy beaver in length. Perhaps even more; I don't immediately see why it shouldn't be.
Given all that, I think any of the ZFC+ defined numbers here require serious research and proofs to actually be called bigger than the things in the (comparatively, given the lack of fundamental sequences named) more well-defined fast-growing hierarchy. It may well be that they outgrow them, but only at arguments that are themselves outgrown by the numbers named in the FGH here.
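The point about successor numerals can be made concrete with a toy count. Assuming a hypothetical minimal language with only the symbols "0", "S", "(" and ")", naming n as an iterated successor term costs 3n + 1 symbols:

```python
def successor_term(n):
    # Builds the numeral S(S(...S(0)...)) with n successors: 3n + 1 symbols.
    term = "0"
    for _ in range(n):
        term = "S(" + term + ")"
    return term

print(len(successor_term(100)))  # 301 symbols just to name the number 100
```

A language with exponential notation or definable abbreviations collapses this to logarithmic size, which is why the choice of language matters so much for proof length.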
Well, any rigorous limit on the length of a proof that a given Turing machine halts would require delving into the minutiae of formal ZFC proofs and programming a specific Turing machine that can do what you want, which I think is beyond what anyone would really be willing to do here. So really all that you are going to get are plausibility arguments.
To see that f(googolplex) exceeds TREE(googolplex), where f is the function specified by Yudkowsky (or meur or itaibn or myself earlier in the thread), the main argument is the proof of Kruskal's Theorem, which takes less than a page. Of course, one would need to convert that to a formal ZFC proof for a Turing machine that implements TREE; that would be difficult, but still doable. The number of steps might increase to, say, a million, but it wouldn't take a googol steps, much less a googolplex. I imagine you will dismiss this as handwaving, but this is really all I can do without getting into serious work that I am not really willing to do.
Similarly, if one were to define a function f based on some powerful recursive ordinal notation, one might imagine a proof of its well-ordering would take some time, but still be of humanly doable length; it wouldn't require a googol characters. Converting that to a formal ZFC proof would not take the length to more than a googolplex. Again, this is my intuition.
Finally, a number from Friedman (the guy who defined TREE(3), SCG(13) and other Ramsey/combinatorial numbers/functions that blow other things out of the water): in http://www.cs.nyu.edu/pipermail/fom/200 ... 10260.html we see that in a certain theory, the proof that TREE is a total function requires 2^^1000 symbols. Note that this is just a lower bound, so it says nothing about how long that proof actually would be. The current (as far as I know) lower bound for Graham's number (as it was originally intended, as a Ramsey problem) is 11. Just to illustrate that bounds mean nothing unless they converge.
No, what Friedman did was take a theory that was weak enough that it could _not_ prove TREE total, and showed that a proof that TREE(6) is well-defined requires at least 2^^1000 symbols in that theory. Intuitively, if a theory T has proof-theoretic ordinal alpha, then it cannot prove that the function F_alpha in the FGH is total, and a proof that F_alpha(n) is well-defined will require more than n symbols in the theory. But if f is some provably recursive function of T (so bounded by F_beta for some beta < alpha), then T can prove f total, and a proof that f(n) is well-defined will not see the same blowup.
So, while I certainly agree that prooftheoretically defined functions with sufficiently big strength will eventually outgrow other functions, it is in no way clear to me that the actual numbers named here are bigger than those defined by the fast growing hierarchy.
I will also note that I think the "shortcuts" that use ordinals while ignoring fundamental sequences are weak, but most of them are convoluted, spread over dozens of pages with dozens of edits, so it's hard to find something specific to criticize.
To illustrate the latter, I'll quickly define a pseudo-ordinal, "somewhat" fast-growing hierarchy:
f_0(x) = x + 1
f_{n+1}(x) = f_n^{f_n(x)}(x)
For limits, f_α(x) = f_x(x)
Grows fast, right? But it will get stuck at f_ε. At latest. Because it doesn't use fundamental sequences.
I think you have a misunderstanding regarding the fundamental sequence "shortcut". It was equivalent to defining a set of fundamental sequences; it just defined them implicitly rather than explicitly, making the presentation a little more slick. I can go into more details if you want.
Re: My number is bigger!
Desiato wrote:First, I want to recommend this page by Robert Munafo to anyone interested in the subject. It provides a very understandable, yet far-reaching survey of the topic (even though it goes well into the uncomputable range).
Second, I find fascinating a briefly mentioned reference regarding Betti numbers. I'm not really all that familiar with the topic (algebraic topology and related group theory), but the article linked there (see the references; for a web view, I hope that my link works, otherwise google "view ps online" to convert the article) raises really reasonable questions about real mathematics that involve (assuming the article is correct) growth rates of Busy Beaver functions for oracle Turing machines. Fascinating!
Perhaps I should post that over in the math forum instead, but either way, I'm curious whether someone more familiar with the topic can provide insight on that article.
There are many other really interesting references on Robert Munafo's page, so again, I can only recommend reading it!
While I am familiar with algebraic topology and homology groups, it really is too complicated to explain here. The Wikipedia page seems as good an explanation as any: the 0th Betti number is the number of connected components, the 1st Betti number is the number of "holes" or "handles" in the space, and the 2nd and later Betti numbers are higher dimensional versions of that. Getting more specific requires a bunch of technical details that you may not be interested in.
The homology of groups is defined by analogy with the homology of topological spaces, but it gets even more abstract (at least to me).
Anyway, we can classify "finitely presented" groups by the size of their finite presentations. For example, the group Z_{n} (the integers modulo n) can be constructed as the group generated by some element x with x...x (with n x's) equal to the identity. So the presentation is <x | x...x> and the length of the presentation is n + 1.
There are obviously only finitely many groups with a minimal presentation of length n. For fixed k, each of these groups will have some Betti number b_{k}, which will be either a finite number or infinity. Just like with the Busy Beaver function, toss out the groups with Betti number infinity, and take the largest of the remaining Betti numbers; this will be some finite number b_{k}(n).
What Nabutovsky and Weinberger showed was that if k is at least 3, the function b_{k}(n) grows at the rate of the third-order Busy Beaver function! (The first-order Busy Beaver is the regular Busy Beaver function; the second-order BB_{2} is defined similarly, except you are allowed to make an instruction that computes the function BB_{1}(n); and for BB_{3}(n) you are allowed to make an instruction that computes BB_{2}(n).)
They found an even faster growing function: again, take the groups with minimal presentation of length n. Instead of looking at b_{k} for fixed k, look at the number of the b_{k} that are infinite; again, this will be either a finite number or infinity. Toss out the infinities, and take the maximum remaining finite number; call it c(n). They proved that c(n) grows at the rate of BB_{5}(n)!
It would be interesting to see how far up the Busy Beaver hierarchy one could go with natural-sounding statements; presumably, as you go higher and higher, the statements would have to get more and more convoluted, since they would have to be equivalent to an arithmetical statement with a higher and higher number of alternating quantifiers. But even for BB_{5}(n) the statement seemed simple enough.
And yeah, Munafo's page is quite interesting!
Re: My number is bigger!
Deedlit wrote:Desiato wrote:...I've only seen handwaving regarding proof length...
Well, any rigorous limit on the length of a proof that a given Turing machine halts would require delving into the minutiae of formal ZFC proofs and programming a specific Turing machine that can do what you want, which I think is beyond what anyone would really be willing to do here.
Well, certainly, but it doesn't have to be here. I'd be satisfied with a reference as well, as long as it seems conclusive for the question here.
Deedlit wrote:[f vs TREE] the main argument is the proof of Kruskal's Theorem, which takes less than a page. Of course, one would need to convert that to a formal ZFC proof for a Turing machine that implements TREE; that would be difficult, but still doable. The number of steps might increase to, say, a million, but it wouldn't take a googol steps, much less a googolplex. I imagine you will dismiss this as handwaving, but this is really all I can do without getting into serious work that I am not really willing to do.
Indeed, I probably would dismiss it as that, though I will admit that for specific cases, I'm now leaning slightly more towards your argument, as the expansion of formulas to new symbols for each step in some definition process (symbols which are then actually replaced with their basic form, recursively) doesn't appear to blow up too badly.
Deedlit wrote:Similarly, if one were to define a function f based on some powerful recursive ordinal notation, one might imagine a proof of its wellordering might take some time, but still be of humanly doable length; it wouldn't require a googol characters. Converting that to a ZFC proof would not take the length to more than a googolplex. Again, this is my intuition.
At this point I see a different problem  you'll need to ensure that not only is your theory able to show that all ordinals (including specific uncountable ones used for collapse functions) involved exist, you'll also need to prove that ordinals defined in the theory actually equal their corresponding external ordinal in the model in question. Or alternatively, you'll need to prove that the strength of the resulting fundamental sequence really has the same strength anyway. All of this seems highly nontrivial to me.
Deedlit wrote:In http://www.cs.nyu.edu/pipermail/fom/200 ... 10260.html we see that in a certain theory, the proof that TREE is a total function requires 2^^1000 symbols. Note that this is just a lower bound, so it says nothing about how long that actually would be.
No, what Friedman did was take a theory that was weak enough that it could _not_ prove TREE total, and showed that a proof that TREE(6) is well-defined requires at least 2^^1000 symbols in that theory.
You're absolutely right, but that only strengthens my point. Here, what we see is that any such proof actually requires a number of symbols already far larger than a googolplex. Granted, I think you may be right regarding TREE in stronger theories, but I've seen no argument why the same should be true for bounded busy beavers in ZFC or stronger.
Deedlit wrote: Intuitively, if a theory T has proof-theoretic ordinal alpha, then it cannot prove that the function F_alpha in the FGH is total, and a proof that F_alpha(n) is well-defined will require more than n symbols in the theory. But if f is some provably recursive function of T (so bounded by F_beta for some beta < alpha), then T can prove f total, and a proof that f(n) is well-defined will not see the same blowup.
This does seem to make some sense. I'll think on that for a bit.
Deedlit wrote:I will also note that I think the "shortcuts" that use ordinals while ignoring fundamental sequences are weak, but most of them are convoluted, spread over dozens of pages with dozens of edits, so it's hard to find something specific to criticize.
I think you have a misunderstanding regarding the fundamental sequence "shortcut". It was equivalent to defining a set of fundamental sequences; it just defined them implicitly rather than explicitly, making the presentation a little more slick. I can go into more details if you want.
Absolutely possible. As I said, a specific one is hard to pick out, with discussions spread over pages, some versions edited back into their original post, some reposted with corrections, etc. So if you could demonstrate a specific one in its current version, I'd appreciate it and will definitely have a look.
Edit to add: Thanks for your comments regarding Betti numbers. What you described is roughly what I was able to understand myself. I'm really more interested in the proof details, since those once again seem rather handwavy to me at points. I can't tell if it's my lack of familiarity with the topic or not. But I definitely agree that it's fascinating!
Re: My number is bigger!
Desiato wrote:You're absolutely right, but that only strengthens my point. Here, what we see is that any such proof actually requires a number of symbols already far larger than a googolplex. Granted, I think you may be right regarding TREE in stronger theories, but I've seen no argument why the same should be true for bounded busy beavers in ZFC or stronger.
That's a really good question.
Especially since you can write a Turing machine that generates all possible proofs in ZFC. It would therefore seem to me, intuitively, that Busy Beavers should be stronger than proof-length-based functions.
Given how convinced the experts are on this, I'm pretty sure that my intuition is wrong. But I would like to understand why it is wrong.
Re: My number is bigger!
PsiCubed wrote:Especially since you can write a Turing machine that generates all possible proofs in ZFC. It would therefore seem to me, intuitively, that Busy Beavers should be stronger than proof-length-based functions.
Given how convinced the experts are on this, I'm pretty sure that my intuition is wrong. But I would like to understand why it is wrong.
The Busy Beaver function is indeed faster growing than the proof-length functions defined by Yudkowsky et al., since the latter functions are recursive, and the Busy Beaver function outgrows all recursive functions. The proof-length functions were designed to be really fast growing functions that are still recursive, and hence allowed as constructions in this thread, unlike the Busy Beaver function.
Re: My number is bigger!
Desiato wrote:Well, certainly, but it doesn't have to be here. I'd be satisfied with a reference as well, as long as it seems conclusive for the question here.
Well, I don't know if I can find a reference; this sort of question doesn't seem to generate much interest in the general mathematical community. However, there is something related that is of great interest to the mathematical community, which is creating a database of formalized mathematical proofs, so that all theorems in the database can be rigorously checked by a computer proof checker, mostly avoiding the possibility of human error. This would be a great thing, but it is also a massive undertaking; there are a gajillion mathematical proofs out there! Automated proof checkers include Metamath, HOL Light, and the Mizar system; looking at Metamath, there doesn't seem to be a proof of Kruskal's Theorem. Still, Kruskal's Theorem is not all that hard; writing up the proof in Metamath should be doable, especially since very difficult proofs like the Four Color Theorem and the Kepler Conjecture have been formalized.
Rather than considering the exact function definition of Yudkowsky or meur, it would probably be more convenient to define it relative to something like Metamath; for example, let Metamath_{n} be the collection of Sigma^{0}_{1} statements (that is, statements of the form "there exists a natural number x such that R(x)", where R is a recursive predicate) that are of length at most n symbols in Metamath and have a proof in Metamath of at most n steps. For each such statement, consider the smallest natural number m that satisfies the existential quantifier, and let Metamath(n) be the largest such m over all statements in Metamath_{n}. Now we don't have to worry about converting to the language of set theory or Turing machines.
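The shape of this definition can be sketched in Python over a made-up finite table of "theorems". This is not a real proof checker: the lengths, step counts, and predicates below are all invented purely for illustration, standing in for Sigma^0_1 statements "there exists x such that R(x)":

```python
def least_witness(R, limit=10**6):
    # Smallest x satisfying the recursive predicate R (the search is
    # bounded here only so that the toy terminates).
    return next(x for x in range(limit) if R(x))

def metamath_value(theorems, n):
    # Over statements of length <= n with proofs of <= n steps,
    # take the largest least-witness.
    return max(least_witness(R) for (length, steps, R) in theorems
               if length <= n and steps <= n)

toy = [
    (10, 50, lambda x: x > 5),         # least witness 6
    (30, 80, lambda x: x * x > 1000),  # least witness 32
    (200, 40, lambda x: x > 10**5),    # excluded at n = 100: statement too long
]
```

With n = 100, only the first two toy statements qualify, so the value is the larger of their least witnesses. The real Metamath(n) would enumerate all derivable statements and proofs up to length n, which is what makes it grow so fast.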
Deedlit wrote:Similarly, if one were to define a function f based on some powerful recursive ordinal notation, one might imagine a proof of its wellordering might take some time, but still be of humanly doable length; it wouldn't require a googol characters. Converting that to a ZFC proof would not take the length to more than a googolplex. Again, this is my intuition.
At this point I see a different problem  you'll need to ensure that not only is your theory able to show that all ordinals (including specific uncountable ones used for collapse functions) involved exist, you'll also need to prove that ordinals defined in the theory actually equal their corresponding external ordinal in the model in question. Or alternatively, you'll need to prove that the strength of the resulting fundamental sequence really has the same strength anyway. All of this seems highly nontrivial to me.
But, if one can prove in ZFC (or Metamath) that a given well-ordering of order type alpha is indeed a well-ordering, then it immediately follows that a straightforward ordinal hierarchy like the FGH will be provably recursive in ZFC. In particular, if the googological notation that we are trying to beat is based on a well-founded structure of rank alpha, then ZFC can prove that the googological notation is total. (That is, assuming standard googological notations like FGH or BEAF, where the notation is based on some well-founded structure, and the recursive definition decreases the rank of the notation each time.)
You're absolutely right, but that only strengthens my point. Here, what we see is that any such proof actually requires a number of symbols already far larger than a googolplex. Granted, I think you may be right regarding TREE in stronger theories, but I've seen no argument why the same should be true for bounded busy beavers in ZFC or stronger.
But, the whole point is that the TREE function lies beyond the provably recursive functions of that particular theory. No one is saying that the Yudkowsky function can dominate functions that are beyond the provably recursive functions of ZFC+I0; what is being claimed is that reasonably defined provably recursive functions of ZFC+I0 can be proven in a nonastronomical number of steps, as in less than a googolplex. Again, intuition based, and only provable once we specify what we are comparing the Yudkowsky function to.
Absolutely possible. As I said, a specific one is hard to pick out, with discussions spread over pages, some versions edited back into their original post, some reposted with corrections, etc. So if you could demonstrate a specific one in its current version, I'd appreciate it and will definitely have a look.
Well, with a quick search I wasn't able to find where in this thread we used it (the thread has gotten quite long!), but that's okay; I can just use Emlightened's notation from earlier on this page, where she defines an ordinal notation based on these Skolem hulls C_{n}. The idea is to define f(a,n), with ordinal a and natural number n, to be the largest finite ordinal in, say, C_{n}(a,3). Then we add f(c,n) to the definition of C_{n}(a,b), provided that c < a. Then f(a,n) will be a fast growing ordinal hierarchy over the ordinal a, and we can dispense with the definitions of the fundamental sequences and the fast growing hierarchy.
Some analysis: f(0,0) is the largest finite ordinal in C_0(0,3), which is clearly 2. Then for each next C_n(0,3) we can apply the function w^0 + x = x+1, so we increase by 1 each time, and f(0,n) = n+2.
Next, f(1,0) is the largest finite ordinal in C_1(1,3), which is also 2. Then for each next hull we can apply the function f(0,x), so we add 2 each time and get f(1,n) = 2n+2. Then for f(2,n) we start from 2 again and can apply f(1,x) each time to get f(2,n) = 2^(n+2) - 2, and so on. So we get an Ackermann-type hierarchy for finite ordinals.
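The finite-ordinal values can be checked against a plain Ackermann-style recurrence in Python. This is my reading of the analysis, not the hull construction itself: f(m+1, 0) = 2, and at each next step we apply the previous level, f(m+1, n+1) = f(m, f(m+1, n)). It reproduces the closed forms n+2, 2n+2, and 2^(n+2) - 2:

```python
def f(m, n):
    # Finite-ordinal fragment of the hierarchy described above:
    # f(0, n) = n + 2; f(m+1, 0) = 2; f(m+1, n+1) = f(m, f(m+1, n)).
    if m == 0:
        return n + 2
    if n == 0:
        return 2
    return f(m - 1, f(m, n - 1))
```

In particular f(2, 2) = 14, matching the value quoted for f(w2, 1) below the w level.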
Next, f(w,0) is again 2, and then for f(w,n+1) we can apply the function f_x(x) to the number f(w,n). So this diagonalizes over the hierarchy of functions for the finite ordinals. Then f(w+1,n) iterates over f(w,n), and f(w+2,n) iterates over f(w+1,n), and so on. f(w2,0) is 2, f(w2,1) = f(2,2) = 14, and then for f(w2,n) we can get f(w + f(w2,n-2), f(w2,n-1)), since f(w2,n-2) is in C_{n-2}(w2,3) and therefore w + f(w2,n-2) is in C_{n-1}(w2,3).
So in general, at each successor ordinal we iterate over the previous ordinal, whereas for limit ordinals a, at each step we take the largest ordinal less than a that is in the previous Skolem hull, and these ordinals will keep increasing up to a. So we do get a proper ordinal hierarchy this way.
Edit to add: Thanks for your comments regarding Betti numbers. What you described is roughly what I was able to understand myself. I'm really more interested in the proof details, since those once again seem rather handwavy to me at points. I can't tell if it's my lack of familiarity with the topic or not. But I definitely agree that it's fascinating!
Well, I don't want to go over the whole proof; can you point to specific details that you find questionable?
Re: My number is bigger!
Deedlit wrote:Desiato wrote:Well, certainly, but it doesn't have to be here. I'd be satisfied with a reference as well, as long as it seems conclusive for the question here.
[various proof checking languages] Still, Kruskal's Theorem is not all that hard, writing up the proof in Metamath should be doable, especially since very difficult proofs like the Four Color Theorem and the Kepler Conjecture have been formalized.
Oddly enough, I believe the Four Color Theorem to be much simpler, both in depth and in proof complexity, than Kruskal's tree theorem. Consider that those "short" proofs of Kruskal's refer to numerous results before them, such as König's Lemma (we're talking about Friedman's finite version, after all, and the proof I know uses that). Those all need to be included in a proof relevant to the "big number" question here.
Deedlit wrote:Now we don't have to worry about converting to the language of set theory or Turing machines.
Well, as mentioned in my previous post, my concern regarding Turing machines in ZFC etc is that I find it highly likely that listing all states and steps the machine goes through until it halts IS the shortest halting proof. Basically, bounded busy beavers might well be their own Kolmogorov complexity wrt ZFC.
Deedlit wrote:Deedlit wrote:Similarly, if one were to define a function f based on some powerful recursive ordinal notation, one might imagine a proof of its wellordering might take some time, but still be of humanly doable length; it wouldn't require a googol characters. Converting that to a ZFC proof would not take the length to more than a googolplex. Again, this is my intuition.
At this point I see a different problem - you'll need to ensure not only that your theory is able to show that all the ordinals involved (including the specific uncountable ones used for collapsing functions) exist; you'll also need to prove that ordinals defined in the theory actually equal their corresponding external ordinals in the model in question. Or alternatively, you'll need to prove that the resulting fundamental sequences really have the same strength anyway. All of this seems highly nontrivial to me.
But, if one can prove in ZFC (or Metamath) that a given well-ordering of order type alpha is indeed a well-ordering, then it immediately follows that a straightforward ordinal hierarchy like the FGH will be provably recursive in ZFC. In particular, if the googological notation that we are trying to beat is based on a well-founded structure of rank alpha, then ZFC can prove that the googological notation is total. (That is, assuming standard googological notations like FGH or BEAF, where the notation is based on some well-founded structure, and the recursive definition decreases the rank of the notation each time.)
Not sure what you're saying. If you mean that "if ZFC or whatever system proves ordinal alpha is in fact an ordinal, then all functions with growth rate below alpha can be proven total in that system" - okay, but that doesn't address my concern?
Deedlit wrote:You're absolutely right, but that only strengthens my point. Here, what we see is that any proof of such actually requires an already larger number of steps than a Googolplex. Granted, I think you may be right regarding TREE in stronger theories, but I've seen no argument why the same should be true for bounded busy beavers in ZFC or stronger.
But, the whole point is that the TREE function lies beyond the provably recursive functions of that particular theory.
Indeed, which is why it's a good example regarding my concerns about busy beavers explained above - obviously, no theory will be able to prove BB to be total, after all.
Deedlit wrote:Absolutely possible - as I said, a specific one is hard to pick out with discussions over pages, some versions edited back in their original post, some reposted with corrections, etc. So if you could demonstrate a specific one in its current version, I'd appreciate it and definitely will have a look.
Well, with a quick search I wasn't able to find where in this thread we used it (the thread has gotten quite long!), but that's okay, I can just use Emlightened's notation from earlier on this page,
Sorry, but I'm unable to find that - I checked 4 pages back. Could you link to that, please? Or alternatively quote?
Deedlit wrote: where she defines an ordinal notation based on these Skolem hulls C_{n}.
Wait, what? To my knowledge, Skolem hulls are models of certain theories.
Deedlit wrote:The idea is to define f(a,n) with ordinal a and natural number n to be the largest finite ordinal in, say, C_{n}(a,3). Then we add f(c,n) to the definition of C_{n}(a,b), provided that c < a. Then f(a,n) will be a fast growing ordinal hierarchy over the ordinal a, and we can dispense with the definitions of the fundamental sequences and the fast growing hierarchy.
This obviously strictly depends on the exact definition of C.
Deedlit wrote:Next f(w,0) is again 2, and then for f(w,n+1) we can apply the function f_x(x) to the number f(w,n).
What is f_x(x)?
Deedlit wrote:So this diagonalizes over the hierarchy of functions for the finite ordinals. Then f(w+1,n) iterates over f(w,n), and f(w+2,n) iterates over f(w+1,n), and so on. f(w2,0) is 2, f(w2,1) = f(2,2) = 14, and then for f(w2,n) we can get f(w+f(w2,n-2), f(w2,n-1)), since f(w2,n-2) is in C_{n-2}(w2,3) and therefore w+f(w2,n-2) is in C_{n-1}(w2,3).
So in general, at each successor ordinal we iterate over the previous ordinal, whereas for limit ordinals a, at each step we take the largest ordinal less than a that is in the previous Skolem hull, and these ordinals will keep increasing up to a. So we do get a proper ordinal hierarchy this way.
Again, that depends on C, clearly, so I can't comment on this quite yet. I'm curious though!
Deedlit wrote:Edit to add: Thanks for your comments regarding Betti numbers...
Well, I don't want to go over the whole proof; can you point to specific details that you find questionable?
Given the number of questions I already pose regarding this thread, I'll defer that. It'll need quite some time to pinpoint my concerns.
Re: My number is bigger!
Desiato wrote:Oddly enough, I believe the Four Color Theorem to be much simpler - both in depth and in proof complexity - than Kruskal's Tree Theorem. Consider that those "short" proofs of Kruskal's refer to numerous results before them, such as König's Lemma (we're talking about Friedman's finite version, after all, and the proof I know uses that). Those all need to be included in a proof relevant for the "big number" thing here.
I think that the proof of Kruskal's Theorem will be much shorter than that of the Four Color Theorem, even with all the needed stuff included. But no reason to get into an argument over that right now.
Desiato wrote:Well, as mentioned in my previous post, my concern regarding Turing machines in ZFC etc is that I find it highly likely that listing all states and steps the machine goes through until it halts IS the shortest halting proof. Basically, bounded busy beavers might well be their own Kolmogorov complexity wrt ZFC.
Well, even if there are some complex Turing machines that cannot be proven to halt in fewer steps than the number of steps the Turing machine takes to halt (I don't know if that is true when the number of steps gets large, but I won't argue the point for now), the point is that we are comparing the ZFC function to some googological notation whose corresponding ordinal is smaller than the proof theoretic ordinal of ZFC, and it seems very, very likely to me that the proof of well-ordering for that notation will not be astronomical (as in more than a googolplex). So that will translate to a proof that a Turing machine that implements the googological notation halts, and that is all we need - if we can find a single Turing machine that takes a larger number of steps to halt than the number we are competing against, then all the other Turing machines can take a hike, since we will have already proven that the ZFC function at a googolplex is bigger than the competing number.
But, if the use of Turing machines concerns you, you can remove their use by searching through Pi^0_1 statements and taking the minimum natural number that satisfies them, like I did with my Metamath(n) example above.
Desiato wrote:Not sure what you're saying. If you mean that "if ZFC or whatever system proves ordinal alpha is in fact an ordinal, then all functions with growth rate below alpha can be proven total in that system" - okay, but that doesn't address my concern?
Hm, then I'm not entirely sure what your concern is.
Desiato wrote:Indeed, which is why it's a good example regarding my concerns about busy beavers explained above - obviously, no theory will be able to prove BB to be total, after all.
But, we aren't trying to prove that the ZFC function beats the BB function  we are comparing the ZFC function to some competing function f that has a corresponding ordinal lower than the proof theoretic ordinal of ZFC, so there will be a proof that the competing function is total. So that will take some fixed number of steps c (which I'm claiming will be less than a googolplex for reasonable provably recursive functions of ZFC), and then for a few more steps we can prove the existence of f^googolplex (googolplex), say, and so the ZFC function will be larger at a googolplex. The reason this doesn't work for TREE(n) for a weak enough theory T is that T can't prove TREE(n) to be total, no matter how many steps. This is analogous to a function f with corresponding ordinal larger than the proof theoretic ordinal of ZFC. I'm not claiming that the ZFC function can beat that function.
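For what it's worth, the "f^googolplex(googolplex)" step above is plain function iteration; here's a toy Python sketch, with a stand-in doubling function where the real argument would use some provably total googological f and a googolplex-sized exponent:

```python
def iterate(f, k, n):
    """Compute f^k(n): apply f to n, k times."""
    for _ in range(k):
        n = f(n)
    return n

# Stand-in for a provably total competing function; the real one
# would be some fast-growing googological notation.
def f(n):
    return 2 * n

print(iterate(f, 10, 1))  # 1024
```

Once the theory proves f total (the fixed cost c), proving that f^k(n) exists for concrete k and n only adds a modest number of further steps, which is the point being made above.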
Desiato wrote:Sorry, but I'm unable to find that - I checked 4 pages back. Could you link to that, please? Or alternatively quote?
It's here: http://forums.xkcd.com/viewtopic.php?f=14&t=7469&start=1480#p3973335
Desiato wrote:Wait, what? To my knowledge, Skolem hulls are models of certain theories.
It's the term used in papers on ordinal collapsing functions. I've seen the term used elsewhere as well.
Desiato wrote:What is f_x(x)?
Sorry, that should be f(x,x).
Re: My number is bigger!
Deedlit wrote:Desiato wrote:Well, as mentioned in my previous post, my concern regarding Turing machines in ZFC etc is that I find it highly likely that listing all states and steps the machine goes through until it halts IS the shortest halting proof. Basically, bounded busy beavers might well be their own Kolmogorov complexity wrt ZFC.
Well, even if there are some complex Turing machines that cannot be proven to halt in fewer steps than the number of steps the Turing machine takes to halt (I don't know if that is true when the number of steps gets large, but I won't argue the point for now), the point is that we are comparing the ZFC function to some googological notation whose corresponding ordinal is smaller than the proof theoretic ordinal of ZFC, and it seems very, very likely to me that the proof of well-ordering for that notation will not be astronomical (as in more than a googolplex). So that will translate to a proof that a Turing machine that implements the googological notation halts, and that is all we need - if we can find a single Turing machine that takes a larger number of steps to halt than the number we are competing against, then all the other Turing machines can take a hike, since we will have already proven that the ZFC function at a googolplex is bigger than the competing number.
True enough. I agree that - correctly phrased and with all necessary caution - such a system will generate huge numbers. I was specifically referring to the number "longest running n-state Turing machine provable to be halting in n steps", and I still claim that it likely won't come close to BB(n).
Deedlit wrote:Desiato wrote:Not sure what you're saying. If you mean that "if ZFC or whatever system proves ordinal alpha is in fact an ordinal, then all functions with growth rate below alpha can be proven total in that system" - okay, but that doesn't address my concern?
Hm, then I'm not entirely sure what your concern is.
My concern is that showing that certain big countable ordinals are in fact ordinals in a given system may be hard (and long) to prove.
Deedlit wrote:Desiato wrote:Indeed, which is why it's a good example regarding my concerns about busy beavers explained above - obviously, no theory will be able to prove BB to be total, after all.
But, we aren't trying to prove that the ZFC function beats the BB function  we are comparing the ZFC function to some competing function f that has a corresponding ordinal lower than the proof theoretic ordinal of ZFC, so there will be a proof that the competing function is total. So that will take some fixed number of steps c (which I'm claiming will be less than a googolplex for reasonable provably recursive functions of ZFC), and then for a few more steps we can prove the existence of f^googolplex (googolplex), say, and so the ZFC function will be larger at a googolplex. The reason this doesn't work for TREE(n) for a weak enough theory T is that T can't prove TREE(n) to be total, no matter how many steps. This is analogous to a function f with corresponding ordinal larger than the proof theoretic ordinal of ZFC. I'm not claiming that the ZFC function can beat that function.
Okay. My doubts about proof lengths remain, but I'll agree that *some* fast growing function will have a sufficiently short proof of totality.
I still don't think that the claim to be winning is sufficiently established, by the criteria I understand to be required in this thread, without giving at least one concrete example of such a function. And by concrete, I don't mean "and this proof will be short".
Deedlit wrote:Desiato wrote:Sorry, but I'm unable to find that - I checked 4 pages back. Could you link to that, please? Or alternatively quote?
It's here: http://forums.xkcd.com/viewtopic.php?f=14&t=7469&start=1480#p3973335
Oh jeez, I entirely missed that due to the spoiler tag. That deserves a separate reply.
Deedlit wrote:Desiato wrote:What is f_x(x)?
Sorry, that should be f(x,x).
Okay. Then that's the step where you'll need fundamental sequences for fast growth if you want to get anywhere, because the second argument of your f is required to be finite, as I understand it. For omega that's trivial, clearly, but if you want to grow further, you'll need something else at that step. At least I think so  I'm still not clear on your terminology (since I don't think that lines up with emlightened's definition).
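As an aside, the kind of fundamental-sequence machinery being asked for here can be written down explicitly for a small initial segment; below is a toy Python version for ordinals below ω^ω in Cantor normal form (these are the standard Wainer-style sequences, nothing to do with emlightened's C_{n} specifically):

```python
def fund_seq(alpha, n):
    """Fundamental sequence alpha[n] for limit ordinals below omega^omega.

    alpha is a descending list of exponents, denoting a sum of omega-powers:
    e.g. [2, 2, 1] means omega^2 + omega^2 + omega.
    For a last term omega^e with e > 0:
        (... + omega^e)[n] = ... + omega^(e-1) * n
    """
    *head, e = alpha
    assert e > 0, "successor ordinals have no fundamental sequence"
    return head + [e - 1] * n

print(fund_seq([1], 3))     # [0, 0, 0], i.e. omega[3] = 3
print(fund_seq([2, 1], 3))  # [2, 0, 0, 0], i.e. (omega^2 + omega)[3] = omega^2 + 3
```

Past ω^ω the bookkeeping gets heavier, which is exactly why a construction that dispenses with explicit fundamental sequences would be attractive.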
Regarding your previous post:
Deedlit wrote:So in general, at each successor ordinal we iterate over the previous ordinal, whereas for limit ordinals a, at each step we take the largest ordinal less than a that is in the previous Skolem hull, and these ordinals will keep increasing up to a. So we do get a proper ordinal hierarchy this way.
There remains much to be shown about that. If you really refer to Emlightened, then please see my following response.
Re: My number is bigger!
emlightened wrote:Spoiler:
Sorry, I completely missed this due to the spoiler tag.
Okay, where to start.
First of all, Mahlo cardinals? Why that? And how could you imagine that this could be computable? You obviously run waaay off into uncountable cofinalities, which by definition means that any fundamental sequence equivalent must compute an uncountable number of steps.
I understand that your definition does not directly imply this, but if it does not, what's the benefit?
Second: Your largest number is not really well defined. You mention omega_1, so we're well off into continuum hypothesis land. Or can you show that your definition does not depend on CH?
And now for the details - I think there's a LOT to be shown that's merely assumed by your construction.
To get a feeling for things, let's look at C_1(omega, 2). So C_0 just consists of 0, 2 and M.
Thus in C_1 we have for omega^gamma+delta: 1, 3, M, M+2, M+M.
For chi(mu) we get the minimum of all pi such that C(<omega, beta) (for any beta, presumably) intersecting with M is pi. Intersecting with M won't do much, as M obviously contains any ordinal smaller than M itself. So in fact, we're looking to get the minimum pi such that C(<omega, beta) = pi. Wait, what? Starting here, this doesn't make sense. C is always a set by definition, and for a set to be an ordinal it must by definition contain all ordinals below itself (and due to the intersection, below M). Thus, we're looking for the smallest C(finite, arbitrary) being an ordinal. Does this exist? Why?
I'm going to assume that there's some easy fix for the above, so let's see on...
We'll also get psi_n(omega) for finite n added to our set. This is the minimum pi (finite) such that C(n, beta) (presumably again for arbitrary beta) intersected with pi equals beta. So we're similarly looking for the smallest C(n, beta) to be an ordinal, only this time it needs to equal beta. Again, does this exist, and why?
And that said, the definition of lambda at the very least needs proof that the maximum ordinal in C_n(M,0) smaller than lambda actually converges towards lambda.
And that's just the one seemingly trivial example I'm trying to untangle. We're not even going into how limited the omega^gamma construction is (which is why I suspect that even if the other questions can be answered, we'd still be fairly limited here) quite yet.
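For what it's worth, the general shape of these C_{n} hulls - close a seed set under the generating operations, one stage at a time - can be mimicked with a finite toy in Python. This is purely illustrative and is not emlightened's actual definition: natural numbers replace ordinals, + replaces ordinal addition, and x ↦ 2^x stands in for γ ↦ ω^{γ}:

```python
def hull(seed, stages, cap=10**6):
    """Toy analogue of the C_n Skolem hulls: repeatedly close `seed`
    under addition and x -> 2^x, keeping only values up to `cap` so
    the sets stay finite."""
    C = set(seed)
    for _ in range(stages):
        new = set(C)
        for a in C:
            for b in C:
                new.add(a + b)      # stand-in for gamma + delta
            if a <= 20:             # avoid computing huge powers
                new.add(2 ** a)     # stand-in for gamma -> omega^gamma
        C = {x for x in new if x <= cap}
    return C

print(sorted(hull({0, 1}, 2)))  # [0, 1, 2, 3, 4]
```

The point of the toy is only that "what is in C_{n+1}" is completely determined by "what is in C_n" plus the operation list, which is why everything in the real construction hinges on the exact definition of C.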
 emlightened
 Posts: 36
 Joined: Sat Sep 26, 2015 9:35 pm UTC
 Location: Somewhere cosy.
Re: My number is bigger!
Because of embarrassingly many mistakes (about four), I'm going to restate what I posted and note the changes below.
The changes are the separation of (γ,δ)↦ω^{γ}+δ into (γ,δ)↦γ+δ and γ↦ω^{γ}, to ensure the identity function is included (implicitly), and I changed the otherwise-undefined β in the χ function definition to π. Also, the set used in the fundamental sequences has been changed slightly so that ε_{M+1} replaces M, and Λ is defined as a shorthand for ψ_{ω_1}(ε_{M+1}).
Desiato wrote:First of all, Mahlo cardinals? Why that?
Because a weakly Mahlo cardinal is weakly inaccessible, weakly ωinaccessible, weakly hyperinaccessible, weakly hyperhyper...hyperinaccessible, and so on. The χ function collapses down onto some layer of inaccessibility (for instance, γ↦χ(M^{2}+γ) enumerates the weakly hyperinaccessible cardinals).
Desiato wrote:And how could you imagine that this could be computable? You obviously run waaay off into uncountable cofinalities, which by definition means that any fundamental sequence equivalent must compute an uncountable number of steps.
I understand that your definition does not directly imply this, but if it does not, what's the benefit?
No fundamental sequences of uncountable ordinals are ever used, despite featuring prevalently in the definition of the function. Λ is countable, and all of the fundamental sequences used are for ordinals less than or equal to this.
Desiato wrote:Second: Your largest number is not really well defined. You mention omega_1, so we're well off into continuum hypothesis land. Or can you show that your definition does not depend on CH?
Why would it depend on CH? Power set and strong limits are never used, so CH and GCH are irrelevant.
Desiato wrote:So C_0 just consists of 0, 2 and M.
C_{0}(ω,2) is actually {0,1,M}, as the β mentioned is unioned with {0,M}, and hence it should be treated as {0,1}. (The rest of the calculation is irrelevant anyway, due to the slight definition change.)
The issues raised here mostly don't apply to the fixed function, but I'll bring up a couple of them anyway.
Desiato wrote:For chi(mu) we get the minimum of all pi such that C(<omega, beta) (for any beta, presumably) intersecting with M is pi. Intersecting with M won't do much, as M obviously contains any ordinal smaller than M itself. So in fact, we're looking to get the minimum pi such that C(<omega, beta) = pi. Wait, what? Starting here, this doesn't make sense. C is always a set by definition, and for a set to be an ordinal it must by definition contain all ordinals below itself (and due to the intersection, below M). Thus, we're looking for the smallest C(finite, arbitrary) being an ordinal. Does this exist? Why?
Intersecting with M is actually quite important. Note that, for all α,β, β⊆C(α,β) and sup(C(α,β)) = ε_{M+1}. This means that, if we didn't do the intersection, then the ordinal couldn't exist, because it would have to be ε_{M+1}, which isn't regular. To be more specific: the statement can be read that no ordinals γ (for β≤γ<M) can be made by applying the operations in the definition of C_{n+1} onto the ordinals <β and M.
Also, there is no < sign; it is the set C(ω,β). As a side note, nothing really interesting happens until we get past C(M,β) for the χ and ψ functions' definitions.
Desiato wrote:We'll also get psi_n(omega) for finite n added to our set.
No, we won't, because π has to be an uncountable regular cardinal.
Desiato wrote:And that said, the definition of lambda at the very least needs proof that the maximum ordinal in C_n(M,0) smaller than lambda actually converges towards lambda.
It is sufficient to prove that C(ε_{M+1},β) = C(ε_{M+1},Λ), for all β<Λ. This can, in turn, be proven from C(ε_{M+1},0)∩ω_{1} = Λ, and that in turn follows from that, by the definition of Λ, each ordinal <Λ must have a representation from only {0,M} and the functions used in the definition of C_{n+1}.
Desiato wrote:And that's just the one seemingly trivial example I'm trying to untangle. We're not even going into how limited the omega^gamma construction is quite yet.
ω^{γ}, despite appearances, is not any sort of great limiting factor. Would it surprise you at all that if we added the π↦π^{+} function to C_{n+1}, then the resulting ψ function's strength wouldn't change at all if we removed the γ↦ω^{γ} function, or added the Veblen function (even extended to transfinitely many variables)?
Last edited by emlightened on Wed Jun 08, 2016 10:08 pm UTC, edited 1 time in total.
Bleb. Idk when I'm returning.
Re: My number is bigger!
First of all, sorry for the late reply. I was on the road, and this sort of thing is not something I like to type out on my phone. I did take some time to look into your definitions more deeply, and am now pretty certain I understand what you're trying to do. I still have doubts that you succeed, though.
emlightened wrote:Because of embarrassingly many mistakes (about four), I'm going to restate what I posted and note the changes below.
The changes are the separation of (γ,δ)↦ω^{γ}+δ into (γ,δ)↦γ+δ and γ↦ω^{γ}, to ensure the identity function is included (implicitly), and I changed the otherwise-undefined β in the χ function definition to π. Also, the set used in the fundamental sequences has been changed slightly so that ε_{M+1} replaces M, and Λ is defined as a shorthand for ψ_{ω_1}(ε_{M+1}).
I think you still have two typos: You'll probably want γδ in the definition of C_{n} - otherwise, you're not getting ω2 etc. and run out of generated ordinals fast. And you appear to be missing an index n in the definition of lambda[n].
emlightened wrote:Desiato wrote:First of all, Mahlo cardinals? Why that?
Because a weakly Mahlo cardinal is weakly inaccessible, weakly ωinaccessible, weakly hyperinaccessible, weakly hyperhyper...hyperinaccessible, and so on. The χ function collapses down onto some layer of inaccessibility (for instance, γ↦χ(M^{2}+γ) enumerates the weakly hyperinaccessible cardinals).
But this is problematic. Even if we assume the existence of Mahlo cardinals - an assumption that is not undisputed - we would need a specific model of set theory that includes Mahlo cardinals. Alternatively, you could provide proof that all properties of Mahlo (and all other) cardinals and ordinals that you use do not depend on the specific model of set theory you're working in. That sounds like a tall task to me, but maybe I'm missing something.
emlightened wrote:Desiato wrote:And how could you imagine that this could be computable? You obviously run waaay off into uncountable cofinalities, which by definition means that any fundamental sequence equivalent must compute an uncountable number of steps.
I understand that your definition does not directly imply this, but if it does not, what's the benefit?
No fundamental sequences of uncountable ordinals are ever used, despite featuring prevalently in the definition of the function. Λ is countable, and all of the fundamental sequences used are for ordinals less than or equal to this.
Countable does not imply computability, though. Looking at even just Λ[1], I'm staring at max(C_1(ε_{M+1},0) intersected with Λ), and that C_1 includes ψ_{M}(M). So what's that? How can it be computed?
emlightened wrote:Desiato wrote:Second: Your largest number is not really well defined. You mention omega_1, so we're well off into continuum hypothesis land. Or can you show that your definition does not depend on CH?
Why would it depend on CH? Power set and strong limits are never used, so CH and GCH are irrelevant.
Are they? Some ordinals may or may not be countable in some models of set theory. It's not immediately obvious to me why the results of applications of Chi and Psi are always countable, no matter what arguments you plug in, in whatever model you want to work in.
emlightened wrote:Desiato wrote:So C_0 just consists of 0, 2 and M.
C_{0}(ω,2) is actually {0,1,M}, as the β mentioned is unioned with {0,M}, and hence it should be treated as {0,1}. (The rest of the calculation is irrelevant anyway, due to the slight definition change.)
Okay, that makes sense. Thanks for the further comments.
emlightened wrote:Desiato wrote:And that said, the definition of lambda at very least need proof that the maximum ordinal in C_n(M,0) smaller than lambda actually converges towards lamda.
It is sufficient to prove that C(ε_{M+1},β) = C(ε_{M+1},Λ), for all β<Λ. This can, in turn, be proven from C(ε_{M+1},0)∩ω_{1} = Λ, and that in turn follows from that, by the definition of Λ, each ordinal <Λ must have a representation from only {0,M} and the functions used in the definition of C_{n+1}.
For C(ε_{M+1},0)∩ω_{1} = Λ to follow, we'd have to know that no countable ordinal greater than Λ is generated by Chi or Psi, because otherwise that intersection can't equal Λ. How does that follow?
Re: My number is bigger!
Desiato wrote:I think you still have two typos: You'll probably want γδ in the definition of C_{n} - otherwise, you're not getting ω2 etc. and run out of generated ordinals fast. And you appear to be missing an index n in the definition of lambda[n].
The first isn't an error; it's just omitting what's not needed (just use Cantor normal form to see). Thanks for catching the second one, though.
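To spell out the Cantor normal form point: any use of multiplication can be unfolded into sums of ω-powers, for instance

```latex
\omega \cdot 2 = \omega^{1} + \omega^{1}, \qquad
\omega^{2} \cdot 3 + \omega = \omega^{2} + \omega^{2} + \omega^{2} + \omega^{1},
```

and in general every ordinal \alpha > 0 has a unique representation \alpha = \omega^{\gamma_1} + \dots + \omega^{\gamma_k} with \gamma_1 \ge \dots \ge \gamma_k, so closing under + and \gamma \mapsto \omega^{\gamma} already yields everything that a separate \gamma\delta operation would.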
Desiato wrote:emlightened wrote:Desiato wrote:First of all, Mahlo cardinals? Why that?
Because a weakly Mahlo cardinal is weakly inaccessible, weakly ωinaccessible, weakly hyperinaccessible, weakly hyperhyper...hyperinaccessible, and so on. The χ function collapses down onto some layer of inaccessibility (for instance, γ↦χ(M^{2}+γ) enumerates the weakly hyperinaccessible cardinals).
But this is problematic. Even if we assume the existence of Mahlo cardinals - an assumption that is not undisputed - we would need a specific model of set theory that includes Mahlo cardinals. Alternatively, you could provide proof that all properties of Mahlo (and all other) cardinals and ordinals that you use do not depend on the specific model of set theory you're working in. That sounds like a tall task to me, but maybe I'm missing something.
You are. Anything which is provable in some theory T (in our case ZFC + "There exists a weakly Mahlo cardinal") is true in all models of that theory. As far as whether the existence of Mahlo cardinals is consistent (which it is widely, although admittedly not universally, regarded to be): we could replace it with a recursively Mahlo ordinal, and replace regularity with admissibility, but in practice this would only make the proofs harder.
Desiato wrote:emlightened wrote:Desiato wrote:Second: Your largest number is not really well defined. You mention omega_1, so we're well off into continuum hypothesis land. Or can you show that your definition does not depend on CH?
Why would it depend on CH? Power set and strong limits are never used, so CH and GCH are irrelevant.
Are they?
No. Not even if we get into the semantics with the axiom schema of separation*, as far as I can tell.
*Given the cross product to define the functions, if you want to get down to that kind of detail.
Desiato wrote:Some ordinals may or may not be countable in some models of set theory. It's not immediately obvious to me why the results of applications of Chi and Psi are always countable no matter what arguments you plug in in whatever model you want to work in.
The model itself, as long as it's consistent, is irrelevant; set theory is about what we can prove (which is all that matters here); model theory is about what's true or not (and, obviously, everything that's provable in the theory, assuming consistency, is true in all models).
Regardless, you seem to have missed an obvious point: all outputs of the χ function are uncountable regular ordinals.
Desiato wrote:emlightened wrote:It is sufficient to prove that C(ε_{M+1},β) = C(ε_{M+1},Λ), for all β<Λ. This can, in turn, be proven from C(ε_{M+1},0)∩ω_{1} = Λ, and that in turn follows from that, by the definition of Λ, each ordinal <Λ must have a representation from only {0,M} and the functions used in the definition of C_{n+1}.
For C(ε_{M+1},0)∩ω_{1} = Λ to follow, we'd have to know that no countable ordinal greater than Λ is generated by Chi or Psi, because otherwise that intersection can't equal Λ. How does that follow?
That no countable ordinal greater than Λ is generated is trivial: Λ is defined to be the least countable ordinal greater than all ordinals in C(ε_{M+1},Λ) ⊇ C(ε_{M+1},0).
Desiato wrote: ... How can it be computed?
This is by far the most interesting question. I think I'll answer it in a separate post, as it will also be quite a long answer.
Bleb. Idk when I'm returning.
Re: My number is bigger!
emlightened wrote:
Desiato wrote:
emlightened wrote:
Desiato wrote:First of all, Mahlo cardinals? Why that?
Because a weakly Mahlo cardinal is weakly inaccessible, weakly ω-inaccessible, weakly hyper-inaccessible, weakly hyperhyper...hyper-inaccessible, and so on. The χ function collapses down onto some layer of inaccessibility (for instance, γ↦χ(M^{2}+γ) enumerates the weakly hyper-inaccessible cardinals).
But this is problematic. Even if we assume the existence of Mahlo cardinals (an assumption that is not undisputed), we would need a specific model of set theory that includes Mahlo cardinals. Alternatively, you could provide a proof that all properties of Mahlo (and all other) cardinals and ordinals that you use do not depend on the specific model of set theory you're working in. That sounds like a tall task to me, but maybe I'm missing something.
You are. Anything which is provable in some theory T (in our case ZFC + "there exists a weakly Mahlo cardinal") is true in all models of that theory. As far as whether the existence of Mahlo cardinals is consistent (which it is widely, although admittedly not universally, regarded to be): we could replace it with a recursively Mahlo ordinal, and replace regularity with admissibility, but in practise this would only make the proofs harder.

So you can prove in ZFC+Mahlo exactly which ordinal Chi(pi) is for some uncountable regular pi? This is not clear to me. Intuitively, a set may be an ordinal or cardinal in one model and not in another. In essence, as soon as you cross into uncountability and beyond, whatever sets you have may have wildly different properties in different models. Granted, the construction only uses countable processes, but since you include such gigantic cardinals, I'd really need something more than handwaving to be convinced of this.
emlightened wrote:
Desiato wrote:Some ordinals may or may not be countable in some models of set theory. It's not immediately obvious to me why the results of applications of Chi and Psi are always countable no matter what arguments you plug in in whatever model you want to work in.
The model itself, as long as it's consistent, is irrelevant; set theory is about what we can prove (which is all that matters here); model theory is about what's true or not (and, obviously, everything that's provable in the theory, assuming consistency, is true in all models).
If you can provide proofs in ZFC+Mahlo, certainly. But so far no proofs at all have been shown, so it's hard to tell whether they work in ZFC+Mahlo or not. Perhaps your post on computation will shed some light on this and alleviate my concerns regarding well-definedness. If you can show computability, I'll certainly stop arguing about things being well-defined! I'm really looking forward to this.
emlightened wrote:That no countable ordinal greater than Λ is generated is trivial
Makes sense.
emlightened wrote:
Desiato wrote: ... How can it be computed?
This is by far the most interesting question. I think I'll answer it in a separate post, as it will also be quite a long answer.
Again, looking forward to this! Ideally a beginning of a computation of your submitted value, but even Λ[1] (or rather, more likely, a beginning of this) would be most welcome as well.
Re: My number is bigger!
There's a rather interesting recent result regarding the interaction of ZFC and BB, which seems relevant to this thread (despite the computability issues, I still claim): for any n, ZFC cannot prove that BB(7918) <= n (note that this refers to the definition of BB, not the actual number, since ZFC trivially proves m+1 > m for any given natural number). If that's not fascinating, I don't know what is.
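For contrast, tiny values of BB are perfectly tractable. Here's a minimal Python sketch of the brute-force approach (my own naming and encoding, not anyone's actual code); the step cutoff is an assumption that stands in for a genuine non-halting proof, which is exactly the part that breaks down for large n:

```python
from itertools import product

HALT = -1

def busy_beaver(n, step_limit):
    """Brute-force the n-state, 2-symbol busy beaver: run every machine
    from a blank tape for at most step_limit steps and track the best
    halting machines. Returns (max_steps, max_ones).

    Only equals the true S(n)/Sigma(n) if step_limit exceeds S(n):
    the cutoff is a stand-in for a real proof of non-halting.
    """
    states = list(range(n))
    # A transition is (symbol_to_write, head_move, next_state); the
    # transition into HALT still writes and moves, and counts as a step.
    options = [(w, m, s) for w in (0, 1) for m in (-1, 1)
               for s in states + [HALT]]
    best_steps = best_ones = 0
    # One table entry per (state, read_symbol) pair, so 2*n entries.
    for table in product(options, repeat=2 * n):
        tape, pos, state, steps = {}, 0, 0, 0
        while state != HALT and steps < step_limit:
            w, m, state = table[2 * state + tape.get(pos, 0)]
            tape[pos] = w
            pos += m
            steps += 1
        if state == HALT:
            best_steps = max(best_steps, steps)
            best_ones = max(best_ones, sum(tape.values()))
    return best_steps, best_ones
```

With this halting-transition-counts convention, busy_beaver(2, 50) recovers the known values S(2)=6 and Σ(2)=4; already around n=5 the naive cutoff becomes the whole problem.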
Re: My number is bigger!
Desiato wrote:There's a rather interesting recent result regarding the interaction of ZFC and BB, which seems relevant to this thread (despite the computability issues, I still claim): for any n, ZFC cannot prove that BB(7918) <= n (note that this refers to the definition of BB, not the actual number, since ZFC trivially proves m+1 > m for any given natural number). If that's not fascinating, I don't know what is.
Note my recent comment there explaining how, without essentially changing their machine, the number of states can easily be reduced from 7918 to 7767 (= 7918 – 151) by simply merging 151 pairs of complementary “half-used states”. Hence, no explicit upper bound on BB(7767) is provable in ZFC.
Re: My number is bigger!
There are actually several improvements already; see this for a summary. If it turns out to be correct, they're down to 1919 states (for a machine directly looking for contradictions in ZFC, so no more reliance on large cardinals).
Re: My number is bigger!
What implications would that have for Eliezer's entry?
Re: My number is bigger!
Daggoth wrote:What implications would that have for Eliezer's entry?
As far as I can tell (and if I recall the nature of his entry correctly), none.
This result basically shows something that was already known: by its very nature, BB is more powerful than ZFC (or in fact any first-order theory), which is exactly this thread's theme.
The remarkable thing here is not that ZFC can't prove some n to be bigger than BB(m) for some m, but that this m can be shown to be remarkably small.
We can turn things around: assuming the results in the above links are correct, even being able to know something as "small" as BB(27) (IIRC) would resolve the Goldbach conjecture, a rather famous problem that current top mathematicians such as Terry Tao consider to be out of reach of mathematical research for many years to come. If we were able to know BB(1919), then we would actually be able to prove, in an absolute way, whether or not ZFC is consistent. That is something most mathematicians would consider absolutely impossible.
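To make the BB(27)-Goldbach link concrete, here's a hedged Python sketch of the search such a machine performs (all function names are mine; an actual 27-state Turing machine encodes the same loop far more compactly):

```python
def is_prime(k):
    """Trial-division primality test; fine for a small demo range."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def is_goldbach_sum(n):
    """True iff the even number n >= 4 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def goldbach_search(limit=None):
    """Scan even numbers, halting only on a Goldbach counterexample.
    The hypothetical 27-state machine does the same job on a tape: it
    halts iff the conjecture is false. So if we knew BB(27), we could
    run it that many steps and settle Goldbach either way.
    `limit` exists only so this sketch can terminate for testing."""
    n = 4
    while limit is None or n <= limit:
        if not is_goldbach_sum(n):
            return n  # counterexample found: conjecture refuted
        n += 2
    return None  # no counterexample up to limit
```

The same pattern with "search for a ZFC proof of 0=1" in place of the Goldbach check is what connects BB(1919) to Con(ZFC).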
And in fact, that really brings up a rather interesting question: "Which n is the smallest such that BB(n) cannot be known?" (where "known" is deliberately not restricted to "provable in some specific theory"). Clearly, that number exists, since we know BB(1), BB(2), BB(3), BB(4) and have some perspective on BB(5) and BB(6), yet the mentioned result makes it rather likely that n is below 1919.
Conversely, and perhaps more relevant to your question, Eliezer's entry refers to arbitrary Turing machines with n states. While I'm no expert, it's pretty easy to design tractable Turing machines that produce unimaginably large numbers given enough states. They would not reach BB(n), but their output would still grow extremely fast.
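The classic example of "tractable but absurdly fast-growing" is the Ackermann function: a few lines of code, hence a correspondingly small Turing machine, yet it eventually dominates every primitive recursive function (shown here as a Python sketch rather than an explicit machine):

```python
def ack(m, n):
    """Ackermann function:
       A(0, n) = n + 1
       A(m, 0) = A(m - 1, 1)
       A(m, n) = A(m - 1, A(m, n - 1))
    Total and computable, so any machine implementing it is fully
    understood -- unlike a busy beaver champion."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))
```

ack(3, 3) is already 61, ack(4, 2) = 2^65536 - 3 has 19729 decimal digits, and ack(4, 3) is beyond any physical notation; yet the function is perfectly computable, which is why such machines never catch BB(n).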
Re: My number is bigger!
"Which n is the smallest such that BB(n) cannot be known?" (where "known" is deliberately not restricted to "provable in some specific theory")
I don't believe such n exists.
Re: My number is bigger!
Okay, the numbers are CRAZY... Should I try to make a function to beat this? Is it even possible?
Re: My number is bigger!
Daggoth wrote:
"Which n is the smallest such that BB(n) cannot be known?" (where "known" is deliberately not restricted to "provable in some specific theory")
I don't believe such n exists.
I don't think the question is even meaningful. The phrase "can be known" is hardly a mathematically well-defined concept.
Username5243 wrote:Okay, the numbers are CRAZY... Should I try to make a function to beat this? Is it even possible?
If you are referring to an ordinal-based constructive function, then I don't think it is possible.
Re: My number is bigger!
PsiCubed wrote:
Daggoth wrote:"Which n is the smallest such that BB(n) cannot be known?" (where "known" is deliberately not restricted to "provable in some specific theory")
I don't believe such n exists.
I don't think the question is even meaningful. The phrase "can be known" is hardly a mathematically well-defined concept.
Do you find the argument that "a proof of the consistency of ZFC based on elementary Turing machine mechanics is impossible" likely to be incorrect?
If you do, okay, but then I would like to know why. I suppose you could argue that the complexity of reasoning required to find some intermediate BB(n) might be even harder than understanding ZFC deeply enough to prove its consistency without making even stronger assumptions, but even that would be rather stunning to me, given the strictly finitary nature of reasoning about BBs.
But if you buy the argument, that pretty much proves the existence of such an n: there are small numbers for which BB(n) is known, and a known number, 1919, for which BB(1919) cannot be known; hence, out of the finite remaining set, either BB(n) can be figured out or not, and thus there must be a smallest such number. Sure, "can be known" might not be well-defined, yet that doesn't mean the statement wouldn't be true for each specific interpretation.