Understanding Fast-Growing Functions


Understanding Fast-Growing Functions

Postby Carlington » Sat Jan 11, 2014 3:05 pm UTC

I've recently started looking at what's been dubbed "googology" - the pursuit of extremely large numbers. It seems to be quite an interesting area, with the internet making it easy to collaborate and share work, drawing lots and lots of people in. I'm fairly new at this, and don't really understand some of the subtler concepts as you get higher and higher up the chain, but I have a firm grasp of the basic ideas that seem to underpin most of the field.

The largest computable number (or one of the largest, anyway) listed on the Googology Wiki (reputable sources, woo!) is Loader's Number. This is the output of loader.c, a program whose output is defined as D^5(99), where D(k) is the sum of all bit strings described by the first k expressions of the calculus of constructions (encoding everything as binary numbers). What confuses me is why they've stopped there. Is it just an arbitrary stopping point, because people would rather focus on trying to prove that certain smaller numbers are bigger or smaller than other smaller numbers? Or is there some sort of limit being approached, in terms of computability? Why, for example, haven't we gone to D^6(99)? Or D^5(100), or D^{D(99)}(99)? What if I were to define a function E(k) = D^M(k), where M = D^{D(k)}(k) - that is, we iterate D on k, D(k) times over, and in turn iterate that whole process D(k) times? At what point does this become uncomputable, and how do we know that?
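To pin down what I mean, here's a toy sketch in Python, with a tiny stand-in for D (the real D from loader.c takes pages to define and is astronomically faster-growing):

```python
# Toy sketch of the iteration I have in mind. D here is a stand-in only;
# the real D from loader.c is astronomically faster-growing.

def iterate(func, times, x):
    # apply func to x, `times` times in a row
    for _ in range(times):
        x = func(x)
    return x

def D(k):
    # stand-in only! (grows linearly, not like the real D)
    return 2 * k + 1

def E(k):
    # M = D iterated D(k) times on k; then E(k) = D iterated M times on k
    m = iterate(D, D(k), k)
    return iterate(D, m, k)
```

Even with the real D plugged in, E would still be a perfectly well-defined computable function - it's just a finite tower of iterations.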

Re: Understanding Fast-Growing Functions

Postby Moole » Sat Jan 11, 2014 4:14 pm UTC

It never becomes uncomputable. The problem is that, whenever we define a computable function, we can always define another computable one that grows much, much faster - which is exactly what you're doing. Worse, given any sequence of computable functions, there is always a computable function growing faster than all of them: if f_n is the sequence, with each function growing faster than the last, then F(n) = f_n(n) grows faster than every f_n. This has the nice implication that, starting from any initial sequence of computable functions, you can define an increasing sequence of computable functions indexed by every countable ordinal. Which makes for an annoying problem: you cannot write down a largest computable function, nor can you write down a sequence containing a computable function larger than every other (yet, simultaneously, I think it is probably easy to show that, for any given computable function, some ordinal level of such a family grows larger than it).
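To make the diagonalization concrete, here's a toy sketch in Python. The family f_k(x) = x^(k+1) is just an illustrative stand-in; any increasing sequence of computable functions works the same way:

```python
# Diagonalizing over a sequence of computable functions.
# Given f_0, f_1, f_2, ..., each growing faster than the last,
# the diagonal F(n) = f_n(n) eventually outgrows every single f_k.

def f(k, x):
    # illustrative family: f_k(x) = x^(k+1), each faster than the previous
    return x ** (k + 1)

def F(n):
    # the diagonal function
    return f(n, n)

# For any fixed k, F(n) exceeds f_k(n) once n > k:
# F(n) = n^(n+1) > n^(k+1) = f_k(n) whenever n > k (and n >= 2).
```

And of course F is itself computable, so you can start a new sequence with F at the bottom and diagonalize again - that's why there's no top.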

I suppose you could define a hierarchy of functions as follows, indexed by an ordinal k:

f_0(x) = x + 1
f_{k+1}(x) = f_k(x) + 1
And for a limit ordinal k, choose an increasing function O_k such that, for any j < k, there exists an n with O_k(n) > j, and set f_k(x) = f_{O_k(x)}(x).

This has the following first few terms. For finite n:
f_n(x) ~ x + n
Then, regardless of the choice of O_ω, it must hold that (where ~ means "at least this, possibly minus lower-order terms"):
f_ω(x) ~ 2x
f_{ω+a}(x) ~ 2x + a
Though my definition might not be strong enough, since for f_{ω²}(x) we basically only get that the function's slope goes to infinity, and maybe not very quickly. I don't know if that will cause issues... Maybe someone can prove that, for any computable function g, there exists an ordinal k such that f_k(x) > g(x) for all large enough x, or come up with a better definition.
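For comparison, the standard fast-growing hierarchy uses a much stronger successor rule - f_{k+1}(x) is f_k iterated x times on x - which already reaches iterated exponentials by level 3. A minimal Python sketch of the first few levels (function names are mine, not from any library):

```python
# First levels of the standard fast-growing hierarchy:
#   f_0(x)     = x + 1
#   f_{k+1}(x) = f_k applied x times to x
#   f_ω(x)     = f_x(x)   (diagonal / limit step)

def f(k, x):
    if k == 0:
        return x + 1
    result = x
    for _ in range(x):          # iterate f_{k-1}, x times
        result = f(k - 1, result)
    return result

def f_omega(x):
    # the limit level diagonalizes over all the finite levels
    return f(x, x)

# f(1, x) = 2x, f(2, x) = x * 2^x, and f(3, x) is already an
# exponential tower of height roughly x; f_omega outgrows every f(k, -).
```

Don't try f_omega(4) - the recursion is honest, so the numbers get out of hand immediately.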

Re: Understanding Fast-Growing Functions

Postby Derek » Mon Jan 13, 2014 8:11 am UTC

I'm guessing the reason loader.c stops at D^5(99) is that the contest it was written for had a 512-character limit. After writing the main body of the code, the author probably reckoned that 5 nested D's with a parameter of 99 (the largest two-character parameter) gave the largest number that would fit. Obviously you can always define faster-growing functions, but with a hard character limit, defining new functions eventually costs more characters than it's worth, and at some point you just have to call something.


Re: Understanding Fast-Growing Functions

Postby lightvector » Tue Jan 14, 2014 5:56 am UTC

Carlington wrote: Why, for example, haven't we gone to D^6(99)? Or D^5(100), or D^{D(99)}(99)? What if I were to define a function E(k) = D^M(k), where M = D^{D(k)}(k) - that is, we iterate D on k, D(k) times over, and in turn iterate that whole process D(k) times?

One thing to note when dealing with things like loader.c is that the numbers produced are already so large that doing anything like you suggested is completely and utterly insignificant compared to what is already being done to come up with the number in the first place. Sort of analogous to taking 10^10000 and adding 1, or multiplying 9^9^9^9^9^9^9^9 by a billion, or replacing Graham's number with 2^(Graham's number). In each case, technically the change does make the number larger, but the degree of increase is in some sense so tiny compared to the process that generated the number in the first place that you might as well not have bothered.

There is one thing to watch out for in these sorts of integer functions, which is that sometimes due to how they are defined, they might produce only very small outputs for the first few small numbers you plug in, despite being extremely fast-growing asymptotically. Plugging in a moderately large starting integer and explicitly recursing a few times can be a simple way of making it more likely that you're in the range of the function where the asymptotic behavior becomes the dominant factor. Here, it's probably just a way of making sure that this is the case while also, as Derek mentioned, burning a few extra characters.
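The classic example of this is the Ackermann function: the first few values look totally tame, and then it explodes. A quick Python sketch:

```python
from functools import lru_cache

# The Ackermann-Peter function: harmless-looking outputs for small
# inputs, yet it eventually outgrows every primitive recursive function.

@lru_cache(maxsize=None)
def ack(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# ack(1, n) = n + 2 and ack(2, n) = 2n + 3 look boring,
# ack(3, n) = 2^(n+3) - 3 is merely exponential,
# but ack(4, 2) = 2^65536 - 3 already has 19,729 decimal digits.
```

(Computing ack(4, 1) naively already blows Python's default recursion limit, which is itself a decent illustration of the point.)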


Re: Understanding Fast-Growing Functions

Postby mike-l » Tue Jan 14, 2014 5:51 pm UTC

To add to this: generally the improvement from one function to the next is so great that all of these basic increments are completely dwarfed by just increasing the argument of the next function up by 1.

Re: Understanding Fast-Growing Functions

Postby Daggoth » Sat Jan 18, 2014 10:08 am UTC

They stop because most people find it uncomfortable to keep going after a certain amount of largeness.
Some people, like J. Bowers, just go on and on for entire chapters, creating silly and impossible-to-follow chains of recursive definitions.


Re: Understanding Fast-Growing Functions

Postby Voekoevaka » Sat Jan 18, 2014 9:43 pm UTC

One amazing fact about numbers is that they never stop. We build the Ackermann function by recursion on the Ackermann function of the previous order (where the function at order 0 is just incrementation), and you can always go higher, doing recursion on the recursion itself. Maybe the human brain will never be able to imagine the size of numbers like Graham's number, built with 64 levels of recursion of an Ackermann-like function. Maybe that is the challenge: to imagine them.
