Once again, I read through the entire thread.

I think the thing I'm most sad about is that M never managed to get formalized. The idea of automatically generating bigger and bigger ordinals in an explosive way and then iterating and diagonalizing the process just sounds so cool. I guess it's too big to easily grasp, though, huh?

As far as big computable numbers/functions (that can be proven to be well-defined) go, it seems hard to beat Buchholz hydras (http://googology.wikia.com/wiki/Buchholz_hydra), finite promise games (http://googology.wikia.com/wiki/Finite_promise_games) and greedy clique sequences (http://googology.wikia.com/wiki/Greedy_clique_sequence) - basically, anything that would be as large as or larger than f_(Takeuti–Feferman–Buchholz ordinal)(x) in the fast-growing hierarchy (http://googology.wikia.com/wiki/Fast-growing_hierarchy, https://en.wikipedia.org/wiki/Large_num ... _sequences, https://en.wikipedia.org/wiki/Ordinal_c ... rd_ordinal). (EDIT: Actually, I guess the fastest-growing computable function is S(n), defined at the end of Beyond Nested Arrays V, part of Chris Bird's Super Huge Numbers/Bird Array Notation, since it was designed explicitly to build on the strengths of the previous functions and outstrip them at their own game. Nice work! This series also defines various extended Veblen and ordinal collapsing functions, while covering every step up to them and beyond, which is useful for keeping track of these immense numbers and ordinals. http://www.mrob.com/users/chrisb/index.html )
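
For reference, the fast-growing hierarchy used above assigns a function f_α to every countable ordinal α (given a fundamental sequence λ[n] for each limit ordinal λ); this is just the standard definition, spelled out:

```latex
f_0(n) = n + 1
f_{\alpha+1}(n) = \underbrace{f_\alpha(f_\alpha(\cdots f_\alpha}_{n \text{ times}}(n)\cdots))
f_\lambda(n) = f_{\lambda[n]}(n) \qquad \text{for limit ordinals } \lambda
```

Already f_2(n) is roughly n·2^n and f_3(n) is tower-of-exponentials territory, which is why indexing by an ordinal as large as the Takeuti–Feferman–Buchholz ordinal produces such absurd growth.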

So I guess the goal of finding a bigger computable number/function would be to find a bigger, computable, countable ordinal, either by constructing it from below or relating it to a proof somehow, which is well beyond my current understanding of large numbers/ordinals.

I don't find uncomputable numbers/functions nearly as interesting, since it starts to not even be clear whether they have a well-defined result, and they feel untethered from the base case of starting out with 1, successorship, addition, multiplication, etc. But as far as those go, the winner seems to be either Little Bigeddon or Sasquatch, or possibly Oblivion/Utter Oblivion (from http://googology.wikia.com/wiki/Largest ... oogologism ).

---

I thought about this thread (and extremely large numbers in general) again recently when I made a JavaScript library, break_infinity.js (https://github.com/Patashu/break_infinity.js), which is intended to represent numbers as big as 1e(9e15), favouring performance over accuracy. It was based on https://github.com/MikeMcl/decimal.js/ with additional code from SpeedCrunch (https://bitbucket.org/heldercorreia/spe ... ?at=master ) and, of course, our friend StackOverflow, but the need for total precision (and in fact the ability to represent arbitrary precision at all - the mantissa is now just a JavaScript Number) is dropped so that the code can run faster. I made this library to help out an incremental game, Antimatter Dimensions (https://www.kongregate.com/games/Hevipe ... dimensions ), which as of the latest update lets you reach numbers as big as 1e800,000. The game was finding decimal.js slow and CPU-hungry, and swapping in break_infinity.js cut time spent in scripts by a factor of 4.5. Not bad! The same performance improvement could be applied to any other incremental game that currently uses decimal.js, like Swarm Simulator and True Exponential.
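
To make the trade-off concrete, here is a minimal sketch of the mantissa/exponent idea (not break_infinity.js's actual API - class and method names here are made up): a value is mantissa × 10^exponent with both fields plain JavaScript Numbers, so precision is capped but every operation is a couple of cheap float ops instead of arbitrary-precision work.

```javascript
// Minimal sketch of a break_infinity.js-style number: value = m * 10^e,
// where m and e are both ordinary JavaScript Numbers (doubles).
class Dec {
  constructor(mantissa, exponent) {
    this.m = mantissa; // kept normalized to 1 <= |m| < 10 (or 0)
    this.e = exponent; // integer power of ten
    this.normalize();
  }
  normalize() {
    if (this.m === 0) { this.e = 0; return; }
    const shift = Math.floor(Math.log10(Math.abs(this.m)));
    this.m /= Math.pow(10, shift);
    this.e += shift;
  }
  mul(other) {
    // Multiplication is one float multiply and one float add - this is
    // where the speed over decimal.js comes from.
    return new Dec(this.m * other.m, this.e + other.e);
  }
  toString() {
    return `${this.m}e${this.e}`;
  }
}

const a = new Dec(5, 100); // 5e100
const b = new Dec(4, 200); // 4e200
console.log(a.mul(b).toString()); // "2e301" (20e300 renormalized)
```

The mantissa only carries double precision, so results drift after many operations - exactly the "performance over accuracy" choice described above.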

So why does it stop at 1e(9e15)? Because the exponent is stored as a JavaScript Number - that is, a double-precision floating-point value - and doubles cannot represent every integer once you get bigger than 2^53-1, or a bit more than 9e15. You could try to go further, but as soon as production slows down enough that you're gaining less than e1 per tick, you'll run into mathematical glitches, gaining either e2 or e0 instead, neither of which is ideal. If that weren't a concern, we could get as large as 1e(1.8e308) before the exponent becomes Infinity/NaN and the system breaks.

We can go further by replacing the exponent with a big-integer library. I used https://github.com/Yaffle/BigInteger/bl ... Integer.js and made break_break_infinity.js, which supports up to 1e(1.8e308). Why not further? Because some intermediate calculations, such as those in log, exp and pow, cast the exponent to a JavaScript Number, and if you get any larger those operations overflow. Not to mention, these operations would need to start accepting (and outputting) Decimal instead of Number (https://github.com/Patashu/break_infinity.js/issues/14). If you fixed these issues in a compelling manner, the largest number you could represent would be one with a GB's worth of digits in its exponent - after that it won't even fit in RAM anymore - so approximately 1e(1e(1e9)). You'd probably run into trouble long before then, though: you'd have many numbers of that size to store, not just one, and I imagine the big-integer operations get slower and slower as the operands become this titanic.
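
The big-integer exponent and the casting problem can both be sketched with modern native BigInt (the library above was used at the time; BigInt is assumed here just to keep the example self-contained):

```javascript
// A BigInt exponent is exact at any size - no e0/e2 glitch...
let exponent = 2n ** 60n; // 1152921504606846976, already past 2^53
exponent += 1n;           // BigInt addition never rounds
console.log(exponent % 10n === 7n); // true: the +1 really happened

// ...but the moment an intermediate calculation (log, exp, pow) needs a
// plain Number, we're back under the double ceiling: the cast overflows.
const tooBig = 10n ** 400n;  // an exponent with 401 digits
console.log(Number(tooBig)); // Infinity - the overflow described above
```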

How do we get even further? At some point we need to make a conscious decision that the exact number of digits in the exponent no longer matters, and rewrite with new assumptions. Another person on the Antimatter Dimensions Discord, SpectralFlame, is making a HugeNumber library for Java in their spare time (https://github.com/cyip92/HugeNumber/bl ... umber.java), and testing it out by making a stripped-down Antimatter Dimensions clone with buttons like 'square all dimension multipliers'. I haven't seen the code for HugeNumber, but from the explanations it sounds like a HugeNumber's exponent can recursively also be a HugeNumber, so you can represent very large numbers - basically arbitrarily big power towers - by stacking HugeNumbers inside of HugeNumbers until you get something like Ae(Be(Ce(De(EeF)))). In addition, once the tower gets unwieldy enough, it keeps track of only the top exponent and the tower size, which in theory lets it handle numbers up to 10^^1000000. Maybe I should ask for it to get open-sourced; it sounds like a really cool way to do it. Perhaps inspiration for break_break_break_infinity.js?

Obviously, numbers that aren't even as big as 3^^^3 were surpassed in the first few pages of this thread, but it's interesting to work at the intersection of huge numbers and programmatic representations. When you're working with numbers that can get as big as x^^y, what kind of game can you write? What kinds of operations does your library optimize for? What kinds of precision requirements do you have, and what kinds of precision guarantees can even be made? How do you print these numbers and present them to the player in a way that lets them intuitively grasp their relative magnitudes? How do you compare them to find out whether they're equal, nearly equal, bigger or smaller? And so on.
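
For the comparison question, one common answer for a mantissa/exponent pair goes like this (hypothetical helpers, not from any of the libraries mentioned): compare exponents first, since they dominate, and define "nearly equal" as a relative tolerance on the mantissa.

```javascript
// Compare two normalized values m1*10^e1 and m2*10^e2 (assumes positive
// values with 1 <= m < 10). Returns -1, 0, or 1.
function compare(m1, e1, m2, e2) {
  if (e1 !== e2) return e1 < e2 ? -1 : 1; // exponents dominate
  if (m1 === m2) return 0;
  return m1 < m2 ? -1 : 1;
}

// "Nearly equal": same exponent, mantissas within a relative tolerance.
// (Ignores the boundary case where 9.999...e(n) is adjacent to 1e(n+1).)
function nearlyEqual(m1, e1, m2, e2, tolerance = 1e-9) {
  return e1 === e2 && Math.abs(m1 - m2) <= tolerance * Math.max(m1, m2);
}

console.log(compare(5, 100, 2, 300));           // -1: 5e100 < 2e300
console.log(nearlyEqual(3, 50, 3 + 1e-12, 50)); // true
```

Exact equality of floats is rarely what a game wants after thousands of ticks of accumulated rounding, which is why the tolerance version earns its keep.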