For the trolls in this thread, as well as the genuinely confused.

Is 1 = 0.999...?

Short answer: Yes.

Longer answer: To fully understand this question, you need to start with the rational numbers, which are fractions of two whole numbers. Any fraction of two whole numbers (with nonzero denominator) is a rational number, such as 1/1, 1/2, 3/2, or 2/4. It is possible, however, for two rational numbers to be the same; for example, 1/2 and 2/4 represent the same number, usually written as 1/2. To get a unique representation, we have to pass to what are called "lowest terms", where the numerator and denominator have no common factor other than 1.
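To see this concretely, reducing a fraction so the numerator and denominator share no common factor just means dividing both by their greatest common divisor; Python's standard library does this automatically (a small sketch, not part of the argument above):

```python
from fractions import Fraction
from math import gcd

# Two different fraction representations of the same rational number.
a = Fraction(1, 2)
b = Fraction(2, 4)  # automatically reduced to lowest terms
print(a == b)                       # True
print(b.numerator, b.denominator)   # 1 2

# Reducing by hand: divide numerator and denominator by their gcd.
n, d = 2, 4
g = gcd(n, d)
print(n // g, d // g)               # 1 2
```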

Using rational numbers, we can define what we mean by finite decimal expansions, which always represent rational numbers. The notation 17.243 is just a convenient shorthand for the number 1*10+7+2/10+4/100+3/1000. So .9=9/10, .99=9/10+9/100=99/100, .999=999/1000, and so on. That means .999...9 will be less than one, for any finite number of 9s.
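The finite expansions above can be computed exactly with rational arithmetic; this quick sketch checks that .999...9 with any finite number of 9s is strictly less than 1:

```python
from fractions import Fraction

def nines(n):
    """Sum 9/10 + 9/100 + ... + 9/10^n, i.e. 0.999...9 with n nines."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

print(nines(1))   # 9/10
print(nines(3))   # 999/1000
print(all(nines(n) < 1 for n in range(1, 20)))   # True
```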

What then do we mean by an infinite decimal expansion? To understand that, we have to use what are called limits and sequences. A sequence is just what it sounds like: a list with a first item, a second item, a third item, and so on. More formally, a sequence is a function whose domain is the natural numbers (aka the counting numbers: 1, 2, 3, and so on). 10, 20, 30, 40, ... is a sequence, as is 0, 0, 0, 0, ..., as is 1, 1/2, 1/3, 1/4, .... Importantly, a sequence is an infinite list, so it has a first item, but no last item.

Even though a sequence has no last item, it may have a "limit": a number that the elements of the sequence are approaching, whether or not they actually get there. We say a sequence a_1, a_2, a_3, ... of numbers converges to a number a if for any small positive distance ε, the terms of the sequence are eventually all within ε of a (that is to say, all but the first N terms are within ε of a, for some number N, possibly very large). If a sequence converges to a, we say that a is the limit of the sequence.
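In standard notation, the convergence definition just stated reads (this is only a symbolic restatement of the prose above, nothing new):

```latex
% Definition of convergence (epsilon-N):
\[
  \lim_{n \to \infty} a_n = a
  \iff
  \forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\;
  \forall n > N : \; |a_n - a| < \varepsilon .
\]
```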

So for the examples above, the sequence 10, 20, 30, ... doesn't converge to anything, because if ε is small, say 1, then the nth term (10n) and the (n+1)st term (10n+10) can't both be within ε of the same number. The sequence 0, 0, 0, ... converges to 0, since for any ε>0, all of the terms of the sequence are within ε of 0 (they are all equal to 0). The sequence 1, 1/2, 1/3, ... also converges to 0, although this one is trickier: if ε>0, then there is a whole number N>1/ε, which means 1/N<ε. So after the first N terms of the sequence, all later terms are within ε of 0, since they are all smaller than 1/N, which is less than ε. So this sequence converges to 0, even though it never actually gets there.

Finally, you can show that if a sequence converges to a, it can't also converge to a different number b. This is because if ε<|a-b|/2, a number can't simultaneously be within ε of a and within ε of b. So if a sequence has a limit, that limit is unique.
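The 1, 1/2, 1/3, ... example can be checked numerically; this sketch finds, for a few choices of ε, a whole number N past which every term is within ε of 0 (using exactly the N>1/ε trick from the argument above):

```python
import math

def terms_within_epsilon(eps):
    """Return a whole number N > 1/eps, so that 1/n < eps for every n > N."""
    N = math.floor(1 / eps) + 1
    # Spot-check a stretch of later terms: all within eps of 0.
    assert all(1 / n < eps for n in range(N + 1, N + 1000))
    return N

for eps in (0.5, 0.01, 0.0001):
    print(eps, terms_within_epsilon(eps))
```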

We are finally prepared to show that .999...=1, because we are finally prepared to define what we mean by an infinite decimal expansion, like .999.... The number represented by an infinite decimal expansion b.a_1a_2a_3... is simply the limit of the finite expansions. In other words, b.a_1a_2a_3... is, by definition, the limit of the sequence b.a_1, b.a_1a_2, b.a_1a_2a_3, .... (Yes, that is really how decimal notation is defined; consult any textbook which actually has a full definition of decimal notation, including infinite decimal expansions.*) So, in particular, .999... is, by definition, the limit of the sequence .9, .99, .999, .... And just as the sequence 1, 1/2, 1/3, ... converges to 0, this sequence converges to 1. This is because the distance between the nth term of this sequence and 1 is 1/10^n, and given any ε>0, this distance is eventually less than ε.
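That distance can be verified with exact rational arithmetic; this sketch checks that 1 minus the nth term of the sequence .9, .99, .999, ... is exactly 1/10^n, which shrinks below any ε:

```python
from fractions import Fraction

def nth_term(n):
    """0.999...9 with n nines, as an exact fraction."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 2, 5, 10):
    gap = 1 - nth_term(n)
    print(n, gap)                     # gap is exactly 1/10^n
    assert gap == Fraction(1, 10**n)
```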

Just as is the case with 1/2 and 2/4 in the rational numbers, 1 and .999... are two different representations of the same number by decimal expansions. To get unique representations, you would have to add a rule that decimal expansions can't end with an infinite sequence of 9s, just as, to get unique representations using fractions, you have to add a rule that the numerator and denominator can't have a common factor other than 1.
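The same thing happens for every terminating decimal, not just 1; for instance 0.25 = 0.24999.... As an illustrative check (0.25 = 1/4 is my choice of example here), the truncations 0.249, 0.2499, ... approach 1/4 with a gap of exactly 1/10^(n+2):

```python
from fractions import Fraction

def truncated(n):
    """0.2499...9 with n nines after the 4: a finite truncation of 0.24999..."""
    return Fraction(24, 100) + sum(Fraction(9, 10**k) for k in range(3, n + 3))

for n in (1, 3, 6):
    gap = Fraction(1, 4) - truncated(n)
    print(n, gap)
    assert gap == Fraction(1, 10**(n + 2))
```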

*You may find such a book surprisingly difficult to find, but most college calculus textbooks will probably suffice; Spivak's Calculus is one example.