Suppose we select a real number at random.
Whatever number we pick, the probability of selecting it was 0.
How can an event with probability 0 happen?
Probability Paradox

Re: Probability Paradox
The question is how can you exhibit a real number "at random" mathematically?
Re: Probability Paradox
OK, select a number uniformly at random from [0,1]. That is well-defined.
The point is that the definition of "probability zero" does not imply that the event can't happen. They just aren't the same thing.
(Of course, you could insist on a definition of "probability zero" that is synonymous with "can't happen," but then you would no longer be able to define the uniform distribution on [0,1].)
Re: Probability Paradox
It may be informative to talk a little bit about what exactly a probability is. When we ask how likely something is, we are implicitly comparing how likely whatever that something is, to a whole family of possibilities.
When you flip a coin, two things can happen: it lands on heads or tails. If I ask "What's the probability I get heads or tails?", then in this simple situation the answer is clearly that this happens every time, so by convention we denote this as 1.
Why 1? The cheap answer is because it's useful to assume this.
I can come up with much more complicated discrete random systems; it isn't hard. Take n-sided dice, for example. For a "fair" n-sided die, I can intuitively guess that the probability of any one face is 1/n. If I decide to add up the probabilities of getting each of the n faces, I get one again: n · (1/n) = 1.
The idea is that if I add up the probabilities of all the most elementary possibilities, then in the discrete world they should add up to one.
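That bookkeeping can be checked exactly with rational arithmetic. A minimal sketch in Python (`fair_die_distribution` is just an illustrative name, not anything standard):

```python
from fractions import Fraction

def fair_die_distribution(n):
    """Assign each of the n faces of a fair die the same probability 1/n."""
    return [Fraction(1, n) for _ in range(n)]

# The elementary probabilities of a fair n-sided die sum to exactly 1,
# with no floating-point fudging, for any n we try.
for n in (2, 6, 20):
    faces = fair_die_distribution(n)
    assert sum(faces) == 1
    print(n, faces[0], sum(faces))
```

Using `Fraction` rather than floats keeps the sum exactly 1, which is the point: in the discrete world the elementary probabilities genuinely add up, they don't just approximately add up.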
Now, there is an important difference in the land of the real numbers. I'm not well versed in measure theory, so I can't hit the nail on the head exactly.
But if you attempt to spread probability uniformly across some interval, you need to be careful about how you define what happens. Suppose I split the interval [0,1] into two halves, [0,1/2] and [1/2,1], and state that the probability is uniform across [0,1]. Intuitively, each half of the interval is half as likely. When we break the interval into discrete chunks, we can start playing with the probabilities: halve each of these intervals and we have four intervals with a quarter probability each; halve them all again and we get 1/8 each; halve again, 1/16; and so on. We can make the probability of an interval arbitrarily small by choosing an arbitrarily large number of partitions.
The real crux of the matter is that if you try to tie a fixed positive probability uniformly to infinitely many numbers, the total doesn't come out to 1. That's the odd puzzle behind what's going on. Now, if an interval "doesn't happen" and thus has probability zero, then every value in that interval must also have probability zero. So if something can't happen then its probability is zero, but that doesn't mean a number with probability zero will never pop up in a random number generator. Another odd thought to chew on: we're never going to see every number that can appear from this random process, yet every number is equally likely.
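A small simulation makes both halves of this concrete. This is only a sketch, and the draw count and target value are arbitrary choices: halving the partition shrinks each piece's probability toward zero while the pieces still sum to 1, and a pseudorandom draw from [0,1] lands in the lower half about half the time but essentially never equals a value fixed in advance.

```python
import random

random.seed(0)  # reproducible run

# Halving the partition k times gives 2**k intervals of probability 2**-k
# each; every piece shrinks toward 0, yet the pieces always sum to 1.
for k in range(1, 6):
    pieces = 2 ** k
    p = 1.0 / pieces
    assert abs(pieces * p - 1.0) < 1e-12

# Empirically, a draw lands in [0, 1/2] about half the time...
draws = [random.random() for _ in range(100_000)]
frac_lower_half = sum(x <= 0.5 for x in draws) / len(draws)

# ...but in 100,000 draws we never hit a value fixed in advance.
target = 0.123456789
hits = sum(x == target for x in draws)
print(frac_lower_half, hits)
```

Of course `random.random()` only samples the finitely many floats in [0,1), so this is an analogy for the continuous situation rather than the thing itself; even so, the chance of hitting the pre-chosen target is negligible while the interval [0, 1/2] is hit constantly.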
Re: Probability Paradox
An analogy I've used before:
Consider a straight or curved line in the plane. Its area is zero. That doesn't mean it doesn't exist at all; it just means it makes no quantifiable contribution to area.
Or, pick a random point on the Trans-Canada Highway. What's the probability that your point lies on this half-a-centimeter-wide pebble just outside Brandon, Manitoba? Very, very tiny indeed. What's the probability that your point coincides with another random point on the highway that I chose? The only numerical value that makes sense is zero.
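For a rough sense of scale, here is the pebble arithmetic spelled out. Both figures are assumptions for illustration: call the highway about 7,800 km long and the pebble 0.5 cm wide.

```python
# Assumed figures: highway length ~7,800 km, pebble width 0.5 cm.
highway_cm = 7_800 * 1000 * 100   # km -> m -> cm
pebble_cm = 0.5

# Uniform point on the highway: probability of landing on the pebble
# is just (pebble width) / (highway length).
p_pebble = pebble_cm / highway_cm
print(p_pebble)
```

That comes out under one in a billion, which is "very, very tiny" but still positive, because a pebble has nonzero length. A single point has zero length, so the same reasoning assigns it probability exactly zero.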

Re: Probability Paradox
++$_ wrote:OK, select a number uniformly at random from [0,1]. That is well-defined.
The point is that the definition of "probability zero" does not imply that the event can't happen. They just aren't the same thing.
(Of course, you could insist on a definition of "probability zero" that is synonymous with "can't happen," but then you would no longer be able to define the uniform distribution on [0,1].)
How is selecting a number uniformly at random from [0,1] well defined? Can you describe such a thing in the language of set theory? If we could select an element from any nonempty set ("at random" or not), this would imply the axiom of choice, and you can't use the axiom of choice to describe selecting at random. My point is that in the continuous case, probability theory is only a description of our intuitions about probability; after all, a probability measure is just a measure and does not select anything for you!
Re: Probability Paradox
The uniform probability density on [0,1] is a pretty standard thing in mathematics. But I suppose you're correct that there's something a little fishy about randomly "selecting" one real number from that interval, as though it's an action to be performed.
This is drifting from math into philosophy, so there will be different interpretations. However, I don't think it's very accurate to describe a single random selection as involving the axiom of choice. The axiom of choice is more about simultaneously making a bunch of arbitrary selections.
But I agree that although it's unproblematic to have a density function on [0,1] that integrates to 1, there is something a little "weird" about the intuitive notion of making a random or arbitrary selection of a real number from an interval.
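The unproblematic part is easy to write down: for X uniform on [0,1], the probability of an interval is just its length, so the whole interval gets probability 1 while any single point gets 0. A minimal sketch (`uniform_prob` is an illustrative helper, not a library function):

```python
def uniform_prob(a, b):
    """P(a <= X <= b) for X uniform on [0, 1]: the length of the overlap
    of [a, b] with [0, 1]."""
    lo, hi = max(a, 0.0), min(b, 1.0)
    return max(hi - lo, 0.0)

# The whole interval has probability 1...
print(uniform_prob(0.0, 1.0))   # 1.0
# ...each half has probability 1/2...
print(uniform_prob(0.0, 0.5))   # 0.5
# ...and any single point has probability 0, even though every point
# in [0, 1] is a perfectly possible outcome.
print(uniform_prob(0.3, 0.3))   # 0.0
```

Everything here is a statement about the measure, not about any act of "selecting" a number, which is exactly the distinction being drawn in this post.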
Re: Probability Paradox
Perhaps part of the problem with accepting a uniform random variable X on [0,1] (for example) is that this interval contains numbers we could never describe, since it is an uncountable set, and every way we have ever come up with for describing mathematical objects is countable in nature (we use only finite alphabets, descriptions are assumed to be finite even when they describe infinite things, and so on).
It's one of the many quirks of mathematics that "we know sets before we know their elements". It's unintuitive at first, but doesn't lead to any logical contradictions. It's perfectly possible to describe the perfectly uniform random variable X well enough to perform several relevant, exact calculations. It just happens that one thing we can't describe is a specific number that it chooses. But such a description is not necessary for any applications I can think of. The specific number can always be determined with enough precision to perform any necessary calculation.
If any of this is unintuitive, or at least uncomfortable, know that most of it comes from the accepted notion in standard axiomatic mathematical theory that "there exists a Z" does not necessarily imply that "we can describe a Z". For an alternate view, see constructive mathematics.
 ImTestingSleeping
Re: Probability Paradox
I would be interested in what the actual probability distribution is if you asked someone to pick any real number "at random". It would probably look very strange!
Re: Probability Paradox
ImTestingSleeping wrote:I would be interested in what the actual probability distribution is if you asked someone to pick any real number "at random". Would probably look very strange!
Something in a similar spirit, which wouldn't be that difficult to try:
For each of the 10000 expressions 0.0001 to 0.9999 (or each of the 1000000 expressions 0.000001 to 0.999999, or whatever), note the number of Google hits, and keep track of what proportion that is of the whole. Graph the results.
(This isn't the same as asking humans to choose "random" numbers, and I imagine a lot of the Google hits would be for numbers that occur "naturally" in physical problems, but nonetheless it might be interesting to try.)
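As a sketch of the counting side of that experiment (the helper name and toy corpus are hypothetical; the hit counts would really come from a search engine or a large text dump):

```python
import re
from collections import Counter

def decimal_string_frequencies(text, digits=4):
    """Tally occurrences of each literal string 0.0001 .. 0.9999
    (for digits=4) in `text`, returned as proportions of the total."""
    pattern = re.compile(r"0\.\d{%d}(?!\d)" % digits)
    counts = Counter(pattern.findall(text))
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()} if total else {}

# Tiny stand-in corpus; a real run would use web-scale text.
corpus = "pi is about 3.1416, but 0.5000 and 0.2500 show up a lot; 0.5000 again."
print(decimal_string_frequencies(corpus))
```

Graphing the proportions from a large corpus would give the "how do numbers actually occur in the wild" distribution the post describes, round values like 0.5000 presumably towering over the rest.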