Probability paradox?
Suppose you have a random number generator which produces random integers. You run the RNG once and get a random integer. Then you run it again. What is the probability that you get the same integer the second time that you did the first time?
It seems that the probability would be "one in infinity", since there is one integer which is the same out of an infinite number of possible integers. But isn't that probability equal to zero? And if it's zero, that means the RNG could NEVER produce the same integer twice in a row, implying it has a memory!
But the probability can't be some finite fraction greater than zero either; that would imply a bias TOWARD getting the same integer again, since the number of possible integers is greater than any finite denominator!
So, what IS the probability of getting the same integer again? It almost seems like we need a special class of "infinitesimal" numbers which are somehow greater than zero but less than the smallest possible fraction...
Reading posts on the xkcd forum makes me feel stupid.
Re: Probability paradox?
What range does the random number generator operate in?
If it takes a random number in (0, inf) then it will always generate inf. So just find the upper bound on the generator and you get the number of possible answers.
Re: Probability paradox?
Your problem isn't well defined, as you need to define how integers are randomly picked. There is no 'equal chance for each' on the integers.
But you can do a similar thing, picking a real number between 0 and 1, and there is an 'equal chance' distribution there, and your 'paradox' comes up: each number has probability 0 of being picked. But probability 0 doesn't mean it can't happen, it just means that the chances of it happening are smaller than any positive amount; and probability 1 doesn't mean it must happen, it just means the chances of it NOT happening are smaller than any positive amount. It's just a feature of continuous probability (i.e., probability distributions that can take on uncountably many values).
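A quick Python sketch makes this concrete (an editorial illustration, not from the thread; the draw count is arbitrary, and floats only approximate the continuum):

```python
import random

# Each call returns a float in [0, 1). Any *particular* value has
# (essentially) probability 0, yet every call still produces some value,
# so "probability 0" clearly can't mean "impossible".
draws = [random.random() for _ in range(10_000)]

# With probability-0 singletons, repeats are essentially never observed:
assert len(set(draws)) == len(draws)
```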
addams wrote:This forum has some very well educated people typing away in loops with Sourmilk. He is a lucky Sourmilk.
Re: Probability paradox?
Well, this was supposed to be a hypothetical perfect RNG which can pick any negative, zero, or positive integer with equal probability... I didn't say it was an RNG that was actually able to be programmed in C or whatever.
I'm intrigued by the statement "there is no equal chance for each on the integers"; why is that? Are some integers intrinsically more likely to be picked than others? Or were you just commenting on the impossibility of programming such a RNG?
I think the explanation of "less than any possible positive number, but still nonzero" is kind of helpful, though; thanks!
Re: Probability paradox?
ekolis wrote:Well, this was supposed to be a hypothetical perfect RNG which can pick any negative, zero, or positive integer with equal probability... I didn't say it was a RNG that was actually able to be programmed in C or whatever
I'm intrigued by the statement "there is no equal chance for each on the integers"; why is that? Are some integers intrinsically more likely to be picked than others? Or were you just commenting on the impossibility of programming such a RNG?
I think the explanation of "less than any possible positive number, but still nonzero" is kind of helpful, though; thanks!
This isn't a practical restriction, it's a mathematical one. No such RNG exists, even theoretically, because probability is countably additive. That means that the chance of picking A or B or C or ..., for any finite or countably infinite list A, B, C, ..., is equal to the sum of their probabilities. So if the chance of picking EACH integer is the same, then the probability of picking ANY integer is just the infinite sum of that number, which is either infinite or 0. But the probability has to add up to 1, so it doesn't work.
This problem goes away when you get 'enough' numbers, though. Once you have uncountably many numbers, say the real numbers between 0 and 1, there's no countable set that adds up to the whole thing, so it's fine that the probability at each point is 0; we only need to assign probabilities to every interval*.
*or more generally, every measurable set
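The countable-additivity argument above can be sketched numerically (a toy illustration with arbitrarily chosen constants, not a proof):

```python
# If every integer had the same probability p, the running total of
# probability either stays at 0 (when p == 0) or grows without bound
# (when p > 0). It can never converge to 1, so no uniform distribution
# on the integers exists.

def partial_total(p, n_terms):
    """Total probability assigned to the first n_terms integers, each getting p."""
    return p * n_terms

# Any positive p, however tiny, eventually overshoots 1:
assert partial_total(1e-9, 10**10) > 1

# While p == 0 leaves the total at 0 forever:
assert partial_total(0.0, 10**12) == 0.0
```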
Re: Probability paradox?
mikel wrote:It's just a feature of continuous probability (ie, probability distributions that can take on uncountably many values)
Countably (but infinitely) many values have the same problem, as shown with the integers.
@ekolis: Your RNG has to have some bias and prefer some values over others. There are a lot of options for how to choose your biased distribution (uncountably many), but you have to choose one for your integers.
Averaging over several distributions does not help, it just generates another biased distribution.
Re: Probability paradox?
mfb wrote:mikel wrote:It's just a feature of continuous probability (ie, probability distributions that can take on uncountably many values)
Countably (but infinitely) many values have the same problem, as shown with the integers.
I'm saying that in the uncountable case, most (all but countably many) points must have probability 0 of occurring, and there is no requirement that any point have a nonzero probability. So we have to consider that events can happen even if the probability of that event is 0. (To clarify, this is the case where the discrete part, i.e. all the points that have nonzero probability, has probability strictly less than 1.)
Because of countable additivity, on the other hand, all countable probability spaces are discrete. There is a (minimal) set of elements whose individual probabilities add up to 1, and we can ignore all the elements outside of that set for most purposes. There are no sets consisting of probability-0 elements that have a nonzero cumulative probability.
Re: Probability paradox?
The probability of getting each integer is 0 (any number above 0 is too high).
An event having probability 0 does not mean it is impossible. An impossible event does have a probability of 0, though.
In your experiment, each integer has a probability of 0. Run the test once and you come up with an integer, even though you had 0% chance of getting that particular integer.
ekolis wrote:It seems that the probability would be "one in infinity", since there is one integer which is the same out of an infinite number of possible integers. But isn't that probability equal to zero? And if it's zero, that means the RNG could NEVER produce the same integer twice in a row, implying it has a memory!
Re: Probability paradox?
314man wrote:The probability of getting each integer is 0 (any number above 0 is too high).
No, then you don't have a probability distribution at all.
Any distribution (Edit: On the integers; thought that was clear from context, but apparently not) MUST have a bias: some integers are more likely than others.
Last edited by mikel on Fri Mar 09, 2012 10:42 pm UTC, edited 1 time in total.
Re: Probability paradox?
mikel wrote:No, then you don't have a probability distribution at all.
Any distribution MUST have a bias: some integers are more likely than others.
Any continuous pdf has the property that the probability of any particular number is 0.
In this case, we are dealing with something that is discrete, but there are still an infinite number of integers.
(I think it actually leads to infinitesimal numbers but I don't really know much about it)
Also, a uniform distribution. (A more particular example: the integers 1 to 4, each with probability 1/4.)
Re: Probability paradox?
A continuous PDF has to be on an uncountable space. Any countable probability space is automatically discrete (by countable additivity). In particular, any distribution on the integers is discrete. There's no uniform distribution on a countably infinite set, but of course you can have uniform distributions on finite subsets.
 heyitsguay
 Posts: 118
 Joined: Thu Oct 16, 2008 6:21 am UTC
Re: Probability paradox?
This isn't a mystery, this isn't a paradox; this is, quite exactly, a situation where the probability distribution you are trying to use does not exist. There is no uncertainty here; mathematicians have studied these problems, and if you were to take the time to study elementary probability theory you'd see that "a uniform distribution" on any countably infinite set simply does not exist. And it's an interesting result that may seem initially counterintuitive, and so it's definitely worthwhile to try to understand why this is so. However, to shrug off people telling you this fact and then say "I don't know much about it but it exists" is frustrating; recognize that this concept which seems intuitively meaningful to you is an artifact of your ignorance, not some deep mathematical question.
 MartianInvader
 Posts: 807
 Joined: Sat Oct 27, 2007 5:51 pm UTC
Re: Probability paradox?
Answers like mikel's are quickly becoming a pet peeve of mine. Sure, a probability distribution as defined in mathematics has to be countably additive. But that doesn't mean the question is "wrong". It just means that it's not asking about what mathematicians call a probability distribution. Do you really think the OP cares about countable additivity? Just take the definition of probability distribution, throw out countable additivity, call what's left a "finitely additive probability measure", and voila! You have something that can be defined on the integers, and still matches the intuition of the question. Heck, you can even get a good approximation to the intuitive notion of "every integer has an equal chance" by stipulating that the distribution be translation-invariant. These things do exist on the integers, and they pretty much exactly match the intuition of what people like the OP often ask about.
So my answer to the OP is that yes, the probability you get the same number twice is zero, which is surprising to people who aren't familiar with how probability on infinite spaces works. The key thing to understand is that having a probability of zero is not the same thing as being impossible. It's really only useful to talk about the probability that your random number will be in some large set of numbers, for example, what's the probability that it will be even? In which case the answer is, as you might expect, 1/2.
Let's have a fervent argument, mostly over semantics, where we all claim the burden of proof is on the other side!
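The "probability 1/2 of being even" above can be read as natural density, which a short sketch estimates numerically (an editorial illustration; the cutoff is arbitrary):

```python
def natural_density(pred, n=10**5):
    """Fraction of integers in [-n, n] satisfying pred: a finite-cutoff
    estimate of the natural density that stands in for 'probability'
    when talking about large sets of integers."""
    count = sum(1 for k in range(-n, n + 1) if pred(k))
    return count / (2 * n + 1)

# Even integers have density 1/2, as claimed in the post:
print(natural_density(lambda k: k % 2 == 0))  # close to 0.5

# Any single integer has density (essentially) 0:
print(natural_density(lambda k: k == 42))     # close to 0
```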
 heyitsguay
 Posts: 118
 Joined: Thu Oct 16, 2008 6:21 am UTC
Re: Probability paradox?
MartianInvader wrote:Answers like mikel's are quickly becoming a pet peeve of mine. Sure, a probability distribution as defined in mathematics has to be countably additive. But that doesn't mean the question is "wrong". It just means that it's not asking about what mathematicians call a probability distribution. Do you really think the OP cares about countable additivity? Just take the definition of probability distribution, throw out countable additivity, call what's left a "finitely additive probability measure", and voila! You have something that can be defined on the integers, and still matches the intuition of the question. Heck, you can even get a good approximation to the intuitive notion of "every integer has an equal chance" by stipulating that the distribution be translation-invariant. These things do exist on the integers, and they pretty much exactly match the intuition of what people like the OP often ask about.
So my answer to the OP is that yes, the probability you get the same number twice is zero, which is surprising to people who aren't familiar with how probability on infinite spaces works. The key thing to understand is that having a probability of zero is not the same thing as being impossible. It's really only useful to talk about the probability that your random number will be in some large set of numbers, for example, what's the probability that it will be even? In which case the answer is, as you might expect, 1/2.
Sure, but for this thread in particular, someone's asking about a perceived paradox that directly stems from this misunderstanding. The misperception that a "uniform distribution on the integers" is a meaningful, actual probability distribution is at the heart of the OP's questions. And anyway, fine, let's look at only some finitely additive object on the integers that approximates the idea of "a uniform distribution". What would that look like concretely?
Re: Probability paradox?
MartianInvader wrote:Answers like mikel's are quickly becoming a pet peeve of mine. Sure, a probability distribution as defined in mathematics has to be countably additive. But that doesn't mean the question is "wrong". It just means that it's not asking about what mathematicians call a probability distribution. Do you really think the OP cares about countable additivity? Just take the definition of probability distribution, throw out countable additivity, call what's left a "finitely additive probability measure", and voila! You have something that can be defined on the integers, and still matches the intuition of the question. Heck, you can even get a good approximation to the intuitive notion of "every integer has an equal chance" by stipulating that the distribution be translation-invariant. These things do exist on the integers, and they pretty much exactly match the intuition of what people like the OP often ask about.
So my answer to the OP is that yes, the probability you get the same number twice is zero, which is surprising to people who aren't familiar with how probability on infinite spaces works. The key thing to understand is that having a probability of zero is not the same thing as being impossible. It's really only useful to talk about the probability that your random number will be in some large set of numbers, for example, what's the probability that it will be even? In which case the answer is, as you might expect, 1/2.
I was responding to 314man who seemed to suggest that continuous distributions exist on countable spaces. My first response to the OP said pretty much exactly your second paragraph.
Nonetheless, while such 'finitely additive probability measures' may be an intuitive notion of size on the integers, any notion of using that to actually pick a particular integer is bound to not match much intuition at all, whereas, aside from silly things with the AoC, all traditional probability fits with the intuition of throwing a dart.
heyitsguay wrote:And anyway, fine, let's look at only some finitelyadditive object on the integers that approximates the idea of "a uniform distribution". What would that look like concretely?
Here's a simple 'linear' one, i.e. μ(aA + b) = μ(A)/a.
Define the size of a subset to be [math]\mu(A) = \liminf_{n\to\infty} \frac{|A \cap [-n,n]|}{2n}[/math]
Any bounded set must be size 0, by finite additivity and translation invariance, which is why I say that actually 'picking' numbers based on this is going to be pretty intuition-breaking. While with continuous distributions you can get past the 'points have 0 probability' because we can still talk about the density at that point, that notion doesn't work here.
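The lim inf density just defined can be approximated numerically for a fixed cutoff (an editorial sketch; the example sets and the cutoff are arbitrary choices):

```python
def partial_density(in_A, n):
    """|A ∩ [-n, n]| / (2n), the quantity whose lim inf defines mu(A),
    for a set A given by a membership test."""
    return sum(1 for k in range(-n, n + 1) if in_A(k)) / (2 * n)

# Multiples of 3 settle near 1/3:
print(partial_density(lambda k: k % 3 == 0, 10**5))

# A bounded set such as {-5, ..., 5} is squeezed toward 0, as claimed:
print(partial_density(lambda k: -5 <= k <= 5, 10**5))
```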
 heyitsguay
Re: Probability paradox?
mikel wrote:Here's a simple 'linear' one, i.e. μ(aA + b) = μ(A)/a.
Define the size of a subset to be [math]\mu(A) = \liminf_{n\to\infty} \frac{|A \cap [-n,n]|}{2n}[/math]
Any bounded set must be size 0, by finite additivity and translation invariance, which is why I say that actually 'picking' numbers based on this is going to be pretty intuition-breaking. While with continuous distributions you can get past the 'points have 0 probability' because we can still talk about the density at that point, that notion doesn't work here.
Sure, that's fine, but I feel like MartianInvader was trying to make a case for "there is an object which approximates this incorrect intuition, it's just not formally a probability distribution", and objects like the one you are describing definitely do not fit that criteria. Perhaps more pointedly I should have asked "what object do you have in mind when you say that there are objects which approximate our intuitive notions of a uniform distribution on the integers?"
Re: Probability paradox?
With his requirements that it be translation-invariant and finitely additive, it pretty much has to look like this. As I said, bounded sets cannot matter (i.e., they're all probability 0), so a set's size must be determined by its asymptotic behaviour. You could do just the positive direction, or just the negative, or weight them differently. I think aside from those, the only wiggle room is in sets that oscillate in size, and how big you want to call them. I'm working on a proof that any such 'measure' must be between the lim inf I posted and the same but with lim sup (which isn't itself additive).
The main issue is there's no notion of continuity. E.g., if A_1 \subset A_2 \subset ..., then u(\cup A_n) need not be lim u(A_n), nor similar statements about intersections. And by need not, I mean that no matter what u you define, there will be sets that fail these properties, as they are (together with finite additivity) equivalent to countable additivity. And of course the sets {0}, {-1,0,1}, {-2,-1,0,1,2}, ... will always be measure 0 with their union being measure 1.
Last edited by mikel on Sat Mar 10, 2012 12:49 am UTC, edited 1 time in total.
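The failure of continuity described here can be checked with partial densities at a fixed cutoff (a toy check, not a proof; set choices and cutoff are arbitrary):

```python
def density(in_A, n=10**5):
    """Approximate |A ∩ [-n, n]| / (2n) for a set given by a membership test."""
    return sum(1 for k in range(-n, n + 1) if in_A(k)) / (2 * n)

# Each A_m = {-m, ..., m} is bounded, so its density is (essentially) 0 ...
for m in (0, 1, 2, 100):
    assert density(lambda k, m=m: -m <= k <= m) < 0.01

# ... yet their union is all of Z, which has density 1: the limit of the
# measures (0) is not the measure of the limit (1), so continuity fails.
assert density(lambda k: True) > 0.99
```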
 heyitsguay
Re: Probability paradox?
mikel wrote:With his requirements that it be translation-invariant and finitely additive, it pretty much has to look like this. As I said, bounded sets cannot matter (i.e., they're all probability 0), so a set's size must be determined by its asymptotic behaviour. You could do just the positive direction, or just the negative, or weight them differently. I think aside from those, the only wiggle room is in sets that oscillate in size, and how big you want to call them. I'm working on a proof that any such 'measure' must be between the lim inf I posted and the same but with lim sup (which isn't itself additive).
Ahh sorry, again I didn't say what I really meant, I see. I guess I was being rhetorical and trying to indicate my belief that, as you've rightly pointed out here, no such object really exists, and MartianInvader's ire is misplaced in this particular thread. Still, I appreciate your clarity, and perhaps I should have taken your approach of demonstrating that no object satisfies those conditions, instead of waiting to see what MartianInvader would come up with.
 MartianInvader
Re: Probability paradox?
I'd argue that it's pretty intuitive that the chance of picking something from a finite subset out of an infinite set is zero. It's the same as the intuitive equation "(any finite number)/infinity = 0". Does talking about the density at a point really clear things up that much when you're looking at a uniform distribution?
My point is just that it's irrelevant to the question to require countable additivity. It's like if someone came to the boards asking "Hey guys, can you have a function that's 0 for a while, then, like, suddenly jumps up to 1 without touching the values in between?" And someone answers "It sounds like you're describing a differentiable function that has a discontinuity. No such function exists." Sure, no such function exists, and differentiable functions are the most commonly considered type of function, but it doesn't match the intuition of what's being asked.
The way to construct such measures is, sadly, a bit involved. You need to use the axiom of choice and something called a "non-principal ultrafilter" on the natural numbers to get one that has finite additivity.
 arbiteroftruth
 Posts: 475
 Joined: Wed Sep 21, 2011 3:44 am UTC
Re: Probability paradox?
Look, whatever you want to call it, this is how it is:
For a set of N elements with uniform probability, the probability of picking a particular element is 1/N. As N approaches infinity, as for example when the set is the set of all integers, 1/N, the probability of picking a particular element, approaches 0. If such a scenario can be said to not be technically possible, then the best way to define it anyway is in terms of a limit approaching that scenario, as I have done.
Making it any more complicated than that strikes me as pedantic.
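The 1/N limit picture above can be simulated for finite N (a rough Monte Carlo sketch, editorial rather than from the thread; trial counts are arbitrary):

```python
import random

def repeat_probability(N, trials=100_000):
    """Empirical chance that two independent uniform draws from
    {1, ..., N} coincide; the exact value is 1/N."""
    hits = sum(random.randint(1, N) == random.randint(1, N)
               for _ in range(trials))
    return hits / trials

# As N grows, the repeat probability shrinks toward 0, matching 1/N:
print(repeat_probability(10))    # near 0.1
print(repeat_probability(1000))  # near 0.001
```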
Re: Probability paradox?
MartianInvader wrote:The way to construct such measures is, sadly, a bit involved. You need to use the axiom of choice and use something called a "nonprincipal ultrafilter" on the natural numbers to get one that has finite additivity.
Yes, I realized after more work that even my example fails to be additive for oscillating subsets. I thought I took care of it with the lim inf, but that only makes it subadditive (and lim sup makes it superadditive, which somehow I realized right away but didn't realize for inf... doh), so yeah, you'd have to essentially make a 'basis' of subsets using the AoC.
Anyway, I agree with your conclusion that the particular definition wasn't what the OP was concerned with. Which is why I addressed his actual concern in the first post.
MartianInvader wrote:I'd argue that it's pretty intuitive that the chance of picking something from a finite subset out of an infinite set is zero. It's the same as the intuitive equation "(any finite number)/infinity = 0". Does talking about the density at a point really clear things up that much when you're looking at a uniform distribution?
The notion of finite sets being measure 0 isn't concerning or unintuitive at all. What is unintuitive is that the measure has no locality. On a continuous distribution, density is a measure (no pun intended) of how likely you are to be 'near' a value, as opposed to how likely you are to be 'at' a value in the discrete case. On the other hand, with any content (which I've learned is the name of these things) on the integers, I can't say anything meaningful about any particular point at all, as every intuitive set 'around' a number has measure 0.
My main complaint though, is that there doesn't seem to be any reasonable way to apply this to our 'real world' intuition that probability is about throwing darts or rolling dice.
Last edited by mikel on Sat Mar 10, 2012 1:34 am UTC, edited 1 time in total.
 arbiteroftruth
Re: Probability paradox?
mikel wrote:My main complaint though, is that there doesn't seem to be any reasonable way to apply this to our 'real world' intuition that probability is about throwing darts or rolling dice.
The thing to remember is that when looking at probability in an intuitive sense, there's no relevant difference between a countable infinity and an uncountable infinity. In the intuitive sense, any given infinity simply means "arbitrarily large". So to use dice as an example, if you had an infinite-sided die, it would be a sphere and each "side" would be a single point on its surface. Absent any information about the mechanics of the throw, it becomes impossible to give any meaningful number or density distribution to the probability of landing on a given "side".
Re: Probability paradox?
arbiteroftruth wrote:mikel wrote:My main complaint though, is that there doesn't seem to be any reasonable way to apply this to our 'real world' intuition that probability is about throwing darts or rolling dice.
The thing to remember is that when looking at probability in an intuitive sense, there's no relevant difference between a countable infinity and an uncountable infinity. In the intuitive sense, any given infinity simply means "arbitrarily large". So to use dice as an example, if you had an infinite-sided die, it would be a sphere and each "side" would be a single point on its surface. Absent any information about the mechanics of the throw, it becomes impossible to give any meaningful number or density distribution to the probability of landing on a given "side".
My response was to MartianInvader, not to you, I'll edit in a quote.
But you're also incorrect. There's a very big difference between countable spaces and uncountable spaces, and it's not a notion that one is bigger than the other, but that uncountable spaces allow you to have a continuum of values. To deal with any probability that isn't just point masses, I need to be able to talk about limits, so I need my space to contain its limits. This forces a space to be either discrete or uncountable. There are lots of ways to interpret rolling a ball as giving a number between 0 and 1 in a uniform way, with arbitrary precision. You can't do the same for the integers.

 Posts: 20
 Joined: Sun Jan 03, 2010 7:31 pm UTC
Re: Probability paradox?
One time, I was asked a trivia question. It was a string of questions with numerical answers, with an operator between each answer (for example, Michael Jordan's jersey number times the day of the month the War of 1812 started; except it was 7 questions long, and I didn't know any of the answers offhand), and the goal was to find the mathematical answer. I had no idea what the answer was, so I guessed 5. It was correct, so now I claim I did the impossible, seeing as the answer could have been any number.
Amusing anecdote aside, it actually is impossible to get the same number twice. Assuming it would be a random number between 0 and 1, the number would have an infinite number of digits, making it impossible to recreate.
 gmalivuk
 GNU Terry Pratchett
 Posts: 26724
 Joined: Wed Feb 28, 2007 6:02 pm UTC
 Location: Here and There
 Contact:
Re: Probability paradox?
Well, if you're printing out an infinite number of digits, it'd be impossible to create in the first place.
 Proginoskes
 Posts: 313
 Joined: Mon Nov 14, 2011 7:07 am UTC
 Location: Sitting Down
Re: Probability paradox?
gmalivuk wrote:Well if you're printing out an infinite amount of digits, it'd be impossible to create in the first place.
Not really. There are algorithms which will produce all the digits of (say) pi, one at a time. Or even 1/9: just keep printing 1s forever. (That one's easier to program, too.)
If the number of digits is uncountable, then of course you have a problem.
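The digit-at-a-time idea can be sketched with a generator (an editorial illustration of the easy 1/9 case; a spigot algorithm for pi works similarly but is longer):

```python
def digits_of_one_ninth():
    """Yield the decimal digits of 1/9 = 0.111..., one at a time, forever."""
    while True:
        yield 1

# The generator never terminates, but any finite prefix is available:
gen = digits_of_one_ninth()
first_ten = [next(gen) for _ in range(10)]  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```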
 gmalivuk
 GNU Terry Pratchett
 Posts: 26724
 Joined: Wed Feb 28, 2007 6:02 pm UTC
 Location: Here and There
 Contact:
Re: Probability paradox?
Yeah, but there are only countably many numbers like that, so the probability of a perfect uniform RNG picking any of them is 0.
 MartianInvader
 Posts: 807
 Joined: Sat Oct 27, 2007 5:51 pm UTC
Re: Probability paradox?
Ok mikel, I guess we're mostly in agreement. The second paragraph of your answer does indeed convey the right intuition. And I'll happily admit that continuous distributions have many nice properties that finitely-additive ones don't. Our main disagreement just seems to be whether those properties are so nice that you should immediately bring them up when an obviously inexperienced person asks a question like the OP's.
But I don't really have any new points to make in that argument, so maybe we should just agree to disagree and not get bogged down in a 10-page argument where we just repeat ourselves over and over.
And arbiteroftruth, there definitely are differences between countably infinite and uncountable probability spaces, although in both the probability of picking a given point is (usually) zero for basically the same reasons.
Let's have a fervent argument, mostly over semantics, where we all claim the burden of proof is on the other side!
Re: Probability paradox?
MartianInvader wrote:And arbiteroftruth, there definitely are differences between countably infinite and uncountable probability spaces, although in both the probability of picking a given point is (usually) zero for basically the same reasons.
I know you qualified your answer, but the two countably infinite distributions I work with the most are the negative binomial and the Poisson. Your statement threw me off for a second there...
double epsilon = .0000001;

 Posts: 475
 Joined: Wed Sep 21, 2011 3:44 am UTC
Re: Probability paradox?
MartianInvader wrote:And arbiteroftruth, there definitely are differences between countably infinite and uncountable probability spaces, although in both the probability of picking a given point is (usually) zero for basically the same reasons.
That's pretty much exactly my point. One can talk all day about the technical mathematical differences between the two, at least until you start actually talking about the probability of a given thing. Then the differences all but vanish.
 Xanthir
 My HERO!!!
 Posts: 5400
 Joined: Tue Feb 20, 2007 12:49 am UTC
 Location: The Googleplex
 Contact:
Re: Probability paradox?
ekolis wrote:Suppose you have a random fish generator which produces random integers. You run the RNG once and get a random integer. Then you run it again. What is the probability that you get the same integer the second time that you did the first time?
It seems that the probability would be "one in infinity", since there is one integer which is the same out of a infinite fish of possible integers. But isn't that probability equal to zero? And if it's zero, that means the RNG could NEVER produce the same integer twice in a row, implying it has a memory!
But the probability won't be some finite fraction greater than zero either  that would imply a bias TOWARD getting the same integer again, since the fish of possible integers is greater than any finite denominator!
So, what IS the probability of getting the same integer again? It almost seems like we need a special class of "infinitesimal" numbers which are somehow greater than zero but less than the smallest possible fraction...
There's a difference between "never happens" and "almost never happens", but both are represented by probability 0. (Just like how "always happens" and "almost always happens" are both probability 1.)
"Almost never" and "almost always" are well-defined mathematical terms, not just handwavey language.
If you're using a different number system than the reals, such as one with infinitesimals, then you can see the difference between the two. But in the reals, they're the same.
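One way to see the "almost never" distinction in miniature (a rough sketch; the function name `repeat_prob` is hypothetical): for a uniform RNG over the finite range {1, ..., n}, the chance of an immediate repeat is exactly 1/n, positive for every finite n but tending to 0 in the limit, with no memory involved anywhere.

```python
import random

def repeat_prob(n: int, trials: int = 200_000) -> float:
    """Estimate the chance that two independent uniform draws
    from {1, ..., n} coincide; the exact value is 1/n."""
    return sum(random.randint(1, n) == random.randint(1, n)
               for _ in range(trials)) / trials

# As n grows, the collision probability 1/n shrinks toward 0,
# yet a repeat remains perfectly possible for every finite n.
```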
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))
Re: Probability paradox?
gmalivuk wrote:Yeah, but there are only countably many numbers like that, so the probability of a perfect uniform RNG picking any of them is 0.
Actually, he did stipulate the program running infinitely, and the definition of the computable numbers (which are countable) does require a terminating algorithm. If you allow the programs to continue on indefinitely, then you can hit all real numbers with a simple ordered binary search. Thereby the space of numbers computable by an infinite algorithm is uncountable.
 gmalivuk
 GNU Terry Pratchett
 Posts: 26724
 Joined: Wed Feb 28, 2007 6:02 pm UTC
 Location: Here and There
 Contact:
Re: Probability paradox?
If the algorithm is defined with a finite string of symbols from a finite alphabet, it doesn't matter whether it runs forever or not: it can still only hit a countably infinite set. If the actual specification of the algorithm is allowed to be infinite, well then of course it can hit all the reals. Just tell it to PRINT each real, given as a "full" (i.e. infinite) decimal expansion.
Re: Probability paradox?
gmalivuk wrote:If the algorithm is defined with a finite string of symbols from a finite alphabet, it doesn't matter whether it runs forever or not: it can still only hit a countably infinite set. If the actual specification of the algorithm is allowed to be infinite, well then of course it can hit all the reals. Just tell it to PRINT each real, given as a "full" (i.e. infinite) decimal expansion.
Is this because the inevitable comparison line must have an infinite string?
 gmalivuk
 GNU Terry Pratchett
 Posts: 26724
 Joined: Wed Feb 28, 2007 6:02 pm UTC
 Location: Here and There
 Contact:
Re: Probability paradox?
Is which because that?
Finite algorithms in a finite language give countably many algorithms. Each finite algorithm can only print out countably many outputs. A countable set of countable sets is still countable, so you can never get uncountably many results from finitely-specified algorithms with a finite alphabet.
If you're allowed algorithms that are themselves infinitely long (i.e. the computer program to run one isn't finite), then sure, because now it's possible for each real to have an algorithm all to itself.
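The counting step here (countably many algorithms, each with countably many outputs) can be made explicit with Cantor's pairing function, which encodes each (algorithm, output) pair as a distinct natural number; a minimal sketch:

```python
def cantor_pair(i: int, j: int) -> int:
    """Cantor's pairing function: a bijection from N x N to N.
    Here i indexes the algorithm and j indexes its j-th output."""
    return (i + j) * (i + j + 1) // 2 + j

# Every (algorithm, output) pair gets a distinct natural number, so
# countably many algorithms with countably many outputs each still
# yield only countably many results in total -- no choice needed.
codes = {cantor_pair(i, j) for i in range(100) for j in range(100)}
assert len(codes) == 100 * 100   # no collisions: the map is injective
```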
Re: Probability paradox?
Ok. Makes more sense.
 Yakk
 Poster with most posts but no title.
 Posts: 11115
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Probability paradox?
A problem you might have is that formalizations of probability in infinite cases aren't physically realizable.
Finitary mathematics can have an interpretation that can be checked against reality (at least up to some bound). Once you reach infinite and uncountable cases, what often happens is that there are multiple possible extrapolations from finitary mathematics, and which one you pick cannot be "reality-checked" to find out which one corresponds to the "real" one.
Probability in the finitary sense is analogous to "rolling a die", where there is no practical way to know how the die will land before rolling it. Doing so when the die has an uncountable number of sides can be approximated by rolling a marble: but our ability to measure a marble's orientation is bounded (both practically and theoretically) in our reality.
Now, the probability of a given open-interval-analogous subset of the compact space we describe the probability distribution as being over (the set of coordinates that describe marble orientation is easily handled as a compact space) ends up having positive measure. Such open sets correspond to measurements of a certain level of precision, asking "is this the orientation of the marble after we rolled it?"
And everything works perfectly fine here. Our model of the rolling marble turns out to be a good one, the abstraction holds.
Our probability abstraction only breaks down in situations where there really isn't a physical operation that would mirror the theoretical observation. "The probability of any one point being picked is zero" doesn't have a physical operation that corresponds to it: we cannot (in theory or in practice) measure the orientation of the marble perfectly. So what it means to have a probability measure of zero doesn't matter to how the abstraction maps over to the physically realizable case in some very essential sense.
We could easily declare the question (how can our random distribution be said to produce a given value when the probability of each given value is zero?) to be nonsense, state that probability measures are defined only on subsets, and then state that subsets of measure zero are subsets that one will never get a result from. In a sense, we state that the output of a random variable in this case is never a single value (because single values have probability zero). Period. This lines up with the physical situation that we might be trying to model, or at least is consistent with it.
As an aside, our abstraction of probability states that there is no way to uniformly select a real number from the entire real line. The probability of each region between two adjacent integers must be equal for the random variable to be "uniform". If that common value were positive, countably many disjoint regions would sum past 1; so it must be zero, and then by countable additivity over the disjoint union the probability of the entire real line is zero, contradicting that it must be 1.
Also note that there are many other ways you can abstract probability than the traditional real-valued-measure-of-norm-1 method. Almost certainly some of them deal with your question in different ways, some of which might be more satisfying to you.
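Yakk's interval-versus-point distinction can be sketched numerically (the names here are mine, and [0, 1) stands in for the marble-orientation space): a finite-precision measurement corresponds to an interval of width eps and has probability eps, however small eps is, while the idealized exact-point question has probability 0.

```python
import random

def interval_prob(a: float, eps: float, trials: int = 200_000) -> float:
    """Estimate P(X in [a, a+eps)) for X uniform on [0, 1).
    The exact value is eps (for [a, a+eps) inside [0, 1))."""
    return sum(a <= random.random() < a + eps
               for _ in range(trials)) / trials

# A finite-precision question ("is the marble within eps of a?") has
# positive probability eps; the exact-point question has probability 0.
```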
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision. - BR
Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.
Re: Probability paradox?
gmalivuk wrote:Finite algorithms in a finite language give countably many algorithms. Each finite algorithm can only print out countably many outputs. A countable set of countable sets is still countable, so you can never get uncountably many results from finitely-specified algorithms with a finite alphabet.
While it seems kinda nitpicky, I'm not sure if the result (*) that it's consistent with ZF that countable unions of countable sets need not be countable is irrelevant here. Now, being at least somewhat of a Platonist, I certainly don't consider assuming AoC as false (which is a consequence in the model of ZF that has an uncountable set that is a countable union of countable sets) related to "reality", but I'm not so sure that every philosophic perspective necessarily agrees.
As Yakk pointed out later, dealing with any sort of "realistic" things involving infinities is hairy: anything running "infinitely long", doing "infinitely many steps" that is imagined to be directly implemented in reality can be moved into paradox situations (even though various folks don't quite believe that; see the Wikipedia article on Supertasks for some mind-boggling stuff).
Given that, I think the more precise answer to that question would be "it's not a valid question".
(*) See e.g. this Math StackExchange reply, which includes the original citation if anyone wants to go and read it. A quick search didn't show this (or a different proof of the fact) online anywhere.
 gmalivuk
 GNU Terry Pratchett
 Posts: 26724
 Joined: Wed Feb 28, 2007 6:02 pm UTC
 Location: Here and There
 Contact:
Re: Probability paradox?
Desiato wrote:While it seems kinda nitpicky, I'm not sure if the result that it's consistent with ZF that countable unions of countable sets need not be countable is irrelevant here.
I'm pretty sure it isn't, because couldn't a Gödel-like numbering scheme enumerate the algorithms, and then we'd just number their outputs sequentially? In that case, all the results could be identified with points in NxN, which is countable even without AoC.
Re: Probability paradox?
gmalivuk wrote:I'm pretty sure it isn't, because couldn't a Gödel-like numbering scheme enumerate the algorithms, and then we'd just number their outputs sequentially? In that case, all the results could be identified with points in NxN, which is countable even without AoC.
If you can actually define the enumeration, as you just outlined, then yeah, my comment is irrelevant.
Who is online
Users browsing this forum: No registered users and 10 guests