1132: "Frequentists vs. Bayesians"

This forum is for the individual discussion thread that goes with each new comic.

Moderators: Moderators General, Prelates, Magistrates

nitePhyyre
Posts: 1280
Joined: Mon Jul 27, 2009 10:31 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby nitePhyyre » Sat Nov 10, 2012 3:54 am UTC

xkcdtag wrote:XKCD should be officially renamed "Me So Smart". This strip can be amusing at times but for the most part it's just incredibly pretentious. Some say it used to be better. Maybe. All I know is that now it seems like the purpose of most of the strips is for the author to let us all know how smart he is and for his readers to let others know how smart they are by emailing the strip (i.e. "See how smart I am? I think jokes about Bayesian vs Frequentist statistics are funny! Impressed? Ha! Bet you don't get it, do ya? Tee hee! Me so smart!" etc.).

This strip just isn't funny or witty. I get it; he knows about Bayesian vs Frequentist statistics. We're all mighty impressed down here I can tell you.

Now geeks across the world can email one another this to show one another how smart they are (and what lame senses of humor they have).

I imagine (what's his name, Randall? something?) sitting up at night googling abstruse science terms he saw in a pop sci book or cable documentary to learn enough about it to name-drop it the next day:

"Say, tomorrow I want to let people know that I know about - what was that thing called again? - Oh yeah! The fine structure constant! OK, so, how can I work that into a strip? Hmmm..."

Let's be real; this is what xkcd is all about now, isn't it?

:roll:
Someone sounds frustrated about being an abject moron and having a small penis.
sourmìlk wrote:Monopolies are not when a single company controls the market for a single product.

You don't become great by trying to be great. You become great by wanting to do something, and then doing it so hard you become great in the process.

kmellis
Posts: 7
Joined: Sat Jul 02, 2011 5:47 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby kmellis » Sat Nov 10, 2012 4:56 am UTC

This is more a strawman than a legitimate criticism (though, frankly, this is a webcomic and strawmen are necessarily part of the enterprise). Deborah G. Mayo responds to a similar joke made by Joseph B. Kadane:

Why do I go back to such a silly example one year later? Well, because readers seem to say it’s a bad test but still a valid significance test, whereas I’m trying to argue that it’s missing an “adequate fit measure”. It is missing the second of the three “steps in the original construction of tests”. Perhaps people see a significance probability as a type of conditional probability, where the improbable “event” need not have been rendered improbable by the hypotheses under test.

I am saying that a legitimate statistical test hypothesis must tell us (i.e., let us compute) how improbably far different experimental outcomes are from what would be expected under H. If H has nothing to do with the observed x, H cannot entail probabilities about x. It is correct to regard experimental results as anomalous for a hypothesis H only if, and only because, they run counter to what H tells us would occur in a universe where things are approximately as H asserts.


Andrew Gelman says he plans to blog about this tomorrow (and speculates/hopes that Randall might comment on what he had in mind when he wrote this).

Regarding misdiagnosis of low-probability illnesses:

Blindly following the most likely scenario and being trained to ignore darkhorse diagnoses is a bad thing.


That, too, is a strawman; but not in the context of a joke. No one, anywhere, would advocate "blindly following the most likely scenario" and being trained to "ignore darkhorse diagnoses".

But to reiterate and expand upon what the previous commenter wrote, the simple fact of the matter is that medicine and biology are extremely complex. No individual human being could ever possibly have the knowledge and expertise to correctly diagnose all possible known ailments that can theoretically be diagnosed. Or, for that matter, most of them. This would make successful diagnostic medicine an impossibility in an individual-practitioner context were it not for two facts: a) increasingly, we don't expect the expertise to reside primarily in the individual physician but, rather, to be distributed throughout institutions, where such expectations are more realistic; and b) a relatively small number of types of illness account for the vast majority of cases of illness.

The implication of the first is that it is wrong to expect individual physicians to perform diagnostic miracles. Arguably, we can rightly expect them to understand their limitations and to therefore utilize the institutional resources that transcend those personal limitations, and hold them responsible when they fail to do so. Even so, the fact of the matter is that medical training and public expectations both still largely unrealistically expect individual physicians to be able to do more than they actually are able to do. I'd argue, and in fact often do argue, that we should hold medical culture and institutions responsible for misdiagnoses that should have been collectively possible.

The implication of the second is more dire. Given a relative failure of medical institutions to organize collective diagnostics such that rare conditions would be reliably diagnosed, expecting individual practitioners to accomplish this unrealistic goal means that, to the degree to which they attempt this, they will be increasingly inefficient. And physicians' time, laboratory tests, and other resources are neither limitless nor inexpensive. Meanwhile, a large portion of the population doesn't get the medical care it needs for easily diagnosed and treated conditions. Simply put, expecting individual physicians to correctly diagnose extremely rare conditions will save the lives of a few patients at the cost of the lives of many others.

I write this as someone with a congenital disease that is known to exist in only seven families in the entire world and which was correctly diagnosed (sort of) only in my own case, after two prior generations had been misdiagnosed with a non-heritable condition (amazingly) whose standard treatment was a painful, debilitating procedure that failed to work, repeatedly. Furthermore, my father died unexpectedly of mass internal hemorrhaging ten days after pancreatic surgery, while in the hospital, and for unknown reasons. It's entirely possible that someone missed something or made an error. But medicine is not mechanical engineering. Physicians are far, far from perfect, and we do them and ourselves a disservice by expecting them to be. Frankly, physicians themselves and the medical establishment are not honest with themselves about this, and doctors are trained to present themselves to patients as far more confident and certain than they actually are. I believe this is a mistake. Not because we should pillory them after revealing them to be human, but because they are human, and we have deeply unrealistic expectations about what they can and cannot do.

User avatar
willpellmn
Posts: 93
Joined: Wed Apr 21, 2010 11:05 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby willpellmn » Sat Nov 10, 2012 6:42 am UTC

This comic was meh to me, but the title text made me crack up big time. Oddly, the labyrinth guards scene came up in a recent discussion on Giantitp, so I've seen it much more recently than I have the whole movie.

User avatar
Melkior
Posts: 2
Joined: Sat Nov 10, 2012 7:35 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby Melkior » Sat Nov 10, 2012 7:43 am UTC

Nobody has yet given the Astronomer's Answer:

Since it's impossible for a star the size of our sun to ever go nova, every time the machine says the sun has gone nova, the machine must be lying.

There may be some "wiggle room" here if we redefine "exploded" to mean "reached the end of its life and is expanding into a red giant" instead of "gone nova" but the original strip explicitly stated "gone nova" and it's impossible for our sun to do that.

You may now facepalm. :D ( :lol: that nobody picked up on this before me)

fr00t
Posts: 113
Joined: Wed Jul 15, 2009 11:06 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby fr00t » Sat Nov 10, 2012 10:03 am UTC

I guess I'm not exposed to the world of academic mathematics all that much, but I always get the impression that so-called frequentists are just these make-believe strawmen that people bash to feel good about themselves.

User avatar
Melkior
Posts: 2
Joined: Sat Nov 10, 2012 7:35 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby Melkior » Sat Nov 10, 2012 10:08 am UTC

And now I see I was a little hasty. I didn't read the whole thread. One other person posted essentially the same thing I did.

But it's still amusing that many people want to keep on arguing probabilities when the point is moot.

User avatar
Dmytry
Posts: 68
Joined: Mon Jun 01, 2009 12:39 pm UTC
Contact:

Re: 1132: "Frequentists vs. Bayesians"

Postby Dmytry » Sat Nov 10, 2012 11:24 am UTC

This is, TBH, bullshit. Using the frequentist approach, you choose the significance threshold based on the cost of assuming the sun has gone nova when it actually has not. You get a decision procedure for when to panic, or for how many such detectors you need to build for them to be useful. You can also factor in how long the sun has shone, and the stats on other stars similar to the sun (which is how we know it is unlikely to go nova), and use Bayes' theorem (the common justification/explanation of which is, by the way, frequentist! There is a Bayesian justification from internal consistency and Dutch booking, but it is seriously counterintuitive).

Using the Bayesian approach, though, you can proclaim that a number you have made up for the sun exploding is truly a probability akin to the probability of a die rolling 6, and then start collecting money for a sun-explosion prevention fund, claiming that you save something like 10 lives per dollar by multiplying the huge number of all future humans by the low probability of the sun exploding, and dividing that by however much money you are hoping to get. If you have a billionaire friend providing starting capital, and you get enough reach, some of the people you reach will be silly enough to donate. I'm not joking. This is how the abovementioned HPMOR guy is paying his bills, except it is a world-destroying AI rather than the sun exploding. The Bayesian approach is only as good as its 'priors', and any good priors come from some sort of frequentist statistics (e.g. looking at other stars, or, in the case of what naturally feels right, animals living and dying and you evolving some default priors for the learning processes in the brain). When you don't have any good priors, you can't really do anything other than put a bound on the probability of being wrong.

Also, by the way, it's very hard to estimate the probability of the sun going nova without other stars, because if there were a process by which the sun could go nova, such a process would necessarily not have occurred in the past, or we'd be dead.

User avatar
Dmytry
Posts: 68
Joined: Mon Jun 01, 2009 12:39 pm UTC
Contact:

Re: 1132: "Frequentists vs. Bayesians"

Postby Dmytry » Sat Nov 10, 2012 11:49 am UTC

neremanth wrote:I think this is a little unfair. As others have noted, it's not as though frequentists are honour-bound to ignore Bayes' Theorem. In fact, they're very well aware of exactly the problem this comic illustrates (though the example most often used is the medical test scenario others have posted about). At least, many of them are, and certainly most (if not all) of those who are actually statisticians, although certainly you do get plenty of scientist/social scientist users of frequentist statistics who are not aware of this (as well as plenty who are). For this reason the frequentist statisticians and more expert users are careful in their phrasing never to say that the p-value is the chance of the null hypothesis being correct (which is recognised to be very much not true), but always to say that it's the chance of observing such a result given that the null hypothesis is correct. (Again there certainly are some less expert users who do fall into that trap).

I think if this were to happen in real life, the frequentist would formally reject the null hypothesis, but he/she would also be very aware that the sun going nova at this point in time is an unlikely event, so would suggest running the test again several more times. That would bring the chance of rejecting the null hypothesis given that it's actually true down from 1/36 to (1/36)^n, where n is the number of times the test is run. Of course it could still happen that the detector rolled double sixes every time, but it makes it more unlikely. Performing the test 5 times would give a one in 60466176 chance of being wrong. If that 1 in 60466176 chance does come up, well that's just too bad.

Also, I'm not sure whether P(the sun goes nova at this point in time) is something that can be calculated from the astronomical observations available to us. If it is, then that's information that should be equally available to both the frequentist and the Bayesian; the Bayesian will use it as a prior while the frequentist will perform the test and then use Bayes' Theorem to calculate the probability of the sun having gone nova given the test result, and they should get the same thing (although the Bayesian's analysis will have incorporated the uncertainty in the estimate of the prior probability of the sun going nova). Either of them could use the prior probability to calculate how many times it would be necessary to perform the test in order for there to be only a 5% chance of the sun not actually having gone nova given all "YES" results. And then they could perform the test that many times.

If that probability is not available to either, then in the comic the Bayesian is only going on "well, it's much less than 1/36", and as I said, the frequentist would also be aware of the issue. If they did perform the test five times, say, the Bayesian might be less confident that the prior probability of the sun going nova now is less than 1/60466176 and not so willing to make the bet.

In real research, the difference between frequentist and Bayesian approaches is that the Bayesian will incorporate the results of previous research into the analysis as a prior, while the frequentist will perform the analysis in isolation (but discuss previous results in the paper) and eventually someone will (hopefully) perform a meta-analysis including that work along with all the rest on the same question. So it's just a question of at what stage of the process it all gets put together. Of course, if there has been no previous research on a particular question, the Bayesian will have to use non-informative priors. The important point is that no good scientist would ever consider one experiment/study/analysis enough to definitively answer a question. It's not great to have a 20% chance of being wrong if you only take one stab at an important question, but it's ok to be wrong 20% of the time if you keep on attempting to answer it.


Precisely. Coincidentally, I wrote a post on the topic about a week ago: http://dmytry.blogspot.com/2012/11/a-br ... ribed.html .

In my opinion, an empirical scientific paper should report on an experiment. I.e., in this case it would report the p-value for the sun going nova, it would get published (there is a practical rule not to publish crap with too high a p-value because it is not worth the paper it's written on), and then everyone is free to combine that p-value with their prior beliefs about the sun going nova, or not combine it and accept a bounded probability of being wrong instead. There are good reasons not to put the opinions of the experimenters into the conclusion. There's also this curious online phenomenon of self-proclaimed Bayesians who do not have sufficient mathematical or scientific background, and just use the word for purposes of group identification and a view of out-groups (only our group takes those obvious truisms as true; everyone else is insane and we are raising the sanity waterline; that sort of cultish stuff). Google "What is Bayesianism" for a very stark example.
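As an aside, the repeated-test arithmetic in the quoted post is easy to verify; a minimal sketch in Python (the 1/36 per-run false-positive rate comes from the comic, the rest is just exponentiation):

```python
from fractions import Fraction

# Per-run chance the detector lies to us (double sixes): 1/36.
p_false = Fraction(1, 36)

def all_runs_lie(n):
    """Chance that n independent runs all give a false "YES"."""
    return p_false ** n

print(all_runs_lie(5))         # 1/60466176, matching the quoted post
print(float(all_runs_lie(5)))  # about 1.7e-08
```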

RikRaccoon
Posts: 4
Joined: Wed Oct 20, 2010 11:23 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby RikRaccoon » Sat Nov 10, 2012 12:17 pm UTC

Worst comic ever. Usually Randall at least has some clue what he is talking about. Here, he demonstrates that he has a comic book understanding of the principles of statistics. Choosing a p-threshold of 0.05 is a subjective decision, based on the investigator's prior choice as to at what level of risk he is willing to allow a false positive finding. The Bayesian prior is also a subjective decision, based on the investigator's prior belief and the extent to which he is willing to listen to the data or listen to his own opinion. If the frequentist had chosen a more stringent threshold, and the Bayesian had chosen a less stringent prior, the comic would have been reversed. Not that statisticians really divide into Bayesians and frequentists anyway, except in comic strips.

nomadiq
Posts: 18
Joined: Wed Apr 27, 2011 8:57 pm UTC

Re: p<0.05 is very bad

Postby nomadiq » Sat Nov 10, 2012 2:24 pm UTC

I think one of the central things to take away from this cartoon is the notion that deciding on significance (or truth) with a p value of < 0.05 is all too common, yet a p value of < 0.05 is only about as common as rolling a double six (or any double, so long as you select which double you want first). It's not that hard to come by just by accident, yet many published works, especially in psychology, use this standard.

Consider the following graph:

[graph: distribution of p values reported in the psychology literature, with a spike just under 0.05]

And read this for some background:

http://bps-research-digest.blogspot.com/2012/08/phew-made-it-how-uncanny-proportion-of.html

You see that not only is 0.05 just too easy to come by (just repeat the experiment, say, 20 times until you get lucky), but also the number of p values that just squeak in under 0.05 is unusually high in the psychological literature. This is not to say that these numbers are made up (maybe they are, who knows); it could be a case of selective reporting of data that just so happens to meet the criterion. I personally think experiments are repeated until 'significance' is met; this is based on some exposure I have had to the psychological research arena. This happens in other fields too, and I have seen it there, but I would say to a lesser extent.
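The "repeat the experiment until you get lucky" point can be made concrete with a one-line calculation; a sketch assuming each null experiment independently has a 5% chance of a spurious p < 0.05:

```python
# Chance that at least one of 20 independent null experiments
# crosses p < 0.05 purely by accident.
alpha = 0.05
n_runs = 20
p_at_least_one = 1 - (1 - alpha) ** n_runs
print(round(p_at_least_one, 3))  # roughly a 64% chance of one spurious "hit"
```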

Re: Frequentists vs. Bayesians:

The problem of frequentists versus Bayesians is really the problem of a lot of science being based on significance testing that is purely frequentist in nature. A good frequentist statistician is not fooled by the scenario in the cartoon, but most people applying statistics see p values and Student's t-test as tools to be applied blindly. The problem does not lie with the well-schooled statisticians; it lies with inexperienced people applying the frequentist model naively.

nomadiq
Posts: 18
Joined: Wed Apr 27, 2011 8:57 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby nomadiq » Sat Nov 10, 2012 3:00 pm UTC

xkcdtag wrote:XKCD should be officially renamed "Me So Smart". This strip can be amusing at times but for the most part it's just incredibly pretentious. Some say it used to be better. Maybe. All I know is that now it seems like the purpose of most of the strips is for the author to let us all know how smart he is and for his readers to let others know how smart they are by emailing the strip (i.e. "See how smart I am? I think jokes about Bayesian vs Frequentist statistics are funny! Impressed? Ha! Bet you don't get it, do ya? Tee hee! Me so smart!" etc.).

.....

Let's be real; this is what xkcd is all about now, isn't it?

:roll:


You can always not read.

But I think you are judging harshly here. I read xkcd first thing Monday, Wednesday, Friday. Randall goes through phases. Recently some strips have been numbers/statistics based. This is hardly a surprise given the US presidential election. I assume Randall has a fascination similar to mine with electoral numbers/predictions/probabilities and the consequences of studying these things - like re-appreciating the difference between frequentist and Bayesian models of probability. But you only need to go back to last week to read a gag about "Fifty Shades of Grey".

I would expect in the coming months to see more gags about consumerism/the holiday season. But heaven forbid Randall might point out the amazing coincidence between the birth of some people's savior and the northern hemisphere winter solstice! How "smart of him" to understand the movement of the earth's orbit around the sun and boast about it!

So long as the cartoon is more creative/funny/ironic than "solstice, birth of religious figure, coincidence not!" I will keep on reading.

J Thomas
Everyone's a jerk. You. Me. This Jerk.^
Posts: 1190
Joined: Fri Sep 23, 2011 3:18 pm UTC

Re: p<0.05 is very bad

Postby J Thomas » Sat Nov 10, 2012 3:06 pm UTC

nomadiq wrote:Re: Frequentists vs. Bayesians:

The problem of frequentists versus Bayesians is really the problem of a lot of science being based on significance testing that is purely frequentist in nature. A good frequentist statistician is not fooled by the scenario in the cartoon, but most people applying statistics see p values and Student's t-test as tools to be applied blindly. The problem does not lie with the well-schooled statisticians; it lies with inexperienced people applying the frequentist model naively.


That's a tremendously important point.

The problem isn't frequentists versus bayesians. The problem is bad frequentists versus bad bayesians.

It's easy to do either approach wrong.
The Law of Fives is true. I see it everywhere I look for it.

garaden
Posts: 18
Joined: Thu Aug 11, 2011 3:40 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby garaden » Sat Nov 10, 2012 4:25 pm UTC

Dmytry wrote:Using Bayesian approach, though, you can proclaim that a number you have made up for the sun exploding is truly a probability akin to the probability of a die rolling 6, and then start collecting money for sun explosion prevention fund, claiming that you save something like 10 lives per dollar by multiplying huge number of all the future humans with the low probability of sun exploding, and dividing that by however much money you are hoping to get. If you have a billionaire friend providing a starting capital, and you get enough reach, out of all the people you reach some people will be silly enough to donate. I'm not joking. This is how the abovementioned HPMOR guy is paying his bills, except it is world destroying AI rather than sun exploding.

No no no, totally backwards. Yudkowsky is trying to hasten the world-destroying AI. :D

User avatar
San Fran Sam
Posts: 228
Joined: Tue Nov 15, 2011 5:54 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby San Fran Sam » Sat Nov 10, 2012 6:37 pm UTC

Alan wrote:No no no no.

This isn't clever because he doesn't pay off the bet if he loses.

The Bayesian statistician knows the probability of the sun going nova is very, very small. That makes the chance that the machine rolled double sixes a near certainty (either that or the machine malfunctions).

P(Nova) = 1/1000000000
P(NotNova) = 999999999/1000000000

P(YES|Nova) = 35/36
P(YES|NotNova) = 1/36

P(Nova|Yes) = ( P(Yes|Nova) * P(Nova) ) / (P(Yes|Nova) * P(Nova) + P(Yes|NotNova) * P(NotNova))

P(Nova|Yes) = (35/36000000000) / (35/36000000000 + 999999999/36000000000)

P(Nova|Yes) = (35/36000000000) / (1000000034/36000000000)

P(Nova|Yes) = 35/1000000034


Thank you. That was the explanation I was hoping to see here. In the antediluvian era, I took some statistics and vaguely recalled that Bayesian statistics had to do with conditional probabilities. This spelled out the wonky aspect of the joke.
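The quoted calculation can be reproduced exactly with rational arithmetic; note that the 1-in-a-billion prior is the quoted poster's made-up figure, not an established number:

```python
from fractions import Fraction

# Prior assumed in the quoted post (illustrative only).
p_nova = Fraction(1, 10**9)
p_not_nova = 1 - p_nova

# The detector lies on double sixes (1/36), else tells the truth.
p_yes_given_nova = Fraction(35, 36)
p_yes_given_not_nova = Fraction(1, 36)

# Bayes' theorem: P(Nova | YES).
posterior = (p_yes_given_nova * p_nova) / (
    p_yes_given_nova * p_nova + p_yes_given_not_nova * p_not_nova
)
print(posterior)  # 35/1000000034, matching the quoted post
```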

User avatar
San Fran Sam
Posts: 228
Joined: Tue Nov 15, 2011 5:54 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby San Fran Sam » Sat Nov 10, 2012 6:38 pm UTC

pareidolon wrote:Best way to make money ever: bet people the world isn't going to end.
If you win, you get their money.
If you lose, meh.
On what I assure you is an entirely unrelated topic,
anyone here believe the world's going to end on December 12th?


Yes, but only as we know it. And I feel fine.

Kristopher
Posts: 10
Joined: Fri Sep 09, 2011 4:18 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby Kristopher » Sat Nov 10, 2012 7:23 pm UTC

Wow ... only a few folks even discussing the obvious:

Probability that the world just ended is unimportant here. Make the bet for as much as the sucker will stand.

You either take him to the cleaners, or you just died from a massive burst of radiation and just don't care anymore.

Probability and statistics are all just a math game. What is important is how you use them.

Aiwendil
Posts: 314
Joined: Thu Apr 07, 2011 8:53 pm UTC
Contact:

Re: 1132: "Frequentists vs. Bayesians"

Postby Aiwendil » Sat Nov 10, 2012 8:01 pm UTC

Nomadic wrote:I think one of the central things to take away from this cartoon is the notion that deciding on significance (or truth) with a p value of < 0.05 is all too common yet a p value of < 0.05 is only as common as rolling a double six


Actually, I don't think that's the point of the comic at all. It's not that the frequentist test is too weak, or that he should have chosen a different p-value. It's that if you want to estimate how likely it is that a given thing is true, you have to use all the information you have about it - i.e., you have to take the prior probability into account. Let's say the frequentist required a much stricter p-value - say 0.0001, instead of 0.05. And let's say the detector rolls six dice and will only lie if it gets six sixes. The chance of six sixes is 1/6^6 ≈ 0.00002, less than 0.0001, so even with this much stricter test, the frequentist character would still reject the null hypothesis. But, since the prior probability of the sun going nova that particular night is still much less than 0.00002, the Bayesian would still correctly conclude that it probably didn't. So the problem isn't just the wrong p-value test; it's that the character is using the frequentist approach to try to solve a problem that requires the Bayesian approach.
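This stricter-test variant checks out numerically; a sketch in which the six-dice detector and the tiny prior are hypotheticals, as in the paragraph above:

```python
from fractions import Fraction

# Six-dice detector: lies only on six sixes.
p_lie = Fraction(1, 6**6)      # 1/46656, about 0.0000214
assert float(p_lie) < 0.0001   # clears even a p < 0.0001 threshold

# With a prior far below 1/46656, the posterior is still tiny.
prior = Fraction(1, 10**9)     # hypothetical prior for "nova tonight"
p_yes_given_nova = 1 - p_lie
posterior = (p_yes_given_nova * prior) / (
    p_yes_given_nova * prior + p_lie * (1 - prior)
)
print(float(posterior))        # on the order of 5e-05: still "probably not"
```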

J Thomas wrote:The problem isn't frequentists versus bayesians. The problem is bad frequentists versus bad bayesians.


I think I agree, but I think it would be more accurate to say that the problem is applying frequentism in cases where Bayesianism is required, and vice versa. The thing that should be emphasized is that frequentism and Bayesianism answer different questions.

Frequentism answers questions of the form: 'If the true value were X, how likely is it that we would get result Y?'

Bayesianism answers questions of the form: 'Given that we got result Y, how likely is it that the true value is X?'

Beltayn
Posts: 92
Joined: Sat Oct 13, 2012 12:54 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby Beltayn » Sat Nov 10, 2012 11:45 pm UTC

Or you could just call somebody who lives on the opposite side of the earth and ask them if the sun is still there...

Maybe I'm being too much of an experimentalist.

J Thomas
Everyone's a jerk. You. Me. This Jerk.^
Posts: 1190
Joined: Fri Sep 23, 2011 3:18 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby J Thomas » Sun Nov 11, 2012 12:09 am UTC

Beltayn wrote:Or you could just call somebody who lives on the opposite side of the earth and ask them if the sun is still there...

Maybe I'm being too much of an experimentalist.


If you don't get an answer, by the time you're sure you're not going to get an answer it's probably too late to bet.

I guess it's a variation on an old philosophical chestnut. If you had some reason to believe that you have only 3 minutes left to live, what would you do with them? Spend them testing whether it's true that you have only 3 minutes to live? Or make bets that will give you trivial sums of money if you do live....
The Law of Fives is true. I see it everywhere I look for it.

Math_Mage
Posts: 62
Joined: Sat Jul 18, 2009 8:14 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby Math_Mage » Sun Nov 11, 2012 2:49 am UTC

I'm not sure if anyone has pointed this out yet, but it doesn't matter which side of the Earth they're on. If the Sun goes nova, all corners of the Earth will find out at about the same time. Science fiction lovers should check out Larry Niven's short story "Inconstant Moon". That was the only thing that bothered me about this comic.

pareidolon
Posts: 31
Joined: Fri Apr 08, 2011 6:59 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby pareidolon » Sun Nov 11, 2012 3:40 am UTC

mathmannix wrote:
Pingouin7 wrote:
pareidolon wrote:On what I assure you is an entirely unrelated topic,
anyone here believe the world's going to end on December 12th?

Why would anyone believe that?


Yeah, everyone knows it's 21 December! :roll:

Naturally I want to get my money before the landmasses of the earth rotate 90 degrees.

Alan wrote:No no no no.

This isn't clever because he doesn't pay off the bet if he loses.

The Bayesian statistician knows the probability of the sun going nova is very, very small. That makes the chance that the machine rolled double sixes a near certainty (either that or the machine malfunctions).

P(Nova) = 1/1000000000
P(NotNova) = 999999999/1000000000

P(YES|Nova) = 35/36
P(YES|NotNova) = 1/36

P(Nova|Yes) = ( P(Yes|Nova) * P(Nova) ) / (P(Yes|Nova) * P(Nova) + P(Yes|NotNova) * P(NotNova))

P(Nova|Yes) = (35/36000000000) / (35/36000000000 + 999999999/36000000000)

P(Nova|Yes) = (35/36000000000) / (1000000034/36000000000)

P(Nova|Yes) = 35/1000000034

Yeah, thinking this over a couple times, you're probably right that that's the punchline. It's been a while since I last saw that theorem...

User avatar
Velexia
Posts: 196
Joined: Thu Nov 01, 2012 1:10 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby Velexia » Sun Nov 11, 2012 5:28 am UTC

pareidolon wrote:On what I assure you is an entirely unrelated topic,
anyone here believe the world's going to end on December 12th?


I hope aliens visit publicly. I also hope they aren't just the government pretending to be aliens.

But, everything the universe has experienced thus far which directly led up to me being here in this moment suggests that nothing especially dramatic will happen. Also, wasn't it the 21st?

SecondTalon wrote:
We're on to you, SirMustapha.

ALTERNATE JOKE

Holy shit, this forum has a webcomic!?


I'm going with Talon on this one. Still, seems Randall managed to troll a few people with this, including a troll made out of serial numbers. =)
Hail Eris!

User avatar
Coyne
Posts: 1112
Joined: Fri Dec 18, 2009 12:07 am UTC
Location: Orlando, Florida
Contact:

Re: 1132: "Frequentists vs. Bayesians"

Postby Coyne » Sun Nov 11, 2012 6:59 am UTC

Durandal_1707 wrote:
Coyne wrote:A $50 bet with a 1-in-36 chance of winning? And never having to pay off if he loses? Clever, isn't he?

The odds are far better than that; the guy on the left neglected to factor in the probability that the sun would just suddenly go nova five billion years early. I'm not sure what the odds of that are, but I'm sure they're a lot lower than 1/36.


Oh, sure. But still: Free money!
In all fairness...

User avatar
Dmytry
Posts: 68
Joined: Mon Jun 01, 2009 12:39 pm UTC
Contact:

Re: 1132: "Frequentists vs. Bayesians"

Postby Dmytry » Sun Nov 11, 2012 11:17 am UTC

garaden wrote:
Dmytry wrote:Using Bayesian approach, though, you can proclaim that a number you have made up for the sun exploding is truly a probability akin to the probability of a die rolling 6, and then start collecting money for sun explosion prevention fund, claiming that you save something like 10 lives per dollar by multiplying huge number of all the future humans with the low probability of sun exploding, and dividing that by however much money you are hoping to get. If you have a billionaire friend providing a starting capital, and you get enough reach, out of all the people you reach some people will be silly enough to donate. I'm not joking. This is how the abovementioned HPMOR guy is paying his bills, except it is world destroying AI rather than sun exploding.

No no no, totally backwards. Yudkowsky is trying to hasten the world-destroying AI. :D

He used to; my impression is that now he proclaims to be building a world-saving AI that has to come before someone who's not as smart as Yudkowsky, and who is incapable of grasping the necessity of not destroying the world, builds a world-destroying AI. Either way the AI is necessarily a nasty psychopathic utilitarian, except the new nasty psychopathic utilitarian AI is to provably have people's best interests in mind, somehow. This is considered friendly.

User avatar
Velexia
Posts: 196
Joined: Thu Nov 01, 2012 1:10 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby Velexia » Sun Nov 11, 2012 2:37 pm UTC

So, after reading several of SirMustapha's posts, and this:
I do, on the other hand, generally enjoy your endless tirades against Randall and the xkcd fanbase. While I still consider you wrong, your questions that are posed to the fora are generally thought-provoking and well written. Carry on, good sir, carry on!


It seems that he was always here to spark up some kind of discussion about the topic with a kind of Devil's Advocate evil twin persona...

But at what point did people realize it was Randall? (Question mostly directed at Talon, as a somewhat protracted conversation/introduction-to-the-forums thing)
Hail Eris!

wumpus
Posts: 546
Joined: Thu Feb 21, 2008 12:16 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby wumpus » Sun Nov 11, 2012 5:04 pm UTC

I have it on good authority that the actual difference between "Frequentists and Bayesians" cannot be determined except by the differences in handling "ugly integrals" (after reading deeper he also admitted that at least one poster has done a few of those "ugly integrals"). The basic math behind Bayes' theorem isn't exactly a deep dark secret to statisticians (I first heard of the word back when reasonably good spam filters were being created).

As far as the self-proclaimed "Bayesians", all they appear to do with the self-proclaimed super rationality is to ignore their own mistakes and fall into groupthink. The most obvious one is the absolute certainty of the many-worlds hypothesis of quantum physics. It might be becoming more popular, but it certainly violates relativity/causality (think of exactly what happens when the universe splits, and when it happens as observed in various frames of reference) just as much as any more established hypothesis ("Bayesians" simply sweep such issues under the rug). I can see those of a certain young age (the type who are in danger of falling for objectivism) falling for the idea that our Bayesian cartoon statistician has the secret of the universe, but others can get tired of such groupthink.

Still, "Harry Potter and the Methods of Rationality" is not to be missed. Careful plugging of as many plot holes and contradictions of Harry Potter's world as possible, and coming up with, well, anything, can be loads of fun. Just remember that Harry would be better off with "Feynman-style" rationality rather than the "assume you are always right" method that the Bayesians use. Also remember that this isn't better proof than Ayn Rand's "it worked in my carefully controlled novel" evidence.

kobayashimaru3
Posts: 11
Joined: Fri Feb 26, 2010 1:06 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby kobayashimaru3 » Sun Nov 11, 2012 7:28 pm UTC

Alan wrote:No no no no.

This isn't clever because he doesn't pay off the bet if he loses.

The Bayesian statistician knows the probability of the sun going nova is very, very small. That makes the chance that the machine rolled double sixes a near certainty (either that or the machine malfunctions).

P(Nova) = 1/1000000000
P(NotNova) = 999999999/1000000000

P(YES|Nova) = 35/36
P(YES|NotNova) = 1/36

P(Nova|Yes) = ( P(Yes|Nova) * P(Nova) ) / (P(Yes|Nova) * P(Nova) + P(Yes|NotNova) * P(NotNova))

P(Nova|Yes) = (35/36000000000) / (35/36000000000 + 999999999/36000000000)

P(Nova|Yes) = (35/36000000000) / (1000000034/36000000000)

P(Nova|Yes) = 35/1000000034


According to these numbers, these are still better odds than winning the Powerball. http://www.powerball.com/powerball/pb_prizes.asp
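If anyone wants to check that arithmetic, here's a quick sketch (mine, not from the comic, reusing the same made-up 1-in-a-billion prior) that reproduces the posterior with exact fractions:

```python
# Bayes' rule with exact rational arithmetic, so nothing gets lost to rounding.
from fractions import Fraction

p_nova = Fraction(1, 10**9)          # assumed prior: the sun actually went nova
p_not_nova = 1 - p_nova

p_yes_given_nova = Fraction(35, 36)  # detector tells the truth unless it rolls double sixes
p_yes_given_not = Fraction(1, 36)    # detector lies (says YES) only on double sixes

# P(Nova | YES) = P(YES | Nova) P(Nova) / P(YES)
p_yes = p_yes_given_nova * p_nova + p_yes_given_not * p_not_nova
p_nova_given_yes = (p_yes_given_nova * p_nova) / p_yes

print(p_nova_given_yes)              # 35/1000000034, matching the post above
```

That's about 1 in 28.6 million, which is indeed shorter odds than the Powerball jackpot's roughly 1 in 175 million at the time.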

Aiwendil
Posts: 314
Joined: Thu Apr 07, 2011 8:53 pm UTC
Contact:

Re: 1132: "Frequentists vs. Bayesians"

Postby Aiwendil » Sun Nov 11, 2012 11:07 pm UTC

wumpus wrote:As far as the self-proclaimed "Bayesians", all they appear to do with the self-proclaimed super rationality is to ignore their own mistakes and fall into groupthink. The most obvious one is the absolute certainty of the many worlds hypothesis of quantum physics.


I'm confused. I've never met a self-proclaimed Bayesian who insisted with absolute certainty on the many-worlds interpretation. Actually, I don't think I've met anyone who insists with absolute certainty on the MWI. What has Bayesianism to do with MWI?

User avatar
boXd
Posts: 196
Joined: Thu Aug 09, 2012 7:05 pm UTC
Location: The Netherlands

Re: 1132: "Frequentists vs. Bayesians"

Postby boXd » Sun Nov 11, 2012 11:17 pm UTC

Aiwendil wrote:
wumpus wrote:As far as the self-proclaimed "Bayesians", all they appear to do with the self-proclaimed super rationality is to ignore their own mistakes and fall into groupthink. The most obvious one is the absolute certainty of the many worlds hypothesis of quantum physics.


I'm confused. I've never met a self-proclaimed Bayesian who insisted with absolute certainty on the many-worlds interpretation. Actually, I don't think I've met anyone who insists with absolute certainty on the MWI. What has Bayesianism to do with MWI?


Since wumpus refers to HPMOR, I think he's talking about Yudkowsky & co. This has little to do with any issues with Bayes' theorem, and more with that particular group of people who like to promote it.

User avatar
Coyne
Posts: 1112
Joined: Fri Dec 18, 2009 12:07 am UTC
Location: Orlando, Florida
Contact:

Re: 1132: "Frequentists vs. Bayesians"

Postby Coyne » Sun Nov 11, 2012 11:25 pm UTC

kobayashimaru3 wrote:
Alan wrote:No no no no.

This isn't clever because he doesn't pay off the bet if he loses.

The Bayesian statistician knows the probability of the sun going nova is very, very small. That makes the chance that the machine rolled double sixes a near certainty (either that or the machine malfunctions).

P(Nova) = 1/1000000000
P(NotNova) = 999999999/1000000000

P(YES|Nova) = 35/36
P(YES|NotNova) = 1/36

P(Nova|Yes) = ( P(Yes|Nova) * P(Nova) ) / (P(Yes|Nova) * P(Nova) + P(Yes|NotNova) * P(NotNova))

P(Nova|Yes) = (35/36000000000) / (35/36000000000 + 999999999/36000000000)

P(Nova|Yes) = (35/36000000000) / (1000000034/36000000000)

P(Nova|Yes) = 35/1000000034


According to these numbers, these are still better odds than winning the Powerball. http://www.powerball.com/powerball/pb_prizes.asp


All this talk-talk-talk about the probabilities. Y'all are missing the big picture here. If the Sun went nova: No Sun, no Earth, no Frequentists, no Bayesians, no debts...and no welchers. 'Course he loses everything along with everyone else, but at least his bet isn't a loser. So with respect to the bet: He either is a winner, for $50, or all bets are off.
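Coyne's "free money" point can be put in expected-value terms. A small sketch (my framing, reusing the posterior derived upthread), where a lost bet simply never gets collected:

```python
from fractions import Fraction

# Posterior that the sun really did go nova, given the detector said YES
# (same figure derived earlier in the thread).
p_nova_given_yes = Fraction(35, 1000000034)

win = 50        # dollars collected if the sun is still there tomorrow
loss_paid = 0   # dollars actually paid out after a nova: nobody's left to collect

expected_take = (1 - p_nova_given_yes) * win + p_nova_given_yes * loss_paid
print(float(expected_take))   # ~49.999998: effectively a free $50
```

The downside branch contributes nothing, so the bet's expected value is just a hair under the full $50.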

-----

Then, of course, there's the fact that, per our best understanding, our Sun can't go nova. It's not massive enough. It'll eventually turn into a red giant and boil Earth away, but no nova. Oh, I suppose if the residual white dwarf of our Sun wound up in orbit around a red giant and picked up enough mass, it could wind up as a type Ia supernova, but (a) we'll all be long gone before then and (b) the chances of that are unbelievably minuscule, probably unlikely to happen before the Big Crunch or Big Rip (whichever is the final destiny). I'd guess (with no real basis) the chances are not better than 1 in 10^18.
In all fairness...

J Thomas
Everyone's a jerk. You. Me. This Jerk.^
Posts: 1190
Joined: Fri Sep 23, 2011 3:18 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby J Thomas » Sun Nov 11, 2012 11:52 pm UTC

Coyne wrote:Then, of course, there's the fact that, per our best understanding, our Sun can't go nova. It's not massive enough.


Thank you for that qualifier. Every time I see somebody post "Our Sun can never go nova. Science has proven that once and for all" it makes my teeth itch.
The Law of Fives is true. I see it everywhere I look for it.

garaden
Posts: 18
Joined: Thu Aug 11, 2011 3:40 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby garaden » Sun Nov 11, 2012 11:56 pm UTC

wumpus wrote:As far as the self-proclaimed "Bayesians", all they appear to do with the self-proclaimed super rationality is to ignore their own mistakes and fall into groupthink. The most obvious one is the absolute certainty of the many-worlds hypothesis of quantum physics. It might be becoming more popular, but it certainly violates relativity/causality (think of exactly what happens when the universe splits, and when it happens as observed in various frames of reference) just as much as any more established hypothesis ("Bayesians" simply sweep such issues under the rug). I can see those of a certain young age (the type who are in danger of falling for objectivism) falling for the idea that our Bayesian cartoon statistician has the secret of the universe, but others can get tired of such groupthink.

Still, "Harry Potter and the Methods of Rationality" is not to be missed. Careful plugging of as many plot holes and contradictions of Harry Potter's world as possible, and coming up with, well, anything, can be loads of fun. Just remember that Harry would be better off with "Feynman-style" rationality rather than the "assume you are always right" method that the Bayesians use. Also remember that this isn't better proof than Ayn Rand's "it worked in my carefully controlled novel" evidence.


Yeah, the "many worlds" thing is my single biggest beef with Yudkowsky: I feel like he's really underestimating how much he doesn't know about physics compared to a professional physics researcher, when he gets indignant about the physics community disagreeing with him.

And I can see why people see Less Wrong as creepy: Yudkowsky's basically in charge (though there are other prominent voices), Bayesianism is a Super Happy Thing, and he does take donations. But I have to protest the charge of groupthink. The whole point of Bayesianism is to learn about mental mistakes like groupthink and generalization from fictional evidence, so we can watch out for them and get rid of them.

Sure, obviously, we could just be paying lip service to those ideas while being blind to our own hypocrisy. But we really do try not to feel overly enthusiastic about Bayesianism/Less Wrong/Yudkowsky for that reason. It's heavily emphasized that if you fail to look for heuristics and biases in your own most precious arguments, then you're missing the point. And if, God help you, you look for heuristics and biases in other people's arguments instead...then congratulations, you managed to learn something that made you dumber.

User avatar
dudiobugtron
Posts: 1098
Joined: Mon Jul 30, 2012 9:14 am UTC
Location: The Outlier

Re: 1132: "Frequentists vs. Bayesians"

Postby dudiobugtron » Mon Nov 12, 2012 12:19 am UTC

J Thomas wrote:
Coyne wrote:Then, of course, there's the fact that, per our best understanding, our Sun can't go nova. It's not massive enough.


Thank you for that qualifier. Every time I see somebody post "Our Sun can never go nova. Science has proven that once and for all" it makes my teeth itch.

I was going to reply to this thread and make this point, but I can rest easy now that you already have.

I guess I'll just follow it up with my proposed solution to the 'it's not massive enough' problem - just aim a giant water hose at it. ;)

wumpus
Posts: 546
Joined: Thu Feb 21, 2008 12:16 am UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby wumpus » Mon Nov 12, 2012 1:13 am UTC

garaden wrote:
wumpus wrote:As far as the self-proclaimed "Bayesians", all they appear to do with the self-proclaimed super rationality is to ignore their own mistakes and fall into groupthink.
[rest of my blather deleted as it looks increasingly one sided]


Yeah, the "many worlds" thing is my single biggest beef with Yudkowsky: I feel like he's really underestimating how much he doesn't know about physics compared to a professional physics researcher, when he gets indignant about the physics community disagreeing with him.


Yes, I meant Yudkowsky and crew. I am not exactly sure who else self-identifies with "Bayesian" enough to call themselves that, although it may sound better than "probabilist". I like probability functions; they fit well into Boolean logic. I also have little interest in ugly integrals, so I feel no need to involve myself in such disagreements.

I will also agree that my assessment was largely too harsh. Mostly I spent a great deal of time reading through their materials and read plenty of interesting articles on cognitive biases (unfortunately those have to be held as "undecided", considering how fast and loose they play with the truth in the few places I could check). I will admit that the name "less wrong" seems appropriate, as they seem to fall into almost as many biases as they teach about, but the ranting about things like "many worlds" shows that they need to take the log out of their own eyes before preaching on biases.

User avatar
Coyne
Posts: 1112
Joined: Fri Dec 18, 2009 12:07 am UTC
Location: Orlando, Florida
Contact:

Re: 1132: "Frequentists vs. Bayesians"

Postby Coyne » Mon Nov 12, 2012 2:49 am UTC

J Thomas wrote:
Coyne wrote:Then, of course, there's the fact that, per our best understanding, our Sun can't go nova. It's not massive enough.


Thank you for that qualifier. Every time I see somebody post "Our Sun can never go nova. Science has proven that once and for all" it makes my teeth itch.

You're welcome. Lots of people, including scientists, forget that this stuff is still theory...excellent theory, but theory...

dudiobugtron wrote:I was going to reply to this thread and make this point, but I can rest easy now that you already have.

I guess I'll just follow it up with my proposed solution to the 'it's not massive enough' problem - just aim a giant water hose at it. ;)


Yep, that ought to heat things right up. ;)
In all fairness...

User avatar
addams
Posts: 10336
Joined: Sun Sep 12, 2010 4:44 am UTC
Location: Oregon Coast: 97444

Re: 1132: "Frequentists vs. Bayesians"

Postby addams » Mon Nov 12, 2012 3:41 am UTC

Nova, Supernova, Red Giant, Expanding Earth, Venus-like weather by 2020.

Everyone has an end myth; Or, Theroy.

Science is fun, because we have so many. Some are very real.
Not much the smartest or the bravest can do about any of it.

So; We keep records. We make guesses. We watch for change. We make jokes. We tell each other stories. Sometimes, when we are fortunate, we hold hands and wonder.

What else can we do?

Hot enough for you?
Life is, just, an exchange of electrons; It is up to us to give it meaning.

We are all in The Gutter.
Some of us see The Gutter.
Some of us see The Stars.
by mr. Oscar Wilde.

Those that want to Know; Know.
Those that do not Know; Don't tell them.
They do terrible things to people that Tell Them.

User avatar
PM 2Ring
Posts: 3715
Joined: Mon Jan 26, 2009 3:19 pm UTC
Location: Sydney, Australia

Re: 1132: "Frequentists vs. Bayesians"

Postby PM 2Ring » Mon Nov 12, 2012 6:34 am UTC

addams wrote:Everyone has an end myth; Or, Theroy.


I love a good theroy.

User avatar
San Fran Sam
Posts: 228
Joined: Tue Nov 15, 2011 5:54 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby San Fran Sam » Mon Nov 12, 2012 7:12 am UTC

J Thomas wrote:
Coyne wrote:Then, of course, there's the fact that, per our best understanding, our Sun can't go nova. It's not massive enough.


Thank you for that qualifier. Every time I see somebody post "Our Sun can never go nova. Science has proven that once and for all" it makes my teeth itch.


Sure it can.

Spoiler:
Didn't you see the last episode of Babylon 5?

rmsgrey
Posts: 3655
Joined: Wed Nov 16, 2011 6:35 pm UTC

Re: 1132: "Frequentists vs. Bayesians"

Postby rmsgrey » Mon Nov 12, 2012 3:46 pm UTC

San Fran Sam wrote:
J Thomas wrote:
Coyne wrote:Then, of course, there's the fact that, per our best understanding, our Sun can't go nova. It's not massive enough.


Thank you for that qualifier. Every time I see somebody post "Our Sun can never go nova. Science has proven that once and for all" it makes my teeth itch.


Sure it can.

Spoiler:
Didn't you see the last episode of Babylon 5?


I think you meant the last episode of the penultimate season of [name of show] - the last episode would either be the final episode of the final season, or the penultimate episode of the final season (since the final episode was produced as the last episode of the production run for the penultimate season).

Anyway, causing (or threatening to cause) our sun to go nova is a well-established way for sufficiently-advanced aliens to eliminate us...

User avatar
Dmytry
Posts: 68
Joined: Mon Jun 01, 2009 12:39 pm UTC
Contact:

Re: 1132: "Frequentists vs. Bayesians"

Postby Dmytry » Mon Nov 12, 2012 5:57 pm UTC

Aiwendil wrote:
wumpus wrote:As far as the self-proclaimed "Bayesians", all they appear to do with the self-proclaimed super rationality is to ignore their own mistakes and fall into groupthink. The most obvious one is the absolute certainty of the many worlds hypothesis of quantum physics.


I'm confused. I've never met a self-proclaimed Bayesian who insisted with absolute certainty on the many-worlds interpretation. Actually, I don't think I've met anyone who insists with absolute certainty on the MWI. What has Bayesianism to do with MWI?

This mostly comes from one somewhat infamous non-mathematician, non-physicist "rationalist" "bayesian" self-publishing an incredible amount of nonsense such as this, and his groupthink club. If you google "what is Bayesianism", they are the top link. How Bayes is supposed to be in favour of MWI is never quite explained, but rest assured, the physicists don't believe in many worlds because they do not know of Bayes' theorem, unlike the awesome autodidact. (I don't know anyone else who makes a point of describing oneself as Bayesian, given that the probabilistic intuition people have is, generally, Bayesian, and Bayes' theorem is something a lot of people would just re-derive because they were too lazy to study for the exam.)

The same guy runs a ~$1-million-per-year, 3-ish-person "research charity" that is supposedly developing friendly artificial intelligence to save the world from someone supposedly less bright than this guy, someone who would otherwise create a world-destroying AI, and mostly runs the conferences that refer to this guy as "world's foremost expert in friendly artificial intelligence and recursive self improvement". Just a passably bright L. Ron Hubbard copy-cat. Edit: re the above, 'takes donations' is quite an understatement when it comes to describing a group of people who argue they save something around 8 lives per dollar, ask followers to work hard and donate everything they can to the cause, and argue that all charitable donations should go to one (their) cause because you should, obviously, donate to the cause with the largest "expected utility".

