Dmytry wrote: You are to assign a non-zero, not even very small probability that it is totally, completely correct, to a theory, that it may literally be the great truth of how universe works.
To any theory that is consistent and not (yet) directly contradicted by empirical data, yes. It seems perfectly reasonable to me to assign a non-zero probability to any such theory; and on the contrary it seems quite unreasonable to assign any such theory a probability of exactly zero. I'm not sure why you think the probability will be "not even very small"; it could be very small indeed.
Physics is not like that, it is about making approximations that work, there's no guarantee the mathematics even in principle allows to capture the ultimate structure of the universe perfectly and there's no justification that it would.
Here we are getting into philosophical territory, and I suspect we may have very different ideas about what the nature of science fundamentally is. For what it's worth, I hold that the aim of science is to generate an accurate description of phenomena. And in my view, an exactly correct description of the phenomena is necessarily possible in principle, though there's no a priori guarantee that the correct description will be simple in the way that real scientific theories do, in fact, prove to be. But even if I were to grant the possibility that the universe is not exactly describable, even in principle, it still would obviously not follow that we can be sure that it is not describable. Thus, we still could not assign a zero probability to any particular theory, as long as that theory is consistent and non-contradicted.
Dogmatic crazy religious beliefs in spite of evidence, that's Bayesian. Set probability of 1 to something and you'll never rid of it!
Sure, if you pick crazy priors, you will be crazy.
But there was this solution for getting rid of wrong ideas: if you believe in something really hard, you can still commit to risk of stopping believing in it, at one chance in, say, a billion or a billion billion
Exactly. But what you've just described is perfectly Bayesian. All you're saying is 'don't set any prior to exactly 1 (or exactly 0)'.
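To make that concrete, here's a minimal sketch (with made-up likelihood numbers) of why a prior of exactly 1 is immune to any evidence, while reserving even one chance in a billion lets contrary data eventually change your mind:

```python
# Bayes's theorem applied repeatedly. A prior of exactly 1 never moves,
# no matter how strongly the data disfavors the hypothesis; a prior of
# 1 - 1e-9 is driven toward 0 by repeated contrary evidence.
# The likelihood values are invented purely for illustration.

def posterior(prior, p_data_given_h, p_data_given_not_h):
    """P(H|D) = P(D|H) P(H) / P(D)."""
    p_data = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
    return p_data_given_h * prior / p_data

# Evidence strongly against H: the data is 1000x likelier if H is false.
dogmatic = posterior(1.0, 0.001, 1.0)
print(dogmatic)  # stays exactly 1.0

p = 1 - 1e-9  # reserves one chance in a billion of being wrong
for _ in range(5):  # five rounds of the same contrary evidence
    p = posterior(p, 0.001, 1.0)
print(p)  # now tiny: the belief has been abandoned
```

The dogmatic prior survives untouched because the P(H) = 1 term swamps the denominator; the near-1 prior loses a factor of a thousand in odds with each round of contrary data.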
Then, the practical matters, which are actually very important too. How good is it, actually, to have a lot of hypotheses in parallel being assigned probabilities? You want to predict anything, you have to evaluate them all, and the combinations blow up so fast it is not possible to do anything even if you were a Dyson sphere brain.
But being a Bayesian doesn't mean that you have to actually compute an update to your probabilities for every possible hypothesis every time you get a new piece of data. Nor does it mean that, in practice, to find P(x), you have to do a sum/integral over P(x|y) where y ranges over all possible hypotheses. That would be like saying that believing in the Standard Model requires you to do an enormous QFT calculation including every particle in the known universe if you simply want to know, say, how much stress a piece of rope can withstand before it breaks. It's perfectly fine to ignore terms in the probability sum if you expect them to make a negligible contribution. It's also fine to restrict yourself to a certain domain D of hypotheses and calculate P(x|D) instead of P(x), if you want. But it's well to recognize in that case that what you've calculated is P(x|D) rather than P(x).
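A toy numerical version of the P(x|D) point, with entirely invented hypotheses and probabilities: restricting to a domain D means renormalizing the priors within D, and when the hypotheses outside D carry negligible prior mass, P(x|D) lands very close to P(x) while remaining a distinct quantity.

```python
# Illustrative priors P(h) and likelihoods P(x|h); "exotic" hypotheses
# carry negligible prior mass.
priors = {"h1": 0.600, "h2": 0.399, "exotic": 0.001}
likelihood_x = {"h1": 0.9, "h2": 0.5, "exotic": 0.2}

# Full sum: P(x) = sum over all hypotheses y of P(x|y) P(y)
p_x = sum(likelihood_x[h] * priors[h] for h in priors)

# Restricted domain D = {h1, h2}: P(x|D) = sum over y in D of P(x|y) P(y|D)
D = ["h1", "h2"]
p_D = sum(priors[h] for h in D)
p_x_given_D = sum(likelihood_x[h] * priors[h] / p_D for h in D)

print(round(p_x, 4), round(p_x_given_D, 4))  # nearly equal, not identical
```

Dropping the negligible term changes the answer only in the third decimal place here, which is exactly the sense in which ignoring tiny contributions is "perfectly fine" in practice.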
How do you propose we set priors in physics, by the way?
Oh, don't mistake me - I think that the problem of setting priors is a very serious and a very deep one. In fact, this is the old problem of induction identified by Hume - or what's left of it once you have Bayes's theorem. But one does not solve or avoid the problem by adopting a non-Bayesian approach; one merely sweeps it under the rug.
Consider an example where we've done an experiment to try to determine some physical constant and we're now trying to set a 90% confidence limit on the value of that constant. I've encountered physicists who seem to think that by just doing a p-value test and not explicitly including priors, they are somehow on surer epistemic footing than those who take a Bayesian approach. But doing that p-value test is exactly equivalent to picking a certain set of values for the priors - namely, a uniform distribution over values of the constant we're measuring - and then using Bayes's theorem. All the frequentist has done is make the choice of priors implicit rather than explicit, but they sometimes act as though this has given them the moral high ground.
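You can check this equivalence numerically in the simplest case. Under Gaussian assumptions (an illustrative measurement x with uncertainty sigma, both values invented here), the Bayesian 90% central credible interval computed with a flat prior coincides with the standard frequentist interval x ± 1.645σ:

```python
import math

x_obs, sigma = 3.2, 0.5  # illustrative measurement and uncertainty

# With a flat prior, the posterior for the true value is proportional to
# the likelihood, a Gaussian centered on x_obs. Find the central 90%
# interval by numerical integration on a fine grid.
lo, hi, n = x_obs - 6 * sigma, x_obs + 6 * sigma, 200_001
dx = (hi - lo) / (n - 1)
grid = [lo + i * dx for i in range(n)]
post = [math.exp(-0.5 * ((m - x_obs) / sigma) ** 2) for m in grid]
norm = sum(post) * dx

cdf, lo90, hi90 = 0.0, None, None
for m, p in zip(grid, post):
    cdf += p * dx / norm
    if lo90 is None and cdf >= 0.05:
        lo90 = m
    if hi90 is None and cdf >= 0.95:
        hi90 = m

freq = (x_obs - 1.645 * sigma, x_obs + 1.645 * sigma)
print((round(lo90, 3), round(hi90, 3)))
print(tuple(round(v, 3) for v in freq))  # same interval
```

The two intervals agree to the resolution of the grid: the frequentist procedure here just is Bayes's theorem with a uniform prior, whether or not the prior is written down.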
Now, I don't think this actually matters terribly much in practice, at least in the long run. Setting the priors to be a uniform distribution in some parameter of your theory is usually a pretty reasonable thing to do, so the frequentist approach is usually not crazy. And in practice, if you do the right experiments and get good statistics, then you will converge on the same result as long as your priors are not pathological.
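The convergence claim is easy to see in a coin-flipping toy model (the data here is invented): two Bayesians with very different, but non-pathological, Beta priors on a coin's bias end up with nearly identical posteriors once the statistics are good.

```python
# Pretend data: 1000 flips of a coin with true bias around 0.62.
heads, tails = 620, 380

# A Beta(a, b) prior updates to Beta(a + heads, b + tails);
# the posterior mean is (a + heads) / (a + b + heads + tails).
def posterior_mean(a, b):
    return (a + heads) / (a + b + heads + tails)

optimist = posterior_mean(20, 1)  # prior strongly favoring heads
skeptic = posterior_mean(1, 1)    # uniform prior
print(round(optimist, 3), round(skeptic, 3))  # both near 0.62
```

After a thousand flips, the priors are worth at most a couple of percent: the data dominates, which is why the implicit-flat-prior habit rarely causes trouble in the long run.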
I guess what bothers me about (some) physicists who adopt something of an anti-Bayesian stance is two things, one logical or perhaps terminological, and the other practical.
1. Some physicists tend to talk as if P(data|hypothesis) and P(hypothesis|data) were the same thing. On a certain assumption about the priors, those two things will have the same value, but they are different entities. And if you want to calculate P(hypothesis|data), then you have to make some assumption about priors - even if that assumption is just a uniform distribution.
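The standard base-rate example (with illustrative numbers) shows just how far apart the two quantities can be when the prior is not uniform:

```python
# P(data|hypothesis) versus P(hypothesis|data) with a low base rate.
# All numbers are invented for illustration.
p_h = 0.001                # prior: hypothesis true in 1 of 1000 cases
p_data_given_h = 0.99      # the data is very likely if the hypothesis holds
p_data_given_not_h = 0.05  # ...but also fairly likely otherwise

p_data = p_data_given_h * p_h + p_data_given_not_h * (1 - p_h)
p_h_given_data = p_data_given_h * p_h / p_data
print(p_data_given_h, round(p_h_given_data, 4))  # 0.99 versus about 0.02
```

A 99% likelihood becomes a roughly 2% posterior once the prior is taken into account, so conflating the two is not a harmless shorthand.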
2. In a few cases - namely, when we do actually have good reason to assume some particular priors - the non-Bayesian simply gets the wrong answer. For example, there have been several experiments designed to put constraints on the mass of the neutrino where the best fit value has ended up being a negative mass-squared (i.e. an imaginary value for the mass). Some of these have, using frequentist statistics, published confidence intervals that lie partially or completely in the negative mass-squared region. But surely it's absurd in this case to assume a uniform prior distribution over positive and negative values of mass-squared, since the negative values are unphysical (i.e. if you put an imaginary value in for the mass of a particle, the theory becomes incoherent). The thing to do, surely, is to set the prior probability of an imaginary mass to zero. I mean, by all means, acknowledge that your best fit value is negative. But if you want to talk about a confidence limit on the true value for the mass - that is, if you want to ask 'in light of this data, what can I say about what mass the neutrino is likely to have?' - then don't give unphysical values a non-zero probability.
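A sketch of the neutrino-style situation, with an invented best-fit value and uncertainty: when the Gaussian likelihood peaks at negative mass-squared, a flat prior over all m² puts much of the 90% interval in the unphysical region, while a prior that vanishes for m² < 0 keeps the interval physical.

```python
import math

m2_fit, sigma = -1.0, 1.0  # illustrative best-fit m^2 (unphysical) and error

def central_interval(lo, hi, prior):
    """90% central credible interval for a Gaussian likelihood times prior."""
    n = 200_001
    dx = (hi - lo) / (n - 1)
    grid = [lo + i * dx for i in range(n)]
    post = [prior(m) * math.exp(-0.5 * ((m - m2_fit) / sigma) ** 2)
            for m in grid]
    norm = sum(post) * dx
    cdf, a, b = 0.0, None, None
    for m, p in zip(grid, post):
        cdf += p * dx / norm
        if a is None and cdf >= 0.05:
            a = m
        if b is None and cdf >= 0.95:
            b = m
    return a, b

naive = central_interval(-8.0, 8.0, lambda m: 1.0)            # flat everywhere
physical = central_interval(-8.0, 8.0, lambda m: float(m >= 0.0))
print(tuple(round(v, 2) for v in naive))     # dips well below zero
print(tuple(round(v, 2) for v in physical))  # entirely at m^2 >= 0
```

The data (the best-fit value) is the same in both cases; only the prior changes. The truncated prior simply encodes the fact that an imaginary neutrino mass is not a coherent possibility, and the resulting interval answers the question actually being asked.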
But look, if what you're saying is that there are times when it's appropriate to use Bayes's theorem and times when it's not, then I completely agree with you - even if, perhaps, we'd disagree about whether it's the appropriate thing to use in certain specific instances.