In the last lecture, we discussed the following problem.
Example 3: Inference from measurements
Frequentist Solution
Here is the standard frequentist solution to this problem. Use the model:

$$y_i = \mu + \epsilon_i, \qquad i = 1, \dots, n, \tag{1}$$

where $\epsilon_1, \dots, \epsilon_n$ are i.i.d. $N(0, \sigma^2)$. It then follows that $\bar{y} \sim N(\mu, \sigma^2/n)$, which implies:

$$\frac{\sqrt{n}(\bar{y} - \mu)}{\sigma} \sim N(0, 1). \tag{2}$$

If $\sigma$ is known, this gives the $100(1-\alpha)\%$ confidence interval for $\mu$:

$$\left[\bar{y} - z_{\alpha/2} \frac{\sigma}{\sqrt{n}}, \; \bar{y} + z_{\alpha/2} \frac{\sigma}{\sqrt{n}}\right] \tag{3}$$

(here $z_{\alpha/2}$ satisfies $P\{Z > z_{\alpha/2}\} = \alpha/2$ for $Z \sim N(0,1)$). But this confidence interval cannot be computed as $\sigma$ is unknown. It is natural to replace $\sigma$ by the natural estimator:

$$s := \sqrt{\frac{1}{n-1} \sum_{i=1}^n (y_i - \bar{y})^2}.$$

But then the normal distribution in (2) needs to be changed to the Student $t$-distribution with $n-1$ degrees of freedom:

$$\frac{\sqrt{n}(\bar{y} - \mu)}{s} \sim t_{n-1}. \tag{4}$$

This leads to the confidence interval:

$$\left[\bar{y} - t_{n-1, \alpha/2} \frac{s}{\sqrt{n}}, \; \bar{y} + t_{n-1, \alpha/2} \frac{s}{\sqrt{n}}\right]. \tag{5}$$

When one plugs the observed data (with $n = 6$) into the above interval, one obtains a concrete numerical interval for $\mu$.
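As a quick numerical companion, interval (5) can be computed in a few lines of Python. The six numbers below are made-up placeholder data (the actual measurements from the lecture are not reproduced in these notes), and `scipy` supplies the $t$ quantile:

```python
import math
from scipy import stats

# Hypothetical data: six measurements, purely illustrative (not the
# numbers used in the lecture).
y = [29.0, 27.0, 31.0, 28.0, 26.0, 24.0]
n = len(y)
alpha = 0.05

ybar = sum(y) / n                                          # sample mean
s = math.sqrt(sum((v - ybar) ** 2 for v in y) / (n - 1))   # estimator of sigma

# Upper alpha/2 critical value of the Student t distribution with n-1 df
tcrit = stats.t.ppf(1 - alpha / 2, df=n - 1)

lo = ybar - tcrit * s / math.sqrt(n)
hi = ybar + tcrit * s / math.sqrt(n)
print(f"95% confidence interval for mu: [{lo:.3f}, {hi:.3f}]")
```

The same interval could be obtained in one call via `stats.t.interval`; the explicit version above mirrors formula (5) term by term.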
Bayesian Solution
The Bayesian solution also leads to the same interval but with a different reasoning. We went over the calculations in the last lecture. Here are the main facts. We use the likelihood:

$$L(\mu, \sigma) = \prod_{i=1}^n \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(y_i - \mu)^2}{2\sigma^2}\right), \tag{6}$$

where $y_1, \dots, y_n$ denote the observed data points.
The unknown parameters are $\mu$ and $\sigma$. The prior is given by:

$$\mu \sim \text{Unif}(-C, C) \quad \text{independently of} \quad \log \sigma \sim \text{Unif}(-C, C), \tag{7}$$

for a very large positive constant $C$. In terms of the densities, (7) is the same as

$$\pi(\mu, \sigma) \propto \frac{1}{\sigma} \, I\{-C < \mu < C\} \, I\{e^{-C} < \sigma < e^{C}\}. \tag{8}$$
For this model, the posterior becomes:

$$\pi(\mu, \sigma \mid \text{data}) \propto \frac{1}{\sigma^{n+1}} \exp\left(-\frac{\sum_{i=1}^n (y_i - \mu)^2}{2\sigma^2}\right) I\{-C < \mu < C\} \, I\{e^{-C} < \sigma < e^{C}\}. \tag{9}$$

This is the joint posterior density of $\mu$ and $\sigma$. The posterior of $\mu$ alone is obtained by integrating out $\sigma$:

$$\pi(\mu \mid \text{data}) \propto I\{-C < \mu < C\} \int_{e^{-C}}^{e^{C}} \frac{1}{\sigma^{n+1}} \exp\left(-\frac{\sum_{i=1}^n (y_i - \mu)^2}{2\sigma^2}\right) d\sigma. \tag{10}$$

Because $C$ is large, the limits of the integral can be taken to be $0$ and $\infty$. The integral can then be calculated exactly to obtain

$$\pi(\mu \mid \text{data}) \propto \left(S(\mu)\right)^{-n/2} I\{-C < \mu < C\}, \tag{11}$$

where $S(\mu)$ is the sum of squares term:

$$S(\mu) := \sum_{i=1}^n (y_i - \mu)^2. \tag{12}$$
If $C$ is large, then the indicator can be dropped (because it will essentially always be 1), so the posterior becomes:

$$\pi(\mu \mid \text{data}) \propto \left(S(\mu)\right)^{-n/2} = \left(S(\bar{y}) + n(\mu - \bar{y})^2\right)^{-n/2}, \tag{13}$$

where $\bar{y}$ is the least squares estimator (which minimizes $S(\mu)$ over all $\mu$). Thus the posterior mode is the sample mean $\bar{y}$.

It can be shown that this distribution is closely related to the $t$-distribution. Specifically,

$$\frac{\sqrt{n}(\mu - \bar{y})}{s} \,\Big|\, \text{data} \; \sim \; t_{n-1}, \tag{14}$$

where $t_{n-1}$ denotes the $t$-density with $n-1$ degrees of freedom. Note that $\bar{y} = \frac{1}{n}\sum_{i=1}^n y_i$ and $s^2 = \frac{S(\bar{y})}{n-1}$.
So the Bayesian point estimate of $\mu$ is simply $\bar{y}$ (this is the posterior mean, median and mode!). A $100(1-\alpha)\%$ uncertainty interval for $\mu$ is given by:

$$\left[\bar{y} - t_{n-1, \alpha/2} \frac{s}{\sqrt{n}}, \; \bar{y} + t_{n-1, \alpha/2} \frac{s}{\sqrt{n}}\right], \tag{15}$$

where $t_{n-1, \alpha/2}$ is the $1 - \alpha/2$ quantile of the Student $t$-distribution with $n-1$ degrees of freedom. This uncertainty interval is referred to as the Bayesian Credible Interval.
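The relation between the posterior of $\mu$ and the $t$-distribution can be checked numerically. The sketch below (standard-library Python, with arbitrary made-up data) verifies that $S(\mu)^{-n/2}$ is exactly proportional to the $t_{n-1}$ density of $\sqrt{n}(\mu - \bar{y})/s$ by checking that their ratio is constant over a grid of $\mu$ values:

```python
import math

# Arbitrary illustrative data (not the lecture's measurements).
y = [29.0, 27.0, 31.0, 28.0, 26.0, 24.0]
n = len(y)
ybar = sum(y) / n

def S(m):
    """Sum of squares S(mu)."""
    return sum((v - m) ** 2 for v in y)

s = math.sqrt(S(ybar) / (n - 1))

def t_density(t, df):
    """Student t density with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + t * t / df) ** (-(df + 1) / 2)

# If the t-posterior claim holds, S(mu)^(-n/2) divided by the t density of
# sqrt(n)(mu - ybar)/s is the same constant for every mu.
grid = [ybar + 0.25 * k for k in range(-20, 21)]
ratios = [S(m) ** (-n / 2) / t_density(math.sqrt(n) * (m - ybar) / s, n - 1)
          for m in grid]
print(max(ratios) / min(ratios))  # 1.0 up to floating-point rounding
```

The constancy of the ratio is exactly the algebraic identity $S(\mu) = S(\bar{y})\left(1 + \frac{n(\mu-\bar{y})^2}{(n-1)s^2 }\cdot\frac{1}{1}\right)^{\vphantom{1}}$ used in (13)-(14).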
Thus, in problem 1, the standard frequentist and Bayesian solutions coincide.
Frequentist vs Bayes
However, it is very easy to break this coincidence. For example, consider the following problem (call it Problem 2): the measurements are collected one at a time, and the experimenter stops as soon as an observation smaller than 25 is obtained. Note that the observed data are exactly the same as before.
The frequentist confidence interval (5) is no longer valid, because the frequentist probability statement (4) is no longer valid. This is because the number of data points cannot be taken to be deterministically equal to 6. So the frequentist probability that we need to calculate is (below we denote the number of data points by $N$ and treat it as a random variable):

$$P\left\{\bar{y} - t_{N-1, \alpha/2} \frac{s}{\sqrt{N}} \le \mu \le \bar{y} + t_{N-1, \alpha/2} \frac{s}{\sqrt{N}}\right\},$$

where $y_1, y_2, \dots$ are i.i.d. $N(\mu, \sigma^2)$ as before, $\bar{y}$ and $s$ are computed from $y_1, \dots, y_N$, and

$$N := \min\{i \ge 1 : y_i < 25\}.$$
The probability above is complicated and there is no reason for it to be exactly equal to . Constructing valid frequentist confidence intervals in the presence of stopping rules (such as the rule of stopping as soon as we observe a data point smaller than 25) is, in fact, a problem of current research (see e.g., the paper https://
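To get a feel for why coverage is no longer guaranteed, here is a small Monte Carlo sketch (not from the lecture; the values $\mu = 27$, $\sigma = 2$ and the replication count are arbitrary choices). It repeatedly simulates data under the stopping rule, forms the usual $t$-interval, and estimates the actual coverage probability, which has no reason to equal the nominal 95%:

```python
import math
import random
from scipy import stats

# Monte Carlo sketch: coverage of the t-interval when the sample size is
# determined by the stopping rule "stop once an observation falls below 25".
random.seed(0)
mu, sigma, alpha = 27.0, 2.0, 0.05
reps, covered, used = 20000, 0, 0
tcache = {}  # cache t critical values by degrees of freedom

for _ in range(reps):
    y = [random.gauss(mu, sigma)]
    while y[-1] >= 25:                 # keep sampling until a value < 25
        y.append(random.gauss(mu, sigma))
    n = len(y)
    if n < 2:                          # s is undefined for one observation
        continue
    ybar = sum(y) / n
    s = math.sqrt(sum((v - ybar) ** 2 for v in y) / (n - 1))
    if n - 1 not in tcache:
        tcache[n - 1] = stats.t.ppf(1 - alpha / 2, df=n - 1)
    half = tcache[n - 1] * s / math.sqrt(n)
    used += 1
    if ybar - half <= mu <= ybar + half:
        covered += 1

coverage = covered / used
print(f"estimated coverage over {used} runs: {coverage:.3f}")
```

Runs that stop after a single observation are skipped since $s$ is undefined there; handling such cases properly is part of what makes exact frequentist analysis under stopping rules difficult.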
In contrast to frequentist inference, the Bayesian inference procedure will not change. This is because the likelihood function in Problem 2 is the same as the likelihood function in Problem 1. To verify this, consider the following likelihood in Problem 2 (below $\delta$ denotes the rounding error in the observations, which is extremely small):

$$\text{The likelihood in Problem 2} = P\left\{y_1 \le Y_1 \le y_1 + \delta, \dots, y_6 \le Y_6 \le y_6 + \delta, \; Y_1 > 25, \dots, Y_5 > 25, \; Y_6 < 25\right\}.$$

In other words, we are writing the probability that the first five observations are all larger than 25 while the sixth observation is smaller than 25, in addition to the exact values of the observations, in the likelihood. But it is clear that these additional constraints do not change the probability. As an example, just note that

$$P\left\{y_1 \le Y_1 \le y_1 + \delta, \; Y_1 > 25\right\} = P\left\{y_1 \le Y_1 \le y_1 + \delta\right\};$$

the additional restriction $Y_1 > 25$ does not affect the probability because it is already covered by $y_1 \le Y_1 \le y_1 + \delta$ (the observed value $y_1$ is larger than 25). Thus

$$\text{The likelihood in Problem 2} = P\left\{y_1 \le Y_1 \le y_1 + \delta, \dots, y_6 \le Y_6 \le y_6 + \delta\right\},$$

which is the likelihood in Problem 1. Since the likelihood is the same in both problems, Bayesian inference for both problems will be the same (note the priors will be the same, as there is no reason to use different priors). Therefore, from the Bayesian perspective, stopping rules can be ignored for inferring $\mu$ and $\sigma$, because they do not affect the likelihood.
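This argument can be verified numerically with made-up numbers: intersecting each rounding interval $[y_i, y_i + \delta]$ with the stopping-rule constraints leaves every factor of the likelihood unchanged. A standard-library Python sketch (the data and parameter values below are arbitrary):

```python
import math

def ncdf(x, mu, sigma):
    """N(mu, sigma^2) cumulative distribution function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Made-up rounded observations consistent with the stopping rule:
# first five above 25, the sixth below 25. delta is the rounding error.
y = [30.0, 28.0, 27.0, 31.0, 29.0, 24.0]
delta = 0.01
mu, sigma = 27.5, 2.5   # arbitrary parameter values at which to compare

# Problem 1 likelihood: P(y_i <= Y_i <= y_i + delta) for each i.
lik1 = math.prod(ncdf(v + delta, mu, sigma) - ncdf(v, mu, sigma) for v in y)

# Problem 2 likelihood: the same intervals intersected with the
# stopping-rule constraints Y_1 > 25, ..., Y_5 > 25, Y_6 < 25.
terms = []
for i, v in enumerate(y):
    lo, hi = v, v + delta
    if i < 5:
        lo = max(lo, 25.0)   # constraint Y_i > 25 (vacuous: v > 25)
    else:
        hi = min(hi, 25.0)   # constraint Y_6 < 25 (vacuous: v + delta < 25)
    terms.append(ncdf(hi, mu, sigma) - ncdf(lo, mu, sigma))
lik2 = math.prod(terms)

print(lik1 == lik2)  # True: the constraints change nothing
```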
This example shows clearly that frequentist inference violates the Likelihood Principle (the likelihood principle states that “all the evidence in a sample relevant to model parameters is contained in the likelihood function”). See the Wikipedia article on the likelihood principle for more information.
On the other hand, Bayesian inference always satisfies the likelihood principle (assuming that priors are the same), because data enters the Bayesian posterior calculation only through the likelihood.
Here is another example of violation of the likelihood principle in frequentist inference.
Example 4: Coin Fairness Testing
Frequentist Solution
For the usual frequentist answer to this question, we assume that the observed sequence of outcomes is the realization of random variables $X_1, \dots, X_{12}$ (with $X_i = 1$ if the $i$th toss lands heads and $X_i = 0$ otherwise) that are independently distributed according to the $\text{Bernoulli}(p)$ distribution for some unknown $p \in (0, 1)$. We need to test the (null) hypothesis that $p = 1/2$ against, say, the alternative $p < 1/2$. This can be done by calculating the $p$-value, which is the probability (under the assumption $p = 1/2$) of getting 3 or fewer heads. The distribution of the number of heads under the null distribution is $\text{Bin}(12, 1/2)$, so the $p$-value is

$$P\left\{\text{Bin}(12, 1/2) \le 3\right\} = \sum_{k=0}^{3} \binom{12}{k} \left(\frac{1}{2}\right)^{12} = \frac{299}{4096} \approx 0.073,$$
which does not lead to a rejection of the null hypothesis at the usual 5% level.
In this $p$-value calculation, we implicitly assumed that the experiment consisted of tossing the coin 12 times, where 12 was a priori chosen by the coin tosser. Consider now the alternative scenario where the coin tosser wanted to toss the coin until the point where 3 heads are observed. Now, for the same outcome, the $p$-value will change. Indeed, now the random variable of interest will become $N$, the number of tosses needed to obtain 3 heads, and the $p$-value will equal the probability of needing to toss the coin 12 or more times to get the 3 heads (assuming fairness). This is calculated using the negative binomial distribution as (note that needing 12 or more tosses is the same event as seeing at most 2 heads in the first 11 tosses):

$$P\{N \ge 12\} = P\left\{\text{Bin}(11, 1/2) \le 2\right\} = \sum_{k=0}^{2} \binom{11}{k} \left(\frac{1}{2}\right)^{11} = \frac{67}{2048} \approx 0.0327,$$

and this leads to rejection of the null hypothesis at the 5% level.
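Both $p$-values are simple finite sums and can be checked directly with standard-library Python:

```python
from math import comb

# Binomial model (12 tosses fixed in advance): P(at most 3 heads).
p_binom = sum(comb(12, k) for k in range(4)) / 2 ** 12   # 299/4096

# Negative binomial model (toss until 3 heads): P(N >= 12), i.e.
# at most 2 heads occur in the first 11 tosses.
p_negbin = sum(comb(11, k) for k in range(3)) / 2 ** 11  # 67/2048

print(round(p_binom, 4), round(p_negbin, 4))  # prints: 0.073 0.0327
```

The same data thus fail to reject at 5% under one sampling plan and reject under the other.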
Note that the “likelihood function” is the same function, proportional to $p^3 (1-p)^9$, whether the sample size was predetermined or whether the coin was tossed till 3 heads were observed. But the procedure obtained for testing has changed from the binomial to the negative binomial calculation. This means that $p$-value based frequentist inference violates the Likelihood Principle. Here is a story from the Wikipedia article on the “Likelihood Principle” which puts these numbers in an interesting context:
Suppose a number of scientists are assessing the probability of a certain outcome (which we shall call ’success’) in experimental trials. Conventional wisdom suggests that if there is no bias towards success or failure then the success probability would be one half. Adam, a scientist, conducted 12 trials and obtained 3 successes and 9 failures. One of those successes was the 12th and last observation. Then Adam left the lab.
Bill, Adam’s boss in the same lab, continued Adam’s work and published Adam’s results, along with a significance test. He tested the null hypothesis that $p$, the success probability, is equal to a half, versus $p < 1/2$. The probability that out of 12 trials, 3 or fewer (i.e. more extreme) were successes, if $H_0$ is true, is $299/4096 \approx 0.073$. Thus the null hypothesis is not rejected at the 5% significance level.
Adam actually stopped immediately after 3 successes, because his boss Bill had instructed him to do so. After the publication of the statistical analysis by Bill, Adam realizes that he has missed a later instruction from Bill to instead conduct 12 trials, and that Bill’s paper is based on this second instruction. Adam is very glad that he got his 3 successes after exactly 12 trials, and explains to his friend Charlotte that by coincidence he executed the second instruction. But Charlotte then explains to Adam that the $p$-value should now be changed to $67/2048 \approx 0.0327$, and the result becomes significant at the 5% level. Adam is astonished to hear this.
For more comments on the violation of the likelihood principle by -values, read MacKay (2003, Section 37.2).
We shall look at the Bayesian solution to this problem in the next lecture.
- MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press.