Spring 2026, UC Berkeley
In the last lecture, we started with the following simple example of Bayesian inference.
Example 1: Testing and Covid
Let $\theta$ denote the binary parameter which represents whether I truly have Covid or not ($\theta = 1$ when I have Covid and $\theta = 0$ when I don't). Let $y$ denote the binary outcome of the Covid test, so that $y = 1$ represents a positive test. We need to calculate the probability:
$$P(\theta = 1 \mid y = 1, B),$$
where the test data is simply $y = 1$, and the background information $B$ refers to things like “I have been strictly quarantining for the past 3 weeks”, “I do not have symptoms such as fever”, etc.
We used the probability model (below $B$ stands for the background information): a prior probability $P(\theta = 1 \mid B) = 0.02$ of truly having Covid, together with assignments for the test's accuracy, namely the true positive rate $P(y = 1 \mid \theta = 1, B)$ and the false positive rate $P(y = 1 \mid \theta = 0, B)$. With these probability assignments, we use the Bayes rule to compute $P(\theta = 1 \mid y = 1, B)$ as
$$P(\theta = 1 \mid y = 1, B) = \frac{P(y = 1 \mid \theta = 1, B)\, P(\theta = 1 \mid B)}{P(y = 1 \mid \theta = 1, B)\, P(\theta = 1 \mid B) + P(y = 1 \mid \theta = 0, B)\, P(\theta = 0 \mid B)}.$$
This probability is not very high even though the test has very good false positive and false negative rates. This is because the prior probability $P(\theta = 1 \mid B)$ is very low (0.02). So, even with the positive test result, it is more likely than not that we are Covid-free.
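As a quick check, this Bayes-rule computation can be carried out numerically. The 0.02 prior is from the lecture, but the true positive and false positive rates below are illustrative assumptions (the lecture's exact values are not restated here):

```python
# Posterior probability of Covid given a positive test, via the Bayes rule.
def posterior_covid(prior, true_pos_rate, false_pos_rate):
    """P(theta=1 | y=1, B) for a binary test with the given accuracies."""
    numerator = true_pos_rate * prior
    denominator = numerator + false_pos_rate * (1 - prior)
    return numerator / denominator

# Prior 0.02 is from the lecture; the two test rates are assumed for illustration.
p = posterior_covid(prior=0.02, true_pos_rate=0.95, false_pos_rate=0.04)
print(round(p, 3))  # well below 1/2 despite the positive test
```

Even with a test this accurate, the tiny prior keeps the posterior below one half, which is the point of the example.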
Here is an alternative method of reasoning in this problem. We can formulate this as a hypothesis testing problem with
$$H_0 : \theta = 0 \quad \text{versus} \quad H_1 : \theta = 1.$$
The null hypothesis $H_0$ represents not having Covid. The $p$-value in the above testing problem equals the probability, under the null, of a test result at least as extreme as the one observed:
$$P(y = 1 \mid \theta = 0, B).$$
Using the naive standard cutoff of 0.05 on the $p$-value would now lead to rejecting the null hypothesis and declaring that I have Covid. On the other hand, the previous argument (based on probability theory) gave a much higher probability to my not having Covid.
This $p$-value based method does not even make use of the information given on $P(\theta = 1 \mid B)$ and $P(y = 1 \mid \theta = 1, B)$. It only makes use of $P(y = 1 \mid \theta = 0, B)$. Note that what we are after is $P(\theta = 0 \mid y = 1, B)$ while the $p$-value is $P(y = 1 \mid \theta = 0, B)$. In general, $P(A \mid B)$ and $P(B \mid A)$ can be quite different. The correct way of relating them is via the Bayes rule. Without using the Bayes rule, one cannot argue that $P(A \mid B)$ is large or small using the largeness or smallness of $P(B \mid A)$.
Consider, for example, the case where $A$ represents the event that a person is dead and $B$ represents the event that they were hanged. Here $P(A \mid B)$ is quite close to one while $P(B \mid A)$ is quite close to zero.
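A toy numerical sketch of this asymmetry, with all numbers invented purely for illustration:

```python
# A = dead, B = hanged. All three input probabilities are illustrative.
p_hanged = 1e-6               # very few people are ever hanged
p_dead_given_hanged = 0.99    # hanging is almost always fatal: P(A | B) near 1
p_dead = 0.01                 # overall probability of being dead (illustrative)

# Bayes rule: P(hanged | dead) = P(dead | hanged) * P(hanged) / P(dead)
p_hanged_given_dead = p_dead_given_hanged * p_hanged / p_dead
print(p_hanged_given_dead)  # about 1e-4: P(B | A) is near zero
```

The two conditional probabilities differ by four orders of magnitude here; only the Bayes rule, with the marginal probabilities, connects them.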
It is therefore quite problematic to claim something about $P(A \mid B)$ from $P(B \mid A)$ alone.
Methods such as testing based on $p$-values (and putting arbitrary cutoffs on them) are not based on probability theory. They are examples of frequentist reasoning.
Example 2: Spots on a patient
Here the unknown parameter is $\theta$, which represents the disease status; $\theta$ can take the three values smallpox, chickenpox, and neither of them.
The data is that the patient has spots.
We need to calculate the posterior probability $P(\theta \mid \text{spots}, B)$ for each of the three values of $\theta$,
where again $B$ represents background information on other symptoms (e.g., fever) that the patient has. Here is one probability assignment which allows us to calculate this probability:
$$P(\text{spots} \mid \theta = \text{smallpox}, B) = 0.9, \qquad P(\text{spots} \mid \theta = \text{chickenpox}, B) = 0.8,$$
together with prior probabilities $P(\theta \mid B)$ under which smallpox is extremely rare. Here “neither” refers to an underlying cause for the patient's condition that is neither smallpox nor chickenpox.
Using this assignment, the required probability can be calculated via the Bayes rule:
$$P(\theta = t \mid \text{spots}, B) = \frac{P(\text{spots} \mid \theta = t, B)\, P(\theta = t \mid B)}{\sum_{t'} P(\text{spots} \mid \theta = t', B)\, P(\theta = t' \mid B)}.$$
So probability theory with this assignment says that it is highly likely that the patient has chickenpox (smallpox is basically ruled out because it is extremely rare).
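A numerical sketch of this posterior computation. The likelihoods 0.9 and 0.8 appear in the lecture; the prior probabilities and the likelihood of spots under “neither” are assumptions made only for illustration:

```python
# Posterior over disease status given spots, via the Bayes rule.
likelihood = {"smallpox": 0.9, "chickenpox": 0.8, "neither": 0.01}  # 0.01 assumed
prior = {"smallpox": 1e-7, "chickenpox": 0.1, "neither": 0.9 - 1e-7}  # assumed

# Unnormalized posterior: likelihood times prior, then normalize.
unnorm = {t: likelihood[t] * prior[t] for t in prior}
total = sum(unnorm.values())
posterior = {t: unnorm[t] / total for t in unnorm}
print(posterior)  # chickenpox dominates; smallpox is essentially ruled out
```

Even though smallpox has the highest likelihood, its minuscule prior drives its posterior toward zero, while chickenpox ends up with almost all the posterior mass.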
Alternative Solution: Maximum Likelihood
Here is an alternative way of solving this problem using maximum likelihood estimation. The maximum likelihood estimate in this case is $\hat{\theta} = \text{smallpox}$, because smallpox leads to a higher probability (0.9) of the observed data (spots) compared to chickenpox (0.8). Maximum likelihood (widely used in statistics) is not based on probability theory, and it also seems to be based on the wrong conditional probabilities, $P(\text{spots} \mid \text{smallpox}, B)$ and $P(\text{spots} \mid \text{chickenpox}, B)$, while we really should be calculating $P(\text{smallpox} \mid \text{spots}, B)$ and $P(\text{chickenpox} \mid \text{spots}, B)$.
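The maximum likelihood computation, which ignores the priors entirely, can be sketched as:

```python
# Maximum likelihood picks the theta maximizing P(data | theta), with no priors.
likelihood = {"smallpox": 0.9, "chickenpox": 0.8}  # values from the lecture
mle = max(likelihood, key=likelihood.get)
print(mle)  # smallpox, even though smallpox is extremely rare a priori
```

Contrasting this output with the Bayesian answer above makes the failure mode concrete: the MLE is driven entirely by the likelihood and cannot see how rare smallpox is.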
For the modeling part in Bayesian applications, we shall for the most part use standard models based on normal distributions and, more generally, exponential families. Here is a simple example to illustrate the use of the normal distribution.
Example 3: Inference from measurements
Suppose a scientist makes 6 numerical measurements 26.6, 38.5, 34.4, 34, 31, 23.6 of an unknown real-valued physical quantity $\theta$. On the basis of these measurements, what can be inferred about $\theta$?
Here is the Bayesian solution to this problem. The first step is modeling, where we have to write the likelihood and the prior. The likelihood represents the probability of the observed data conditional on parameter values. Here the main parameter is $\theta$. In order to write the probability of the observed data, it is helpful to introduce another parameter $\sigma$, which represents the scale of the noise inherent in the measurement process.
So our parameter vector is $(\theta, \sigma)$. We work with the normal likelihood:
$$p(y_1, \dots, y_6 \mid \theta, \sigma) = \prod_{i=1}^{6} \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(y_i - \theta)^2}{2\sigma^2} \right),$$
where $y_1, \dots, y_6$ denote the observed data points. More formally, you can arrive at this likelihood in the following way. Denote the potential measurements by $Y_1, \dots, Y_6$. Each actual measurement will have some rounding error, so the data point 26.6 should be understood as belonging to the interval $[26.6 - \epsilon, 26.6 + \epsilon]$ for some small rounding error $\epsilon$. So the likelihood is:
$$\prod_{i=1}^{6} P\left( y_i - \epsilon \le Y_i \le y_i + \epsilon \mid \theta, \sigma \right).$$
Assuming $\epsilon$ is small, we can use the probability-density approximation to write
$$P\left( y_i - \epsilon \le Y_i \le y_i + \epsilon \mid \theta, \sigma \right) \approx 2\epsilon \, f_{Y_i}(y_i \mid \theta, \sigma),$$
where $f_{Y_i}$ denotes the density of $Y_i$.
We are now assuming that, conditional on $(\theta, \sigma)$, the measurements $Y_1, \dots, Y_6$ are independent with each $Y_i \sim N(\theta, \sigma^2)$, so that
$$f_{Y_i}(y_i \mid \theta, \sigma) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(y_i - \theta)^2}{2\sigma^2} \right).$$
This leads to the normal likelihood displayed above (note that the constant $(2\epsilon)^6$ is being dropped as it is a constant of proportionality which does not affect any further calculations).
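A sketch of evaluating this normal likelihood (on the log scale, for numerical stability) at the six measurements; the parameter values tried below are arbitrary, since the actual inference is deferred to the next lecture:

```python
import math

# The six measurements from the lecture.
y = [26.6, 38.5, 34.4, 34, 31, 23.6]

def log_likelihood(theta, sigma, data):
    """log p(y_1,...,y_n | theta, sigma) under i.i.d. N(theta, sigma^2)."""
    return sum(
        -math.log(sigma * math.sqrt(2 * math.pi))
        - (yi - theta) ** 2 / (2 * sigma ** 2)
        for yi in data
    )

# For a fixed sigma, the likelihood is maximized at the sample mean,
# so values of theta near the mean score higher than values far away.
mean = sum(y) / len(y)
print(log_likelihood(mean, 5.0, y) > log_likelihood(mean + 10, 5.0, y))  # True
```

Working on the log scale avoids underflow from the product of six small densities; dropping the $(2\epsilon)^6$ factor only shifts the log-likelihood by a constant, which is irrelevant for comparing parameter values.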
We will complete this example in the next lecture.