Linear Regression Details¶
In Lectures 15 and 16, we looked at Bayesian linear regression. Here we use the default prior:
$$p(\beta, \sigma) \propto \frac{1}{\sigma}$$
for $\beta \in \mathbb{R}^p$ and $\sigma > 0$. Bayesian inference with this prior essentially coincides with frequentist inference. Today, we shall look at some standard regression terminology and at how some of the numbers shown in the regression output are actually computed. We shall use the statsmodels library to compute linear regression output.
To keep things simple, we shall illustrate multiple linear regression with a very simple dataset: the US GDP dataset from FRED (https://fred.stlouisfed.org).
import pandas as pd
gdp = pd.read_csv('GDP_12Nov2025.csv')
display(gdp)

The key variable is GDP. Below we plot it against time.
import matplotlib.pyplot as plt
plt.plot(gdp['GDP'])
plt.xlabel("Time (quarterly)")
plt.ylabel("Billions of Dollars")
plt.title("Gross Domestic Product (GDP) of the United States")
plt.show()
We take the GDP data to be the response variable $y$. We shall fit a multiple linear regression to this data with three covariates: $x_1$, $x_2$ and $x_3$. $x_1$ is simply time $t$, $x_2$ is the square of time, and $x_3$ is the cube of time. In other words, we fit a cubic model of time to the data.
Let us fit a cubic trend model to this dataset given by:
$$y_t = \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \epsilon_t$$
for $t = 1, \dots, n$ where $n$ is the total number of observations in the dataset. In vector-matrix notation, this model can equivalently be represented as:
$$y = X\beta + \epsilon$$
where
$$y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad X = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 4 & 8 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & n & n^2 & n^3 \end{pmatrix}, \qquad \beta = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \beta_3 \end{pmatrix}, \qquad \epsilon = \begin{pmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{pmatrix}.$$
Observe that $y$ is $n \times 1$, $X$ is $n \times 4$, $\beta$ is $4 \times 1$ and $\epsilon$ is $n \times 1$. The entries of the vector $\epsilon$ are known as errors and we assume that $\epsilon_1, \dots, \epsilon_n \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2)$. There are 5 parameters in this model ($\beta_0, \beta_1, \beta_2, \beta_3$ and $\sigma$) which we need to estimate from the data.
To implement regression using statsmodels, we simply need to create $y$ and $X$ and give them as input to the sm.OLS function.
The following code creates $y$ and $X$.
import numpy as np
y = gdp['GDP']
n = len(y)
x = np.arange(1, n + 1)
x2 = x ** 2
x3 = x ** 3
X = np.column_stack([np.ones(n), x, x2, x3])
print(X)
[[1.0000000e+00 1.0000000e+00 1.0000000e+00 1.0000000e+00]
[1.0000000e+00 2.0000000e+00 4.0000000e+00 8.0000000e+00]
[1.0000000e+00 3.0000000e+00 9.0000000e+00 2.7000000e+01]
...
[1.0000000e+00 3.1200000e+02 9.7344000e+04 3.0371328e+07]
[1.0000000e+00 3.1300000e+02 9.7969000e+04 3.0664297e+07]
[1.0000000e+00 3.1400000e+02 9.8596000e+04 3.0959144e+07]]
Now we simply use sm.OLS with $y$ and $X$:
import statsmodels.api as sm
md = sm.OLS(y, X).fit()
print(md.summary())

                            OLS Regression Results
==============================================================================
Dep. Variable: GDP R-squared: 0.994
Model: OLS Adj. R-squared: 0.994
Method: Least Squares F-statistic: 1.760e+04
Date: Tue, 03 Mar 2026 Prob (F-statistic): 0.00
Time: 08:10:30 Log-Likelihood: -2458.4
No. Observations: 314 AIC: 4925.
Df Residuals: 310 BIC: 4940.
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 180.6117 139.806 1.292 0.197 -94.477 455.700
x1 2.7137 3.838 0.707 0.480 -4.837 10.265
x2 0.0252 0.028 0.893 0.373 -0.030 0.081
x3 0.0008 5.9e-05 13.395 0.000 0.001 0.001
==============================================================================
Omnibus: 50.074 Durbin-Watson: 0.076
Prob(Omnibus): 0.000 Jarque-Bera (JB): 417.551
Skew: 0.269 Prob(JB): 2.14e-91
Kurtosis: 8.624 Cond. No. 4.76e+07
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 4.76e+07. This might indicate that there are
strong multicollinearity or other numerical problems.
Least Squares Estimates¶
The information on the estimates along with associated uncertainties (standard errors) is given in the table near the middle of the output. Specifically, the estimates from the above table are $\hat{\beta}_0 = 180.6117$, $\hat{\beta}_1 = 2.7137$, $\hat{\beta}_2 = 0.0252$ and $\hat{\beta}_3 = 0.0008$. These numbers can also be obtained using md.params as follows:
print(md.params)

const    180.611745
x1 2.713731
x2 0.025245
x3 0.000791
dtype: float64
These estimates are known as the Least Squares Estimates (they are also the same as the Maximum Likelihood Estimates) and they are computed via the formula:
$$\hat{\beta} = (X^T X)^{-1} X^T y.$$
Let us verify that this formula indeed gives the reported estimates.
XTX = np.dot(X.T, X)
XTX_inverse = np.linalg.inv(XTX)
XTX_inverse_XT = np.dot(XTX_inverse, X.T)
betahat = np.dot(XTX_inverse_XT, y)
print(np.column_stack([betahat, md.params]))

[[1.80611745e+02 1.80611745e+02]
[2.71373086e+00 2.71373086e+00]
[2.52451671e-02 2.52451671e-02]
[7.90735172e-04 7.90735172e-04]]
In reality, sm.OLS does not use the above formula directly (even though the formula is correct); instead, it uses efficient linear algebra methods to solve the linear equations $X^T X \beta = X^T y$.
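To illustrate the difference, here is a small sketch (on made-up synthetic data, not the GDP series) comparing the textbook formula with two more numerically robust alternatives: solving the normal equations directly, and using a least squares solver that never forms $X^T X$ at all.

```python
import numpy as np

# Made-up data, purely for illustration: a quadratic trend plus noise.
rng = np.random.default_rng(0)
n = 20
x = np.arange(1, n + 1)
X = np.column_stack([np.ones(n), x, x**2])
y = 2 + 0.5 * x + rng.normal(size=n)

# Textbook formula: betahat = (X^T X)^{-1} X^T y
beta_formula = np.linalg.inv(X.T @ X) @ (X.T @ y)

# Numerically safer: solve the normal equations X^T X beta = X^T y
beta_solve = np.linalg.solve(X.T @ X, X.T @ y)

# Safer still: a dedicated least squares solver that avoids forming X^T X
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta_formula, beta_solve, atol=1e-6))
print(np.allclose(beta_solve, beta_lstsq, atol=1e-6))
```

All three routes agree here; the differences only matter when $X^T X$ is ill-conditioned (as the large condition number in the summary output warns).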
Fitted Values¶
The next quantity to understand is the fitted values. These are given by md.fittedvalues and they are obtained via the simple formula:
$$\hat{y} = X\hat{\beta}.$$
The entries of the vector $\hat{y}$ are known as the fitted values. Let us check the correctness of this formula.
fvals = np.dot(X, betahat)
print(np.column_stack([fvals[:10], md.fittedvalues[:10]]))

[[183.35151151 183.35151151]
[186.14651302 186.14651302]
[189.00149368 189.00149368]
[191.92119791 191.92119791]
[194.91037012 194.91037012]
[197.97375472 197.97375472]
[201.11609611 201.11609611]
[204.34213872 204.34213872]
[207.65662695 207.65662695]
[211.06430522 211.06430522]]
To assess how well the regression model fits the data, we plot the fitted values along with the original data.
plt.plot(y, label = "Data")
plt.plot(md.fittedvalues, label = "Fitted values")
plt.legend()
plt.xlabel("Time (quarterly)")
plt.ylabel("Billions of Dollars")
plt.title("Gross Domestic Product (GDP) of the United States")
plt.show()
Residuals¶
The residuals are simply the differences between the observed data and the fitted values. We use the notation $e$ for the vector of residuals:
$$e = y - \hat{y} = y - X\hat{\beta}.$$
The $t^{th}$ residual $e_t$ is simply the $t^{th}$ entry of $e$.
residuals = y - fvals
print(np.column_stack([md.resid[:10], residuals[:10]]))

[[59.81248849 59.81248849]
[59.82148698 59.82148698]
[60.58350632 60.58350632]
[67.82380209 67.82380209]
[70.83162988 70.83162988]
[74.59324528 74.59324528]
[78.07990389 78.07990389]
[76.02386128 76.02386128]
[67.37737305 67.37737305]
[60.28669478 60.28669478]]
A plot of the residuals against time is an important diagnostic tool which can tell us about the effectiveness of the fitted model.
plt.plot(md.resid)
plt.xlabel("Time (quarterly)")
plt.ylabel("Billions of Dollars")
plt.title("Residuals")
plt.show()
From this plot, we see that there are some very large residuals indicating the presence of time points where the model predictions are quite far off from the observed values. This could be because of outliers or some systematic features that the model is missing.
# the value of the last residual
md.resid[n-1]

np.float64(2483.4492304471496)

Residual Sum of Squares (RSS) and Residual df¶
Two other commonly used terms (in connection with residuals) are the Residual Sum of Squares (RSS) and the Residual Degrees of Freedom. The RSS is simply the sum of the squares of the residuals:
$$\text{RSS} = \sum_{t=1}^{n} e_t^2.$$
RSS is simply the smallest possible value of the sum of squares criterion (recall from lecture that $\hat{\beta}$ minimizes $\sum_{t=1}^{n} (y_t - \beta_0 - \beta_1 t - \beta_2 t^2 - \beta_3 t^3)^2$ over $\beta$).
The vector of residuals has the following important property: $X^T e = 0$. This can be proved as follows:
$$X^T e = X^T (y - X\hat{\beta}) = X^T y - X^T X (X^T X)^{-1} X^T y = X^T y - X^T y = 0.$$
$X^T e = 0$ means that the dot product between every column of $X$ and $e$ equals 0. In our regression model for GDP, $X^T e = 0$ is equivalent to:
$$\sum_{t=1}^{n} e_t = 0, \qquad \sum_{t=1}^{n} t \, e_t = 0, \qquad \sum_{t=1}^{n} t^2 e_t = 0, \qquad \sum_{t=1}^{n} t^3 e_t = 0.$$
Even though there are $n$ residuals $e_1, \dots, e_n$, they have to satisfy the above four equality constraints. Therefore, the effective number of 'free' residuals is $n - 4$. Thus the residual degrees of freedom equals $n - 4$.
More generally, the residual df equals the number of observations ($n$) minus the number of columns ($p$) of the $X$ matrix.
Estimate of $\sigma$: the residual standard error¶
The estimate of $\sigma$ is given by:
$$\hat{\sigma} = \sqrt{\frac{\text{RSS}}{n - p}} = \sqrt{\frac{\sum_{t=1}^{n} e_t^2}{n - 4}}.$$
This quantity is sometimes called the Residual Standard Error. Its value can be used to assess the size of the residuals: residuals much larger in magnitude than, say, twice the Residual Standard Error indicate points where the model fits particularly poorly.
rss = np.sum(md.resid ** 2)
rse = np.sqrt(rss/(n - 4))
print(rse)

611.9599669724937
This intuitively means that residuals with magnitude above $2\hat{\sigma} \approx 1224$ correspond to points where the model fits particularly poorly. From a look at the plot of the residuals against time, there are many residuals which are this large in magnitude.
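One way to locate such points programmatically is to flag every observation whose residual exceeds twice the residual standard error. The sketch below does this on made-up data with one planted outlier (the index 60 and all data values are invented for illustration):

```python
import numpy as np

# Made-up data: a linear trend plus noise, with one planted outlier.
rng = np.random.default_rng(3)
n = 100
x = np.arange(1, n + 1)
X = np.column_stack([np.ones(n), x])
y = 5 + 0.2 * x + rng.normal(size=n)
y[60] += 10.0  # plant a clear outlier at index 60

# Least squares fit, residuals, and residual standard error
betahat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ betahat
rse = np.sqrt(np.sum(resid**2) / (n - X.shape[1]))

# Flag observations whose residual magnitude exceeds 2 * rse
flagged = np.flatnonzero(np.abs(resid) > 2 * rse)
print(flagged)  # includes the planted outlier at index 60
```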
Standard Errors of the coefficient estimates¶
The standard errors corresponding to $\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2, \hat{\beta}_3$ are the square roots of the diagonal entries of $\hat{\sigma}^2 (X^T X)^{-1}$. They are given by md.bse as verified below.
sebetahat_squared = (rse ** 2)*(np.diag(XTX_inverse))
sebetahat = np.sqrt(sebetahat_squared)
print(np.array([md.bse, sebetahat]))

[[1.39805910e+02 3.83755953e+00 2.82852061e-02 5.90308579e-05]
[1.39805910e+02 3.83755953e+00 2.82852061e-02 5.90308579e-05]]
t-statistic and confidence intervals for the coefficients¶
The $t$-statistic for each coefficient is simply the estimate divided by the standard error. The $1 - \alpha$ confidence interval for $\beta_j$ is given by:
$$\hat{\beta}_j \pm t_{n-4, \alpha/2} \cdot \text{se}(\hat{\beta}_j),$$
where $t_{n-4, \alpha/2}$ is the point beyond which the $t$-distribution with $n - 4$ degrees of freedom gives probability mass $\alpha/2$. The above formula is the result of:
$$\frac{\beta_j - \hat{\beta}_j}{\text{se}(\hat{\beta}_j)} \;\Big|\; \text{data} \sim t_{n-4}.$$
The above statement can also be interpreted in a frequentist sense (then the conditioning on the data is removed and the randomness is over the data with $\beta_j$ fixed).
from scipy.stats import t
alpha = 0.05
qt = t.ppf(1 - (alpha/2), n-4) #area to the right equaling alpha/2 is the same as area to the left equaling 1-alpha/2
print(qt)
cilower = md.params - md.bse*qt
ciupper = md.params + md.bse*qt
print(np.column_stack([cilower, ciupper]))
1.967645928777568
[[-9.44767854e+01 4.55700275e+02]
[-4.83722752e+00 1.02646892e+01]
[-3.04101036e-02 8.09004378e-02]
[ 6.74583344e-04 9.06886999e-04]]
These intervals match the [0.025 0.975] columns of the regression summary shown earlier.
Visualizing Uncertainty¶
It was shown in Lecture 15 that the posterior distribution of $\beta$ (this is the vector of coefficients) is given by
$$\beta \mid \text{data} \sim t_{n-4}\left(\hat{\beta}, \; \hat{\sigma}^2 (X^T X)^{-1}\right).$$
From the definition of the $t$-distribution above, we also know that this distribution coincides with the distribution of
$$\hat{\beta} + \frac{Z}{\sqrt{V / (n - 4)}},$$
where $Z \sim N\left(0, \hat{\sigma}^2 (X^T X)^{-1}\right)$ and $V \sim \chi^2_{n-4}$ are independent. Using this, we can generate $\beta$ vectors from their posterior distribution and then plot the regression curves corresponding to the different generated vectors. This will give us an idea of the uncertainty underlying the coefficients.
N = 200 # this is the number of posterior samples that we shall draw
Sigma_mat = (rse ** 2)*(XTX_inverse)
#print(Sigma_mat)
from scipy.stats import chi2, multivariate_normal
chi_samples = chi2(df = n-4).rvs(N)
norm_samples = multivariate_normal(cov = Sigma_mat, allow_singular = True).rvs(N)
post_samples = np.tile(betahat, (N, 1)) + norm_samples / np.sqrt(chi_samples / (n-4))[:, None]
#print(post_samples)
plt.figure(figsize = (10, 6))
plt.plot(y, linewidth = 1, color = 'black')
for k in range(N):
fvalsnew = np.dot(X, post_samples[k])
plt.plot(fvalsnew, color = 'red', linewidth = 1)
plt.xlabel('Time (quarterly)')
plt.ylabel('Billions of dollars')
plt.title('GDP data with Fitted values along with uncertainty')
plt.show()
Even though $N = 200$ regression curves have been plotted, they are still fairly clustered together. This suggests that the uncertainty in these estimates is fairly small.
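An alternative to overplotting individual sampled curves is to summarize them by pointwise bands: at each time point, take the 2.5% and 97.5% quantiles of the sampled fitted values. The sketch below applies the same sampling scheme to made-up data (all values invented for illustration):

```python
import numpy as np
from scipy.stats import chi2, multivariate_normal

# Made-up data: a quadratic trend plus noise.
rng = np.random.default_rng(5)
n, N = 60, 500
x = np.arange(1, n + 1)
X = np.column_stack([np.ones(n), x, x**2])
y = 10 + 0.5 * x + 0.02 * x**2 + rng.normal(scale=2.0, size=n)

# Least squares fit and the posterior covariance scale
p = X.shape[1]
betahat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ betahat
sig2 = np.sum(resid**2) / (n - p)
Sigma = sig2 * np.linalg.inv(X.T @ X)

# Posterior draws of beta: betahat + Z / sqrt(V / (n - p))
chi_samples = chi2(df=n - p).rvs(N, random_state=rng)
norm_samples = multivariate_normal(cov=Sigma, allow_singular=True).rvs(N, random_state=rng)
post = betahat + norm_samples / np.sqrt(chi_samples / (n - p))[:, None]

# Each row of curves is the fitted curve for one posterior draw;
# summarize by pointwise 2.5% and 97.5% quantiles.
curves = post @ X.T
lower, upper = np.percentile(curves, [2.5, 97.5], axis=0)
print(lower.shape, upper.shape)  # one band value per time point
```

Plotting `lower` and `upper` (e.g. with `plt.fill_between`) gives a cleaner picture of the same uncertainty that the overplotted curves convey.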