STAT 238 - Bayesian Statistics Lecture Twenty Three
Spring 2026, UC Berkeley
Last lecture: interpolation with Gaussian processes ¶

The interpolation problem is: suppose we are given the values of $f$ at points $x_1, \dots, x_n$ in the domain $\Omega$ of $f$. Let $x \in \Omega$ be a new point (i.e., distinct from $x_1, \dots, x_n$). What can we say about $f(x)$?
In the last lecture, we saw how to use Gaussian processes to solve this problem. We model $\{f(x), x \in \Omega\}$ as a Gaussian process with mean zero and covariance function (or kernel) $K(x, x')$, i.e.,
\begin{align*}
\text{Cov}(f(x), f(x')) = K(x, x') \quad \text{for all } x, x' \in \Omega.
\end{align*}
Under this modeling assumption, the answer to the interpolation question is given by the conditional distribution of $f(x)$ given $f(x_1), \dots, f(x_n)$, which is calculated as follows. Note that
\begin{align*}
(f(x_1), \dots, f(x_n), f(x)) \sim N \left(0, \begin{pmatrix} (K(x_i, x_j))_{n \times n} & (K(x_i, x))_{n \times 1} \\ (K(x, x_i))_{1 \times n} & K(x, x) \end{pmatrix} \right).
\end{align*}
Using the notation $K = (K(x_i, x_j))_{n \times n}$ and $\mathbf{k} = (K(x_i, x))_{n \times 1}$, we can write the conditional distribution of $f(x)$ given $f(x_1), \dots, f(x_n)$ as
\begin{align*}
f(x) \mid f(x_1), \dots, f(x_n) \sim N\left(\mathbf{k}^T K^{-1} (f(x_1), \dots, f(x_n))^T, \; K(x, x) - \mathbf{k}^T K^{-1} \mathbf{k} \right).
\end{align*}
Thus the posterior mean (or mode) estimate of $f(x)$ is given by
\begin{align}
\widehat{f(x)} = \mathbf{k}^T K^{-1} (f(x_1), \dots, f(x_n))^T.
\end{align}

Regression with Gaussian Processes ¶

Today, we will see how to perform regression using Gaussian processes. The key difference between regression and interpolation is that, in regression, the values $f(x_1), \dots, f(x_n)$ are not observed exactly; instead, they are observed with noise.
More precisely, we are given observations $y_1, \dots, y_n$ modeled as
\begin{align*}
y_i = f(x_i) + \epsilon_i, \qquad \text{where } \epsilon_i \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2).
\end{align*}
The parameter $\sigma$ controls the noise level, i.e., how much each observation $y_i$ deviates from the true value $f(x_i)$.
As in the interpolation setting, our goal is to estimate $f(x)$ at a test point $x$. This test point may be different from the observed inputs $x_1, \dots, x_n$, or it may coincide with one of them.

Importantly, because the observations are noisy, it is meaningful to estimate $f(x_i)$ even at observed points. In fact, by combining information from all observations, we can often obtain a better estimate of $f(x_i)$ than the raw observation $y_i$.

Another key difference from interpolation is that the input points $x_1, \dots, x_n$ need not be distinct. Since each observation contains noise, repeated measurements at the same input can help improve the overall estimate.
The solution to the regression problem is very similar to that of the interpolation problem, with only one difference. The goal is to estimate $f(x)$ at the test point $x$ given the data $(x_1, y_1), \dots, (x_n, y_n)$. We will use the conditional distribution of $f(x)$ given $y_1, \dots, y_n$ (we assume that $x_1, \dots, x_n, x$ are deterministic). To calculate this conditional distribution, first note that the joint distribution of $(y_1, \dots, y_n, f(x))$ is given by
\begin{align*}
(y_1, \dots, y_n, f(x)) \sim N \left(0, \begin{pmatrix} (K(x_i, x_j))_{n \times n} + \sigma^2 I_n & (K(x_i, x))_{n \times 1} \\ (K(x, x_i))_{1 \times n} & K(x, x) \end{pmatrix} \right).
\end{align*}
Using the notation $K = (K(x_i, x_j))_{n \times n}$ and $\mathbf{k} = (K(x_i, x))_{n \times 1}$, we can write the conditional distribution of $f(x)$ given $y_1, \dots, y_n$ as
\begin{align*}
f(x) \mid \text{data} \sim N\left(\mathbf{k}^T \left(K + \sigma^2 I_n \right)^{-1} y, \; K(x, x) - \mathbf{k}^T \left(K + \sigma^2 I_n \right)^{-1} \mathbf{k} \right).
\end{align*}
Thus the posterior mean (or mode) estimate of $f(x)$ is given by
\begin{align}
\widehat{f(x)} = \mathbf{k}^T \left(K + \sigma^2 I_n \right)^{-1} y,
\end{align}
where $y$ is the $n \times 1$ vector with entries $y_1, \dots, y_n$.
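As a quick illustration (not part of the original notes), the posterior mean and variance above can be computed directly with NumPy. The squared-exponential kernel `k_rbf` and the data below are illustrative assumptions, not the IBM kernel used later in this lecture:

```python
import numpy as np

# Hypothetical squared-exponential kernel; any positive-definite kernel works here.
def k_rbf(u, v, ell=1.0):
    return np.exp(-0.5 * ((u - v) / ell) ** 2)

def gp_regression(x_train, y, x_test, sigma, kern=k_rbf):
    """Posterior mean and variance of f(x_test) given noisy y_i = f(x_i) + eps_i."""
    K = kern(x_train[:, None], x_train[None, :])   # (K(x_i, x_j))_{n x n}
    k = kern(x_train, x_test)                      # (K(x_i, x))_{n x 1}
    A = K + sigma**2 * np.eye(len(x_train))        # K + sigma^2 I_n
    alpha = np.linalg.solve(A, y)                  # (K + sigma^2 I_n)^{-1} y
    mean = k @ alpha                               # k^T (K + sigma^2 I_n)^{-1} y
    var = kern(x_test, x_test) - k @ np.linalg.solve(A, k)
    return mean, var

x_train = np.array([0.0, 0.5, 1.0, 1.5])           # illustrative data
y = np.array([0.1, 0.4, 0.8, 1.1])
mean, var = gp_regression(x_train, y, x_test=0.75, sigma=0.1)
```

Setting $\sigma = 0$ recovers the interpolation formula from last lecture; with a tiny $\sigma$, the posterior mean at a training input essentially reproduces the observed value.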
So the only difference between (6) and (3) is the presence of the additional $\sigma^2 I_n$ term in the regression case.

Often, we also need to estimate $\sigma$ from the observed data (along with any additional hyperparameters present in the kernel $K$). For this, the marginal likelihood of $y_1, \dots, y_n$ is important. This is simply the multivariate normal distribution with mean vector $0$ and covariance matrix $K + \sigma^2 I_n$.
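For instance, the log marginal likelihood can be evaluated as follows (a sketch with an assumed RBF kernel and made-up data; maximizing it over $\sigma$ and the kernel hyperparameters gives empirical-Bayes estimates):

```python
import numpy as np

def log_marginal_likelihood(x, y, sigma, ell=1.0):
    """Log density of y ~ N(0, K + sigma^2 I_n), with an assumed RBF kernel."""
    n = len(x)
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / ell) ** 2)
    A = K + sigma**2 * np.eye(n)
    sign, logdet = np.linalg.slogdet(A)            # stable log-determinant
    quad = y @ np.linalg.solve(A, y)               # y^T (K + sigma^2 I_n)^{-1} y
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

x = np.array([0.0, 0.5, 1.0, 1.5])                 # illustrative data
y = np.array([0.1, 0.4, 0.8, 1.1])

# Crude grid search over the noise level sigma:
grid = [0.05, 0.1, 0.2, 0.5, 1.0]
best_sigma = max(grid, key=lambda s: log_marginal_likelihood(x, y, s))
```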
To illustrate the calculations, let us take the special case of the Integrated Brownian Motion prior.
Calculations for the IBM prior ¶

We take the prior
\begin{align*}
f(x) = \beta_0 + \beta_1 x + \tau I(x)
\end{align*}
where $\beta_0, \beta_1$ are i.i.d. $N(0, C)$ and $I(x)$ is integrated Brownian motion.
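To build intuition, one can simulate a draw from this prior by approximating $I(x)$ as the running integral of a simulated Brownian path (a sketch; the grid size and parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 500
dx = 1.0 / m                       # grid on [0, 1]
x = np.arange(1, m + 1) * dx

C, tau = 10.0, 1.0                 # illustrative values; C is meant to be "large"
beta0, beta1 = rng.normal(0, np.sqrt(C), size=2)

# Brownian motion via cumulative sum of Gaussian increments,
# then integrated Brownian motion as its running integral.
B = np.cumsum(rng.normal(0, np.sqrt(dx), size=m))
I = np.cumsum(B) * dx

f = beta0 + beta1 * x + tau * I    # one draw from the prior on the grid
```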
This is a Gaussian process prior with mean zero and covariance kernel
\begin{align*}
K(u, v) = C(1 + uv) + \tau^2 K_I(u, v)
\end{align*}
where $K_I(u, v)$ is the kernel corresponding to IBM, which is:
\begin{align*}
K_I(u, v) &= \frac{1}{2} \left(\min(u, v) \right)^2 \max(u, v) - \frac{1}{6} \left(\min(u, v)\right)^3 \\
&= \frac{1}{6} \left(\min(u, v) \right)^2 \left(3 \max(u, v) - \min(u, v) \right) \\
&= u v \min(u, v) - \frac{u+v}{2} \left(\min(u, v) \right)^2 + \frac{1}{3} \left(\min(u, v) \right)^3.
\end{align*}
Note that the kernel has the unknown parameter $\tau$ (which we write as $\tau = \gamma \sigma$). The constant $C$ is assumed to be large and is not estimated (ideally we want to take $C = +\infty$).
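The equivalent forms of $K_I$ above can be checked numerically (a quick sketch; the test points are arbitrary):

```python
import numpy as np

def K_I(u, v):
    """IBM kernel: (1/2) min^2 max - (1/6) min^3."""
    mn, mx = np.minimum(u, v), np.maximum(u, v)
    return 0.5 * mn**2 * mx - mn**3 / 6.0

def K_I_alt(u, v):
    """Third equivalent form: uv*min - ((u+v)/2)*min^2 + (1/3)*min^3."""
    mn = np.minimum(u, v)
    return u * v * mn - 0.5 * (u + v) * mn**2 + mn**3 / 3.0

u = np.array([0.2, 0.7, 1.3])
v = np.array([0.9, 0.4, 1.3])
ok = np.allclose(K_I(u, v), K_I_alt(u, v))   # the two forms agree
```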
Given data $(x_1, y_1), \dots, (x_n, y_n)$, what is the posterior of $f(x)$? First let us assume that $\tau, \sigma$ are given. The estimate is simply (6). We simplify this expression below. Let $X$ denote the $n \times 2$ matrix with columns $1$ and $x_i$ (this is the usual $X$ matrix in the simple linear regression of $y$ on $x$ based on the data $(x_1, y_1), \dots, (x_n, y_n)$).
Note that
\begin{align*}
\mathbf{k}^T &= (K(x, x_1), \dots, K(x, x_n)) \\
&= \left(C(1 + x x_i) + \tau^2 K_I(x, x_i), \; i = 1, \dots, n \right) \\
&= C (1, x) X^T + \tau^2 (K_I(x, x_1), \dots, K_I(x, x_n)) \\
&= C (1, x) X^T + \tau^2 \mathbf{k}_I^T,
\end{align*}
where
\begin{align*}
\mathbf{k}_I^T := (K_I(x, x_1), \dots, K_I(x, x_n)),
\end{align*}
and $(1, x)$ denotes the row vector with entries $1$ and $x$.
Further, the $n \times n$ matrix $K$ has $(i, j)$-th entry
\begin{align*}
C(1 + x_i x_j) + \gamma^2 \sigma^2 K_I(x_i, x_j),
\end{align*}
so that
\begin{align}
K = C X X^T + \tau^2 K_I
\end{align}
where $K_I$ is the $n \times n$ matrix with $(i, j)$-th entry $K_I(x_i, x_j)$.
As a result,
\begin{align*}
\widehat{f(x)} = \mathbf{k}^T \left(K + \sigma^2 I_n \right)^{-1} y = \left(C (1, x) X^T + \tau^2 \mathbf{k}_I^T \right) \left(C X X^T + \tau^2 K_I + \sigma^2 I_n \right)^{-1} y.
\end{align*}
This expression depends on the large constant $C$, and direct computation with a large $C$ can be numerically unstable. It is therefore natural to compute the limit as $C \rightarrow \infty$. Using the Sherman-Morrison-Woodbury identity,
\begin{align}
(K + \sigma^2 I_n)^{-1} &= (C X X^T + \tau^2 K_I + \sigma^2 I_n)^{-1} \nonumber \\
&= \left(C X X^T + \Sigma \right)^{-1} \nonumber \\
&= \Sigma^{-1} - \Sigma^{-1} X \left(C^{-1} I_2 + X^T \Sigma^{-1} X \right)^{-1} X^T \Sigma^{-1}
\end{align}
where
\begin{align}
\Sigma = \tau^2 K_I + \sigma^2 I_n.
\end{align}
Further,
\begin{align*}
\mathbf{k}^T = C (1, x) X^T + \tau^2 \mathbf{k}_I^T.
\end{align*}
We thus get
\begin{align*}
\widehat{f(x)} = \left(C (1, x) X^T + \tau^2 \mathbf{k}_I^T \right) \left(\Sigma^{-1} - \Sigma^{-1} X \left(C^{-1} I_2 + X^T \Sigma^{-1} X \right)^{-1} X^T \Sigma^{-1} \right) y.
\end{align*}
Write $\tau = \gamma \sigma$. In Fact 1, it is proved that, as $C \rightarrow \infty$, the above converges to:
\begin{align*}
\widehat{f(x)} := (1, x) \left(X^T A_{\gamma}^{-1} X \right)^{-1} X^T A_{\gamma}^{-1} y + \gamma^2 \mathbf{k}_I^T \left(A_{\gamma}^{-1} - A_{\gamma}^{-1} X (X^T A_{\gamma}^{-1} X)^{-1} X^T A_{\gamma}^{-1} \right) y
\end{align*}
where
\begin{align*}
A_{\gamma} = I_{n \times n} + \gamma^2 K_I.
\end{align*}
This expression for $\widehat{f(x)}$, which depends only on $\gamma = \tau/\sigma$ and not on $\tau$ and $\sigma$ individually, can be used for computation.
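As a sketch of that computation (the data, test point, and $\gamma$ below are arbitrary illustrations), the limiting estimator can be evaluated as:

```python
import numpy as np

def K_I(u, v):
    """IBM kernel: (1/2) min^2 max - (1/6) min^3."""
    mn, mx = np.minimum(u, v), np.maximum(u, v)
    return 0.5 * mn**2 * mx - mn**3 / 6.0

def f_hat(x_new, x, y, gamma):
    """Limiting (C -> infinity) posterior mean under the IBM prior."""
    n = len(x)
    X = np.column_stack([np.ones(n), x])                      # n x 2 design matrix
    A = np.eye(n) + gamma**2 * K_I(x[:, None], x[None, :])    # A_gamma
    Ainv_X = np.linalg.solve(A, X)
    Ainv_y = np.linalg.solve(A, y)
    M = np.linalg.solve(X.T @ Ainv_X, X.T @ Ainv_y)           # GLS coefficients
    kI = K_I(x_new, x)                                        # k_I vector at x_new
    resid = Ainv_y - Ainv_X @ M            # A^{-1}[I - X (X^T A^{-1} X)^{-1} X^T A^{-1}] y
    return np.array([1.0, x_new]) @ M + gamma**2 * kI @ resid

x = np.array([0.1, 0.4, 0.7, 1.0])
y = np.array([0.2, 0.5, 0.6, 1.0])
est = f_hat(0.55, x, y, gamma=2.0)
```

As $\gamma \to 0$ the second term vanishes and $A_\gamma \to I_n$, so the estimator reduces to the ordinary least-squares line fit.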
Estimation of $\gamma$ and $\sigma$ ¶

The hyperparameters $\gamma$ and $\sigma$ also need to be estimated from the observed data $(x_1, y_1), \dots, (x_n, y_n)$ (recall that $\tau = \gamma \sigma$). For this, the marginal likelihood of the data given $\sigma, \gamma$ is important. This is calculated using:
\begin{align*}
y \mid \sigma, \gamma \sim N(0, K + \sigma^2 I_n)
\end{align*}
where $K$ is given by (14). In other words,
\begin{align*}
f_{y \mid \sigma, \gamma}(y) \propto \frac{1}{\sqrt{\det(K + \sigma^2 I_n)}} \exp \left(-\frac{1}{2} y^T (K + \sigma^2 I_n)^{-1} y \right).
\end{align*}
For $(K + \sigma^2 I_n)^{-1}$, we use (16) and let $C \rightarrow \infty$ to get
\begin{align*}
(K + \sigma^2 I_n)^{-1} \rightarrow \Sigma^{-1} - \Sigma^{-1} X \left(X^T \Sigma^{-1} X \right)^{-1} X^T \Sigma^{-1}.
\end{align*}
Further, using the matrix determinant lemma, we get
\begin{align*}
|K + \sigma^2 I_n| &= |\Sigma + C X X^T| \\
&= |\Sigma| \, |I + C X^T \Sigma^{-1} X| \\
&\approx |\Sigma| \, |C X^T \Sigma^{-1} X| \quad \text{when $C$ is large} \\
&= |\Sigma| \, C^2 \, |X^T \Sigma^{-1} X| \propto |\Sigma| \, |X^T \Sigma^{-1} X|.
\end{align*}
So the marginal likelihood of $y$ given $\tau, \sigma$ becomes:
\begin{align*}
f_{y \mid \tau, \sigma}(y) \propto |\Sigma|^{-1/2} |X^T \Sigma^{-1} X|^{-1/2} \exp \left(-\frac{1}{2} y^T \left[\Sigma^{-1} - \Sigma^{-1} X \left(X^T \Sigma^{-1} X \right)^{-1} X^T \Sigma^{-1}\right] y \right)
\end{align*}
with $\Sigma$ defined in (17).
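The matrix determinant lemma step above can be verified numerically (a small sketch; the matrices below are arbitrary, with $\Sigma$ built to be positive definite):

```python
import numpy as np

rng = np.random.default_rng(1)
n, C = 5, 1e6
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # n x 2 design matrix
B = rng.normal(size=(n, n))
Sigma = B @ B.T + np.eye(n)                             # positive definite Sigma

# |Sigma + C X X^T| = |Sigma| |I_2 + C X^T Sigma^{-1} X|  (exact identity)
lhs = np.linalg.det(Sigma + C * X @ X.T)
rhs = np.linalg.det(Sigma) * np.linalg.det(
    np.eye(2) + C * X.T @ np.linalg.solve(Sigma, X))

# Large-C approximation: |I_2 + C M| ~ C^2 |M| with M = X^T Sigma^{-1} X
approx = np.linalg.det(Sigma) * C**2 * np.linalg.det(
    X.T @ np.linalg.solve(Sigma, X))
```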
We now take $\tau = \gamma \sigma$ so that
\begin{align*}
\Sigma = \sigma^2 \left(I_n + \gamma^2 K_I \right) = \sigma^2 A_{\gamma} \quad \text{where } A_{\gamma} := I_n + \gamma^2 K_I.
\end{align*}
This gives
\begin{align*}
f_{y \mid \gamma, \sigma}(y) \propto \sigma^{-(n-2)} |A_{\gamma}|^{-1/2} |X^T A_{\gamma}^{-1} X|^{-1/2} \exp \left(-\frac{y^T \left[A_{\gamma}^{-1} - A_{\gamma}^{-1} X (X^T A_{\gamma}^{-1} X)^{-1} X^T A_{\gamma}^{-1} \right] y}{2 \sigma^2} \right).
\end{align*}
Combining this with the prior $\log \sigma \mid \gamma \sim \text{uniform}(-\infty, \infty)$ gives
\begin{align*}
\frac{1}{\sigma^2} \,\Big|\, y, \gamma \sim \text{Gamma} \left(\frac{n}{2} - 1, \; \frac{y^T \left[A_{\gamma}^{-1} - A_{\gamma}^{-1} X (X^T A_{\gamma}^{-1} X)^{-1} X^T A_{\gamma}^{-1} \right] y}{2} \right).
\end{align*}
Finally, if we take a prior $p(\gamma)$ for $\gamma$, then the posterior of $\gamma$ becomes:
\begin{align*}
p(\gamma \mid y) \propto \frac{p(\gamma) \, |A_{\gamma}|^{-1/2} \, |X^T A_{\gamma}^{-1} X|^{-1/2}}{\left(y^T \left[A_{\gamma}^{-1} - A_{\gamma}^{-1} X (X^T A_{\gamma}^{-1} X)^{-1} X^T A_{\gamma}^{-1} \right] y \right)^{(n/2) - 1}}.
\end{align*}

Proof of the $C \rightarrow \infty$ fact ¶

The quantity
\begin{align*}
\hat{f}(x) = \left(C (1, x) X^T + \tau^2 \mathbf{k}_I^T \right) \left(\Sigma^{-1} - \Sigma^{-1} X \left(C^{-1} I_2 + X^T \Sigma^{-1} X \right)^{-1} X^T \Sigma^{-1} \right) y
\end{align*}
converges, as $C \rightarrow \infty$, to:
\begin{align*}
\hat{f}(x) := (1, x) \left(X^T A_{\gamma}^{-1} X \right)^{-1} X^T A_{\gamma}^{-1} y + \gamma^2 \mathbf{k}_I^T \left(A_{\gamma}^{-1} - A_{\gamma}^{-1} X (X^T A_{\gamma}^{-1} X)^{-1} X^T A_{\gamma}^{-1} \right) y.
\end{align*}
Here $\tau = \gamma \sigma$ and $\Sigma = \tau^2 K_I + \sigma^2 I_n$.
\begin{align*}
\Sigma = \tau^2 K_I + \sigma^2 I_n = \sigma^2 \gamma^2 K_I + \sigma^2 I_n = \sigma^2 A_{\gamma}
\end{align*}
where $A_{\gamma} = I + \gamma^2 K_I$.
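Fact 1 can also be sanity-checked numerically: as $C$ grows, the finite-$C$ estimator approaches the limiting expression. A sketch (the data, test point, $\gamma$, and $\sigma$ below are arbitrary):

```python
import numpy as np

def K_I(u, v):
    """IBM kernel: (1/2) min^2 max - (1/6) min^3."""
    mn, mx = np.minimum(u, v), np.maximum(u, v)
    return 0.5 * mn**2 * mx - mn**3 / 6.0

x = np.array([0.1, 0.4, 0.7, 1.0])
y = np.array([0.2, 0.5, 0.6, 1.0])
x_new, gamma, sigma = 0.55, 1.5, 0.3
tau = gamma * sigma
n = len(x)
X = np.column_stack([np.ones(n), x])
KI = K_I(x[:, None], x[None, :])
kI = K_I(x_new, x)

def f_hat_C(C):
    """Finite-C posterior mean k^T (K + sigma^2 I_n)^{-1} y."""
    K = C * X @ X.T + tau**2 * KI
    k = C * np.array([1.0, x_new]) @ X.T + tau**2 * kI
    return k @ np.linalg.solve(K + sigma**2 * np.eye(n), y)

# Limiting (C -> infinity) expression, depending only on gamma = tau / sigma:
A = np.eye(n) + gamma**2 * KI
Ainv_X, Ainv_y = np.linalg.solve(A, X), np.linalg.solve(A, y)
M = np.linalg.solve(X.T @ Ainv_X, X.T @ Ainv_y)
limit = np.array([1.0, x_new]) @ M + gamma**2 * kI @ (Ainv_y - Ainv_X @ M)

gap_small = abs(f_hat_C(1e2) - limit)   # moderate C: visible gap
gap_large = abs(f_hat_C(1e6) - limit)   # large C: gap shrinks
```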