Prof. Matthieu Bloch
Wednesday, November 20, 2024
Consider a Gaussian random vector \(\bfX\sim\calN(\mathbf{0},\matR)\), i.e., \[ p(\vecx) = \frac{1}{(2\pi)^{n/2}\sqrt{\det{\matR}}}\exp\left(-\frac{1}{2}\vecx^T\matR^{-1}\vecx\right) \]
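As a quick numerical sanity check, the density above can be evaluated directly and compared against a library implementation. A minimal sketch in Python, assuming NumPy and SciPy are available; the covariance matrix \(\matR\) and the point \(\vecx\) below are arbitrary illustrative choices, not from the notes:

```python
# Minimal numerical check of the zero-mean Gaussian density formula above.
# The covariance R and the point x are arbitrary illustrative choices.
import numpy as np
from scipy.stats import multivariate_normal

R = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # any symmetric positive definite matrix
x = np.array([0.3, -1.2])

n = len(x)
p_manual = np.exp(-0.5 * x @ np.linalg.solve(R, x)) / \
           ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(R)))
p_scipy = multivariate_normal(mean=np.zeros(n), cov=R).pdf(x)

print(p_manual, p_scipy)   # the two values agree
```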
Assume that we write \[ \bfX = \left[\begin{array}{c}\bfX_o\\\bfX_h\end{array}\right]\qquad\matR = \left[\begin{array}{cc}\matR_o&\matR_{oh}\\ \matR_{oh}^T&\matR_{h}\end{array}\right] \]
The conditional density of \(\bfX_h|\bfX_o=\vecx_o\) is a normal distribution with mean and covariance matrix \[ \bfmu = \matR_{oh}^T\matR_o^{-1}\vecx_o \] \[ \mathbf{\Sigma} = \matR_h - \matR_{oh}^T\matR_o^{-1}\matR_{oh} \]
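These conditioning formulas translate directly into a few lines of linear algebra. A minimal sketch, assuming an illustrative \(3\times 3\) covariance with a 2-dimensional observed block (the matrix and block sizes are assumptions, not from the notes):

```python
# Sketch of Gaussian conditioning: compute the conditional mean and covariance
# of X_h given X_o = x_o for the partition above. R, the block sizes, and x_o
# are illustrative assumptions.
import numpy as np

R = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.5, 0.2],
              [0.3, 0.2, 1.0]])
no = 2                       # dimension of the observed block X_o
Ro  = R[:no, :no]            # R_o
Roh = R[:no, no:]            # R_{oh}
Rh  = R[no:, no:]            # R_h

x_o = np.array([0.7, -0.4])  # observed values

mu_cond    = Roh.T @ np.linalg.solve(Ro, x_o)       # R_{oh}^T R_o^{-1} x_o
Sigma_cond = Rh - Roh.T @ np.linalg.solve(Ro, Roh)  # R_h - R_{oh}^T R_o^{-1} R_{oh}

print(mu_cond, Sigma_cond)
```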
A Gaussian random process is a collection \(\{X(\vect):\vect\in\calT\}\), where \(\calT\subset\bbR^d\), characterized by a mean function \(\mu:\calT\to\bbR\) and a covariance function \(r:\calT\times\calT\to\bbR\)
such that for any finite collection of indices \(\set{\vect_i}_{i=1}^n\subset\calT\), \[ \set{X(\vect_i)}_{i=1}^n\sim\calN(\underline{\mu},\matR)\quad\underline{\mu}=\left[\begin{array}{c}\mu(\vect_1)\\\vdots\\\mu(\vect_n)\end{array}\right]\quad \matR=\left[r(\vect_i,\vect_j)\right]_{1\leq i,j\leq n} \]
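Since any finite collection of samples is jointly Gaussian, a Gaussian process can be sampled at finitely many points by drawing from \(\calN(\underline{\mu},\matR)\). A minimal sketch, assuming a zero mean function and a squared-exponential covariance \(r(\vect_i,\vect_j)\) as an illustrative kernel choice:

```python
# Sketch: sampling a zero-mean Gaussian process at finitely many points.
# The squared-exponential kernel and its length scale are assumed choices.
import numpy as np

def r(ti, tj, ell=0.5):
    """Squared-exponential covariance (illustrative kernel)."""
    return np.exp(-0.5 * (ti - tj) ** 2 / ell ** 2)

t = np.linspace(0.0, 1.0, 50)                        # indices t_1, ..., t_n
R = np.array([[r(ti, tj) for tj in t] for ti in t])  # R = [r(t_i, t_j)]
R += 1e-9 * np.eye(len(t))                           # jitter for numerical stability

mu = np.zeros(len(t))                                # mu(t) = 0 assumed
sample = np.random.default_rng(0).multivariate_normal(mu, R)
```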
For a probabilistic model governing the distribution of samples \(\set{x_i}_{i=1}^n\), the likelihood function is \[ L(\theta;x_1,\cdots,x_n)\eqdef p_{X_1\cdots X_n}(x_1,\cdots,x_n;\theta). \] It is often convenient to work with the log-likelihood \(\ell(\theta;x_1,\cdots,x_n)=\log L(\theta;x_1,\cdots,x_n)\).
The maximum likelihood estimate of \(\theta\) is \[ \hat{\theta}_{\textnormal{MLE}} \eqdef \argmax_{\theta}L(\theta;x_1,\cdots,x_n) \]
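As a concrete instance, for \(n\) i.i.d. samples from \(\calN(\mu,\sigma^2)\) the MLE is available in closed form (\(\hat{\mu}\) is the sample mean and \(\hat{\sigma}^2\) the \(1/n\)-normalized sample variance), which can be checked against a numerical maximizer of \(\ell\). A minimal sketch on synthetic data (the true parameters and sample size are assumptions for illustration):

```python
# Sketch: maximum likelihood for i.i.d. N(mu, sigma^2) samples. The closed-form
# MLE (sample mean, 1/n sample variance) is compared against a numerical
# maximizer of the log-likelihood; the data are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=500)   # samples with theta_0 = (1, 4)

def neg_log_likelihood(theta):
    mu, log_sigma = theta                      # parametrize sigma > 0 via its log
    sigma = np.exp(log_sigma)
    return -np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                   - (x - mu) ** 2 / (2 * sigma**2))

res = minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma2_hat = res.x[0], np.exp(2 * res.x[1])

print(mu_hat, sigma2_hat)          # numerical MLE
print(x.mean(), x.var())           # closed form: agrees up to tolerance
```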
An estimator \(\hat{\theta}\) of \(\theta_0\in\calT\) has bias \(\E{\hat{\theta}}-\theta_0\). The estimator is unbiased if the bias is zero for all \(\theta_0\in\calT\).
An estimator \(\hat{\theta}_n\) of \(\theta_0\in\calT\) using \(n\) observations \(x_1,\cdots,x_n\) is consistent if for every \(\epsilon>0\) \[ \lim_{n\to\infty}\P{\abs{\hat{\theta}_n-\theta_0}>\epsilon} = 0. \]
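The variance MLE from the previous sketch illustrates the distinction between these two notions: \(\hat{\sigma}^2_n\) has bias \(-\sigma^2/n\) for every finite \(n\), yet it is consistent. A minimal simulation sketch (the true variance, sample sizes, and number of trials are illustrative assumptions):

```python
# Sketch: bias vs. consistency for the Gaussian variance MLE
# sigma2_hat = (1/n) sum (x_i - xbar)^2. It is biased for every finite n,
# with E[sigma2_hat] = (n-1)/n * sigma^2, yet consistent as n grows.
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0

for n in [5, 50, 500, 5000]:
    estimates = [rng.normal(0.0, np.sqrt(sigma2), n).var() for _ in range(2000)]
    print(n, np.mean(estimates))   # approaches sigma2 = 4 as n increases
```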