Probability & Statistics - Lecture 2
Monday, September 12, 2022
The expectation \(\E[\cdot]{\cdot}\) of a function \(g\) of a random variable \(X\) is defined via its distribution: \(\E[X]{g(X)} = \int_x f_X(x) g(x) \, dx\) (the law of the unconscious statistician).
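A minimal numerical sketch of this definition: approximate \(\E[X]{g(X)}\) by a midpoint Riemann sum of \(f_X(x) g(x)\). The density, function, and bounds below are illustrative assumptions (here \(X\) is uniform on \([0,1]\) and \(g(x) = x^2\), for which the exact value is \(1/3\)).

```python
# Sketch: approximate E[g(X)] = ∫ f_X(x) g(x) dx by a midpoint Riemann sum.
# The choices of f, g, and the bounds are assumptions for illustration.

def expectation(g, f, a, b, n=100_000):
    """Approximate E[g(X)] for a density f supported on [a, b]."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h     # midpoint of the i-th subinterval
        total += f(x) * g(x) * h  # f_X(x) g(x) dx
    return total

uniform_pdf = lambda x: 1.0       # f(x) = 1 on [0, 1]
print(expectation(lambda x: x**2, uniform_pdf, 0.0, 1.0))  # ≈ 1/3
```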
Definitions:
The mean \(\mu_X\) of a random variable \(X\) is defined as \(\mu_X = \E{X}\).
The variance \(\sigma^2_X\) of a random variable \(X\) is defined as \(\sigma^2_X = \Var[X]{X} = \E[X]{(X - \mu_X)^2}\).
Drill: Prove that the variance may also be represented as \(\E{X^2} - \E{X}^2\).
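A solution sketch for this drill, expanding the square and using linearity of expectation (macros follow the notes' conventions):

```latex
\begin{align*}
\Var[X]{X} &= \E[X]{(X - \mu_X)^2} \\
           &= \E[X]{X^2 - 2 \mu_X X + \mu_X^2} \\
           &= \E{X^2} - 2 \mu_X \E{X} + \mu_X^2 \\
           &= \E{X^2} - 2 \mu_X^2 + \mu_X^2 \\
           &= \E{X^2} - \E{X}^2.
\end{align*}
```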
Drill: Recall the uniform distribution has pdf \(f\) defined on \([0,1]\) by \(f(x) = 1\). For a uniformly distributed random variable \(X\), compute \(\mu_X = \E[X]{X}\) and \(\Var[X]{X}\).
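As a sanity check on this drill (not a substitute for the integrals), a Monte Carlo estimate should land near \(\mu_X = 1/2\) and \(\Var[X]{X} = 1/12\). The sample count and seed are arbitrary choices.

```python
import random

# Sketch: Monte Carlo check of the uniform drill's answers,
# mu_X = 1/2 and Var(X) = 1/12, for X ~ Uniform[0, 1].
random.seed(0)
samples = [random.random() for _ in range(200_000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

print(mean)  # ≈ 0.5
print(var)   # ≈ 1/12 ≈ 0.0833
```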
The mean squared error (MSE) of a deterministic estimator \(\hat{x} : \bbR^m \to \bbR^n\) of a random variable \(X \in \bbR^n\) from an observation \(Y \in \bbR^m\) is defined as \[ \text{MSE}(\hat{x}) = \E[X,Y]{||X - \hat{x}(Y)||_2^2} \] The minimum mean square error (MMSE) estimator \(\hat{x}_{\text{MMSE}}\) is then defined as \[ \hat{x}_{\text{MMSE}} = \underset{\hat{x}}{\text{argmin}} \, \text{MSE}(\hat{x}) \] with solution given by the conditional mean \[ \hat{x}_{\text{MMSE}}(y) = \E[X|Y]{X | y} \]
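A small Monte Carlo illustration of the MMSE estimator: for the toy model \(X \sim \mathcal{N}(0,1)\), \(Y = X + N\) with independent Gaussian noise \(N\), the conditional mean is \(\E[X|Y]{X|y} = y / (1 + \sigma_N^2)\) (a standard Gaussian fact, not derived in these notes). The model and sample count are assumptions for illustration; the conditional-mean estimator should beat the naive estimator \(\hat{x}(y) = y\) in empirical MSE.

```python
import random

# Sketch: X ~ N(0,1) observed as Y = X + N with N ~ N(0, sigma_n2).
# For this jointly Gaussian toy model, E[X | Y=y] = y / (1 + sigma_n2).
random.seed(1)
sigma_n2 = 1.0
pairs = []
for _ in range(100_000):
    x = random.gauss(0, 1)
    pairs.append((x, x + random.gauss(0, sigma_n2 ** 0.5)))

def mse(estimator):
    """Empirical MSE of an estimator y -> x_hat over the sampled pairs."""
    return sum((x - estimator(y)) ** 2 for x, y in pairs) / len(pairs)

mse_mmse = mse(lambda y: y / (1 + sigma_n2))  # conditional-mean estimator
mse_naive = mse(lambda y: y)                  # just report the observation
print(mse_mmse, mse_naive)  # the conditional mean roughly halves the error
```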
A linear estimator \(\hat{x} : \bbR^m \to \bbR^n\) is one of the form \[ \hat{x}(y) = K y + b \] The parameters of the linear minimum mean square error (LMMSE) estimator \(\hat{x}_{\text{LMMSE}}\) are then defined as \[ K_{\text{LMMSE}}, b_{\text{LMMSE}} = \underset{K, b}{\text{argmin}} \, \text{MSE}(\hat{x}) \] with solution satisfying \[ K_{\text{LMMSE}} \Sigma_{YY} = \Sigma_{XY} \] \[ b_{\text{LMMSE}} = \mu_X - K_{\text{LMMSE}} \mu_Y \] where \(\Sigma_{XY} = \E[X,Y]{(X - \mu_X)(Y - \mu_Y)^T}\) is the cross-covariance and \(\Sigma_{YY} = \E[Y]{(Y - \mu_Y)(Y - \mu_Y)^T}\) is the covariance of \(Y\). (The correlation matrices \(R_{XY} = \E[X,Y]{X Y^T}\) and \(R_{YY} = \E[Y]{Y Y^T}\) coincide with these covariances when \(X\) and \(Y\) are zero mean.)
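A sketch of computing the LMMSE parameters from data: replace the population covariances with sample covariances of the centered samples (so the zero-mean correlation and covariance forms coincide) and solve the resulting linear system for \(K\). The scalar toy model \(Y = 2X + \text{noise}\), the noise level, and the sample count are assumptions for illustration; the true gain is \(2 / (4 + 0.25) \approx 0.47\).

```python
import numpy as np

# Sketch: estimate LMMSE parameters by solving K Sigma_YY = Sigma_XY
# with sample covariances. The toy model (scalar X, Y = 2X + noise)
# is an assumption for illustration.
rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(0.0, 1.0, size=(n, 1))            # X in R^1
Y = 2.0 * X + rng.normal(0.0, 0.5, size=(n, 1))  # Y in R^1

mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
Xc, Yc = X - mu_x, Y - mu_y                      # centered samples
Sigma_XY = Xc.T @ Yc / n                         # sample cross-covariance
Sigma_YY = Yc.T @ Yc / n                         # sample covariance of Y

K = np.linalg.solve(Sigma_YY.T, Sigma_XY.T).T    # solves K Sigma_YY = Sigma_XY
b = mu_x - K @ mu_y
print(K, b)  # K ≈ 2 / 4.25 ≈ 0.47, b ≈ 0
```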