Matthieu Bloch
Thursday September 08, 2022
Assume \(\vecx\in\bbR^n\) is to be estimated from \(p\) observations \(\set{\vecy_i}_{i=1}^p\) with \(\vecy_i\in\bbR^m\). Define the stacked observation \[ \vecy^T = \left[\begin{array}{ccc}\vecy_1^T&\cdots&\vecy_p^T\end{array}\right],\quad \vecy\in\bbR^{mp} \]
The linear least mean square estimate (LLMSE) of \(\vecx\) given \(\set{\vecy_i}_{i=1}^p\) is \(\hat{\vecx}=\matK_0\vecy\), where \(\matK_0\) is any solution of the normal equations \[ \matK_0\matR_\vecy = \matR_{\vecx\vecy} \]
The corresponding error covariance matrix is \(\matP(\matK_0)=\matR_\vecx-\matK_0\matR_{\vecy\vecx}\)
We have performed sensor fusion: we have optimally combined observations from multiple sensors
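As a numerical sketch of this fusion, the snippet below (hypothetical dimensions, sensor matrices, and noise levels, chosen only for illustration) simulates two sensors, forms sample covariances, and solves the normal equations \(\matK_0\matR_\vecy=\matR_{\vecx\vecy}\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: x in R^2 observed by p = 2 sensors, each in R^2,
# stacked into a single observation y in R^4.
n, m, p = 2, 2, 2
A = rng.standard_normal((m * p, n))          # stacked sensor matrices
N = 50_000
x = rng.standard_normal((N, n))              # zero-mean samples of x
v = 0.1 * rng.standard_normal((N, m * p))    # sensor noise
y = x @ A.T + v                              # stacked observations

# Sample covariances R_y = E[y y^T], R_xy = E[x y^T], R_x = E[x x^T]
R_y  = y.T @ y / N
R_xy = x.T @ y / N
R_x  = x.T @ x / N

# Normal equations: K0 R_y = R_xy  =>  solve R_y^T K0^T = R_xy^T
K0 = np.linalg.solve(R_y.T, R_xy.T).T

# Error covariance P(K0) = R_x - K0 R_yx matches the empirical error covariance
xhat = y @ K0.T
P = R_x - K0 @ R_xy.T
print(np.allclose((x - xhat).T @ (x - xhat) / N, P))  # True
```

With sample covariances plugged in, the match is exact (not just approximate), since \(\matK_0\matR_\vecy=\matR_{\vecx\vecy}\) makes the cross terms cancel algebraically.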
What happens if random vectors are not centered?
The LLMSE gain \(\matK_0\) solves \(\matK_0\matR_\vecy=\matR_{\vecx\vecy}\), i.e., \[ \E{(\vecx-\matK_0\vecy)\vecy^T} = \boldsymbol{0} \]
Can this be interpreted again as an inner product?
The LLMSE of \(\vecx\) given \(\set{\vecy_i}_{i=1}^p\) is the projection of \(\vecx\) onto the linear space spanned by \(\set{\vecy_i}_{i=1}^p\)
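A small numerical check of this orthogonality (with an arbitrary, made-up correlated pair \((\vecx,\vecy)\)): the error \(\vecx-\matK_0\vecy\) is uncorrelated with every component of \(\vecy\), which is exactly what makes \(\matK_0\vecy\) a projection.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical joint samples of (x, y); any correlated pair will do.
N = 100_000
y = rng.standard_normal((N, 3))
x = y @ rng.standard_normal((3, 2)) + 0.2 * rng.standard_normal((N, 2))

R_y  = y.T @ y / N
R_xy = x.T @ y / N
K0 = np.linalg.solve(R_y.T, R_xy.T).T   # K0 R_y = R_xy

# Orthogonality principle: E[(x - K0 y) y^T] = 0, so K0 y is the
# projection of x onto the span of the observations.
cross = (x - y @ K0.T).T @ y / N
print(np.abs(cross).max())   # numerically ~0 (machine precision)
```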
We cannot say much more about the LLMSE without additional assumptions
Fortunately, many engineering problems impose more structure between \(\vecy\) and \(\vecx\)
The LLMSE of \(\vecx\) given \(\vecy\) in the linear model \(\vecy=\matH\vecx+\vecv\), with \(\vecv\) zero-mean noise uncorrelated with \(\vecx\) (assuming \(\matR_\vecx\) and \(\matR_\vecv\) nonsingular), is \(\hat{\vecx}=\matK_0\vecy\) with \[ \matK_0=\matR_\vecx\matH^T(\matH\matR_\vecx\matH^T + \matR_\vecv)^{-1} = (\matR_\vecx^{-1}+\matH^T\matR_\vecv^{-1}\matH)^{-1}\matH^T\matR_\vecv^{-1} \] \[ \matP_\vecx = \matR_\vecx -\matR_\vecx\matH^T(\matH\matR_\vecx\matH^T+\matR_\vecv)^{-1}\matH\matR_\vecx = (\matR_\vecx^{-1}+\matH^T\matR_\vecv^{-1}\matH)^{-1} \]
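The two expressions for \(\matK_0\) (and for \(\matP_\vecx\)) are related by the matrix inversion lemma. A quick sanity check with arbitrary, made-up matrices (NumPy, hypothetical dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear model y = H x + v with nonsingular R_x, R_v.
n, m = 3, 5
H = rng.standard_normal((m, n))
R_x = 2.0 * np.eye(n)
R_v = 0.5 * np.eye(m)

# Form 1: K0 = R_x H^T (H R_x H^T + R_v)^{-1}
K1 = R_x @ H.T @ np.linalg.inv(H @ R_x @ H.T + R_v)
# Form 2: K0 = (R_x^{-1} + H^T R_v^{-1} H)^{-1} H^T R_v^{-1}
info = np.linalg.inv(R_x) + H.T @ np.linalg.inv(R_v) @ H
K2 = np.linalg.solve(info, H.T @ np.linalg.inv(R_v))

# Error covariance, both forms
P1 = R_x - R_x @ H.T @ np.linalg.inv(H @ R_x @ H.T + R_v) @ H @ R_x
P2 = np.linalg.inv(info)

print(np.allclose(K1, K2), np.allclose(P1, P2))  # True True
```

In practice one picks the form whose inverse is cheaper: the first inverts an \(m\times m\) matrix, the second an \(n\times n\) one.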