Matthieu Bloch
Tuesday September 13, 2022
We cannot say much more about the LLSE without knowing more about the joint statistics of \(\vecx\) and \(\vecy\)
Fortunately, many engineering problems impose more structure between \(\vecy\) and \(\vecx\)
The LLSE of \(\vecx\) given \(\vecy\) in the linear model \(\vecy=\matH\vecx+\vecv\) (assuming \(\matR_\vecx\) and \(\matR_\vecv\) nonsingular) is \(\hat{\vecx}=\matK_0\vecy\) with \[ \matK_0=\matR_\vecx\matH^T(\matH\matR_\vecx\matH^T + \matR_\vecv)^{-1} = (\matR_\vecx^{-1}+\matH^T\matR_\vecv^{-1}\matH)^{-1}\matH^T\matR_\vecv^{-1} \] \[ \matP_\vecx = \matR_\vecx -\matR_\vecx\matH^T(\matH\matR_\vecx\matH^T+\matR_\vecv)^{-1}\matH\matR_\vecx = (\matR_\vecx^{-1}+\matH^T\matR_\vecv^{-1}\matH)^{-1} \]
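As a numerical sanity check (a sketch with hypothetical dimensions and randomly generated positive-definite covariances), the two forms of \(\matK_0\) and of \(\matP_\vecx\) can be verified to agree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: x in R^3 observed through y = H x + v in R^4.
n, m = 3, 4
H = rng.standard_normal((m, n))
# Random positive-definite covariances R_x (prior) and R_v (noise).
A = rng.standard_normal((n, n)); R_x = A @ A.T + n * np.eye(n)
B = rng.standard_normal((m, m)); R_v = B @ B.T + m * np.eye(m)

Rx_inv, Rv_inv = np.linalg.inv(R_x), np.linalg.inv(R_v)

# Gain, "measurement" form: K0 = R_x H^T (H R_x H^T + R_v)^{-1}
K0_meas = R_x @ H.T @ np.linalg.inv(H @ R_x @ H.T + R_v)
# Gain, "information" form: K0 = (R_x^{-1} + H^T R_v^{-1} H)^{-1} H^T R_v^{-1}
K0_info = np.linalg.inv(Rx_inv + H.T @ Rv_inv @ H) @ H.T @ Rv_inv

# Error covariance, both forms.
P_meas = R_x - R_x @ H.T @ np.linalg.inv(H @ R_x @ H.T + R_v) @ H @ R_x
P_info = np.linalg.inv(Rx_inv + H.T @ Rv_inv @ H)

assert np.allclose(K0_meas, K0_info)
assert np.allclose(P_meas, P_info)
```

The equality of the two forms is the matrix inversion lemma at work; the information form is cheaper when the observation dimension exceeds the state dimension.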
Sometimes we are interested in estimating a deterministic quantity \(\vecx\), i.e., one with no prior distribution
Then \(\hat{\vecx}_\infty\eqdef (\matH^T\matH)^{-1}\matH^T\vecy\) is the optimal unbiased linear estimator of \(\vecx\)
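A quick check (hypothetical overdetermined model, randomly generated \(\matH\)) that \(\hat{\vecx}_\infty\) recovers \(\vecx\) exactly in the noiseless case and matches NumPy's least-squares solver otherwise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical overdetermined model y = H x + v with deterministic x.
n, m = 2, 6
H = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0])

def x_hat_inf(y):
    # x_hat_inf = (H^T H)^{-1} H^T y, solved via the normal equations.
    return np.linalg.solve(H.T @ H, H.T @ y)

# Unbiasedness in action: with no noise, (H^T H)^{-1} H^T H x = x exactly.
assert np.allclose(x_hat_inf(H @ x_true), x_true)

# With noise, it coincides with the ordinary least-squares solution.
y = H @ x_true + 0.1 * rng.standard_normal(m)
assert np.allclose(x_hat_inf(y), np.linalg.lstsq(H, y, rcond=None)[0])
```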
Notes:
If \(\hat{\vecx}_1\) and \(\hat{\vecx}_2\) are LLSEs of \(\vecx\) from two observations, with error covariances \(\matP_1\) and \(\matP_2\), then \(\matP^{-1}\hat{\vecx} = \matP_1^{-1}\hat{\vecx}_1+\matP_2^{-1}\hat{\vecx}_2\) with \(\matP^{-1} = \matP_1^{-1}+\matP_2^{-1}-\matR_\vecx^{-1}\)
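This fusion rule can be verified numerically. The sketch below (hypothetical dimensions, random matrices) forms two observations \(\vecy_i=\matH_i\vecx+\vecv_i\) of the same \(\vecx\), computes each LLSE in information form, and checks that combining them reproduces the joint LLSE built from the stacked observation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
# Common prior covariance R_x; two observations y_i = H_i x + v_i of the same x.
A = rng.standard_normal((n, n)); R_x = A @ A.T + n * np.eye(n)
H1 = rng.standard_normal((4, n)); Rv1 = np.eye(4)
H2 = rng.standard_normal((5, n)); Rv2 = 2.0 * np.eye(5)

x = np.linalg.cholesky(R_x) @ rng.standard_normal(n)
y1 = H1 @ x + rng.standard_normal(4)
y2 = H2 @ x + np.sqrt(2.0) * rng.standard_normal(5)

Rx_inv = np.linalg.inv(R_x)
# Individual LLSEs in information form: P_i^{-1} = R_x^{-1} + H_i^T Rv_i^{-1} H_i.
P1 = np.linalg.inv(Rx_inv + H1.T @ np.linalg.inv(Rv1) @ H1)
P2 = np.linalg.inv(Rx_inv + H2.T @ np.linalg.inv(Rv2) @ H2)
x1 = P1 @ H1.T @ np.linalg.inv(Rv1) @ y1
x2 = P2 @ H2.T @ np.linalg.inv(Rv2) @ y2

# Joint LLSE from the stacked model [y1; y2] = [H1; H2] x + [v1; v2].
Hs = np.vstack([H1, H2])
Rvs = np.block([[Rv1, np.zeros((4, 5))], [np.zeros((5, 4)), Rv2]])
P_joint = np.linalg.inv(Rx_inv + Hs.T @ np.linalg.inv(Rvs) @ Hs)
x_joint = P_joint @ Hs.T @ np.linalg.inv(Rvs) @ np.concatenate([y1, y2])

# Fusion rule: P^{-1} = P1^{-1} + P2^{-1} - R_x^{-1}, and
# x_hat = P (P1^{-1} x1 + P2^{-1} x2).
P_inv = np.linalg.inv(P1) + np.linalg.inv(P2) - Rx_inv
assert np.allclose(np.linalg.inv(P_joint), P_inv)
assert np.allclose(P_joint @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2), x_joint)
```

The subtraction of \(\matR_\vecx^{-1}\) corrects for the prior being counted once in each individual estimate.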
Consider the deterministic least-squares optimization \[ \min_{\vecx} (\vecx-\vecx_0)^T\Pi_0^{-1}(\vecx-\vecx_0) + \norm[W]{\vecy-\matH\vecx}^2 \]
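Setting the gradient of this quadratic cost to zero yields the normal equations \((\Pi_0^{-1}+\matH^T W\matH)\hat{\vecx} = \Pi_0^{-1}\vecx_0 + \matH^T W\vecy\). A sketch with hypothetical data checks that this closed form is indeed the minimizer:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 5
# Hypothetical data: prior guess x0 weighted by Pi0^{-1}, weighted fit to y ~ H x.
H = rng.standard_normal((m, n))
W = np.eye(m)                       # weighting matrix in the residual norm
A = rng.standard_normal((n, n)); Pi0 = A @ A.T + n * np.eye(n)
x0 = rng.standard_normal(n)
y = rng.standard_normal(m)

Pi0_inv = np.linalg.inv(Pi0)
# Normal equations: (Pi0^{-1} + H^T W H) x = Pi0^{-1} x0 + H^T W y.
x_hat = np.linalg.solve(Pi0_inv + H.T @ W @ H, Pi0_inv @ x0 + H.T @ W @ y)

def cost(x):
    r = y - H @ x
    return (x - x0) @ Pi0_inv @ (x - x0) + r @ W @ r

# Gradient 2 Pi0^{-1}(x - x0) - 2 H^T W (y - H x) vanishes at x_hat.
grad = 2 * Pi0_inv @ (x_hat - x0) - 2 * H.T @ W @ (y - H @ x_hat)
assert np.allclose(grad, 0)

# The cost is strictly convex, so x_hat beats any perturbation of itself.
assert all(cost(x_hat) <= cost(x_hat + 0.1 * rng.standard_normal(n))
           for _ in range(20))
```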
Consider the stochastic linear least-squares optimization for the linear model \(\vecy=\matH\vecx+\vecv\)