H∞ filter
Matthieu Bloch
Tuesday, December 6, 2022
Today in ECE 6555
- Announcements
- Last lecture
- Bonus homework posted
- Look it up, it's not that hard but will make you think about
importance sampling
- can replace up to 33% of a past homework grade
- One final exam take-home next week
- Missing materials?
- Office hours Wednesday December 07, 2022 (instead of today)
- Last time
- Today
- Questions?
\(H_\infty\) filter
- Motivation
- Kalman filter requires knowledge of all system parameters
- What do we do when we don't know anything about the noise corrupting
the system?
- Solution: a filter that controls the worst-case
error
- Why do we care about the Kalman filter?
- It's the optimal filter for a linear state-space model with Gaussian
noise
- It's no longer optimal the moment we change the assumptions (see EKF,
UKF, etc.)
- \(H_\infty\)
filter
- This is a minimax filter (more on this soon)
- Assumes nature is playing against you to choose the most damaging
noise
- Still requires knowledge of the system matrices
- For what to do without knowledge of the system, check out Sample Complexity of Kalman
Filtering for Unknown Systems
Adversarial state-space model
- We shall attempt to design a filter that optimizes the cost function
(over horizon \(n-1\)) \[
J_1\eqdef
\frac{\sum_{k=0}^{n-1}\norm[\matS_k]{\hat{\vecx}_k-\vecx_k}^2}{\norm[\matP_0^{-1}]{\hat{\vecx}_0-\vecx_0}^2+\sum_{k=0}^{n-1}\left(\norm[\matQ_k^{-1}]{\vecw_k}^2+\norm[\matR_k^{-1}]{\vecv_k}^2\right)}
\]
- We attempt to minimize \(J_1\) (over
the choice of the linear filter for \(\vecx_k\))
- Nature attempts to maximize \(J_1\)
(over the choice of the noise)
- The matrices \(\matP_0\), \(\matQ_k\) and \(\matR_k\) are design parameters
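As a sketch, the cost \(J_1\) can be evaluated numerically for given trajectories. The function below assumes constant design matrices \(\matS\), \(\matP_0\), \(\matQ\), \(\matR\) for simplicity; all names are illustrative.

```python
import numpy as np

def weighted_sq_norm(x, W):
    """Weighted squared norm ||x||_W^2 = x^T W x."""
    return float(x @ W @ x)

def J1(x_hat, x, x0_hat, x0, w, v, S, P0, Q, R):
    """Ratio of weighted estimation-error energy to weighted
    disturbance energy over the horizon (constant S, P0, Q, R
    assumed here for simplicity)."""
    # Numerator: sum of weighted squared estimation errors
    num = sum(weighted_sq_norm(xh - xt, S) for xh, xt in zip(x_hat, x))
    # Denominator: initial-state uncertainty plus noise energies,
    # each weighted by the inverse design matrix
    den = weighted_sq_norm(x0_hat - x0, np.linalg.inv(P0))
    den += sum(weighted_sq_norm(wk, np.linalg.inv(Q)) for wk in w)
    den += sum(weighted_sq_norm(vk, np.linalg.inv(R)) for vk in v)
    return num / den
```

Nature picks \(\vecw_k\), \(\vecv_k\), \(\vecx_0\) to make this ratio large; the filter designer picks \(\hat{\vecx}_k\) to keep it small.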
Modified cost-function
- Direct minimization is intractable
- We only require instead that \(J_1<\frac{1}{\theta}\), or alternatively
\[
J \eqdef
-\frac{1}{\theta}\norm[\matP_0^{-1}]{\hat{\vecx}_0-\vecx_0}^2 +
\sum_{k=0}^{n-1}\left(\norm[\matS_k]{\hat{\vecx}_k-\vecx_k}^2-\frac{1}{\theta}\left(\norm[\matQ_k^{-1}]{\vecw_k}^2+\norm[\matR_k^{-1}]{\vecv_k}^2\right)\right)
<1
\]
- We will attempt to solve for \[
J^* = \min_{\hat{\vecx}_k}\max_{\vecw_k,\vecv_k,\vecx_0} J
\]
- Dynamic constrained optimization
- The function to optimize is of the form \[
J = \psi(\vecx_0) + \sum_{k=0}^{n-1}\calL_k
\] where the minimization is subject to the constraints of
the dynamic state space model
Dynamic constrained optimization
Use Lagrange multipliers \[
J = \psi(\vecx_0) +
\sum_{k=0}^{n-1}\left[\calL_k+\frac{2\lambda_{k+1}^T}{\theta}(\matF_k
\vecx_k+\vecw_k-\vecx_{k+1})\right]
\]
This is a bit technical, but instructive \[
\matK_k =
\matP_k[\matI-\theta\matP_k+\matH_k^T\matR_k^{-1}\matH_k\matP_k]^{-1}\matH_k^{T}\matR_k^{-1}
\] \[
\hat{\vecx}_{k+1} =
\matF_k\hat{\vecx}_k+\matF_k\matK_k(\vecy_k-\matH_k\hat{\vecx}_k)
\] \[
\matP_{k+1}=\matF_k\matP_k[\matI-\theta\matP_k+\matH_k^T\matR_k^{-1}\matH_k\matP_k]^{-1}\matF_k^{T}+\matQ_k
\]
The Kalman filter is recovered for \(\theta=0\) when using the known covariance
matrices of the noise
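The recursion above can be sketched in a few lines of Python (a sketch assuming \(\matS_k=\matI\), as in the gain expression above; the function and variable names are illustrative). Setting \(\theta=0\) recovers the usual Kalman gain \(\matP_k\matH_k^T(\matH_k\matP_k\matH_k^T+\matR_k)^{-1}\), which can be checked numerically.

```python
import numpy as np

def hinf_step(x_hat, P, y, F, H, Q, R, theta):
    """One step of the H-infinity filter recursion above
    (sketch, with S_k = I)."""
    n = P.shape[0]
    Rinv = np.linalg.inv(R)
    # Common inverse appearing in both the gain and the P update
    M = np.linalg.inv(np.eye(n) - theta * P + H.T @ Rinv @ H @ P)
    K = P @ M @ H.T @ Rinv                      # H-infinity gain
    x_next = F @ x_hat + F @ K @ (y - H @ x_hat)  # state update
    P_next = F @ P @ M @ F.T + Q                  # "covariance" update
    return x_next, P_next, K
```

A larger \(\theta\) makes the filter more conservative against worst-case noise, at the price of requiring \(\matI-\theta\matP_k+\matH_k^T\matR_k^{-1}\matH_k\matP_k\) to remain invertible (in fact positive definite) for the solution to exist.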