Innovation Processes / State Space Models

Matthieu Bloch

Tuesday September 27, 2022

Today in ECE 6555

  • Don't forget
    • Problem set 2 due Wednesday September 28, 2022 on Gradescope (hard deadline extended)
    • Office hours today at 12pm in TSRB (live and recorded)
    • Make sure you start homework early
    • Don't spend 30 hours on a homework
  • Last time
    • Innovation process
  • Today's plan
    • Innovation process
    • State space model
  • Questions?

Recall: Innovation processes

  • Back to the normal equations \(\matK_0\matR_y = \matR_{xy}\)
    • A key difficulty is that \(\matR_y\) needs to be inverted (especially for causal filtering)
    • Life would be easier if \(\matR_y\) were diagonal (which in general it has no reason to be)
  • Geometric approach to simplify dealing with normal equations
    • The normal equations are obtained by projecting onto a subspace
    • We are not bound to use \(\set{\vecy_i}_{i=0}^m\): we can orthogonalize!
    • Gram-Schmidt orthogonalization for random variables \[ \vece_0 = \vecy_0\qquad\qquad \forall i \geq 1\quad \vece_i = \vecy_i-\underbrace{\sum_{j=0}^{i-1}\dotp{\vecy_i}{\vece_j}\norm{\vece_j}^{-2}\vece_j}_{\hat{\vecy}_i} \]
  • The random variable \(\vece_i\eqdef \vecy_i-\hat{\vecy}_i\) is called the innovation

  • There is an invertible causal relationship between \(\set{\vecy_i}\) and \(\set{\vece_i}\)
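
  • A minimal numerical sketch of this orthogonalization (Python/NumPy; the function name innovations and its interface are my assumptions, not part of the lecture): Gram-Schmidt on random variables amounts to a unit lower triangular \(\matL\matD\matL^\T\) factorization of \(\matR_y\), which makes the causal invertibility explicit.

      import numpy as np

      def innovations(y, R):
          # Gram-Schmidt on random variables = LDL^T factorization of R:
          # R = L D L^T with L unit lower triangular, so e = L^{-1} y
          # has diagonal covariance D (the innovation variances ||e_i||^2).
          n = len(y)
          L, D = np.eye(n), np.zeros(n)
          for i in range(n):
              for j in range(i):
                  # projection coefficient <y_i, e_j> / ||e_j||^2
                  L[i, j] = (R[i, j] - L[i, :j] @ (D[:j] * L[j, :j])) / D[j]
              D[i] = R[i, i] - (L[i, :i] ** 2) @ D[:i]
          e = np.linalg.solve(L, y)  # causal, invertible map y -> e
          return e, L, D

    For instance, with the exponentially correlated covariance \(R_{ij}=a^{\abs{i-j}}\) discussed below, this returns \(e_i = y_i - a\,y_{i-1}\) and \(D_i = 1-a^2\) for \(i\geq 1\).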

Applications of innovation processes

  • Causal filtering with innovations
  • Example: innovations of an exponentially correlated process (worked sketch below)
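  • As a worked sketch (assuming the exponentially correlated covariance \(\dotp{\vecy_i}{\vecy_j}=a^{\abs{i-j}}\) with \(\abs{a}<1\); the unit-variance normalization is an assumption): for \(j<i\), \(\dotp{\vecy_i-a\vecy_{i-1}}{\vecy_j}=a^{i-j}-a\cdot a^{i-1-j}=0\), so the projection onto the entire past involves only the most recent sample, \[ \vece_0=\vecy_0\qquad \forall i\geq 1\quad \hat{\vecy}_i=a\,\vecy_{i-1},\quad \vece_i=\vecy_i-a\,\vecy_{i-1},\quad \norm{\vece_i}^2=1-a^2 \]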

State Space Models: Motivation

  • Consider the linear stochastic difference equation \(\vecy_{i+1}-a \vecy_i=\vecu_i\) for \(i\geq 0\), \(a\in\bbR\)

    • Initial condition \(\vecy_0\)
    • \(\vecu_i\) such that \(\dotp{\vecu_i}{\vecu_j}=\matQ_i\delta_{ij}\), \(\norm{\vecy_0}^2=\Pi_0\), \(\dotp{\vecu_i}{\vecy_0}=0\)
    • The process may be non-stationary
  • \(\forall i\geq j\) we have \(\dotp{\vecu_i}{\vecy_j}=0\)

    Let \(\Pi_i\eqdef \norm{\vecy_i}^2\); since \(\dotp{\vecu_i}{\vecy_i}=0\), \(\Pi_{i+1}=\norm{a\vecy_i+\vecu_i}^2=a^2\Pi_i+\matQ_i\)

  • In general the process is not stationary

  • If \(\matQ_i=\matQ\) and \(\Pi_0 = \frac{\matQ}{1-a^2}\) with \(\abs{a}<1\), the process is stationary (\(\Pi_{i+1}=a^2\Pi_i+\matQ=\Pi_i\))

  • Innovations can be computed regardless of stationarity

  • State space models afford quite a bit of generality

  • Does the simplicity extend beyond that simple example?
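
  • A minimal simulation sketch of the scalar model (Python/NumPy; all parameter values are illustrative assumptions), checking that the stationary initialization \(\Pi_0=\matQ/(1-a^2)\) is a fixed point of \(\Pi_{i+1}=a^2\Pi_i+\matQ_i\):

      import numpy as np

      rng = np.random.default_rng(0)
      a, Q, n, trials = 0.9, 1.0, 200, 100_000   # illustrative values
      Pi0 = Q / (1 - a**2)                       # stationary initialization

      # Monte Carlo estimate of Pi_i = E[y_i^2] over many sample paths
      y = rng.normal(0.0, np.sqrt(Pi0), size=trials)
      for i in range(n):
          y = a * y + rng.normal(0.0, np.sqrt(Q), size=trials)

      print(np.mean(y**2), Pi0)  # empirical variance stays near Pi0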

Higher order models

  • An autoregressive process is defined by the stochastic equation \[ y_{i+1} = a_{0,i}y_i + a_{1,i} y_{i-1}+\cdots + a_{n-1,i}y_{i-n+1} + u_i\qquad \forall i\geq 0 \] where:
    • \(u_i\) is zero mean with \(\dotp{u_i}{u_j}=Q_i\delta_{i,j}\) \(\forall i \geq 0\)
    • \(y_{0},\cdots,y_{-n+1}\) zero mean with known covariance matrix \(\Pi_0\)
  • The autoregressive model can be represented as a simple order 1 recursion of the form \[ \vecx_{i+1} = \matF_i\vecx_i + \matG_i\vecu_i\qquad y_i =\matH_i\vecx_i \] (one standard choice of \(\matF_i,\matG_i,\matH_i\) is sketched after this list)

  • An autoregressive moving-average process is defined by the stochastic equation \[ y_{i+1} = a_{0}y_i + a_{1} y_{i-1}+\cdots + a_{n-1}y_{i-n+1} + b_0u_i + \cdots+b_{n-1}u_{i-n+1}\qquad \forall i\geq 0 \] where:
    • \(u_i\) is zero mean with \(\dotp{u_i}{u_j}=Q_i\delta_{i,j}\) \(\forall i \geq 0\)
    • \(y_{0},\cdots,y_{-n+1}\) zero mean with known covariance matrix \(\Pi_0\)
  • The autoregressive moving-average model can also be represented as a simple order 1 recursion
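  • As a sketch of one such realization for the autoregressive model above (a standard companion-form choice; the state ordering is one of several equivalent conventions), take \(\vecx_i = \begin{bmatrix} y_i & y_{i-1} & \cdots & y_{i-n+1}\end{bmatrix}^T\) and \[ \matF_i = \begin{bmatrix} a_{0,i} & a_{1,i} & \cdots & a_{n-1,i}\\ 1 & 0 & \cdots & 0\\ \vdots & \ddots & \ddots & \vdots\\ 0 & \cdots & 1 & 0 \end{bmatrix}\quad \matG_i = \begin{bmatrix} 1\\ 0\\ \vdots\\ 0 \end{bmatrix}\quad \matH_i = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} \] so that the first row of \(\vecx_{i+1}=\matF_i\vecx_i+\matG_i\vecu_i\) reproduces the autoregressive recursion and \(y_i=\matH_i\vecx_i\) reads out the current sample.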

Standard state space model