Innovation Processes

Matthieu Bloch

Thursday, September 22, 2022

Today in ECE 6555

  • Don't forget
    • Problem set 2 due Thursday September 22, 2022 on Gradescope
    • Hard deadline extended to Monday September 26, 2022
    • Mathematics of ECE workshops (website)
    • Extra office hours today at 2pm in TSRB (live and recorded)
  • Last time
    • Stochastic processes: smoothing, causal filtering, prediction
    • Wiener-Hopf solution to causal filtering
  • Today's plan
    • Clarification of causal filtering proof
    • Innovation process
  • Questions?

Causal filtering

  • Geometry strikes back…
  • For \(\matR_\vecy\succ 0\) decomposed as \(\matR_\vecy=\matL\matD\matL^T\) (\(\matL\) lower triangular) \[ \hat{\vecx}_{f} = \mathcal{L}\left[\matR_{\vecx\vecy}\matL^{-T}\matD^{-1}\right]\matL^{-1}\vecy \] where \[ \hat{\vecx}_{f}\eqdef\left[\begin{array}{c}\hat{x}_{0|0}\\\hat{x}_{1|1}\\\vdots\\\hat{x}_{m|m}\end{array}\right]\quad \matR_{\vecy}\eqdef\left[\matR_y(i,j)\right]\quad \matR_{\vecx\vecy}\eqdef\left[\matR_{xy}(i,j)\right] \] and \(\mathcal{L}[\cdot]\) is the operator that extracts the lower-triangular part of a matrix.
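This formula can be checked numerically. The sketch below (a minimal example, assuming a linear model \(\vecy=\vecx+\vecv\) with an exponentially correlated \(\vecx\) and white \(\vecv\); all variable names and the use of `numpy` are ours) builds the LDL factors from a Cholesky factorization and verifies that each row of the causal filter uses only past and present observations:

```python
import numpy as np

# Small example: y = x + v with positive-definite covariances (our choice)
m = 5
Rx = 0.9 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))  # exponentially correlated x
Ry = Rx + 0.5 * np.eye(m)      # R_y for y = x + v with x uncorrelated with v
Rxy = Rx                       # E[x y^T] = R_x since x is uncorrelated with v

# LDL^T factors of R_y from its Cholesky factor: R_y = C C^T = L D L^T
C = np.linalg.cholesky(Ry)
d = np.diag(C) ** 2            # diagonal of D
L = C / np.diag(C)             # unit lower triangular

# Causal filter: K_f = L[ R_xy L^{-T} D^{-1} ] L^{-1}, L[.] keeps the lower triangle
M = Rxy @ np.linalg.inv(L).T @ np.diag(1.0 / d)
Kf = np.tril(M) @ np.linalg.inv(L)

# Sanity check: row i of K_f matches the direct projection of x_i onto
# span{y_0,...,y_i}, and puts zero weight on future observations
for i in range(m):
    Ki = Rxy[i, : i + 1] @ np.linalg.inv(Ry[: i + 1, : i + 1])
    assert np.allclose(Kf[i, : i + 1], Ki)
    assert np.allclose(Kf[i, i + 1 :], 0.0)
```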

Smoothing vs. Causal filtering

  • Example: Linear model \(\vecy = \vecx+\vecv\) with \(\E{\vecx\vecx^T}\eqdef \matR_x\), \(\E{\vecv\vecv^T}\eqdef \matR_v\), \(\E{\vecx\vecv^T}\eqdef 0\)
    • Can we compare the smoothing and causal filtering filters?
  • Let \(\matK_s\) denote the smoothing linear estimator and \(\matK_f\) the causal filtering linear estimator. \[ \matK_s = \matK_f + \mathcal{SU}\left[\matR_{\vecx\vecy}\matL^{-T}\matD^{-1}\right]\matL^{-1} \] where \(\mathcal{SU}[\cdot]\) denotes the strict upper triangularization operator.
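A quick numerical check of this decomposition (same hypothetical linear model as before; `numpy`-based sketch, not part of the lecture): the smoother \(\matK_s=\matR_{\vecx\vecy}\matR_y^{-1}\) splits exactly into the causal filter plus a strictly anti-causal correction.

```python
import numpy as np

# Same linear model y = x + v as before (our example covariances)
m = 5
Rx = 0.9 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
Ry = Rx + 0.5 * np.eye(m)
Rxy = Rx

C = np.linalg.cholesky(Ry)
d = np.diag(C) ** 2
L = C / np.diag(C)
Linv = np.linalg.inv(L)

M = Rxy @ Linv.T @ np.diag(1.0 / d)   # R_xy L^{-T} D^{-1}
Ks = Rxy @ np.linalg.inv(Ry)          # smoothing: K_s = R_xy R_y^{-1}
Kf = np.tril(M) @ Linv                # causal part: lower triangle of M
correction = np.triu(M, k=1) @ Linv   # SU[.]: strictly upper triangle of M

# K_s = K_f + SU[R_xy L^{-T} D^{-1}] L^{-1}
assert np.allclose(Ks, Kf + correction)
```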

Innovation processes

  • Back to the normal equations \(\matK_0\matR_y = \matR_{\vecx\vecy}\)
    • A key difficulty is that \(\matR_y\) needs to be inverted (especially for causal filtering)
    • It would be easier if \(\matR_y\) were diagonal (which in general it has no reason to be)
  • Geometric approach to simplify dealing with normal equations
    • The normal equations are obtained by projecting onto a subspace
    • We are not bound to use \(\set{\vecy_i}_{i=0}^m\): we can orthogonalize!
    • Gram-Schmidt orthogonalization for random variables \[ \vece_0 = \vecy_0\qquad\qquad \forall i \geq 1\quad \vece_i = \vecy_i-\underbrace{\sum_{j=0}^{i-1}\dotp{\vecy_i}{\vece_j}\norm{\vece_j}^{-2}\vece_j}_{\hat{\vecy}_i} \]
  • The random variable \(\vece_i\eqdef \vecy_i-\hat{\vecy}_i\) is called the innovation

  • There is an invertible causal relationship between \(\set{\vecy_i}\) and \(\set{\vece_i}\): each \(\vece_i\) depends only on \(\vecy_0,\dots,\vecy_i\), and vice versa
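The Gram-Schmidt recursion above can be run using only second-order statistics: representing each random variable by its coefficient vector with respect to \(\vecy\), inner products become quadratic forms in \(\matR_y\). A sketch (example covariance and names are ours):

```python
import numpy as np

# Gram-Schmidt on the random variables y_0..y_{m-1}, using only R_y.
# A variable a^T y is represented by its coefficients a, with
# <a^T y, b^T y> = a^T R_y b.
m = 5
Ry = 0.8 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))  # example

A = np.zeros((m, m))               # row i holds the coefficients of e_i
for i in range(m):
    a = np.zeros(m)
    a[i] = 1.0                     # start from y_i
    for j in range(i):
        # subtract the projection of y_i onto the earlier innovation e_j
        a -= (Ry[i] @ A[j]) / (A[j] @ Ry @ A[j]) * A[j]
    A[i] = a

Re = A @ Ry @ A.T                  # covariance of the innovations
assert np.allclose(Re, np.diag(np.diag(Re)))  # innovations are uncorrelated
assert np.allclose(A, np.tril(A))             # causal: e_i uses only y_0..y_i
```

The resulting coefficient matrix is lower triangular with unit diagonal, which is exactly the invertible causal relationship between \(\set{\vecy_i}\) and \(\set{\vece_i}\).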

Innovation processes

  • Algebraic approach to simplify dealing with normal equations
    • \(\matR_y\) has no reason to be diagonal, but can we "whiten" it with a linear operation? \[ \textsf{Find } \matA\textsf{ such that } \epsilon=\matA\vecy\textsf{ has covariance } \matR_\epsilon=\matA\matR_y\matA^T=\matD \]

    • \(\matA\) should be non-singular to avoid losing information

    • Many solutions unless we impose more constraints

    • To obtain a causal relationship, impose \(\matR_y=\matL\matD\matL^{T}\) with \(\matL\) lower triangular and choose \(\matA=\matL^{-1}\), which is again lower triangular

    • The LDL decomposition is unique for positive definite matrices

  • Compare geometric and algebraic approaches
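The two approaches can be compared numerically: the algebraic choice \(\matA=\matL^{-1}\) from the LDL decomposition produces the same causal whitening that Gram-Schmidt builds geometrically. A minimal sketch, assuming an example covariance of our choosing:

```python
import numpy as np

# Algebraic whitening: with R_y = L D L^T, the choice A = L^{-1} gives
# R_e = A R_y A^T = D, and A is lower triangular (causal).
m = 5
Ry = 0.8 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))

C = np.linalg.cholesky(Ry)          # R_y = C C^T
L = C / np.diag(C)                  # unit lower triangular factor
D = np.diag(np.diag(C) ** 2)

A = np.linalg.inv(L)
Re = A @ Ry @ A.T
assert np.allclose(Re, D)           # whitened: diagonal covariance
assert np.allclose(A, np.tril(A))   # causal transformation
```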

Applications of innovation processes

  • Estimation with innovation
  • Causal filtering with innovation
  • Example: innovations for exponentially correlated process
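For the last example, a numerical sketch under the assumption that "exponentially correlated" means \(\matR_y(i,j)=\rho^{|i-j|}\): because such a process is first-order Markov, the innovations should reduce to one-step differences \(\vece_0=\vecy_0\) and \(\vece_i=\vecy_i-\rho\,\vecy_{i-1}\), with variances \(1\) and \(1-\rho^2\).

```python
import numpy as np

# Innovations for an exponentially correlated process: R_y(i,j) = rho^|i-j|
m, rho = 6, 0.7
Ry = rho ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))

C = np.linalg.cholesky(Ry)
L = C / np.diag(C)                  # R_y = L D L^T
d = np.diag(C) ** 2
A = np.linalg.inv(L)                # innovations: e = A y

# Expected one-step structure: e_0 = y_0, e_i = y_i - rho * y_{i-1}
A_expected = np.eye(m) - rho * np.eye(m, k=-1)
d_expected = np.r_[1.0, np.full(m - 1, 1 - rho**2)]

assert np.allclose(A, A_expected)
assert np.allclose(d, d_expected)
```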