The Mathematics of ECE

Linear Algebra - More on matrices

Friday, September 10, 2021

Linear algebra: roadmap

  • Last time: review of foundations of linear algebra
    • Key concepts: vector spaces, span, bases, dimension
    • Key results: every basis of a space has the same size; coordinates in a basis are unique
    • Useful in signal processing, machine learning
  • Today: More on matrices
    • Key concepts: rank, determinant, eigenvector, eigenvalues
    • Key results: equivalent characterizations of invertibility; properties of determinants and eigenvalues
    • Useful in signal processing, telecom, controls, machine learning, pretty much everything
  • Website for slides: https://bloch.ece.gatech.edu/teaching/MofECEfa21/

Last time: Matrices

  • A matrix \(\matA\in\bbR^{m\times n}\) is a collection of vectors concatenated together \[ \matA \eqdef\mat{c}{-\vecr_1-\\-\vecr_2-\\\vdots\\-\vecr_m-} \eqdef \mat{cccc}{\vert&\vert&&\vert\\\vecc_1&\vecc_2&\cdots&\vecc_n\\\vert&\vert&&\vert} \]

  • Let \(\matA\in\bbR^{m\times n}\) be a matrix with columns \(\set{\vecc_j}_{j=1}^n\) and rows \(\set{\vecr_i}_{i=1}^m\). The column (image) space \(\text{col}(\matA)\) of \(\matA\) is \(\text{span}\left(\set{\vecc_j}_{j=1}^n\right)\). The row space \(\text{row}(\matA)\) of \(\matA\) is \(\text{span}\left(\set{\vecr_i}_{i=1}^m\right)\). The null (kernel) space of \(\matA\) is \(\text{null}(\matA)\eqdef\set{\vecx\in\bbR^n:\matA\vecx=\mathbf{0}}\)
  • Note that \(\text{col}(\matA)=\text{row}(\matA^\intercal)\)

  • These spaces play an important role, in particular when solving systems of equations \(\matA\bfx=\bfy\) (you’ll be surprised how often this shows up in ECE!)

  • \(\text{null}(\matA)\) is a vector subspace of \(\bbR^n\).
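As a numerical sanity check of these definitions, a small NumPy sketch (the matrix below is an illustrative example, not from the slides): the right singular vectors associated with zero singular values span \(\text{null}(\matA)\).

```python
import numpy as np

# A hypothetical 3x4 matrix with linearly dependent rows
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])  # row 3 = row 1 + row 2

# Null space basis from the SVD: right singular vectors whose
# singular values are (numerically) zero span null(A)
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T          # columns span null(A)

# Every vector in null(A) is mapped to 0 by A
assert np.allclose(A @ null_basis, 0)
print(rank, null_basis.shape)     # rank 2, so null(A) has dimension 4 - 2 = 2
```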

Last time: Rank

  • In the following \(\matA\) is an \(m\times n\) real-valued matrix

  • \(\text{dim}(\text{row}(\matA))=\text{dim}(\text{col}(\matA))\), which is called the rank of \(\matA\) denoted \(\text{rank}(\matA)\)
  • \(\text{row}(\matA)\) and \(\text{null}(\matA)\) are orthogonal complements, i.e.,

    • for any \(\bfu\in\text{row}(\matA)\), \(\bfv\in\text{null}(\matA)\) \(\bfu^\intercal\bfv=0\)
    • \(\text{dim}(\text{row}(\matA)) = n-\text{dim}(\text{null}(\matA))\)
    • for any \(\bfx\in\bbR^n\), there exist \(\bfx_1\in\text{row}(\matA)\), \(\bfx_2\in\text{null}(\matA)\) such that \(\bfx=\bfx_1+\bfx_2\)
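A quick numerical illustration of rank–nullity and the orthogonal decomposition, using a generic random matrix (an assumed example, not from the slides): \(\bfx_1\) is obtained as the orthogonal projection of \(\bfx\) onto \(\text{row}(\matA)=\text{col}(\matA^\intercal)\).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # generic 3x5 matrix, rank 3
n = A.shape[1]

# dim(row(A)) + dim(null(A)) = n
rank = np.linalg.matrix_rank(A)
nullity = n - rank

# Decompose an arbitrary x as x1 + x2 with x1 in row(A), x2 in null(A):
# x1 is the least-squares projection of x onto col(A^T) = row(A)
x = rng.standard_normal(n)
x1 = A.T @ np.linalg.lstsq(A.T, x, rcond=None)[0]
x2 = x - x1

assert np.allclose(A @ x2, 0)     # x2 lies in null(A)
assert abs(x1 @ x2) < 1e-10       # the two components are orthogonal
```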

Inverse

  • Let \(\bfA\in\bbR^{n\times n}\) be a square real-valued matrix. If there exists \(\bfB\in\bbR^{n\times n}\) such that \(\bfB\bfA=\bfI\), then \(\bfB\) is the inverse of \(\bfA\), denoted \(\bfA^{-1}\), and \(\bfA\) is invertible.
  • Let \(\bfA\in\bbR^{n\times n}\) be a square real-valued matrix. If \(\bfA^{-1}\) exists, then \(\bfA\bfA^{-1}=\bfI\) and \(\bfA^{-1}\) is unique.
  • There are many equivalent characterizations of an invertible matrix \(\bfA\in\bbR^{n\times n}\):
    • \(\bfA^{-1}\) exists
    • \(\text{rank}(\bfA)=n\)
    • the rows of \(\bfA\) are linearly independent
    • the columns of \(\bfA\) are linearly independent
    • the dimension of the row space of \(\bfA\) is \(n\)
    • the dimension of the null space of \(\bfA\) is \(0\)
    • the system \(\matA\vecx=\vecb\) has a unique solution for every \(\vecb\in\bbR^n\)
  • Let \(\bfA,\matB\in\bbR^{n\times n}\) be square real-valued invertible matrices. Then \((\matA\matB)^{-1} = \matB^{-1}\matA^{-1}\).
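These properties are easy to check numerically; a minimal sketch with generic random matrices (which are invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))   # generic matrices are invertible

Ainv = np.linalg.inv(A)
I = np.eye(4)

# A^{-1} A = A A^{-1} = I
assert np.allclose(Ainv @ A, I) and np.allclose(A @ Ainv, I)

# (AB)^{-1} = B^{-1} A^{-1}  (note the order reverses)
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv)

# Invertible <=> full rank
assert np.linalg.matrix_rank(A) == 4
```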

Determinant

  • Given \(\matA\in\bbR^{n\times n}\), the matrix \(\bfA_{ij}\in\bbR^{(n-1)\times(n-1)}\) is the matrix obtained by removing the \(i\)th row and \(j\)th column of \(\matA\)

  • If \(\matA\in\bbR^{n\times n}\), then for any fixed \(i,j\in\intseq{1}{n}\) \[ \det\bfA = \sum_{j'=1}^n(-1)^{i+j'}a_{ij'}\det\bfA_{ij'}=\sum_{i'=1}^n(-1)^{i'+j}a_{i'j}\det\bfA_{i'j} \]
  • This definition is indeed correct (but equality between the quantities requires some proof)

  • If \(\matA,\matB\in\bbR^{n\times n}\) then \(\det \bfA\bfB = \det\matA\det\matB\)
  • Vandermonde matrices \[ \bfA = \mat{ccccc}{1& x_1 &x_1^2&\cdots&x_1^{n-1}\\ 1& x_2 &x_2^2&\cdots&x_2^{n-1}\\\vdots& \vdots &\vdots&\vdots&\vdots\\1& x_n &x_n^2&\cdots&x_n^{n-1}},\qquad \det\bfA = \prod_{1\leq i<j\leq n}(x_j-x_i) \]
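The cofactor expansion translates directly into a (slow but instructive) recursive implementation; the code below, a sketch using example matrices not taken from the slides, checks it against NumPy's determinant and also verifies the Vandermonde determinant product formula \(\prod_{i<j}(x_j-x_i)\).

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor A_{1j}: delete row 1 and column j (0-indexed here)
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
assert np.isclose(det_cofactor(A), np.linalg.det(A))

# Vandermonde determinant: prod_{i<j} (x_j - x_i)
x = np.array([1.0, 2.0, 4.0])
V = np.vander(x, increasing=True)   # rows are [1, x_i, x_i^2]
prod = np.prod([x[j] - x[i] for i in range(3) for j in range(i + 1, 3)])
assert np.isclose(np.linalg.det(V), prod)   # (2-1)(4-1)(4-2) = 6
```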

Special matrices

  • Diagonal matrices \[ \bfA = \mat{cccc}{a_{11}&0&\cdots&0\\0&\ddots&\ddots&\vdots\\\vdots&\ddots&\ddots&0\\0&\cdots&0&a_{nn}} \]

    • Multiplying by a diagonal matrix scales each component independently, like scalar multiplication entry by entry!
  • Other important matrices to recognize

    • Triangular matrices
    • Permutation matrices
    • Circulant matrices
    • Toeplitz matrices
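The componentwise action of diagonal matrices is easy to see numerically (a minimal sketch with made-up values):

```python
import numpy as np

d = np.array([2.0, -1.0, 0.5])
D = np.diag(d)
x = np.array([1.0, 4.0, 6.0])

# Multiplying by a diagonal matrix scales each component independently
assert np.allclose(D @ x, d * x)

# Products and powers of diagonal matrices stay diagonal and act entrywise
assert np.allclose(np.linalg.matrix_power(D, 3), np.diag(d ** 3))
```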

Change of basis

  • Consider the canonical basis \(\set{e_i}_{i=1}^n\) for \(\bbR^n\); every vector can be viewed as a vector of coefficients \(\set{\alpha_i}_{i=1}^n\), \[ \bfx = \sum_{i=1}^n \alpha_i e_i = \mat{cccc}{\alpha_1&\alpha_2&\cdots&\alpha_n}^\intercal \]
  • How do we find the representation of \(\bfx\) in another basis \(\set{v_i}_{i=1}^n\)? Write \(e_i=\sum_{j=1}^n\beta_{ij}v_j\)
    • Regroup the coefficients \[ \bfx = \cdots + \left(\sum_{i=1}^n\beta_{ij}\alpha_i\right) v_j + \cdots \]

    • In matrix form

      \[ \bfx_{\text{new}} = \mat{cccc}{\beta_{11}&\beta_{21}&\cdots&\beta_{n1}\\ \beta_{12}&\beta_{22}&\cdots&\beta_{n2}\\\vdots&\vdots&\vdots&\vdots\\\beta_{1n}&\beta_{2n}&\cdots&\beta_{nn}}\bfx \]
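In practice, when the new basis vectors are given by their canonical coordinates as the columns of a matrix \(V\), the new coordinates of \(\bfx\) solve \(V\bfx_{\text{new}}=\bfx\). A small sketch (the basis below is an illustrative example):

```python
import numpy as np

# New basis vectors for R^2, written in the canonical basis (columns of V)
V = np.array([[1., 1.],
              [0., 1.]])

x = np.array([3., 2.])            # coordinates in the canonical basis

# Coordinates of x in the basis {v_1, v_2}: solve V x_new = x
x_new = np.linalg.solve(V, x)
assert np.allclose(V @ x_new, x)  # x = sum_j x_new[j] * v_j
print(x_new)                      # [1. 2.]: x = 1*v_1 + 2*v_2
```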

Similarity

  • A change of basis matrix \(\matS\) is full rank (basis vectors are linearly independent)

  • Any full rank matrix \(\matS\) can be viewed as a change of basis

  • \(\matS^{-1}\) takes you back to the original basis

  • Warning: the columns of \(\bfS\) contain the coordinates of the old basis vectors expressed in the new basis

  • If \(\matA,\bfB\in\bbR^{n\times n}\) then \(\bfB\) is similar to \(\bfA\) if there exists an invertible matrix \(\bfS\in\bbR^{n\times n}\) such that \(\bfB=\bfS^{-1}\bfA\bfS\)
  • Intuition: similar matrices are the same up to a change of basis

  • \(\matA\in\bbR^{n\times n}\) is diagonalizable if it is similar to a diagonal matrix, i.e., there exists an invertible matrix \(\bfS\in\bbR^{n\times n}\) such that \(\bfA=\bfS^{-1}\bfD\bfS\) with \(\matD\) diagonal
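A numerical sketch of diagonalization, previewing the eigendecomposition defined next (NumPy returns \(\bfA=\bfS\bfD\bfS^{-1}\); relabeling \(\bfS\leftrightarrow\bfS^{-1}\) matches the convention above; the symmetric matrix here is an assumed example):

```python
import numpy as np

# A symmetric matrix is always diagonalizable
A = np.array([[2., 1.],
              [1., 2.]])

eigvals, S = np.linalg.eig(A)      # columns of S are eigenvectors
D = np.diag(eigvals)

# A = S D S^{-1}: A is similar to the diagonal matrix D
assert np.allclose(A, S @ D @ np.linalg.inv(S))

# Similar matrices share eigenvalues
B = np.linalg.inv(S) @ A @ S       # here B = D
assert np.allclose(np.sort(np.linalg.eigvals(B)), np.sort(eigvals))
```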

Eigenvalues and eigenvectors

  • Let \(\matA\in\bbR^{n\times n}\) and \(\vecx\in\bbR^n\setminus\set{\mathbf{0}}\). If \(\matA\bfx=\lambda\bfx\), then \(\lambda\) is an eigenvalue and \(\vecx\) is an eigenvector of \(\matA\) associated to \(\lambda\).
  • Let \(\matA\in\bbR^{n\times n}\) and let \(\vecx\in\bbR^n\setminus\set{\mathbf{0}}\) be an eigenvector of \(\matA\) for eigenvalue \(\lambda\). Let \(p\) be a polynomial. Then \(\bfx\) is an eigenvector of the matrix \(p(\matA)\) with eigenvalue \(p(\lambda)\).
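The polynomial property can be checked directly; a minimal sketch with a made-up triangular matrix whose eigenvector for \(\lambda=3\) is \((1,1)^\intercal\):

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])
x = np.array([1., 1.])             # eigenvector of A with eigenvalue 3

# Apply p(t) = t^2 + 2t + 1 to the matrix A
p_A = A @ A + 2 * A + np.eye(2)

# p(A) x = p(lambda) x with p(3) = 9 + 6 + 1 = 16
assert np.allclose(p_A @ x, 16 * x)
```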