Dr. Matthieu R Bloch
Wednesday October 27, 2021
Drop date: October 30, 2021
My office hours are on Tuesdays
Midterm 2:
An inner product kernel is a mapping \(k:\bbR^d\times\bbR^d\to\bbR\) for which there exists a Hilbert space \(\calH\) and a mapping \(\Phi:\bbR^d\to\calH\) such that \[\forall \bfu,\bfv\in\bbR^d\quad k(\bfu,\bfv)=\langle\Phi(\bfu),\Phi(\bfv)\rangle_\calH\]
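For concreteness, a minimal numerical sketch (the kernel, the feature map \(\Phi\), and the test points are illustrative choices): the homogeneous degree-2 polynomial kernel on \(\bbR^2\) admits an explicit feature map into \(\calH=\bbR^3\).

```python
import numpy as np

def k(u, v):
    # Homogeneous polynomial kernel of degree 2 on R^2
    return np.dot(u, v) ** 2

def Phi(u):
    # Explicit feature map into H = R^3 with k(u, v) = <Phi(u), Phi(v)>
    return np.array([u[0] ** 2, np.sqrt(2) * u[0] * u[1], u[1] ** 2])

u, v = np.array([1.0, 2.0]), np.array([-0.5, 3.0])
assert np.isclose(k(u, v), Phi(u) @ Phi(v))  # both equal (u.v)^2 = 30.25
```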
A function \(k:\bbR^d\times\bbR^d\to\bbR\) is an inner product kernel if and only if \(k\) is a positive semidefinite kernel.
Examples of kernels
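Common choices are the linear, polynomial, and Gaussian (RBF) kernels. The sketch below (kernel parameters and data are arbitrary) builds the Gram matrix of each kernel on random points and checks numerically that it is positive semidefinite, consistent with the characterization above.

```python
import numpy as np

# Some commonly used kernels on R^d (illustrative parameter choices)
linear   = lambda u, v: u @ v
poly     = lambda u, v, c=1.0, p=3: (u @ v + c) ** p
gaussian = lambda u, v, sigma=1.0: np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))              # 20 points in R^5

for kernel in (linear, poly, gaussian):
    K = np.array([[kernel(x, y) for y in X] for x in X])  # Gram matrix
    eigvals = np.linalg.eigvalsh(K)       # K is symmetric, so eigvalsh applies
    assert eigvals.min() > -1e-8          # all eigenvalues >= 0 (up to round-off)
```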
Least-squares problems involve the normal equations \(\bfX^\intercal\bfX \bftheta=\bfX^\intercal\bfy\)
This is a system of equations \(\bfA\bfx=\bfy\) with a symmetric matrix, i.e., \(\bfA^\intercal=\bfA\)
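A quick numerical illustration (the design matrix and noise level are made up): since \(\bfX^\intercal\bfX\) is symmetric, the normal equations can be handed to a standard linear solver.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # design matrix
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

A = X.T @ X                                    # symmetric: A.T == A
b = X.T @ y
theta = np.linalg.solve(A, b)                  # solves the normal equations
assert np.allclose(theta, np.linalg.lstsq(X, y, rcond=None)[0])
```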
A real-valued matrix \(\bfA\) is symmetric if \(\bfA^\intercal=\bfA\)
A complex-valued matrix \(\bfA\) is Hermitian if \(\bfA^\dagger=\bfA\) (also written \(\bfA^H=\bfA\))
Given a matrix \(\matA\in\bbC^{n\times n}\), if a nonzero vector \(\bfv\in\bbC^n\) satisfies \(\matA\bfv=\lambda\bfv\) for some \(\lambda\in\bbC\), then \(\lambda\) is an eigenvalue associated to the eigenvector \(\bfv\).
If \(\lambda\) is an eigenvalue, there are infinitely many eigenvectors associated to it (any nonzero scalar multiple of an eigenvector is again an eigenvector)
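A small numerical check of the definition (the matrix is an arbitrary symmetric example), including the fact that nonzero scalar multiples of an eigenvector are again eigenvectors.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)             # columns of eigvecs are eigenvectors

lam, v = eigvals[0], eigvecs[:, 0]
assert np.allclose(A @ v, lam * v)              # A v = lambda v
assert np.allclose(A @ (3 * v), lam * (3 * v))  # scalar multiples work too
```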
Consider the canonical basis \(\set{e_i}_{i=1}^n\) for \(\bbR^n\); every vector can be viewed as a vector of coefficients \(\set{\alpha_i}_{i=1}^n\), \[ \bfx = \sum_{i=1}^n \alpha_i e_i = \mat{cccc}{\alpha_1&\alpha_2&\cdots&\alpha_n}^\intercal \]
How do we find the representation of \(\bfx\) in another basis \(\set{v_i}_{i=1}^n\)? Write \(e_i=\sum_{j=1}^n\beta_{ij}v_j\)
Substitute and regroup the coefficients \[ \bfx = \sum_{i=1}^n \alpha_i e_i = \sum_{i=1}^n \alpha_i \sum_{j=1}^n \beta_{ij} v_j = \sum_{j=1}^n\left(\sum_{i=1}^n\beta_{ij}\alpha_i\right) v_j \]
In matrix form \[ \bfx_{\text{new}} = \mat{cccc}{\beta_{11}&\beta_{21}&\cdots&\beta_{n1}\\ \beta_{12}&\beta_{22}&\cdots&\beta_{n2}\\\vdots&\vdots&\vdots&\vdots\\\beta_{1n}&\beta_{2n}&\cdots&\beta_{nn}}\bfx \]
A change of basis matrix \(\matP\) is full rank (basis vectors are linearly independent)
Any full rank matrix \(\matP\) can be viewed as a change of basis
\(\matP^{-1}\) takes you back to the original basis
Warning: the columns of \(\bfP\) describe the old basis vectors as a function of the new ones (column \(i\) contains the coordinates of \(e_i\) in the new basis)
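A numerical sketch (the basis and the vector are illustrative): collect the new basis vectors, expressed in the canonical basis, as the columns of a matrix \(V\); the new coordinates then solve \(V\bfx_{\text{new}}=\bfx\), so with this convention \(V\) maps new coordinates back to the original ones.

```python
import numpy as np

# Columns of V are the new basis vectors expressed in the canonical basis
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
assert np.linalg.matrix_rank(V) == 3     # full rank <=> valid basis

x = np.array([2.0, 3.0, 4.0])            # coordinates in the canonical basis
x_new = np.linalg.solve(V, x)            # coordinates in the basis {v_1, v_2, v_3}
assert np.allclose(V @ x_new, x)         # V takes the new coordinates back
```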
If \(\matA,\bfB\in\bbR^{n\times n}\), then \(\bfB\) is similar to \(\bfA\) if there exists an invertible matrix \(\bfP\in\bbR^{n\times n}\) such that \(\bfB=\bfP^{-1}\bfA\bfP\)
Intuition: similar matrices are the same up to a change of basis
\(\matA\in\bbR^{n\times n}\) is diagonalizable if it is similar to a diagonal matrix, i.e., there exists an invertible matrix \(\bfP\in\bbR^{n\times n}\) such that \(\bfD=\bfP^{-1}\bfA\bfP\) with \(\matD\) diagonal
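A minimal sketch (the matrix is an arbitrary example with distinct eigenvalues), taking \(\bfP\) to have the eigenvectors of \(\matA\) as its columns.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)            # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P             # change of basis to the eigenbasis
assert np.allclose(D, np.diag(eigvals))  # D is diagonal: A is diagonalizable
```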
Not all matrices are diagonalizable!
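For example (a standard defective matrix; the rank tolerance is an illustrative choice), the \(2\times 2\) Jordan block below has eigenvalue \(1\) with algebraic multiplicity \(2\) but only one linearly independent eigenvector, so no invertible \(\bfP\) diagonalizes it.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                        # Jordan block: eigenvalue 1, twice
eigvals, P = np.linalg.eig(A)
assert np.allclose(eigvals, [1.0, 1.0])
assert np.linalg.matrix_rank(P, tol=1e-8) == 1    # eigenvectors are not independent
```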