Prof. Matthieu Bloch
Tuesday, September 12, 2023
Let \(X\in\calX\) be a discrete random variable. The Rényi entropy of order \(\alpha\in\mathbb{R}\cup\{\infty\}\) of \(X\) is
\[\begin{align*} H_{\alpha}(X)\eqdef \frac{1}{1-\alpha}\log \sum_{x\in\calX} p_X(x)^\alpha, \end{align*}\]
with the convention that \(H_1(X)=\lim_{\alpha\rightarrow 1} H_\alpha(X)\) and \(H_\infty(X)=\lim_{\alpha\rightarrow \infty} H_\alpha(X)\).
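For instance, for a binary random variable \(X\) with \(p_X(1)=p\) and \(p_X(0)=1-p\), \(p\in(0,1)\), the definition and the limiting conventions give
\[\begin{align*} H_\alpha(X)&=\frac{1}{1-\alpha}\log\left(p^\alpha+(1-p)^\alpha\right),\\ H_0(X)&=\log 2,\qquad H_1(X)=-p\log p-(1-p)\log(1-p)=\bbH(X),\qquad H_\infty(X)=-\log\max(p,1-p). \end{align*}\]
In particular, \(H_\alpha(X)\) is non-increasing in \(\alpha\), and the Shannon entropy \(\bbH(X)\) and the min-entropy \(H_\infty(X)\) are recovered as special cases.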
Let \(X\in\calX\) be a discrete random variable. Then,
Let \(X\in\calX\) and \(Y\in\calY\) be discrete random variables. Then,
Let \(X,Y,Z\) be three discrete random variables. Then,
Let \(X,Y,Z\) be three discrete random variables. Then, \(\forall \alpha\in\mathbb{R}^+\cup\{\infty\}\) \(H^A_{\alpha}(X|YZ)\leq H^A_{\alpha}(X|Z)\). If \(\alpha\notin\{0,\infty\}\), equality holds if and only if \(X-Z-Y\) forms a Markov chain.
Let \(X,Y,Z\) be three discrete random variables such that \(X-Y-Z\) forms a Markov chain. Then, \(\forall\alpha\in\mathbb{R}^+\cup\{\infty\}\) \(H_\alpha^A(X|Y)\leq H_\alpha^A(X|Z)\).
Let \(X,Y,Z\) be three discrete random variables. Then, \(\forall \alpha\in\mathbb{R}^+\cup\{\infty\}\)
\[\begin{align*} H^A_{\alpha}(X|YZ)\geq H^A_\alpha(XY|Z)-H^A_0(Y|Z) \geq H^A_\alpha(X|Z)-H^A_0(Y|Z)\geq H^A_\alpha(X|Z)-\log\card{\calY}. \end{align*}\]
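As a quick numerical sanity check of the conditioning inequality and of the last lower bound, here is a small sketch, assuming \(H^A_\alpha\) denotes the Arimoto conditional Rényi entropy \(H^A_{\alpha}(X|Y)=\frac{\alpha}{1-\alpha}\log \sum_{y\in\calY}\big(\sum_{x\in\calX}P_{XY}(x,y)^\alpha\big)^{1/\alpha}\) (computed in bits below); the alphabet sizes and distributions are arbitrary illustrative choices.

```python
import numpy as np

def arimoto_cond_entropy(P_xy, alpha):
    """Arimoto conditional Renyi entropy H^A_alpha(X|Y) in bits (alpha != 1)."""
    # P_xy[x, y] = P_{XY}(x, y); inner term is the alpha-norm of each column P_{XY}(., y)
    inner = (P_xy ** alpha).sum(axis=0) ** (1.0 / alpha)
    return (alpha / (1.0 - alpha)) * np.log2(inner.sum())

rng = np.random.default_rng(0)
for _ in range(1000):
    P = rng.random((3, 4, 2))        # joint pmf P[x, y, z] with |X|=3, |Y|=4, |Z|=2
    P /= P.sum()
    P_x_yz = P.reshape(3, 8)         # treat (Y, Z) as a single conditioning variable
    P_x_z = P.sum(axis=1)            # marginalize out Y
    for alpha in (0.3, 0.7, 1.5, 2.0, 5.0):
        # Conditioning reduces entropy: H^A_alpha(X|YZ) <= H^A_alpha(X|Z)
        assert arimoto_cond_entropy(P_x_yz, alpha) <= arimoto_cond_entropy(P_x_z, alpha) + 1e-9
        # Chain-rule lower bound: H^A_alpha(X|YZ) >= H^A_alpha(X|Z) - log|Y|
        assert arimoto_cond_entropy(P_x_yz, alpha) >= arimoto_cond_entropy(P_x_z, alpha) - np.log2(4) - 1e-9
```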
Let \(X\in\calX\) be a discrete random variable and let \(\epsilon\geq 0\). The \(\epsilon\)-smooth Rényi entropy of order \(\alpha\in\mathbb{R}\cup\{\infty\}\) of \(X\) is
\[\begin{align*} H^\epsilon_{\alpha}(X)\eqdef \frac{1}{1-\alpha}\inf_{Q\in\calB^\epsilon(P_X)}\log \sum_{x\in\calX} Q(x)^\alpha, \end{align*}\]
with the convention that \(H^\epsilon_1(X)=\lim_{\alpha\rightarrow 1} H^\epsilon_\alpha(X)\) and \(H^\epsilon_\infty(X)=\lim_{\alpha\rightarrow \infty} H^\epsilon_\alpha(X)\).
This may look very complicated.
We will get back to this quantity later; it is actually quite natural in some cases.
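Two immediate observations from the definition (assuming, as usual, that \(\calB^\epsilon(P_X)\) is a ball of distributions centered at \(P_X\), so that \(P_X\in\calB^\epsilon(P_X)\) and \(\calB^0(P_X)=\{P_X\}\)): taking \(\epsilon=0\) recovers the plain Rényi entropy, \(H^0_\alpha(X)=H_\alpha(X)\); and since \(\frac{1}{1-\alpha}<0\) for \(\alpha>1\), the infimum over \(Q\) then maximizes the entropy over the ball, so \(H^\epsilon_\alpha(X)\geq H_\alpha(X)\) for \(\alpha>1\), while \(H^\epsilon_\alpha(X)\leq H_\alpha(X)\) for \(\alpha<1\).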
Let \(\set{X_i}_{i=1}^n\) be i.i.d. real-valued random variables \(\sim P_X\) with \(\E{\abs{X}}<\infty\). Then, \(\frac{1}{n}\sum_{i=1}^nX_i\) converges in probability to \(\E{X}\). Specifically,
\[\begin{align*} \forall\epsilon>0\qquad \lim_{n\to \infty}\P{\abs{\frac{1}{n}\sum_{i=1}^nX_i-\E{X}}>\epsilon}=0. \end{align*}\]
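A short simulation of this convergence, as a sketch only (the Bernoulli source and sample sizes are arbitrary choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                                   # X ~ Bernoulli(p), so E[X] = p
for n in (10, 100, 10_000, 1_000_000):
    x = rng.random(n) < p                 # n i.i.d. samples of X
    print(n, abs(x.mean() - p))           # |sample mean - E[X]| shrinks with n
```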
Let \(\set{X_i}_{i=1}^n\) be i.i.d. random variables \(\sim P_X\). Then,
\[\begin{align*} \frac{-1}{n}\log P_{X}^{\otimes n}(X_1,\cdots,X_n)\mathop{\longrightarrow}_{n\to\infty}\bbH(X)\textsf{ in probability.} \end{align*}\]
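This is simply the law of large numbers applied to the i.i.d. random variables \(-\log P_X(X_i)\), whose mean is \(\bbH(X)\): since the \(X_i\) are independent,
\[\begin{align*} -\frac{1}{n}\log P_{X}^{\otimes n}(X_1,\cdots,X_n) = \frac{1}{n}\sum_{i=1}^n\left(-\log P_X(X_i)\right). \end{align*}\]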
The typical set \(\calA_\epsilon^n(P_X)\) is
\[\begin{align*} \calA_\epsilon^n(P_X) \eqdef \set{x^n\in\calX^n:2^{-n(\bbH(X)+\epsilon)}\leq P_X^{\otimes n}(x^n)\leq2^{-n(\bbH(X)-\epsilon)}} \end{align*}\]
For \(x^n\in\calA_\epsilon^n(P_X)\)
\[\begin{align*} \bbH(X)-\epsilon \leq -\frac{1}{n}\log P_X^{\otimes n}(x^n) \leq \bbH(X)+\epsilon \end{align*}\]
\(\P{\calA_\epsilon^n(P_X)}\to 1\) as \(n\to\infty\)
\(\card{\calA_\epsilon^n(P_X)} \leq 2^{n(\bbH(X)+\epsilon)}\)
\(\card{\calA_\epsilon^n(P_X)} \geq (1-\epsilon)2^{n(\bbH(X)-\epsilon)}\) for \(n\) large enough
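For a concrete feel for these bounds, here is a brute-force sketch that enumerates all length-\(n\) binary sequences of a Bernoulli source and applies the definition directly (the parameters are arbitrary illustrative choices):

```python
import itertools
import numpy as np

p, n, eps = 0.3, 16, 0.1
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))          # H(X) in bits

prob_typical, size_typical = 0.0, 0
for x in itertools.product((0, 1), repeat=n):              # enumerate all of X^n
    k = sum(x)                                              # number of ones in x^n
    px = p**k * (1 - p)**(n - k)                            # product pmf evaluated at x^n
    if 2**(-n * (H + eps)) <= px <= 2**(-n * (H - eps)):    # membership in A_eps^n
        prob_typical += px
        size_typical += 1

print(f"P(A_eps^n) = {prob_typical:.3f} (-> 1 as n grows)")
print(f"|A_eps^n| = {size_typical} <= 2^(n(H+eps)) = {2**(n * (H + eps)):.0f}")
```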