# Uncertainty principles and compressed sensing

Let ${Z}_{N}=\mathbb{Z}/N\mathbb{Z}$. Let ${\mathrm{\ell}}^{p}({Z}_{N})$ be the set of functions from ${Z}_{N}$ to $\mathbb{C}$, and for $T\subseteq {Z}_{N}$ let ${\mathrm{\ell}}^{p}(T)$ be the set of functions ${Z}_{N}\to \mathbb{C}$ whose support is a subset of $T$.

For $1\le p<\mathrm{\infty}$ define ${\parallel f\parallel}_{{\mathrm{\ell}}^{p}(T)}={\left({\sum}_{x\in T}{|f(x)|}^{p}\right)}^{1/p}$. We also define ${\parallel f\parallel}_{{\mathrm{\ell}}^{0}({Z}_{N})}={\sum}_{x\in \mathrm{supp}(f)}1=|\mathrm{supp}(f)|$.

${\mathrm{\ell}}^{p}(T)$ is a $|T|$-dimensional vector space, for each $p$. In particular, ${\mathrm{\ell}}^{p}({Z}_{N})$ is an $N$-dimensional vector space and thus ${\mathrm{\ell}}^{p}({Z}_{N})\cong {\mathbb{C}}^{N}$. We speak interchangeably about a signal of length $N$, a function from ${Z}_{N}$ to $\mathbb{C}$, and a vector of length $N$; sometimes one of these is more convenient.

For $f\in {\mathrm{\ell}}^{1}({Z}_{N})$, the Fourier transform $\widehat{f}$ of $f$ is defined by

$$\widehat{f}(\xi )=\frac{1}{\sqrt{N}}\sum _{x\in {Z}_{N}}f(x){e}^{-2\pi ix\xi /N}.$$

$\widehat{f}\in {\mathrm{\ell}}^{1}({Z}_{N})$.

The Fourier inversion formula is

$$f(x)=\frac{1}{\sqrt{N}}\sum _{\xi \in {Z}_{N}}\widehat{f}(\xi ){e}^{2\pi ix\xi /N}.$$

Define $F:{\mathrm{\ell}}^{1}({Z}_{N})\to {\mathrm{\ell}}^{1}({Z}_{N})$ by $F(f)=\widehat{f}$. By the Fourier inversion formula, $F$ is an isomorphism of vector spaces.
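These definitions are easy to check numerically. Below is a minimal pure-Python sketch; the helper names `dft` and `idft`, the length $N=12$, and the random test signal are our own choices for illustration, not anything from the sources cited below.

```python
import cmath
import math
import random

def dft(f):
    """fhat(xi) = N^(-1/2) * sum_x f(x) * e^(-2*pi*i*x*xi/N)."""
    N = len(f)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * x * xi / N) for x in range(N))
            / math.sqrt(N) for xi in range(N)]

def idft(fhat):
    """Fourier inversion: f(x) = N^(-1/2) * sum_xi fhat(xi) * e^(2*pi*i*x*xi/N)."""
    N = len(fhat)
    return [sum(fhat[xi] * cmath.exp(2j * cmath.pi * x * xi / N) for xi in range(N))
            / math.sqrt(N) for x in range(N)]

random.seed(0)
f = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(12)]
fhat = dft(f)

# F is invertible: idft(dft(f)) recovers f up to floating-point roundoff
inversion_error = max(abs(a - b) for a, b in zip(idft(fhat), f))

# The 1/sqrt(N) normalization also makes F an isometry (the Plancherel identity)
plancherel_gap = abs(sum(abs(v) ** 2 for v in f) - sum(abs(v) ** 2 for v in fhat))
```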

In this note we shall state and explain the main points of the proofs of several uncertainty principles for ${Z}_{N}$ that are given in Terence Tao’s paper An uncertainty principle for cyclic groups of prime order, Math. Res. Lett. 12 (2005), no. 1, 121–127 and the April 15, 2007 entry, “Ostrowski lecture: The uniform uncertainty principle and compressed sensing”, in Terence Tao’s blog (terrytao.wordpress.com). Also see the PDF on Tao’s website from the talk by Candès and Tao, “The uniform uncertainty principle and compressed sensing”, Harmonic Analysis and Related Topics, Seville, December 5, 2008.

We shall also state and prove a theorem of Candès, Romberg and Tao from Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory 52 (2006), no. 2, 489–509, which gives unique reconstruction. We shall explain why unique reconstruction (even in the special case of the theorem) fails when there is noise. Finally, we’ll give some motivation for the Uniform Uncertainty Principle.

For $f\in {\mathrm{\ell}}^{1}({Z}_{N})$, given $\mathrm{\Lambda}\subseteq {Z}_{N}$ and ${\{\widehat{f}(\xi )\}}_{\xi \in \mathrm{\Lambda}}$, can we uniquely reconstruct $f$? We can do this if and only if $\mathrm{\Lambda}={Z}_{N}$; this is because by the Fourier inversion formula we can define a function by defining its Fourier transform.

Now say that $f\in {\mathrm{\ell}}^{1}({Z}_{N})$ is $S$-sparse, i.e. $|\mathrm{supp}(f)|\le S$. Given $\mathrm{\Lambda}\subset {Z}_{N}$ and ${\{\widehat{f}(\xi )\}}_{\xi \in \mathrm{\Lambda}}$, can we uniquely reconstruct $f$? That is, is there a unique $S$-sparse function (namely $f$ itself) which agrees with the measurements ${\{\widehat{f}(\xi )\}}_{\xi \in \mathrm{\Lambda}}$ we have?

Certainly $|\mathrm{\Lambda}|\ge S$ is necessary. Actually it is necessary that $|\mathrm{\Lambda}|\ge 2S$ (if $2S\le N$). Suppose $|\mathrm{\Lambda}|<2S$. The subspace $F({\mathrm{\ell}}^{1}(\{1,\mathrm{\dots},2S\}))$ of ${\mathrm{\ell}}^{1}({Z}_{N})$ has dimension $2S$, and the subspace ${\mathrm{\ell}}^{1}({Z}_{N}-\mathrm{\Lambda})$ has dimension $N-|\mathrm{\Lambda}|$. But $2S+N-|\mathrm{\Lambda}|>N$, and thus there is a nonzero element in their intersection: there is a nonzero $f\in {\mathrm{\ell}}^{1}(\{1,\mathrm{\dots},2S\})$ such that ${\widehat{f}|}_{\mathrm{\Lambda}}=0$. Write $f={f}_{1}-{f}_{2}$ where ${f}_{1}$ is supported on $\{1,\mathrm{\dots},S\}$ and ${f}_{2}$ is supported on $\{S+1,\mathrm{\dots},2S\}$. Since $f\ne 0$, ${f}_{1}$ and ${f}_{2}$ are distinct $S$-sparse functions such that ${\widehat{{f}_{1}}|}_{\mathrm{\Lambda}}={\widehat{{f}_{2}}|}_{\mathrm{\Lambda}}$, so unique reconstruction fails.
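The dimension count can be made concrete: since $\widehat{f}(\xi )$ is, up to normalization, the polynomial ${\sum}_{x}f(x){z}^{x}$ evaluated at $z={e}^{-2\pi i\xi /N}$, prescribing the roots of that polynomial is one explicit way to produce a function invisible to too few measurements. A sketch, with the arbitrary choices $N=16$, $S=3$, and $\mathrm{\Lambda}=\{0,\mathrm{\dots},2S-2\}$ (support shifted to start at $0$ for convenience):

```python
import cmath
import math

N, S = 16, 3
Lam = list(range(2 * S - 1))  # only 2S - 1 = 5 measured frequencies

# Build the coefficients of g(z) = prod_{xi in Lam} (z - e^(-2*pi*i*xi/N)),
# stored lowest degree first; these coefficients are our signal f.
g = [1 + 0j]
for xi in Lam:
    root = cmath.exp(-2j * cmath.pi * xi / N)
    new = [0j] * (len(g) + 1)
    for k, c in enumerate(g):
        new[k + 1] += c     # z * g
        new[k] -= root * c  # - root * g
    g = new

f = g + [0j] * (N - len(g))  # nonzero, supported on {0, ..., 2S-1}
fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * x * xi / N) for x in range(N))
        / math.sqrt(N) for xi in range(N)]
max_on_Lam = max(abs(fhat[xi]) for xi in Lam)  # ~ 0: the measurements cannot see f

# Split f = f1 - f2 into two distinct S-sparse signals with identical measurements
f1 = [f[x] if x < S else 0j for x in range(N)]
f2 = [-f[x] if S <= x < 2 * S else 0j for x in range(N)]
```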

Unique reconstruction fails if and only if there is a nonzero $2S$-sparse function whose Fourier transform vanishes on all of $\mathrm{\Lambda}$. In other words, unique reconstruction of $S$-sparse functions measured on $\mathrm{\Lambda}$ holds if and only if for all $2S$-sparse $g\in {\mathrm{\ell}}^{1}({Z}_{N})$ there is some $\xi \in \mathrm{\Lambda}$ such that $\widehat{g}(\xi )\ne 0$.

The uncertainty principle suggests that the Fourier transform of a sparse function will have large support, and if the Fourier transform has large support then for most sets $\mathrm{\Lambda}$ the Fourier transform of the function will not be identically zero on $\mathrm{\Lambda}$. Generally, the support of a function and the support of its Fourier transform cannot both be small: $f$ and $\widehat{f}$ cannot both be compactly supported (Paley-Wiener), and even $f$ and $\widehat{f}$ cannot both be supported on sets of finite measure (Benedicks).

Discrete uncertainty principle: If $f\in {\mathrm{\ell}}^{1}({Z}_{N})$ is not identically zero, then $|\mathrm{supp}(f)|\cdot |\mathrm{supp}(\widehat{f})|\ge N$.

The Plancherel identity tells us that

$$\sum _{x\in {Z}_{N}}{|f(x)|}^{2}=\sum _{\xi \in {Z}_{N}}{|\widehat{f}(\xi )|}^{2}.$$

By the Fourier inversion formula,

$$\underset{x\in {Z}_{N}}{sup}|f(x)|\le \frac{1}{\sqrt{N}}\sum _{\xi \in {Z}_{N}}|\widehat{f}(\xi )|.$$

By the Cauchy-Schwarz inequality,

$$\sum _{\xi \in {Z}_{N}}|\widehat{f}(\xi )|\le {|\mathrm{supp}(\widehat{f})|}^{1/2}\cdot {\left(\sum _{\xi \in {Z}_{N}}{|\widehat{f}(\xi )|}^{2}\right)}^{1/2}.$$

Therefore

$$N{\left(\underset{x\in {Z}_{N}}{sup}|f(x)|\right)}^{2}\le |\mathrm{supp}(\widehat{f})|\cdot \sum _{\xi \in {Z}_{N}}{|\widehat{f}(\xi )|}^{2}.$$

By the Plancherel identity we have

$$N{\left(\underset{x\in {Z}_{N}}{sup}|f(x)|\right)}^{2}\le |\mathrm{supp}(\widehat{f})|\cdot \sum _{x\in {Z}_{N}}{|f(x)|}^{2}.$$

Also,

$$\sum _{x\in {Z}_{N}}{|f(x)|}^{2}\le |\mathrm{supp}(f)|\cdot {\left(\underset{x\in {Z}_{N}}{sup}|f(x)|\right)}^{2}.$$

Thus

$$N{\left(\underset{x\in {Z}_{N}}{sup}|f(x)|\right)}^{2}\le |\mathrm{supp}(\widehat{f})|\cdot |\mathrm{supp}(f)|\cdot {\left(\underset{x\in {Z}_{N}}{sup}|f(x)|\right)}^{2}.$$

Since $f$ is not identically $0$ this gives

$$N\le |\mathrm{supp}(\widehat{f})|\cdot |\mathrm{supp}(f)|.$$
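A quick numerical sanity check of the inequality (random trials, not a proof; the parameters $N=12$, $200$ trials, and the support threshold are arbitrary choices of ours):

```python
import cmath
import math
import random

def dft(f):
    N = len(f)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * x * xi / N) for x in range(N))
            / math.sqrt(N) for xi in range(N)]

def supp(v, tol=1e-9):
    """Size of the numerical support."""
    return sum(1 for c in v if abs(c) > tol)

random.seed(1)
N = 12
min_product = N * N
for _ in range(200):
    f = [0j] * N
    for x in random.sample(range(N), random.randint(1, N)):
        f[x] = complex(random.gauss(0, 1), random.gauss(0, 1))
    # |supp(f)| * |supp(fhat)| should always be at least N
    min_product = min(min_product, supp(f) * supp(dft(f)))
```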

Now, we have unique recoverability if and only if every nonzero $2S$-sparse function $g$ has a frequency in $\mathrm{\Lambda}$, i.e. $\mathrm{supp}(\widehat{g})\cap \mathrm{\Lambda}\ne \mathrm{\varnothing}$. This will happen in particular if $|\mathrm{\Lambda}|>N-|\mathrm{supp}(\widehat{g})|$. By the Discrete Uncertainty Principle, $|\mathrm{supp}(\widehat{g})|\ge \frac{N}{2S}$, so if $|\mathrm{\Lambda}|>N-\frac{N}{2S}$ then $|\mathrm{\Lambda}|>N-|\mathrm{supp}(\widehat{g})|$. Therefore, if $|\mathrm{\Lambda}|>N-\frac{N}{2S}$ then we have unique reconstruction.

The bound $|\mathrm{\Lambda}|>N-\frac{N}{2S}$ cannot be improved much in general: when $N$ is a perfect square, let $f$ be the indicator function of $T=\{0,\sqrt{N},2\sqrt{N},\mathrm{\dots},N-\sqrt{N}\}$ and let $\mathrm{\Lambda}={Z}_{N}-T$. Then $f$ is $\sqrt{N}$-sparse, so with $S=\sqrt{N}$ we have $N-\frac{N}{2S}=N-\frac{\sqrt{N}}{2}$, while $|\mathrm{\Lambda}|=N-\sqrt{N}$. One can check that $\widehat{f}=f$. The zero function and $f$ are both $\sqrt{N}$-sparse functions whose Fourier transforms agree on $\mathrm{\Lambda}$, so unique reconstruction does not occur. (Strictly speaking, this shows only that one can’t do better in general than $|\mathrm{\Lambda}|>N-\frac{N}{S}$.)
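The comb example is easy to verify directly (here with the arbitrary choice $N=16$, $\sqrt{N}=4$):

```python
import cmath
import math

N, r = 16, 4  # N a perfect square, r = sqrt(N)
f = [1.0 if x % r == 0 else 0.0 for x in range(N)]  # indicator of {0, 4, 8, 12}
fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * x * xi / N) for x in range(N))
        / math.sqrt(N) for xi in range(N)]

# The comb is its own Fourier transform
comb_error = max(abs(fhat[xi] - f[xi]) for xi in range(N))

# Its uncertainty product is exactly N = 16: the discrete bound is attained
supp_product = sum(1 for v in f if v > 0.5) * sum(1 for v in fhat if abs(v) > 0.5)
```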

If $N=ab$ and $f$ is the indicator function for the multiples of $b$, then $|\mathrm{supp}(f)|=a$, and one can show that $|\mathrm{supp}(\widehat{f})|=b$. See p. 226 of Audrey Terras, Fourier analysis on finite groups and applications, 1999. We can take $|\mathrm{\Lambda}|=ab-b$ measurements and still not be able to distinguish $f$ from the zero function.

When $N$ is prime, these types of counterexamples do not exist.

Uncertainty principle for cyclic groups of prime order: If $N$ is prime and $f\in {\mathrm{\ell}}^{1}({Z}_{N})$ is not identically zero, then $|\mathrm{supp}(f)|+|\mathrm{supp}(\widehat{f})|\ge N+1$. This demands far fewer measurements than the Discrete Uncertainty Principle.

We have unique reconstruction if for every nonzero $2S$-sparse $g\in {\mathrm{\ell}}^{1}({Z}_{N})$, $\widehat{g}$ is not identically zero on $\mathrm{\Lambda}$. It would suffice that for every such $g$, $|\mathrm{supp}(\widehat{g})|>N-|\mathrm{\Lambda}|$. By this uncertainty principle we know that $|\mathrm{supp}(\widehat{g})|\ge N+1-|\mathrm{supp}(g)|\ge N+1-2S$, so if $|\mathrm{\Lambda}|\ge 2S$ then we have unique reconstruction.
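Again this is easy to check numerically for a small prime (random trials with $p=13$, our arbitrary choice; a sanity check, not a proof):

```python
import cmath
import math
import random

p = 13
random.seed(2)
min_sum = 2 * p
for _ in range(200):
    f = [0j] * p
    for x in random.sample(range(p), random.randint(1, p)):
        f[x] = complex(random.gauss(0, 1), random.gauss(0, 1))
    fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * x * xi / p) for x in range(p))
            / math.sqrt(p) for xi in range(p)]
    # |supp(f)| + |supp(fhat)| should always be at least p + 1
    s = (sum(1 for v in f if abs(v) > 1e-9)
         + sum(1 for v in fhat if abs(v) > 1e-9))
    min_sum = min(min_sum, s)
```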

If we take a bump function on $\mathbb{R}$ that is $1$ on $\{-\lfloor \sqrt{N}\rfloor ,-\lfloor \sqrt{N}\rfloor +1,\mathrm{\dots},\lfloor \sqrt{N}\rfloor \}$, then its Fourier transform will be concentrated on a set of about the same size. Sampling the bump function at the integers $0,1,\mathrm{\dots},N-1$ yields a function $f$ on ${Z}_{N}$. If there is roundoff error, then $f$ is roughly $\sqrt{N}$-sparse, since it is very small outside a neighborhood of $0$ in ${Z}_{N}$ of size about $\sqrt{N}$. If we take $\mathrm{\Lambda}=\{\lfloor N/3\rfloor ,\lfloor N/3\rfloor +1,\mathrm{\dots},\lfloor 2N/3\rfloor \}$ then $\widehat{f}$ will be very small on $\mathrm{\Lambda}$, and so by roundoff error in fact zero on $\mathrm{\Lambda}$. This example applies whether or not $N$ is prime, so in the presence of error we do not have unique reconstruction for all sets satisfying $|\mathrm{\Lambda}|\ge 2S$, even when $N$ is prime.
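Here is a sketch of this phenomenon, with a self-dual Gaussian bump standing in for the smoothed cutoff (our substitution, and $N=301$ is an arbitrary prime): the transform on the middle third of the frequencies sits far below any realistic noise floor.

```python
import cmath
import math

N = 301
d = [min(x, N - x) for x in range(N)]  # distance to 0 on the cycle Z_N
f = [math.exp(-math.pi * dx * dx / N) for dx in d]  # bump of width ~ sqrt(N)

fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * x * xi / N) for x in range(N))
        / math.sqrt(N) for xi in range(N)]

Lam = range(N // 3, 2 * N // 3 + 1)  # middle third of the frequencies
max_on_Lam = max(abs(fhat[xi]) for xi in Lam)  # astronomically small
peak = max(abs(v) for v in fhat)  # the transform is genuinely there, near xi = 0
```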

These obstructions to unique reconstruction all involved sets of observed frequencies $\mathrm{\Lambda}$ that were arithmetic progressions. More generally, the problem is that these $\mathrm{\Lambda}$ were structured. It turns out that if we select a set of frequencies uniformly at random among all $\left(\genfrac{}{}{0pt}{}{N}{|\mathrm{\Lambda}|}\right)$ sets of size $|\mathrm{\Lambda}|$, we will usually not have these obstructions. (“Most sets” don’t have the properties that make unique reconstruction fail.)

Uniform Uncertainty Principle: If $\mathrm{\Lambda}$ is a random set with $|\mathrm{\Lambda}|\gg S{\mathrm{log}}^{4}N$, then with probability $1-O({N}^{-A})$ for any fixed $A$,

$$\sum _{\xi \in \mathrm{\Lambda}}{|\widehat{f}(\xi )|}^{2}\approx \frac{|\mathrm{\Lambda}|}{N}\sum _{x\in {Z}_{N}}{|f(x)|}^{2}$$

for all $4S$-sparse functions $f\in {\mathrm{\ell}}^{1}({Z}_{N})$. Here $X\approx Y$ means that $X$ and $Y$ differ by at most 10%.
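An empirical illustration (not the proof, and only for a single fixed signal): a random $\mathrm{\Lambda}$ almost always gets close to its fair share of the energy. The sizes $N=512$, $S=4$, $|\mathrm{\Lambda}|=200$ are arbitrary toy choices of ours, far from the asymptotic regime of the theorem.

```python
import cmath
import math
import random

N, S, m = 512, 4, 200  # signal length, sparsity parameter, |Lambda|
random.seed(3)

f = [0j] * N
for x in random.sample(range(N), 4 * S):  # a 4S-sparse signal
    f[x] = complex(random.gauss(0, 1), random.gauss(0, 1))

fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * x * xi / N) for x in range(N))
        / math.sqrt(N) for xi in range(N)]
total = sum(abs(v) ** 2 for v in f)  # = energy of fhat, by Plancherel

ratios = []
for _ in range(100):
    Lam = random.sample(range(N), m)
    share = sum(abs(fhat[xi]) ** 2 for xi in Lam)
    ratios.append(share / ((m / N) * total))  # "fair share" would be ratio 1

mean_ratio = sum(ratios) / len(ratios)
```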

For a single function $f$ we can show that $\mathrm{\Lambda}$ gets its “fair share” of the ${\mathrm{\ell}}^{2}$ norm using the Chernoff inequality, but it is much harder to prove this for all functions simultaneously, since there are so many of them. We need some way of keeping a handle on the set of all $S$-sparse functions.

Chernoff inequality: Let ${X}_{1},\mathrm{\dots},{X}_{n}$ be independent random variables with $|{X}_{i}|\le K$, mean ${\mu}_{i}$ and variance ${\sigma}_{i}^{2}$. Let ${S}_{n}={\sum}_{i=1}^{n}{X}_{i},\mu ={\sum}_{i=1}^{n}{\mu}_{i},{\sigma}^{2}={\sum}_{i=1}^{n}{\sigma}_{i}^{2}$. Then for any $\lambda >0$ one has

$$P(|{S}_{n}-\mu |\ge \lambda \sigma )\le C\mathrm{max}(\mathrm{exp}(-c{\lambda}^{2}),\mathrm{exp}(-c\lambda \sigma /K))$$

for some absolute constants $C,c>0$.

Let $n=|\mathrm{\Lambda}|$ and let each ${X}_{i}$ take the value ${|\widehat{f}(\xi )|}^{2}$ with probability $\frac{1}{N}$ for each $\xi \in {Z}_{N}$. Then ${\mu}_{i}=\frac{1}{N}{\parallel \widehat{f}\parallel}_{{\mathrm{\ell}}^{2}}^{2}$ and $\mu =\frac{|\mathrm{\Lambda}|}{N}{\parallel \widehat{f}\parallel}_{{\mathrm{\ell}}^{2}}^{2}$. When $|\mathrm{\Lambda}|$ is small relative to $N$, ${\sum}_{i=1}^{n}{X}_{i}$ and ${\sum}_{\xi \in \mathrm{\Lambda}}{|\widehat{f}(\xi )|}^{2}$ have nearly the same distribution, because most multisets of size $|\mathrm{\Lambda}|$ will not have repeated entries.

To use the Chernoff inequality to bound $P(|{S}_{n}-\mu |>\delta \mu )$ with $\delta =0.1$, we take $\lambda =\frac{\delta \mu}{\sigma}$, so that $\lambda \sigma =\delta \mu $.

The paper Near-optimal signal recovery from random projections: Universal encoding strategies?, IEEE Trans. Inform. Theory 52 (2006), no. 12, 5406–5425, by Candès and Tao deals with signals that are not sparse but that obey a “power law”: there are $S$ big components in the signal, and the $n$th largest of the remaining components decays like $O({n}^{-1/p})$ for some $p>0$. This paper is a tour de force. The main result of the paper states that if a measurement process obeys the UUP and the ERP then we can reconstruct something very close in ${\mathrm{\ell}}^{2}$ norm to the original signal. But to actually show that the ERP and the UUP hold for the measurement processes (=ensembles=random matrices) they deal with in the paper is a whole other barrel of monkeys. (In this presentation we talked about the Fourier ensemble, which is the hardest ensemble to prove the ERP and the UUP for.)

Proof of the uncertainty principle for cyclic groups of prime order. First assume what we’ll call Chebotarev’s lemma: Let $p$ be prime and $1\le n\le p$. Let ${x}_{1},\mathrm{\dots},{x}_{n}$ be distinct elements of ${Z}_{p}$ and let ${\xi}_{1},\mathrm{\dots},{\xi}_{n}$ be distinct elements of ${Z}_{p}$. Then the matrix ${({e}^{2\pi i{x}_{j}{\xi}_{k}/p})}_{1\le j,k\le n}$ has nonzero determinant.

Now, let $A,\stackrel{~}{A}$ be nonempty subsets of ${Z}_{p}$ with $|A|=|\stackrel{~}{A}|=n$. Define $T:{\mathrm{\ell}}^{2}(A)\to {\mathrm{\ell}}^{2}(\stackrel{~}{A})$ by $Tf={\widehat{f}|}_{\stackrel{~}{A}}$. Thinking of the domain (physical space) as having a basis of Dirac deltas and the codomain (frequency space) as having a basis of exponentials, the matrix representing $T$ is, up to a harmless normalization, ${({e}^{2\pi i{x}_{j}{\xi}_{k}/p})}_{1\le j,k\le n}$ for some distinct ${x}_{j}$ and distinct ${\xi}_{k}$. By Chebotarev’s lemma this matrix is invertible, so $T$ is an isomorphism.

Suppose by contradiction that there is a nonzero $f\in {\mathrm{\ell}}^{2}({Z}_{p})$ such that $|\mathrm{supp}(f)|+|\mathrm{supp}(\widehat{f})|\le p$. Let $A=\mathrm{supp}(f)$. Then there is some $\stackrel{~}{A}\subseteq {Z}_{p}$ that is disjoint from $\mathrm{supp}(\widehat{f})$ and such that $|A|=|\stackrel{~}{A}|$. The map $T:{\mathrm{\ell}}^{2}(A)\to {\mathrm{\ell}}^{2}(\stackrel{~}{A})$ defined by $Tf={\widehat{f}|}_{\stackrel{~}{A}}$ is an isomorphism, but $f$ is a nonzero vector that it sends to zero, a contradiction. Therefore for all nonzero $f\in {\mathrm{\ell}}^{2}({Z}_{p})$, $|\mathrm{supp}(f)|+|\mathrm{supp}(\widehat{f})|\ge p+1$. We’ve seen too that this bound is sharp. Thus the uncertainty principle for cyclic groups of prime order is proved.

Now we’ll turn to proving Chebotarev’s lemma.

The proof of Chebotarev’s lemma itself requires another rather easy lemma (from the “Galois theory of the cyclotomic integers”): Let $p$ be prime, $n$ a positive integer, and $P({z}_{1},\mathrm{\dots},{z}_{n})$ a polynomial with integer coefficients. If ${\omega}_{1},\mathrm{\dots},{\omega}_{n}$ are $p$th roots of unity such that $P({\omega}_{1},\mathrm{\dots},{\omega}_{n})=0$, then $P(1,\mathrm{\dots},1)$ is a multiple of $p$.

To prove this, let $\omega ={e}^{2\pi i/p}$ and let ${\omega}_{j}={\omega}^{{k}_{j}}$ for some integers ${k}_{1},\mathrm{\dots},{k}_{n}$. Now define

$$Q(z)=P({z}^{{k}_{1}},\mathrm{\dots},{z}^{{k}_{n}})\quad \mathrm{mod}\ ({z}^{p}-1).$$

Here we are *not* talking about quotient rings, just the result of polynomial long division. Then $Q(\omega )=0$ and $Q(1)=P(1,\mathrm{\dots},1)$, since ${z}^{p}-1$ vanishes at both $\omega $ and $1$. Since $p$ is prime, the minimal polynomial of $\omega $ is the cyclotomic polynomial ${\mathrm{\Phi}}_{p}(z)=1+z+\mathrm{\cdots}+{z}^{p-1}$. As $Q(\omega )=0$, $Q(z)$ must be the product of ${\mathrm{\Phi}}_{p}(z)$ with a polynomial over the integers; but since $Q(z)$ has degree at most $p-1$, it is an integer multiple of ${\mathrm{\Phi}}_{p}(z)$. Therefore $Q(1)=P(1,\mathrm{\dots},1)$ is an integer multiple of ${\mathrm{\Phi}}_{p}(1)=p$.
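A tiny numerical instance of the lemma (our choice of example): take $p=5$ and $P({z}_{1},\mathrm{\dots},{z}_{4})=1+{z}_{1}+{z}_{2}+{z}_{3}+{z}_{4}$, which vanishes at $(\omega ,{\omega}^{2},{\omega}^{3},{\omega}^{4})$.

```python
import cmath

p = 5
omega = cmath.exp(2j * cmath.pi / p)

# P vanishes at the pth roots of unity (omega, omega^2, omega^3, omega^4) ...
P_at_roots = 1 + sum(omega ** j for j in range(1, p))
# ... so P(1, 1, 1, 1) must be a multiple of p; here it is exactly 5
P_at_ones = 1 + (p - 1)
```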

Now we have everything we need for the proof of Chebotarev’s lemma. Let ${\omega}_{j}={e}^{2\pi i{x}_{j}/p}$. We have to show that

$$\mathrm{det}{({\omega}_{j}^{{\xi}_{k}})}_{1\le j,k\le n}\ne 0.$$

To do this, define

$$D({z}_{1},\mathrm{\dots},{z}_{n})=\mathrm{det}{({z}_{j}^{{\xi}_{k}})}_{1\le j,k\le n}$$

and, observing that $D$ vanishes whenever ${z}_{i}={z}_{j}$, put

$$D({z}_{1},\mathrm{\dots},{z}_{n})=\prod _{1\le i<j\le n}({z}_{j}-{z}_{i})\cdot P({z}_{1},\mathrm{\dots},{z}_{n})$$

where $P$ is a polynomial with integer coefficients. Since the ${\omega}_{j}$ are distinct, $\prod _{1\le i<j\le n}({\omega}_{j}-{\omega}_{i})\ne 0$, so by (the contrapositive of) the lemma above it suffices to show that $P(1,\mathrm{\dots},1)$ is not a multiple of $p$: then $P({\omega}_{1},\mathrm{\dots},{\omega}_{n})\ne 0$, and hence $\mathrm{det}{({\omega}_{j}^{{\xi}_{k}})}\ne 0$.

Consider

$${\left({z}_{2}\frac{d}{d{z}_{2}}\right)}^{1}{\left({z}_{3}\frac{d}{d{z}_{3}}\right)}^{2}\mathrm{\cdots}{\left({z}_{n}\frac{d}{d{z}_{n}}\right)}^{n-1}D({z}_{1},\mathrm{\dots},{z}_{n})\qquad (1)$$

evaluated at ${z}_{1}=\mathrm{\dots}={z}_{n}=1$.

This is equal to

$$(n-1)!\,(n-2)!\mathrm{\cdots}1\cdot P(1,\mathrm{\dots},1).$$

None of the factorials are multiples of $p$ since $n\le p$. So to show that $P(1,\mathrm{\dots},1)$ is not a multiple of $p$ it suffices to show that (1) is not a multiple of $p$. But (1) is equal to

$$\mathrm{det}{({\xi}_{k}^{j-1})}_{1\le j,k\le n}$$

which is a Vandermonde determinant and is equal to

$$\prod _{1\le k<{k}^{\prime}\le n}({\xi}_{{k}^{\prime}}-{\xi}_{k}).$$

The ${\xi}_{k}$ are distinct modulo $p$, so each factor is nonzero modulo $p$; since $p$ is prime, the product is therefore not a multiple of $p$. This completes the proof of Chebotarev’s lemma.
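Chebotarev’s lemma, and its failure for composite moduli, can be checked by brute force on small cases. A sketch (the Leibniz-formula determinant is only sensible for these tiny matrices, and $p=7$ with $3\times 3$ minors is an arbitrary choice of ours):

```python
import cmath
from itertools import combinations, permutations

def minor_det(N, rows, cols):
    """Determinant of (e^(2*pi*i*x*xi/N)) for x in rows, xi in cols,
    computed by the Leibniz formula (fine for tiny matrices)."""
    n = len(rows)
    total = 0j
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign  # parity of the permutation
        term = complex(sign)
        for i in range(n):
            term *= cmath.exp(2j * cmath.pi * rows[i] * cols[perm[i]] / N)
        total += term
    return total

# Chebotarev: for prime p, every square minor of the Fourier matrix is nonzero
p = 7
min_abs = min(abs(minor_det(p, r, c))
              for r in combinations(range(p), 3)
              for c in combinations(range(p), 3))

# For composite N this fails: rows {0, 3}, columns {0, 2} in Z_6
# give the singular matrix [[1, 1], [1, 1]]
vanishing = abs(minor_det(6, (0, 3), (0, 2)))
```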