# Tauber’s theorem and Karamata’s proof of the Hardy-Littlewood tauberian theorem

Jordan Bell
November 11, 2017

The following lemma is attributed to Kronecker by Knopp (Konrad Knopp, *Theory and Application of Infinite Series*, p. 129, Theorem 3).

###### Lemma 1 (Kronecker’s lemma).

If $b_{n}\to 0$ then

 $\frac{b_{0}+b_{1}+\cdots+b_{n}}{n+1}\to 0.$
###### Proof.

Suppose that $|b_{n}|\leq K$ for all $n$, and let $\epsilon>0$. As $b_{n}\to 0$ there is some $n_{0}$ such that $n\geq n_{0}$ implies that $|b_{n}|<\epsilon$. If $n\geq\max\left\{n_{0},\frac{(n_{0}+1)K}{\epsilon}\right\}$, then

$$\begin{aligned}
\left|\frac{b_{0}+b_{1}+\cdots+b_{n}}{n+1}\right| &\leq\left|\frac{b_{0}+b_{1}+\cdots+b_{n_{0}}}{n+1}\right|+\left|\frac{b_{n_{0}+1}+\cdots+b_{n}}{n+1}\right| \\
&\leq\frac{(n_{0}+1)K}{n+1}+\frac{(n-n_{0})\epsilon}{n+1} \\
&\leq\epsilon+\epsilon.
\end{aligned}$$

As $\epsilon>0$ is arbitrary, this proves the lemma. ∎
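The lemma is easy to test numerically. The following sketch uses the sample sequence $b_{n}=1/\sqrt{n+1}$ (an illustrative choice, not from the text): both $b_{n}$ and its Cesàro averages tend to $0$, the averages more slowly.

```python
# Numerical illustration of Kronecker's lemma with the sample
# sequence b_n = 1/sqrt(n+1), which tends to 0.
def cesaro_average(b, n):
    """Return (b_0 + b_1 + ... + b_n) / (n + 1)."""
    return sum(b(k) for k in range(n + 1)) / (n + 1)

b = lambda n: 1.0 / (n + 1) ** 0.5

for n in (10, 1000, 100000):
    print(n, b(n), cesaro_average(b, n))
```

Here the averages behave like $2/\sqrt{n+1}$, consistent with the lemma.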

We now use the above lemma to prove Tauber’s theorem (cf. E. C. Titchmarsh, *The Theory of Functions*, second ed., p. 10, §1.23).

###### Theorem 2 (Tauber’s theorem).

If $a_{n}=o(1/n)$ and $\sum_{n=0}^{\infty}a_{n}x^{n}\to s$ as $x\to 1^{-}$, then

 $\sum_{n=0}^{\infty}a_{n}=s.$
###### Proof.

Let $\epsilon>0$. Because $\sum_{n=0}^{\infty}a_{n}x^{n}\to s$ as $x\to 1^{-}$, there is some $\delta>0$ such that $x>1-\delta$ implies that

 $\left|\sum_{n=0}^{\infty}a_{n}x^{n}-s\right|<\epsilon.$

Next, because $n|a_{n}|\to 0$, there is some $N>\frac{1}{\delta}$ such that (i) if $n\geq N$ then $n|a_{n}|<\epsilon$ and by Lemma 1, (ii) $\frac{1}{N+1}\sum_{n=0}^{N}n|a_{n}|<\epsilon$.

Take $x=1-\frac{1}{N}$, so $N=\frac{1}{1-x}$; since $N>\frac{1}{\delta}$, we have $x>1-\delta$. We have

$$\begin{aligned}
\left|\sum_{n=N+1}^{\infty}a_{n}x^{n}\right| &=\left|\sum_{n=N+1}^{\infty}na_{n}\cdot\frac{x^{n}}{n}\right| \\
&<\sum_{n=N+1}^{\infty}\epsilon\cdot\frac{x^{n}}{N+1} \\
&<\frac{\epsilon}{N+1}\cdot\frac{1}{1-x} \\
&=\epsilon\cdot\frac{N}{N+1} \\
&<\epsilon.
\end{aligned}$$

Also, using

 $1-x^{n}=(1-x)(1+x+\cdots+x^{n-1})<(1-x)n$

we have

$$\begin{aligned}
\left|\sum_{n=0}^{N}a_{n}(1-x^{n})\right| &\leq\sum_{n=0}^{N}|a_{n}|(1-x^{n}) \\
&\leq\sum_{n=0}^{N}|a_{n}|(1-x)n \\
&=\sum_{n=0}^{N}\frac{n|a_{n}|}{N} \\
&=\frac{N+1}{N}\cdot\frac{1}{N+1}\sum_{n=0}^{N}n|a_{n}| \\
&<\frac{N+1}{N}\cdot\epsilon \\
&\leq 2\epsilon.
\end{aligned}$$

Now,

$$\begin{aligned}
\sum_{n=0}^{N}a_{n}-s &=\sum_{n=0}^{N}a_{n}-\sum_{n=0}^{N}a_{n}x^{n}+\sum_{n=0}^{N}a_{n}x^{n}-s \\
&=\sum_{n=0}^{N}a_{n}(1-x^{n})+\sum_{n=0}^{N}a_{n}x^{n}-s \\
&=\sum_{n=0}^{N}a_{n}(1-x^{n})+\sum_{n=0}^{N}a_{n}x^{n}+\sum_{n=N+1}^{\infty}a_{n}x^{n}-\sum_{n=N+1}^{\infty}a_{n}x^{n}-s \\
&=\sum_{n=0}^{N}a_{n}(1-x^{n})+\sum_{n=0}^{\infty}a_{n}x^{n}-s-\sum_{n=N+1}^{\infty}a_{n}x^{n}
\end{aligned}$$

and then

$$\begin{aligned}
\left|\sum_{n=0}^{N}a_{n}-s\right| &\leq\left|\sum_{n=0}^{N}a_{n}(1-x^{n})\right|+\left|\sum_{n=0}^{\infty}a_{n}x^{n}-s\right|+\left|\sum_{n=N+1}^{\infty}a_{n}x^{n}\right| \\
&<2\epsilon+\epsilon+\epsilon,
\end{aligned}$$

proving the claim. ∎
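As a numerical sanity check (a sketch, not part of the proof), take $a_{n}=(-1)^{n}/(n+1)^{2}$, an illustrative choice satisfying $a_{n}=o(1/n)$. Its sum is $\pi^{2}/12$, and the Abel means $\sum a_{n}x^{n}$ approach the same value as $x\to 1^{-}$.

```python
import math

# a_n = (-1)^n / (n+1)^2 satisfies a_n = o(1/n); an illustrative choice.
a = lambda n: (-1) ** n / (n + 1) ** 2

terms = 10 ** 5
partial = sum(a(n) for n in range(terms))        # partial sum of the series
x = 1 - 1e-3
abel = sum(a(n) * x ** n for n in range(terms))  # Abel mean near x = 1
target = math.pi ** 2 / 12                       # known value of the sum

print(partial, abel, target)
```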

###### Lemma 3.

Let $g:[0,1]\to\mathbb{R}$ and $0<c<1$. Suppose that the restrictions of $g$ to $[0,c)$ and $[c,1]$ are continuous and that

 $g(c-0)=\lim_{x\to c^{-}}g(x)\leq g(c).$

For $\epsilon>0$, there are polynomials $p(x)$ and $P(x)$ such that

 $p(x)\leq g(x)\leq P(x),\qquad 0\leq x\leq 1$

and

 $\|g-p\|_{1}\leq\epsilon,\quad\|g-P\|_{1}\leq\epsilon.$
###### Proof.

There is some $\delta>0$ such that $c-\delta\leq x<c$ implies that

 $g(c-0)-\frac{\epsilon}{2}\leq g(x)\leq g(c-0)+\frac{\epsilon}{2};$

further, take $\delta<\frac{1}{2}$ and, when $g(c)>g(c-0)$, also $\delta<\frac{\epsilon}{g(c)-g(c-0)}$ (when $g(c)=g(c-0)$, $g$ is continuous at $c$ and only $\delta<\frac{1}{2}$ is needed).

Take $L$ to be the linear function satisfying

 $L(c-\delta)=g(c-\delta)+\frac{\epsilon}{2},\qquad L(c)=g(c)+\frac{\epsilon}{2}.$

For $c-\delta\leq x<c$,

$$\begin{aligned}
L(x)-g(x) &=L(x)-g(c-\delta)+g(c-\delta)-g(c-0)+g(c-0)-g(x) \\
&=L(x)-L(c-\delta)+\frac{\epsilon}{2}+g(c-\delta)-g(c-0)+g(c-0)-g(x) \\
&\leq L(c)-L(c-\delta)+\frac{\epsilon}{2}+\frac{\epsilon}{2}+\frac{\epsilon}{2} \\
&=g(c)-g(c-\delta)+\frac{3\epsilon}{2} \\
&=g(c)-g(c-0)+g(c-0)-g(c-\delta)+\frac{3\epsilon}{2} \\
&<\frac{\epsilon}{\delta}+\frac{\epsilon}{2}+\frac{3\epsilon}{2} \\
&<\frac{2\epsilon}{\delta}.
\end{aligned}$$

Define $\Phi:[0,1]\to\mathbb{R}$ by

 $\Phi(x)=\begin{cases}g(x)+\frac{\epsilon}{2}&0\leq x<c-\delta\\\max\left\{L(x),g(x)+\frac{\epsilon}{2}\right\}&c-\delta\leq x<c\\g(x)+\frac{\epsilon}{2}&c\leq x\leq 1.\end{cases}$

$\Phi$ is continuous (at $c$, because $L(c)=g(c)+\frac{\epsilon}{2}\geq g(c-0)+\frac{\epsilon}{2}$) and $\Phi\geq g+\frac{\epsilon}{2}$. We have

$$\begin{aligned}
\|g-\Phi\|_{1} &=\int_{0}^{1}(\Phi(x)-g(x))\,dx \\
&=\int_{0}^{c-\delta}\frac{\epsilon}{2}\,dx+\int_{c-\delta}^{c}(\Phi(x)-g(x))\,dx+\int_{c}^{1}\frac{\epsilon}{2}\,dx \\
&<\frac{\epsilon}{2}+\int_{c-\delta}^{c}(\Phi(x)-g(x))\,dx \\
&\leq\frac{\epsilon}{2}+\int_{c-\delta}^{c}\max\left\{L(x)-g(x),\frac{\epsilon}{2}\right\}\,dx \\
&\leq\frac{\epsilon}{2}+\int_{c-\delta}^{c}\max\left\{\frac{2\epsilon}{\delta},\frac{\epsilon}{2}\right\}\,dx \\
&=\frac{\epsilon}{2}+\delta\cdot\frac{2\epsilon}{\delta} \\
&=\frac{5\epsilon}{2}.
\end{aligned}$$

Because $\Phi$ is continuous, by the Weierstrass approximation theorem there is a polynomial $P(x)$ such that $\|\Phi-P\|_{\infty}\leq\frac{\epsilon}{2}$. Then,

 $g(x)\leq P(x),\qquad 0\leq x\leq 1,$

and

 $\|g-P\|_{1}\leq\|g-\Phi\|_{1}+\|\Phi-P\|_{1}<\frac{5\epsilon}{2}+\|\Phi-P\|_{\infty}\leq\frac{5\epsilon}{2}+\frac{\epsilon}{2}=3\epsilon.$

On the other hand, because $g$ is continuous on $[c,1]$, there is some $\delta'>0$ with $\delta'<\frac{1}{2}$, $c+\delta'\leq 1$, and, when $g(c)>g(c-0)$, also $\delta'<\frac{\epsilon}{g(c)-g(c-0)}$, such that $c\leq x\leq c+\delta'$ implies that

 $g(c)-\frac{\epsilon}{2}\leq g(x)\leq g(c)+\frac{\epsilon}{2}.$

Take $l$ to be the linear function satisfying

 $l(c)=g(c-0)-\frac{\epsilon}{2},\qquad l(c+\delta')=g(c+\delta')-\frac{\epsilon}{2}.$

One checks, as for $L$, that for $c\leq x\leq c+\delta'$,

 $g(x)-l(x)<\frac{2\epsilon}{\delta'}.$

Define $\phi:[0,1]\to\mathbb{R}$ by

 $\phi(x)=\begin{cases}g(x)-\frac{\epsilon}{2}&0\leq x<c\\\min\left\{l(x),g(x)-\frac{\epsilon}{2}\right\}&c\leq x<c+\delta'\\g(x)-\frac{\epsilon}{2}&c+\delta'\leq x\leq 1,\end{cases}$

which is continuous (at $c$ both one-sided limits equal $g(c-0)-\frac{\epsilon}{2}$, because $l(c)=g(c-0)-\frac{\epsilon}{2}\leq g(c)-\frac{\epsilon}{2}$) and satisfies $\phi\leq g-\frac{\epsilon}{2}$. One checks that

 $\|g-\phi\|_{1}<\frac{5\epsilon}{2}.$

Because $\phi$ is continuous, there is a polynomial $p(x)$ such that $\|\phi-p\|_{\infty}\leq\frac{\epsilon}{2}$. Then,

 $p(x)\leq g(x),\qquad 0\leq x\leq 1,$

and

 $\|g-p\|_{1}\leq\|g-\phi\|_{1}+\|\phi-p\|_{1}<\frac{5\epsilon}{2}+\|\phi-p\|_{\infty}\leq\frac{5\epsilon}{2}+\frac{\epsilon}{2}=3\epsilon.$

Replacing $\epsilon$ by $\epsilon/3$ throughout yields the bounds in the statement. ∎
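The Weierstrass step can be made concrete with Bernstein polynomials, which converge uniformly to any continuous function on $[0,1]$. The sketch below is illustrative only: the target function $|t-\tfrac{1}{2}|$ and the degree $400$ are my choices, not the lemma's $\Phi$ or $P$.

```python
from math import comb

def bernstein(f, n, x):
    """Degree-n Bernstein polynomial of f on [0,1]; these converge
    uniformly to f when f is continuous (Weierstrass approximation)."""
    return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = lambda t: abs(t - 0.5)   # continuous with a corner, like Phi at its kinks
sup_err = max(abs(bernstein(f, 400, i / 200) - f(i / 200))
              for i in range(201))
print(sup_err)
```

The supremum error on this grid is on the order of $n^{-1/2}$ for a Lipschitz function, here about $0.02$.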

The following is the Hardy-Littlewood tauberian theorem (E. C. Titchmarsh, *The Theory of Functions*, second ed., p. 227, §7.53, attributed to Karamata).

###### Theorem 4 (Hardy-Littlewood tauberian theorem).

If $a_{n}\geq 0$ for all $n$ and

 $\sum_{n=0}^{\infty}a_{n}x^{n}\sim\frac{1}{1-x},\qquad x\to 1^{-},$

then

 $s_{n}=\sum_{\nu=0}^{n}a_{\nu}\sim n.$
###### Proof.

For any $k\geq 0$,

$$\begin{aligned}
(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}(x^{n})^{k} &=\frac{1-x}{1-x^{k+1}}\cdot(1-x^{k+1})\sum_{n=0}^{\infty}a_{n}(x^{k+1})^{n} \\
&=\frac{1}{1+x+\cdots+x^{k}}\cdot(1-x^{k+1})\sum_{n=0}^{\infty}a_{n}(x^{k+1})^{n} \\
&\to\frac{1}{k+1}\cdot 1 \\
&=\int_{0}^{1}t^{k}\,dt,
\end{aligned}$$

as $x\to 1^{-}$, since $x^{k+1}\to 1^{-}$ as well and the hypothesis applies with $x^{k+1}$ in place of $x$. Hence, by linearity, if $P(x)$ is a polynomial, then

 $\lim_{x\to 1^{-}}(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}P(x^{n})=\int_{0}^{1}P(t)dt.$ (1)
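Identity (1) can be checked numerically in the simplest case $a_{n}=1$ for all $n$ (an illustrative choice): then $\sum_{n}x^{n}(x^{n})^{k}$ is a geometric series, and $(1-x)\sum_{n}x^{n}(x^{n})^{k}=\frac{1}{1+x+\cdots+x^{k}}\to\frac{1}{k+1}=\int_{0}^{1}t^{k}\,dt$.

```python
# With a_n = 1, sum_n x^n (x^n)^k = 1/(1 - x^{k+1}), so
# (1 - x) * sum -> 1/(k+1), the integral of t^k over [0,1].
x = 1 - 1e-4
for k in range(5):
    moment = (1 - x) / (1 - x ** (k + 1))
    print(k, moment, 1 / (k + 1))
```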

Define $g:[0,1]\to\mathbb{R}$ by

 $g(t)=\begin{cases}0&0\leq t<e^{-1}\\t^{-1}&e^{-1}\leq t\leq 1.\end{cases}$

The restrictions of $g$ to $[0,e^{-1})$ and $[e^{-1},1]$ are continuous and $g(e^{-1}-0)=0\leq e=g(e^{-1})$, so $g$ satisfies the hypotheses of Lemma 3 with $c=e^{-1}$. Let $\epsilon>0$. By Lemma 3, there are polynomials $p(x),P(x)$ such that

 $p(x)\leq g(x)\leq P(x),\qquad 0\leq x\leq 1$

and

 $\|g-p\|_{1}\leq\epsilon,\qquad\|P-g\|_{1}\leq\epsilon.$

Because the coefficients $a_{n}$ are nonnegative, taking upper limits and then using (1) we obtain

$$\begin{aligned}
\limsup_{x\to 1^{-}}(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}g(x^{n}) &\leq\limsup_{x\to 1^{-}}(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}P(x^{n}) \\
&=\lim_{x\to 1^{-}}(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}P(x^{n}) \\
&=\int_{0}^{1}P(t)\,dt \\
&\leq\int_{0}^{1}g(t)\,dt+\epsilon.
\end{aligned}$$

Taking lower limits and then using (1) we obtain

$$\begin{aligned}
\liminf_{x\to 1^{-}}(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}g(x^{n}) &\geq\liminf_{x\to 1^{-}}(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}p(x^{n}) \\
&=\lim_{x\to 1^{-}}(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}p(x^{n}) \\
&=\int_{0}^{1}p(t)\,dt \\
&\geq\int_{0}^{1}g(t)\,dt-\epsilon.
\end{aligned}$$

The above two inequalities do not depend on the polynomials $p(x),P(x)$ but only on $\epsilon$, and taking $\epsilon\to 0$ yields

 $\limsup_{x\to 1^{-}}(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}g(x^{n})\leq\int_{0}^{1}% g(t)dt$

and

 $\liminf_{x\to 1^{-}}(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}g(x^{n})\geq\int_{0}^{1}% g(t)dt.$

Thus

 $\lim_{x\to 1^{-}}(1-x)\sum_{n=0}^{\infty}a_{n}x^{n}g(x^{n})=\int_{0}^{1}g(t)dt% =\int_{e^{-1}}^{1}t^{-1}dt=1.$ (2)

For $x=e^{-1/N}$ we have $x^{n}=e^{-n/N}\geq e^{-1}$ precisely when $n\leq N$, and so

$$\begin{aligned}
\sum_{n=0}^{\infty}a_{n}x^{n}g(x^{n}) &=\sum_{n=0}^{\infty}a_{n}e^{-n/N}g(e^{-n/N}) \\
&=\sum_{n=0}^{N}a_{n}e^{-n/N}\cdot e^{n/N} \\
&=s_{N}.
\end{aligned}$$

Thus, because $e^{-1/N}\to 1^{-}$ as $N\to\infty$, (2) tells us that

 $\lim_{N\to\infty}(1-e^{-1/N})s_{N}=1.$

That is,

 $s_{N}\sim\frac{1}{1-e^{-1/N}},$

and using

 $\frac{1}{1-e^{-1/N}}=N+\frac{1}{2}+O(N^{-1})$

we get

 $s_{N}\sim N,$

completing the proof. ∎
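Both the conclusion and the expansion $\frac{1}{1-e^{-1/N}}=N+\frac{1}{2}+O(N^{-1})$ admit quick numerical checks. The sketch below uses the illustrative choice $a_{n}=2$ for odd $n$ and $0$ for even $n$, for which $\sum a_{n}x^{n}=\frac{2x}{1-x^{2}}$, so $(1-x)\sum a_{n}x^{n}=\frac{2x}{1+x}\to 1$, and indeed $s_{N}\sim N$ even though $a_{n}$ does not converge.

```python
import math

# a_n = 0, 2, 0, 2, ... is nonnegative with power series 2x/(1-x^2),
# so (1-x) * sum a_n x^n = 2x/(1+x) -> 1 as x -> 1-.
a = lambda n: 2 * (n % 2)
x = 1 - 1e-6
abel = 2 * x / (1 + x)                # (1-x) * sum a_n x^n, closed form
s = lambda N: sum(a(n) for n in range(N + 1))
print(abel, s(10 ** 5) / 10 ** 5)     # both close to 1

# The expansion 1/(1 - e^{-1/N}) = N + 1/2 + O(1/N): remainder ~ 1/(12N).
for N in (10, 100, 1000):
    print(N, 1 / (1 - math.exp(-1 / N)) - N - 0.5)
```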