Many mathematical constants are calculated as limits of series. In this
chapter, we recall some definitions and results related to series and to the
acceleration of their convergence. For any series $\sum a_n$, we denote by
$s_n$ its partial sums, that is
$$s_0 = a_0, \qquad s_1 = a_0 + a_1, \qquad s_n = a_0 + a_1 + \cdots + a_n = \sum_{k=0}^{n} a_k.$$
We are interested in the behavior of $s_n$ as $n$ tends to infinity
(infinite series). Series whose terms have alternating signs are said to
be alternating series; we write them as
$$s_n = a_0 - a_1 + \cdots + (-1)^n a_n = \sum_{k=0}^{n} (-1)^k a_k,$$
where the $(a_k)$ are positive numbers.
Definition 1
An infinite series $\sum a_n$ is said to be convergent if its partial sums
$(s_n)$ tend to a limit $S$ (called the sum of the series) as $n$
tends to infinity. Otherwise the series is said to be divergent.
Example 1
For the geometric series, defined by $a_k = x^k$, the partial
sum $s_n$ is given by
$$s_n = 1 + x + x^2 + \cdots + x^n = \frac{1 - x^{n+1}}{1 - x},$$
hence it converges for $|x| < 1$ to $S = 1/(1-x)$ and diverges
otherwise.
Example 2
The harmonic series is defined for $k > 0$ by $a_k = 1/k$, and the
partial sum $s_n$ (sometimes denoted $H_n$ and called the harmonic
number) is
$$s_n = 1 + \frac{1}{2} + \cdots + \frac{1}{n}.$$
We have, for $n = 2^p$,
$$s_n = \left(1 + \frac{1}{2}\right) + \left(\frac{1}{3} + \frac{1}{4}\right) + \left(\frac{1}{5} + \cdots + \frac{1}{8}\right) + \cdots + \left(\frac{1}{2^{p-1}+1} + \cdots + \frac{1}{2^p}\right)$$
$$s_n > 1 \cdot \frac{1}{2} + 2 \cdot \frac{1}{4} + 4 \cdot \frac{1}{8} + \cdots + 2^{p-1} \cdot \frac{1}{2^p} = \frac{p}{2},$$
therefore the partial sums are unbounded and the harmonic series is
divergent (this important result was known to Jakob Bernoulli (1654-1705)).
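As a quick numerical check of the grouping argument, the following Python sketch (the function name is ours) computes the partial sums exactly with rational arithmetic and verifies that $H_{2^p} > p/2$:

```python
from fractions import Fraction

def harmonic(n):
    """Exact partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# The grouping argument above gives H_{2^p} > p/2 for every p.
for p in range(1, 12):
    assert harmonic(2 ** p) > Fraction(p, 2)
print(float(harmonic(2 ** 10)))  # about 7.509, while p/2 = 5
```

The growth is indeed logarithmic: doubling $n$ adds roughly $\log 2 \approx 0.693$ to the sum.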
Ernst Kummer's (1810-1893) method is elementary and may be used to
accelerate the convergence of many series. The idea is to subtract from a
given convergent series $\sum a_n$ another equivalent series $\sum b_n$ whose sum $\sum_{n \ge 0} b_n$ is well known. More precisely,
suppose that
$$\lim_{n \to \infty} \frac{a_n}{b_n} = r \ne 0;$$
then the transformation is given by
$$\sum_{n=0}^{\infty} a_n = r \sum_{n=0}^{\infty} b_n + \sum_{n=0}^{\infty} \left(1 - r\,\frac{b_n}{a_n}\right) a_n.$$
The convergence of the latter series is faster because $1 - r\,b_n/a_n$
tends to $0$ as $n$ tends to infinity.
Example 3
Consider
$$S = \sum_{k=1}^{\infty} \frac{1}{k^2}, \qquad C = \sum_{k=1}^{\infty} \frac{1}{k(k+1)} = \sum_{k=1}^{\infty} \left(\frac{1}{k} - \frac{1}{k+1}\right) = 1;$$
we have $r = C = 1$ and Kummer's transformation becomes
$$S = 1 + \sum_{k=1}^{\infty} \left(1 - \frac{k^2}{k(k+1)}\right) \frac{1}{k^2} = 1 + \sum_{k=1}^{\infty} \frac{1}{k^2(k+1)}.$$
The process can be repeated, this time with
$$S^{*} = \sum_{k=1}^{\infty} \frac{1}{k^2(k+1)}, \qquad C = \sum_{k=1}^{\infty} \frac{1}{k(k+1)(k+2)} = \frac{1}{4},$$
giving
$$S = \frac{5}{4} + 2 \sum_{k=1}^{\infty} \frac{1}{k^2(k+1)(k+2)}.$$
After $N$ applications of the transformation,
$$S = \sum_{k=1}^{N} \frac{1}{k^2} + N! \sum_{k=1}^{\infty} \frac{1}{k^2(k+1)(k+2)\cdots(k+N)},$$
whose convergence may be rapid enough for suitable values of $N$ (this
example is due to Knopp [7]).
Note that we have used the following family of telescoping series to improve the
convergence:
$$\sum_{k=1}^{\infty} \frac{1}{k(k+1)\cdots(k+N)} = \frac{1}{N \cdot N!}.$$
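The $N$-fold transformation is easy to check numerically. In the following Python sketch (function name ours), a handful of head terms plus a truncated transformed tail already give about ten digits of $S = \pi^2/6$:

```python
import math

def zeta2_knopp(N, K):
    """Knopp's repeated Kummer transformation of S = sum 1/k^2:
    S = sum_{k<=N} 1/k^2 + N! * sum_{k>=1} 1/(k^2 (k+1)...(k+N)),
    with the infinite tail truncated after K terms."""
    head = sum(1.0 / (k * k) for k in range(1, N + 1))
    tail = 0.0
    for k in range(1, K + 1):
        denom = k * k
        for j in range(1, N + 1):
            denom *= k + j          # k^2 (k+1)(k+2)...(k+N)
        tail += 1.0 / denom
    return head + math.factorial(N) * tail

print(zeta2_knopp(6, 50), math.pi ** 2 / 6)
```

By contrast, $56$ raw terms of the series leave an error of order $1/56$.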
In 1926, Alexander Aitken found a new procedure to accelerate the rate of
convergence of a series, the Aitken $\Delta^2$-process
[2]. It consists in constructing a new series $S^{*}$ whose
partial sums $s_n^{*}$ are given by
$$s_n^{*} = s_{n+1} - \frac{(s_{n+1} - s_n)^2}{s_{n+1} - 2s_n + s_{n-1}},$$
where $(s_{n-1}, s_n, s_{n+1})$ are three successive partial sums of the
original series. Of course, it is possible to repeat the process on the new
series $S^{*}$, and so on. For example, for the converging geometric series with $0 < x < 1$,
$$s_n = 1 + x + x^2 + \cdots + x^n = \frac{1 - x^{n+1}}{1 - x}$$
and the limit is
$$S = \lim_{n \to \infty} s_n = \frac{1}{1-x},$$
hence
$$s_n = (1 - x^{n+1})\,S = S - x^{n+1} S,$$
and if we replace $(s_{n-1}, s_n, s_{n+1})$ by this expression we find $s_n^{*} = S$: the exact limit is given by just three evaluations of the
original series. This indicates that for a series which is almost geometric, Aitken's
process may be efficient.
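A minimal Python sketch of the $\Delta^2$ step (function name ours); applied to the geometric partial sums it recovers the limit immediately, as the computation above predicts:

```python
def aitken(s):
    """Aitken's Delta^2 process on a list of partial sums s_0..s_m:
    s*_n = s_{n+1} - (s_{n+1} - s_n)^2 / (s_{n+1} - 2 s_n + s_{n-1})."""
    return [s[n + 1] - (s[n + 1] - s[n]) ** 2
            / (s[n + 1] - 2 * s[n] + s[n - 1])
            for n in range(1, len(s) - 1)]

# Geometric series with x = 1/2, whose limit is S = 1/(1 - x) = 2.
x = 0.5
partial = [sum(x ** j for j in range(n + 1)) for n in range(6)]
print(aitken(partial))  # every entry equals 2.0 exactly
```

In practice the denominator $s_{n+1} - 2s_n + s_{n-1}$ may become tiny near convergence, so an implementation must guard against cancellation.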
Example 4
If we consider the extremely slowly converging (logarithmic rate) series
Suppose that the $(a_k)$ may be written as the image of a given
differentiable function $f$, that is
$$a_k = f(k),$$
and so
$$s_n = \sum_{k=0}^{n} a_k = \sum_{k=0}^{n} f(k).$$
The following famous result, due to Euler (1732) and Colin Maclaurin (1742),
states that
$$s_n = \int_0^n f(t)\,dt + \frac{1}{2}\bigl(f(0) + f(n)\bigr) + \sum_{k=1}^{m} \frac{B_{2k}}{(2k)!}\bigl(f^{(2k-1)}(n) - f^{(2k-1)}(0)\bigr) + \epsilon_{m,n},$$
where $\epsilon_{m,n}$ is a remainder given by
$$\epsilon_{m,n} = \frac{1}{(2m+1)!}\int_0^n B_{2m+1}(x - \lfloor x \rfloor)\, f^{(2m+1)}(x)\,dx,$$
where the $B_{2k}$ are the Bernoulli numbers and the $B_i(x)$ are the Bernoulli
polynomials (see the Bernoulli numbers essay for more details, or [6]
for a proof).
Example 5
Suppose that $s > 1$ and
$$f(x) = \frac{1}{(x+1)^s};$$
then
$$s_{n-1} = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \cdots + \frac{1}{n^s} = \frac{1}{s-1}\left(1 - \frac{1}{n^{s-1}}\right) + \frac{1}{2}\left(1 + \frac{1}{n^s}\right) + \frac{B_2}{2}\binom{s}{1}\left(1 - \frac{1}{n^{s+1}}\right) + \cdots + \frac{B_{2m}}{2m}\binom{s+2m-2}{2m-1}\left(1 - \frac{1}{n^{s+2m-1}}\right) + \epsilon_{m,n}.$$
Letting $n \to \infty$ in this identity, we find the relation
$$\zeta(s) = \sum_{k=1}^{\infty} \frac{1}{k^s} = \frac{1}{s-1} + \frac{1}{2} + \frac{B_2}{2}\binom{s}{1} + \cdots + \frac{B_{2m}}{2m}\binom{s+2m-2}{2m-1} + \epsilon_{m,\infty},$$
and by computing $\zeta(s) - s_{n-1}$ we find a useful algorithm to
estimate $\zeta(s)$:
$$\zeta(s) = 1 + \frac{1}{2^s} + \cdots + \frac{1}{n^s} + \frac{1}{s-1}\,\frac{1}{n^{s-1}} - \frac{1}{2n^s} + \sum_{i=1}^{m} \frac{B_{2i}}{2i}\binom{s+2i-2}{2i-1}\frac{1}{n^{s+2i-1}} + \bigl(\epsilon_{m,\infty} - \epsilon_{m,n}\bigr),$$
and for any given integer $m$, the remainder tends to zero as $n$ grows (Exercise: check
this with care!).
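The algorithm translates directly into a short program. In this Python sketch (names ours), the Bernoulli numbers come from the standard recurrence and the binomial coefficient is extended to real $s$ by a product:

```python
import math
from fractions import Fraction

def bernoulli_numbers(M):
    """Exact B_0..B_M from the recurrence sum_{k<=n} C(n+1,k) B_k = 0."""
    B = [Fraction(1)]
    for n in range(1, M + 1):
        B.append(-Fraction(1, n + 1)
                 * sum(math.comb(n + 1, k) * B[k] for k in range(n)))
    return B

def zeta_em(s, n=10, m=10):
    """Euler-Maclaurin estimate of zeta(s), s > 1 real:
    sum_{k<=n} k^-s + n^(1-s)/(s-1) - 1/(2 n^s)
      + sum_{i<=m} B_{2i}/(2i) * C(s+2i-2, 2i-1) / n^(s+2i-1)."""
    B = bernoulli_numbers(2 * m)
    total = sum(k ** -s for k in range(1, n + 1))
    total += n ** (1 - s) / (s - 1) - 0.5 * n ** -s
    for i in range(1, m + 1):
        binom = 1.0                       # C(s+2i-2, 2i-1) for real s
        for j in range(2 * i - 1):
            binom *= (s + 2 * i - 2 - j) / (2 * i - 1 - j)
        total += float(B[2 * i]) / (2 * i) * binom / n ** (s + 2 * i - 1)
    return total

print(zeta_em(2.0), math.pi ** 2 / 6)  # agreement near machine precision
```

With only $n = 10$ direct terms, the correction terms supply many extra digits.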
Very efficient methods exist to accelerate the convergence of an alternating
series
$$S = \sum_{k=0}^{\infty} (-1)^k a_k;$$
one of the first is due to Euler. Usually the transformed series converges
geometrically, with a rate of convergence depending on the method.
Because the speed of convergence may be dramatically improved in this way, there is a trick
due to Van Wijngaarden which permits transforming any series $\sum b_k$ with
positive terms $b_k$ into an alternating series. It may be written
$$\sum_{k=0}^{\infty} b_k = \sum_{k=0}^{\infty} (-1)^k a_k, \qquad a_k = \sum_{j=0}^{\infty} 2^j\, b_{2^j(k+1)-1},$$
and because $2^j(k+1)-1$ increases very rapidly with $j$, only a few terms
of this sum are required. Sometimes a closed form exists for the $a_k$,
for example if
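In the standard 0-indexed form of Van Wijngaarden's trick, $a_k = \sum_{j \ge 0} 2^j\, b_{2^j(k+1)-1}$ (this indexing is our assumption). A Python sketch (names ours) transforms $\sum_{n \ge 0} 1/(n+1)^2 = \zeta(2)$ into an alternating series:

```python
import math

def vw_term(b, k, tol=1e-17):
    """a_k = sum_{j>=0} 2^j * b(2^j*(k+1) - 1) for a 0-indexed positive
    series b; truncated once the terms stop mattering (they shrink
    geometrically when b_n decays like a power of n)."""
    total, j = 0.0, 0
    while True:
        t = 2.0 ** j * b(2 ** j * (k + 1) - 1)
        total += t
        if t <= tol * total:
            return total
        j += 1

# b_n = 1/(n+1)^2 sums to zeta(2); here a_k = 2/(k+1)^2 in closed form,
# and the alternating sum of the a_k converges to the same value.
b = lambda n: 1.0 / (n + 1) ** 2
S = sum((-1) ** k * vw_term(b, k) for k in range(4000))
print(S, math.pi ** 2 / 6)
```

On its own the alternating series still converges slowly; the point of the trick is to feed it to an alternating-series accelerator.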
In 1755, Euler gave a first version of what is now called Euler's
transformation. To establish it, observe that the sum of the alternating
series may be written as
In 1991, a generalization of Euler's method was given in [4] (this
work in fact generalizes a technique that was used to obtain irrationality
measures of some classical constants, like $\log(2)$ for example). This
algorithm is very general, powerful and easy to understand. For an
alternating series $\sum (-1)^k a_k$, we assume that there exists a positive function $w(x)$ such that
$$a_k = \int_0^1 x^k w(x)\,dx.$$
This relation permits writing the sum of the series as
$$\sum_{k=0}^{\infty} (-1)^k a_k = \int_0^1 \left(\sum_{k=0}^{\infty} (-1)^k x^k\right) w(x)\,dx = \int_0^1 \frac{w(x)}{1+x}\,dx.$$
Now, for any sequence of polynomials $P_n(x)$ of degree $n$ with $P_n(-1) \ne 0$, we denote by $S_n$ the number
$$S_n = \frac{1}{P_n(-1)} \int_0^1 \frac{P_n(-1) - P_n(x)}{1+x}\, w(x)\,dx.$$
Notice that $S_n$ is a linear combination of the numbers $(a_k)_{0 \le k < n}$,
since if we write $P_n(x) = \sum_{k=0}^{n} p_k\,(-x)^k$, we easily have
$$S_n = \frac{1}{P_n(-1)} \sum_{k=0}^{n-1} c_k\,(-1)^k a_k \quad \text{with} \quad c_k = \sum_{j=k+1}^{n} p_j. \qquad (1)$$
The number $S_n$ satisfies
$$S_n = \int_0^1 \frac{w(x)}{1+x}\,dx - \int_0^1 \frac{P_n(x)\,w(x)}{P_n(-1)(1+x)}\,dx = S - \int_0^1 \frac{P_n(x)\,w(x)}{P_n(-1)(1+x)}\,dx.$$
Therefore we deduce
$$|S_n - S| \le \frac{1}{|P_n(-1)|} \int_0^1 \frac{|P_n(x)|\,w(x)}{1+x}\,dx \le \frac{M_n}{|P_n(-1)|}\,|S|, \qquad M_n = \sup_{x \in [0,1]} |P_n(x)|.$$
This inequality suggests choosing polynomials with a small value of $M_n/|P_n(-1)|$.
One such choice corresponds in fact to Euler's method.
Another choice is
$$P_n(x) = (1-2x)^n = \sum_{k=0}^{n} 2^k \binom{n}{k} (-x)^k, \quad \text{for which} \quad P_n(-1) = 3^n,\ M_n = 1.$$
It leads to the acceleration
$$|S_n - S| \le \frac{|S|}{3^n},$$
where
$$S_n = \frac{1}{3^n} \sum_{k=0}^{n-1} (-1)^k c_k a_k, \qquad c_k = \sum_{j=k+1}^{n} 2^j \binom{n}{j}.$$
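Since the $c_k$ are integers, this choice is straightforward to program. A Python sketch (function name ours) applied to the series for $\log 2$:

```python
import math

def accel_12x(a, n):
    """Acceleration with P_n(x) = (1 - 2x)^n:
    S_n = 3^-n * sum_{k<n} (-1)^k c_k a_k,  c_k = sum_{j>k} 2^j C(n, j)."""
    c = [sum(2 ** j * math.comb(n, j) for j in range(k + 1, n + 1))
         for k in range(n)]
    return sum((-1) ** k * ck * a(k) for k, ck in enumerate(c)) / 3 ** n

# log 2 = sum (-1)^k/(k+1); n = 25 evaluations give |error| <= log(2)/3^25.
approx = accel_12x(lambda k: 1.0 / (k + 1), 25)
print(approx, math.log(2))
```

Twenty-five raw terms of the alternating series would only give about one correct digit.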
Chebyshev polynomials (see [1]), shifted to $[0,1]$,
provide a more efficient acceleration. They satisfy the relation
$$P_n(\sin^2 t) = \cos(2nt) \qquad (2)$$
and can be written explicitly in the form
$$P_n(x) = \sum_{j=0}^{n} \frac{n}{n+j} \binom{n+j}{2j} 4^j (-x)^j.$$
The relation (2) shows that $M_n = 1$, and $P_n(-1) \ge \frac{1}{2}(3+\sqrt{8})^n > 5.828^n/2$. This family
leads to the following acceleration process:
$$|S_n - S| \le \frac{2\,|S|}{5.828^n},$$
where
$$S_n = \frac{1}{P_n(-1)} \sum_{k=0}^{n-1} (-1)^k c_k a_k, \qquad c_k = \sum_{j=k+1}^{n} \frac{n}{n+j} \binom{n+j}{2j} 4^j.$$
Other families of orthogonal polynomials, such as Legendre
polynomials or Niven polynomials, may give interesting accelerations. More
details and results may be found in the very interesting paper [4].
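The Chebyshev coefficients grow quickly, so the following Python sketch (function name ours) uses exact rational arithmetic before converting to a float; it is applied to the Leibniz series for $\pi/4$:

```python
import math
from fractions import Fraction

def cheby_accel(a, n):
    """Acceleration with the shifted Chebyshev polynomials:
    p_j = n/(n+j) * C(n+j, 2j) * 4^j,  c_k = sum_{j>k} p_j,
    S_n = (1/P_n(-1)) * sum_{k<n} (-1)^k c_k a_k,  P_n(-1) = sum_j p_j."""
    p = [Fraction(n, n + j) * math.comb(n + j, 2 * j) * 4 ** j
         for j in range(n + 1)]
    c = [sum(p[k + 1:]) for k in range(n)]
    Sn = sum((-1) ** k * c[k] * a(k) for k in range(n)) / sum(p)
    return float(Sn)

# pi/4 = sum (-1)^k/(2k+1); the error decays like 2/(3 + sqrt(8))^n.
approx = cheby_accel(lambda k: Fraction(1, 2 * k + 1), 16)
print(approx, math.pi / 4)
```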
Once the choice of a sequence of polynomials is made, it can be applied to
compute the value of many alternating series, such as
$$\log(2) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k+1}, \qquad a_k = \frac{1}{k+1} = \int_0^1 x^k\,dx,$$
$$\frac{\pi}{4} = \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}, \qquad a_k = \frac{1}{2k+1} = \int_0^1 x^k\,\frac{x^{-1/2}}{2}\,dx,$$
$$(1-2^{1-s})\,\zeta(s) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1)^s}, \qquad a_k = \frac{1}{(k+1)^s} = \frac{1}{\Gamma(s)}\int_0^1 x^k\,|\log(x)|^{s-1}\,dx.$$
Notice that this last method is very efficient and may be used to compute
the value of the zeta function at values of $s$ with $\Re(s) > 0$ (see ). Another beautiful alternating series whose convergence can be
accelerated in this way is
If you know how to evaluate the zeta function at integer values, there is
an easy way to transform your original series into a geometrically converging one
(based on [5]). Suppose that you want to estimate a series of the form
$$S = A + \sum_{k=2}^{\infty} f\!\left(\frac{1}{k}\right),$$
where $A$ is a constant and where the analytic function $f$ may be written
$$f(z) = \sum_{n=2}^{\infty} a_n z^n;$$
then
$$S = A + \sum_{k=2}^{\infty} \left(\sum_{n=2}^{\infty} \frac{a_n}{k^n}\right) = A + \sum_{n=2}^{\infty} a_n \left(\sum_{k=2}^{\infty} \frac{1}{k^n}\right),$$
hence
$$S = A + \sum_{n=2}^{\infty} a_n\bigl(\zeta(n) - 1\bigr).$$
Observe that
$$\zeta(n) - 1 \sim \frac{1}{2^n},$$
therefore the transformed series has a geometric rate of convergence. This
rate may be improved if a few terms of the original series are computed;
this time the limit is given by
$$S = A + \sum_{k=2}^{M} f\!\left(\frac{1}{k}\right) + \sum_{k=M+1}^{\infty} f\!\left(\frac{1}{k}\right) = A + \sum_{k=2}^{M} f\!\left(\frac{1}{k}\right) + \sum_{n=2}^{\infty} a_n \left(\sum_{k=M+1}^{\infty} \frac{1}{k^n}\right),$$
and again
$$S = A + f\!\left(\frac{1}{2}\right) + \cdots + f\!\left(\frac{1}{M}\right) + \sum_{n=2}^{\infty} a_n \left(\zeta(n) - 1 - \frac{1}{2^n} - \cdots - \frac{1}{M^n}\right),$$
but this time
$$\zeta(n) - 1 - \frac{1}{2^n} - \cdots - \frac{1}{M^n} = \zeta(n, M+1) \sim \frac{1}{(M+1)^n},$$
where $\zeta(s,a)$ is the Hurwitz zeta function. The rate of
convergence remains geometric, but this rate may be made as large as desired
by taking a large enough value for $M$.
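As an illustration (the example is ours), take $f(z) = z^2/(1-z)$, for which every $a_n = 1$ and the original series telescopes: $S = \sum_{k \ge 2} 1/(k(k-1)) = 1$. The transform then predicts $\sum_{n \ge 2}(\zeta(n)-1) = 1$, which the following Python sketch checks using the Euler-Maclaurin zeta estimate from earlier in the chapter:

```python
def zeta(n, K=2000):
    """zeta(n), integer n >= 2, via a short Euler-Maclaurin correction:
    sum_{k<=K} k^-n + K^(1-n)/(n-1) - 1/(2 K^n)."""
    s = float(n)
    return (sum(k ** -s for k in range(1, K + 1))
            + K ** (1 - s) / (s - 1) - 0.5 * K ** -s)

# f(z) = z^2/(1-z) = sum_{n>=2} z^n, so a_n = 1 and the transformed
# series is sum_{n>=2} (zeta(n) - 1), with terms of size about 2^-n.
S = sum(zeta(n) - 1.0 for n in range(2, 60))
print(S)  # close to the exact value 1
```

Fewer than sixty zeta values reproduce the limit, while the raw series $\sum 1/(k(k-1))$ would need millions of terms for comparable accuracy.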