Clenshaw–Curtis quadrature

From Wikipedia, the free encyclopedia

Clenshaw–Curtis quadrature and Fejér quadrature are methods for numerical integration, or "quadrature", that are based on an expansion of the integrand in terms of Chebyshev polynomials. Equivalently, they employ a change of variables x = cosθ and use a discrete cosine transform (DCT) approximation for the cosine series. Besides having fast-converging accuracy comparable to Gaussian quadrature rules, Clenshaw–Curtis and Fejér quadrature naturally lead to nested quadrature rules (where different accuracy orders share points), which is important for both adaptive quadrature and multidimensional quadrature (cubature).

Briefly, the function f(x) to be integrated is evaluated at the N extrema or roots of a Chebyshev polynomial, and these values are used to construct a polynomial approximation to the function. This polynomial is then integrated exactly. In practice, the integration weights for the value of the function at each node are precomputed, and this computation can be performed in O(N log N) time by means of fast Fourier transform-related algorithms for the DCT.


Clenshaw–Curtis quadrature

A simple way of understanding the algorithm is to realize that Clenshaw–Curtis quadrature (proposed by those authors in 1960) amounts to integrating via the change of variable x = cosθ. The algorithm is normally expressed for integration of a function f(x) over the interval [−1, 1] (any other interval can be obtained by appropriate rescaling). For this integral, we can write:

\int_{-1}^1 f(x)\, dx = \int_0^\pi f(\cos \theta) \sin(\theta)\, d\theta .

That is, we have transformed the problem from integrating f(x) to one of integrating f(cosθ)sinθ. This can be performed if we know the cosine series for f(cosθ):

f(\cos \theta) = \frac{a_0}{2} + \sum_{k=1}^\infty a_k \cos (k\theta)

in which case the integral becomes:

\int_0^\pi f(\cos \theta) \sin(\theta)\, d\theta = a_0 + \sum_{k=1}^\infty \frac{2 a_{2k}}{1 - (2k)^2} .
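
The restriction to even k comes from integrating each cosine-series term against sinθ; writing cos(kθ)sinθ as a sum of sines gives, for k ≠ 1,

\int_0^\pi \cos(k\theta) \sin(\theta)\, d\theta = \frac{1 + (-1)^k}{1 - k^2} ,

which equals 2/(1 − k²) for even k and vanishes for odd k (the k = 1 term also integrates to zero). Applying this term by term to the cosine series for f(cosθ) yields exactly the a_0 and a_{2k} contributions above.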

Of course, in order to calculate the cosine series coefficients

a_k = \frac{2}{\pi} \int_0^\pi f(\cos \theta) \cos(k \theta)\, d\theta

one must again perform a numerical integration, so at first this may not seem to have simplified the problem. Unlike computation of arbitrary integrals, however, Fourier-series integrals of periodic functions (like f(cosθ), by construction), up to the Nyquist frequency k = N, are computed accurately at the N + 1 equally spaced and equally weighted points θn = nπ / N for n = 0,\ldots,N (except that the endpoints are weighted by 1/2, to avoid double-counting). That is, we approximate the cosine-series integral by the type-I discrete cosine transform (DCT):

a_k \approx \frac{2}{N} \left[ \frac{f(1)}{2} + \frac{f(-1)}{2} (-1)^k +  \sum_{n=1}^{N-1} f(\cos[n\pi/N]) \cos(n k \pi/N) \right]

for k = 0,\ldots,N and then use the formula above for the integral in terms of these ak.
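
As a concrete illustration, here is a minimal Python sketch of the rule described above (the function name and structure are mine, not from the article). It evaluates the ak with the DCT-I formula by direct O(N²) summation for clarity; an FFT-based DCT would reduce this to O(N log N). One convention to note: the Nyquist coefficient a_N produced by the DCT-I formula must be halved before use, since the k = N term is otherwise double-counted.

```python
import math

def clenshaw_curtis(f, N):
    """Integrate f over [-1, 1] with the (N+1)-point Clenshaw-Curtis rule."""
    # Cosine-series coefficients a_k, k = 0..N, from the type-I DCT formula.
    a = []
    for k in range(N + 1):
        s = 0.5 * f(1.0) + 0.5 * ((-1) ** k) * f(-1.0)
        for n in range(1, N):
            s += f(math.cos(n * math.pi / N)) * math.cos(n * k * math.pi / N)
        a.append(2.0 * s / N)
    a[N] *= 0.5  # the Nyquist term is double-counted by the DCT-I formula
    # Integrate the cosine series term by term: only even k contribute.
    total = a[0]
    for k in range(2, N + 1, 2):
        total += 2.0 * a[k] / (1.0 - k * k)
    return total

# Smooth test integrand: the integral of exp(x) over [-1, 1] is e - 1/e.
approx = clenshaw_curtis(math.exp, 8)
```

With N = 8 this already agrees with the exact value e − 1/e to better than 10⁻⁶, illustrating the rapid convergence for smooth integrands.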

Connection to Chebyshev polynomials

The reason that this is connected to the Chebyshev polynomials Tk(x) is that, by definition, Tk(cosθ) = cos(kθ), and so the cosine series above is really an approximation of f(x) by Chebyshev polynomials:

f(x) = \frac{a_0}{2} T_0(x) + \sum_{k=1}^\infty a_k T_k(x),

and thus we are "really" integrating f(x) by integrating its approximate expansion in terms of Chebyshev polynomials. The evaluation points xn = cos(nπ / N) correspond to the extrema of the Chebyshev polynomial TN(x).
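
The defining identity is easy to verify numerically; the sketch below (my own helper, not from the article) evaluates Tk by the standard three-term recurrence and checks Tk(cosθ) = cos(kθ):

```python
import math

def cheb_T(k, x):
    # Chebyshev polynomial T_k(x) via the three-term recurrence
    # T_0(x) = 1, T_1(x) = x, T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x)
    t_prev, t_cur = 1.0, x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
    return t_cur

# T_k(cos(theta)) == cos(k*theta) for every k and theta
theta = 0.7
for k in range(8):
    assert abs(cheb_T(k, math.cos(theta)) - math.cos(k * theta)) < 1e-12
```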

The fact that such Chebyshev approximation is just a cosine series under a change of variables is responsible for the rapid convergence of the approximation as more terms Tk(x) are included. A cosine series converges very rapidly for functions that are even, periodic, and sufficiently smooth. This is true here, since f(cosθ) is even and periodic in θ by construction, and is k-times differentiable everywhere if f(x) is k-times differentiable on [−1, 1]. (In contrast, directly applying a cosine-series expansion to f(x) instead of f(cosθ) will usually not converge rapidly, because the slope of the even-periodic extension would generally be discontinuous.)

Fejér quadrature

Fejér proposed two quadrature rules very similar to Clenshaw–Curtis quadrature, but much earlier (in 1933).

Of these two, Fejér's "second" quadrature rule is nearly identical to Clenshaw–Curtis. The only difference is that the endpoint values f(−1) and f(1) are set to zero. That is, Fejér used only the interior extrema of the Chebyshev polynomials, i.e. the true stationary points.

Fejér's "first" quadrature rule evaluates the ak by evaluating f(cosθ) at a different set of equally-spaced points, halfway between the extrema: θn = (n + 0.5)π / N for 0 \leq n < N. These are the roots of TN(cosθ), and are known as the Chebyshev nodes. (These equally-spaced midpoints are the only other choice of quadrature points that preserve both the even symmetry of the cosine transform and the translational symmetry of the periodic Fourier series.) This leads to a formula:

a_k \approx \frac{2}{N} \sum_{n=0}^{N-1} f(\cos[(n+0.5)\pi/N]) \cos[(n+0.5) k \pi/N]

which is precisely the type-II DCT. However, Fejér's first quadrature rule is not nested: the evaluation points for 2N do not coincide with any of the evaluation points for N, unlike Clenshaw–Curtis quadrature or Fejér's second rule.
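
In code, Fejér's first rule differs from a Clenshaw–Curtis implementation only in the sample points and the DCT type; a minimal Python sketch (names are mine, not from the article) is:

```python
import math

def fejer1(f, N):
    """Integrate f over [-1, 1] with Fejer's N-point first rule."""
    # Coefficients a_k, k = 0..N-1, from the type-II DCT at the
    # Chebyshev roots theta_n = (n + 0.5) * pi / N.
    a = []
    for k in range(N):
        s = sum(f(math.cos((n + 0.5) * math.pi / N))
                * math.cos((n + 0.5) * k * math.pi / N) for n in range(N))
        a.append(2.0 * s / N)
    # Same even-k integration formula as Clenshaw-Curtis, truncated at N-1.
    total = a[0]
    for k in range(2, N, 2):
        total += 2.0 * a[k] / (1.0 - k * k)
    return total
```

Note that no endpoint values f(±1) are required, which can be convenient when the integrand has integrable endpoint singularities.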

Despite the fact that Fejér discovered these techniques before Clenshaw and Curtis, the name "Clenshaw–Curtis quadrature" has become standard.

Comparison to Gaussian quadrature

The classic method of Gaussian quadrature evaluates the integrand at N + 1 points and is constructed to integrate polynomials up to degree 2N + 1 exactly. In contrast, Clenshaw–Curtis quadrature, above, evaluates the integrand at N + 1 points and integrates polynomials exactly only up to degree N. It may seem, therefore, that Clenshaw–Curtis is intrinsically worse than Gaussian quadrature, but in practice this does not seem to be the case.
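
The degree-of-exactness claim can be checked numerically. The compact sketch below (a direct O(N²) evaluation of the Clenshaw–Curtis rule; the helper name and the halving of the Nyquist coefficient, a standard DCT-I convention, are my additions) integrates x⁴ exactly with the 5-point rule (N = 4), while x⁶ shows a visible error:

```python
import math

def cc_rule(f, N):
    # (N+1)-point Clenshaw-Curtis rule via the DCT-I coefficient formula
    a = [2.0 / N * (0.5 * f(1.0) + 0.5 * (-1) ** k * f(-1.0)
         + sum(f(math.cos(n * math.pi / N)) * math.cos(n * k * math.pi / N)
               for n in range(1, N)))
         for k in range(N + 1)]
    a[N] *= 0.5  # halve the Nyquist coefficient (DCT-I convention)
    return a[0] + sum(2.0 * a[k] / (1.0 - k * k) for k in range(2, N + 1, 2))

# Degree of exactness: with N = 4 (5 points), x^4 is integrated exactly,
# but x^6 is not. (A Gaussian rule with 5 points is exact up to degree 9.)
err4 = abs(cc_rule(lambda x: x ** 4, 4) - 2.0 / 5.0)  # near machine precision
err6 = abs(cc_rule(lambda x: x ** 6, 4) - 2.0 / 7.0)  # noticeable error
```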

In practice, several authors have observed that Clenshaw–Curtis can have accuracy comparable to that of Gaussian quadrature for the same number of points. This is possible because most numeric integrands are not polynomials (especially since polynomials can be integrated analytically), and the approximation of many functions in terms of Chebyshev polynomials converges rapidly (see Chebyshev approximation). In fact, recent theoretical results (Trefethen, 2008) argue that both Gaussian and Clenshaw–Curtis quadrature have error bounded by O([2N]^(−k)/k) for a k-times differentiable integrand.

One often-cited advantage of Clenshaw–Curtis quadrature is that the quadrature weights can be evaluated in O(N log N) time by fast Fourier transform algorithms (or their analogues for the DCT), whereas the Gaussian quadrature weights require O(N²) time to compute. As a practical matter, however, high-order numeric integration is rarely performed by simply evaluating a quadrature formula for very large N. Instead, one usually employs an adaptive quadrature scheme that first evaluates the integral to low order, and then successively refines the accuracy only in regions where the integral is inaccurate. To evaluate the accuracy of the quadrature, one compares the answer with that of a quadrature rule of even lower order. Ideally, this lower-order quadrature rule evaluates the integrand at a subset of the original N points, to minimize the integrand evaluations. This is called a nested quadrature rule, and here Clenshaw–Curtis has the advantage that the rule for order N uses a subset of the points from order 2N. In contrast, Gaussian quadrature rules are not naturally nested, and so one must employ Gauss–Kronrod quadrature or similar methods. Nested rules are also important for sparse grids in multidimensional quadrature.
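
The nesting property is visible in the nodes themselves: the order-N Clenshaw–Curtis points are cos(nπ/N), and taking n → 2n shows that every one of them reappears in the order-2N rule. A quick check (the helper name is mine, not from the article):

```python
import math

def cc_points(N):
    # Clenshaw-Curtis nodes: extrema of T_N, i.e. cos(n*pi/N), n = 0..N
    return [math.cos(n * math.pi / N) for n in range(N + 1)]

coarse, fine = cc_points(8), cc_points(16)
# Every order-8 node reappears among the order-16 nodes (nesting),
# so refining from N to 2N reuses all previous integrand evaluations.
assert all(any(abs(x - y) < 1e-12 for y in fine) for x in coarse)
```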

Integration with weight functions

More generally, one can pose the problem of integrating an arbitrary f(x) against a fixed weight function w(x) that is known ahead of time:

\int_{-1}^1 f(x) w(x)\, dx = \int_0^\pi f(\cos \theta) w(\cos\theta) \sin(\theta)\, d\theta .

The most common case is w(x) = 1, as above, but in certain applications a different weight function is desirable. The basic reason is that, since w(x) can be taken into account a priori, the integration error can be made to depend only on the accuracy in approximating f(x), regardless of how badly behaved the weight function might be.

Clenshaw–Curtis quadrature can be generalized to this case as follows. As before, it works by finding the cosine-series expansion of f(cosθ) via a DCT, and then integrating each term in the cosine series. Now, however, these integrals are of the form

W_k = \int_0^\pi w(\cos \theta) \cos(k \theta) \sin(\theta)\, d\theta .

For most w(x), this integral cannot be computed analytically, unlike before. Since the same weight function is generally used for many integrands f(x), however, one can afford to compute these Wk numerically to high accuracy beforehand. Moreover, since w(x) is generally specified analytically, one can sometimes employ specialized methods to compute Wk.
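
As a sketch of how precomputed Wk are used, consider the illustrative weight w(x) = √(1 − x²) (my choice, not from the article), for which w(cosθ) = sinθ and the Wk happen to have a closed form: W0 = π/2, W2 = −π/4, and all other Wk vanish. The weighted integral is then a short sum of the DCT coefficients against these Wk:

```python
import math

def cc_weighted(f, N):
    """Integrate f(x) * sqrt(1 - x^2) over [-1, 1] (illustrative weight).

    For w(x) = sqrt(1 - x^2), W_k = int_0^pi sin(t)^2 cos(k t) dt has the
    closed form W_0 = pi/2, W_2 = -pi/4, and all other W_k = 0.
    Assumes N >= 3."""
    # DCT-I coefficients a_k of f(cos theta), as in plain Clenshaw-Curtis.
    a = []
    for k in range(N + 1):
        s = 0.5 * f(1.0) + 0.5 * ((-1) ** k) * f(-1.0)
        for n in range(1, N):
            s += f(math.cos(n * math.pi / N)) * math.cos(n * k * math.pi / N)
        a.append(2.0 * s / N)
    a[N] *= 0.5  # halve the Nyquist coefficient (DCT-I convention)
    # Integral = (a_0 / 2) W_0 + sum_{k>=1} a_k W_k; only W_0, W_2 survive.
    return (a[0] / 2.0) * (math.pi / 2.0) + a[2] * (-math.pi / 4.0)
```

For f(x) = x² this reproduces the exact value ∫ x²√(1 − x²) dx = π/8; for a general w(x), the Wk would instead be tabulated numerically once, to high accuracy, and reused for every integrand.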

For example, special methods have been developed to apply Clenshaw–Curtis quadrature to integrands of the form f(x)w(x) with a weight function w(x) that is highly oscillatory, e.g. a sinusoid or Bessel function (see, e.g., Evans & Webster, 1999). This is useful for high-accuracy Fourier series and Fourier–Bessel series computation, where simple w(x) = 1 quadrature methods are problematic because of the high accuracy required to resolve the contribution of rapid oscillations. Here, the rapid-oscillation part of the integrand is taken into account via specialized methods for Wk, whereas the unknown function f(x) is usually better behaved.

Another case where weight functions are especially useful is if the integrand is unknown but has a known singularity of some form, e.g. a known discontinuity or integrable divergence (such as 1/√x) at some point. In this case the singularity can be pulled into the weight function w(x) and its analytical properties can be used to compute Wk accurately beforehand.

Note that Gaussian quadrature can also be adapted for various weight functions, but the technique is somewhat different. In Clenshaw–Curtis quadrature, the integrand is always evaluated at the same set of points regardless of w(x), corresponding to the extrema or roots of a Chebyshev polynomial. In Gaussian quadrature, different weight functions lead to different orthogonal polynomials, and thus different roots where the integrand is evaluated.

References

