2010-07-26  Ralf Wildenhues
	* doc/blas.texi, doc/bspline.texi, doc/complex.texi, doc/dwt.texi,
	doc/fftalgorithms.tex, doc/fft.texi, doc/fitting.texi,
	doc/gsl-design.texi, doc/gsl-ref.texi, doc/histogram.texi,
	doc/integration.texi, doc/linalg.texi, doc/montecarlo.texi,
	doc/multifit.texi, doc/multimin.texi, doc/ntuple.texi,
	doc/randist.texi, doc/statnotes.tex: Fix typos.

diff -ru orig/gsl-1.14/doc/blas.texi gsl-1.14/doc/blas.texi
--- orig/gsl-1.14/doc/blas.texi	2010-03-10 11:57:12.000000000 +0100
+++ gsl-1.14/doc/blas.texi	2010-07-26 14:41:50.000000000 +0200
@@ -22,7 +22,7 @@
 functions. The full @sc{blas} functionality for band-format and
 packed-format matrices is available through the low-level @sc{cblas}
 interface. Similarly, GSL vectors are restricted to positive strides,
-whereas the the low-level @sc{cblas} interface supports negative
+whereas the low-level @sc{cblas} interface supports negative
 strides as specified in the @sc{blas}
 standard.@footnote{In the low-level @sc{cblas} interface, a negative
 stride accesses the vector elements in reverse order, i.e. the
 @math{i}-th element is given by
diff -ru orig/gsl-1.14/doc/bspline.texi gsl-1.14/doc/bspline.texi
--- orig/gsl-1.14/doc/bspline.texi	2010-03-10 11:57:12.000000000 +0100
+++ gsl-1.14/doc/bspline.texi	2010-07-26 14:46:25.000000000 +0200
@@ -227,7 +227,7 @@
 The following program computes a linear least squares fit to data using
 cubic B-spline basis functions with uniform breakpoints. The data is
 generated from the curve @math{y(x) = \cos{(x)} \exp{(-x/10)}} on
-the interval @math{[0, 15]} with gaussian noise added.
+the interval @math{[0, 15]} with Gaussian noise added.

 @example
 @verbatiminclude examples/bspline.c
diff -ru orig/gsl-1.14/doc/complex.texi gsl-1.14/doc/complex.texi
--- orig/gsl-1.14/doc/complex.texi	2010-03-10 11:57:12.000000000 +0100
+++ gsl-1.14/doc/complex.texi	2010-07-26 14:56:25.000000000 +0200
@@ -60,7 +60,7 @@
 be mapped correctly onto packed complex arrays.

 @deftypefun gsl_complex gsl_complex_rect (double @var{x}, double @var{y})
-This function uses the rectangular cartesian components
+This function uses the rectangular Cartesian components
 (@var{x},@var{y}) to return the complex number @math{z = x + i y}.
 @inlinefn{}
 @end deftypefun
@@ -77,7 +77,7 @@
 @end defmac

 @defmac GSL_SET_COMPLEX (@var{zp}, @var{x}, @var{y})
-This macro uses the cartesian components (@var{x},@var{y}) to set the
+This macro uses the Cartesian components (@var{x},@var{y}) to set the
 real and imaginary parts of the complex number pointed to by @var{zp}.
 For example,
diff -ru orig/gsl-1.14/doc/dwt.texi gsl-1.14/doc/dwt.texi
--- orig/gsl-1.14/doc/dwt.texi	2010-03-10 11:57:12.000000000 +0100
+++ gsl-1.14/doc/dwt.texi	2010-07-26 15:00:13.000000000 +0200
@@ -276,7 +276,7 @@
 transform on the rows of the matrix, followed by a separate complete
 discrete wavelet transform on the columns of the resulting
 row-transformed matrix. This procedure uses the same ordering as a
-two-dimensional fourier transform.
+two-dimensional Fourier transform.

 The ``non-standard'' transform is performed in interleaved passes on the
 rows and columns of the matrix for each level of the transform. The
diff -ru orig/gsl-1.14/doc/fftalgorithms.tex gsl-1.14/doc/fftalgorithms.tex
--- orig/gsl-1.14/doc/fftalgorithms.tex	2010-03-10 11:57:13.000000000 +0100
+++ gsl-1.14/doc/fftalgorithms.tex	2010-07-26 14:14:53.000000000 +0200
@@ -21,7 +21,7 @@
 \section{Introduction}

 Fast Fourier Transforms (FFTs) are efficient algorithms for
-calculating the discrete fourier transform (DFT),
+calculating the discrete Fourier transform (DFT),
 %
 \begin{eqnarray}
 h_a &=& \mathrm{DFT}(g_b) \\
@@ -29,9 +29,9 @@
 &=& \sum_{b=0}^{N-1} g_b W_N^{ab} \qquad W_N= \exp(-2\pi i/N)
 \end{eqnarray}
 %
-The DFT usually arises as an approximation to the continuous fourier
+The DFT usually arises as an approximation to the continuous Fourier
 transform when functions are sampled at discrete intervals in space or
-time. The naive evaluation of the discrete fourier transform is a
+time. The naive evaluation of the discrete Fourier transform is a
 matrix-vector multiplication ${\mathbf W}\vec{g}$, and would take
 $O(N^2)$ operations for $N$ data-points. The general principle of the
 Fast Fourier Transform algorithms is to use a divide-and-conquer
@@ -334,7 +334,7 @@
 \subsection{Radix-2 Decimation-in-Time (DIT)}
 %
-To derive the the decimation-in-time algorithm we start by separating
+To derive the decimation-in-time algorithm we start by separating
 out the most significant bit of the index $b$,
 %
 \begin{equation}
@@ -504,7 +504,7 @@
 So for an in-place pass our storage has to be arranged so that the two
 outputs $g_1(a_0,\dots)$ overwrite the two input terms
 $g([b_{n-1},\dots])$. Note that the order of $a$ is reversed from the
-natural order of $b$. i.e. the least significant bit of $a$
+natural order of $b$, i.e.@: the least significant bit of $a$
 replaces the most significant bit of $b$. This is inconvenient
 because $a$ occurs in its natural order in all the exponentials,
 $W^{ab}$. We could keep track of both $a$ and its bit-reverse,
@@ -1362,7 +1362,7 @@
 $p_{i-1}$ independent multiplications of $PD$ on $q_{i-1}$ different
 subsets of $t$. The index $\mu$ of $t(\lambda,\mu)$ which runs from 0
 to $m$ will include $q_i$ copies of each $PD$ operation because
-$m=p_{i-1}q$. i.e. we can split the index $\mu$ further into $\mu = a
+$m=p_{i-1}q$, i.e.@: we can split the index $\mu$ further into $\mu = a
 p_{i-1} + b$, where $a = 0 \dots q-1$ and $b=0 \dots p_{i-1}$,
 %
 \begin{eqnarray}
@@ -1571,7 +1571,7 @@
 $\omega^a_{q_{i-1}}$ are taken out of the {\tt trig} array.

 To compute the inverse transform we go back to the definition of the
-fourier transform and note that the inverse matrix is just the complex
+Fourier transform and note that the inverse matrix is just the complex
 conjugate of the forward matrix (with a factor of $1/N$),
 %
 \begin{equation}
@@ -1784,7 +1784,7 @@
 for computing a DFT~\cite{singleton}. Although it is an $O(N^2)$
 algorithm it does reduce the number of multiplications by a factor of
 4 compared with a naive evaluation of the DFT. If we look at the
-general stucture of a DFT matrix, shown schematically below,
+general structure of a DFT matrix, shown schematically below,
 %
 \begin{equation}
 \left(
@@ -2490,7 +2490,7 @@
 \subsection{Mixed-Radix FFTs for real data}
 %
 As discussed earlier the radix-2 decimation-in-time algorithm had the
-special property that its intermediate passes are interleaved fourier
+special property that its intermediate passes are interleaved Fourier
 transforms of the original data, and this generalizes to the
 mixed-radix algorithm. The complex mixed-radix algorithm that we
 derived earlier was a decimation-in-frequency algorithm, but we can
@@ -2580,7 +2580,7 @@
 v^{(i)} = (W_{p_i} \otimes I_{q_i}) z
 \end{equation}
 %
-Each intermediate stage will be a set of $q_i$ interleaved fourier
+Each intermediate stage will be a set of $q_i$ interleaved Fourier
 transforms, each of length $p_i$. We can prove this result by
 induction. First we assume that the result is true for $v^{(i-1)}$,
 %
@@ -2634,7 +2634,7 @@
 explicitly, and induction then shows that the result is true for all
 $i$. As discussed for the radix-2 algorithm this result is important
 because if the initial data $z$ is real then each intermediate pass is
-a set of interleaved fourier transforms of $z$, having half-complex
+a set of interleaved Fourier transforms of $z$, having half-complex
 symmetries (appropriately applied in the subspaces of the Kronecker
 product). Consequently only $N$ real numbers are needed to store the
 intermediate and final results.
diff -ru orig/gsl-1.14/doc/fft.texi gsl-1.14/doc/fft.texi
--- orig/gsl-1.14/doc/fft.texi	2010-03-10 11:57:13.000000000 +0100
+++ gsl-1.14/doc/fft.texi	2010-07-26 15:06:24.000000000 +0200
@@ -31,7 +31,7 @@
 @cindex FFT mathematical definition

 Fast Fourier Transforms are efficient algorithms for
-calculating the discrete fourier transform (DFT),
+calculating the discrete Fourier transform (DFT),
 @tex
 \beforedisplay
 $$
@@ -46,13 +46,13 @@
 @end example
 @end ifinfo

-The DFT usually arises as an approximation to the continuous fourier
+The DFT usually arises as an approximation to the continuous Fourier
 transform when functions are sampled at discrete intervals in space or
-time. The naive evaluation of the discrete fourier transform is a
+time. The naive evaluation of the discrete Fourier transform is a
 matrix-vector multiplication @c{$W\vec{z}$}
 @math{W\vec@{z@}}. A general matrix-vector multiplication takes
-@math{O(n^2)} operations for @math{n} data-points. Fast fourier
+@math{O(n^2)} operations for @math{n} data-points. Fast Fourier
 transform algorithms use a divide-and-conquer strategy to factorize the
 matrix @math{W} into smaller sub-matrices, corresponding to the integer
 factors of the length @math{n}. If @math{n} can be factorized into a
@@ -64,7 +64,7 @@
 All the FFT functions offer three types of transform: forwards, inverse
 and backwards, based on the same mathematical definitions. The
-definition of the @dfn{forward fourier transform},
+definition of the @dfn{forward Fourier transform},
 @c{$x = \hbox{FFT}(z)$}
 @math{x = FFT(z)}, is,
 @tex
@@ -82,7 +82,7 @@
 @end ifinfo

 @noindent
-and the definition of the @dfn{inverse fourier transform},
+and the definition of the @dfn{inverse Fourier transform},
 @c{$x = \hbox{IFFT}(z)$}
 @math{x = IFFT(z)}, is,
 @tex
@@ -109,7 +109,7 @@
 exponential in the transform/ inverse-transform pair. GSL follows the same
 convention as @sc{fftpack}, using a negative exponential for the forward
 transform. The advantage of this convention is that the inverse
-transform recreates the original function with simple fourier
+transform recreates the original function with simple Fourier
 synthesis. Numerical Recipes uses the opposite convention, a positive
 exponential in the forward transform.
@@ -269,7 +269,7 @@
 @comment @subsection Example of using radix-2 FFT routines for complex data
 Here is an example program which computes the FFT of a short pulse in a
-sample of length 128. To make the resulting fourier transform real the
+sample of length 128. To make the resulting Fourier transform real the
 pulse is defined for equal positive and negative times (@math{-10}
 @dots{} @math{10}), where the negative times wrap around the end of the
 array.
@@ -288,7 +288,7 @@
 the same plot as the input. Only the real part is shown, by the choice
 of the input data the imaginary part is zero. Allowing for the
 wrap-around of negative times at @math{t=128}, and working in units of
-@math{1/n}, the DFT approximates the continuum fourier transform, giving
+@math{1/n}, the DFT approximates the continuum Fourier transform, giving
 a modulated sine function.
 @iftex
 @tex
@@ -303,7 +303,7 @@
 @center @image{fft-complex-radix2-t,2.8in}
 @center @image{fft-complex-radix2-f,2.8in}
 @quotation
-A pulse and its discrete fourier transform, output from
+A pulse and its discrete Fourier transform, output from
 the example program.
 @end quotation
 @end iftex
@@ -513,7 +513,7 @@
 @cindex FFT of real data
 The functions for real data are similar to those for complex data.
 However, there is an important difference between forward and inverse
-transforms. The fourier transform of a real sequence is not real. It is
+transforms. The Fourier transform of a real sequence is not real. It is
 a complex sequence with a special symmetry:
 @tex
 \beforedisplay
 $$
@@ -540,7 +540,7 @@
 Functions in @code{gsl_fft_real} compute the frequency coefficients of a
 real sequence. The half-complex coefficients @math{c} of a real sequence
-@math{x} are given by fourier analysis,
+@math{x} are given by Fourier analysis,
 @tex
 \beforedisplay
 $$
@@ -557,7 +557,7 @@
 @end ifinfo

 @noindent
 Functions in @code{gsl_fft_halfcomplex} compute inverse or backwards
-transforms. They reconstruct real sequences by fourier synthesis from
+transforms. They reconstruct real sequences by Fourier synthesis from
 their half-complex frequency coefficients, @math{c},
 @tex
 \beforedisplay
@@ -832,7 +832,7 @@
 array of length @var{n}, using a mixed radix decimation-in-frequency
 algorithm. For @code{gsl_fft_real_transform} @var{data} is an array of
 time-ordered real data. For @code{gsl_fft_halfcomplex_transform}
-@var{data} contains fourier coefficients in the half-complex ordering
+@var{data} contains Fourier coefficients in the half-complex ordering
 described above. There is no restriction on the length @var{n}.
 Efficient modules are provided for subtransforms of length 2, 3, 4 and
 5. Any remaining factors are computed with a slow, @math{O(n^2)},
@@ -902,14 +902,14 @@
 Here is an example program using @code{gsl_fft_real_transform} and
 @code{gsl_fft_halfcomplex_inverse}. It generates a real signal in the
-shape of a square pulse. The pulse is fourier transformed to frequency
+shape of a square pulse. The pulse is Fourier transformed to frequency
 space, and all but the lowest ten frequency components are removed from
-the array of fourier coefficients returned by
+the array of Fourier coefficients returned by
 @code{gsl_fft_real_transform}.

-The remaining fourier coefficients are transformed back to the
+The remaining Fourier coefficients are transformed back to the
 time-domain, to give a filtered version of the square pulse. Since
-fourier coefficients are stored using the half-complex symmetry both
+Fourier coefficients are stored using the half-complex symmetry both
 positive and negative frequencies are removed and the final filtered
 signal is also real.
@@ -935,7 +935,7 @@
 @itemize @w{}
 @item
 P. Duhamel and M. Vetterli.
-Fast fourier transforms: A tutorial review and a state of the art.
+Fast Fourier transforms: A tutorial review and a state of the art.
 @cite{Signal Processing}, 19:259--299, 1990.
 @end itemize
@@ -972,7 +972,7 @@
 @itemize @w{}
 @item
 Clive Temperton.
-Self-sorting mixed-radix fast fourier transforms.
+Self-sorting mixed-radix fast Fourier transforms.
 @cite{Journal of Computational Physics}, 52(1):1--23, 1983.
 @end itemize
@@ -984,13 +984,13 @@
 @item
 Henrik V. Sorenson, Douglas L. Jones, Michael T. Heideman, and C.
 Sidney Burrus.
-Real-valued fast fourier transform algorithms.
+Real-valued fast Fourier transform algorithms.
 @cite{IEEE Transactions on Acoustics, Speech, and Signal Processing},
 ASSP-35(6):849--863, 1987.

 @item
 Clive Temperton.
-Fast mixed-radix real fourier transforms.
+Fast mixed-radix real Fourier transforms.
 @cite{Journal of Computational Physics}, 52:340--350, 1983.
 @end itemize
diff -ru orig/gsl-1.14/doc/fitting.texi gsl-1.14/doc/fitting.texi
--- orig/gsl-1.14/doc/fitting.texi	2010-03-10 11:57:13.000000000 +0100
+++ gsl-1.14/doc/fitting.texi	2010-07-26 15:07:05.000000000 +0200
@@ -50,7 +50,7 @@
 weight factors @math{w_i} are given by @math{w_i = 1/\sigma_i^2},
 where @math{\sigma_i} is the experimental error on the data-point
 @math{y_i}. The errors are assumed to be
-gaussian and uncorrelated.
+Gaussian and uncorrelated.
 For unweighted data the chi-squared sum is computed without any weight
 factors.

 The fitting routines return the best-fit parameters @math{c} and their
@@ -60,7 +60,7 @@
 @cindex covariance matrix, linear fits
 @c{$C_{ab} = \langle \delta c_a \delta c_b \rangle$}
 @math{C_@{ab@} = <\delta c_a \delta c_b>} where @c{$\langle \, \rangle$}
-@math{< >} denotes an average over the gaussian error distributions of the underlying datapoints.
+@math{< >} denotes an average over the Gaussian error distributions of the underlying datapoints.

 The covariance matrix is calculated by error propagation from the data
 errors @math{\sigma_i}. The change in a fitted parameter @math{\delta
diff -ru orig/gsl-1.14/doc/gsl-design.texi gsl-1.14/doc/gsl-design.texi
--- orig/gsl-1.14/doc/gsl-design.texi	2010-03-10 11:57:13.000000000 +0100
+++ gsl-1.14/doc/gsl-design.texi	2010-07-26 15:13:09.000000000 +0200
@@ -386,7 +386,7 @@
 @c reliable and accurate (but not necessarily fast or efficient) estimation
 @c of values for special functions, explicitly using Taylor series, asymptotic
 @c expansions, continued fraction expansions, etc. As well as these routines,
-@c fast approximations will also be provided, primarily based on Chebyschev
+@c fast approximations will also be provided, primarily based on Chebyshev
 @c polynomials and ratios of polynomials. In this vision, the approximations
 @c will be the "standard" routines for the users, and the exact (so-called)
 @c routines will be used for verification of the approximations. It may also
@@ -413,7 +413,7 @@
 @c @item Direct integration

-@c @item Monte carlo methods
+@c @item Monte Carlo methods

 @c @item Simulated annealing
@@ -459,12 +459,12 @@
 "closed". In mathematics objects can be combined and operated on in an
 infinite number of ways. For example, I can take the derivative of a
 scalar field with respect to a vector and the derivative of a vector
-field wrt a scalar (along a path).
+field wrt.@: a scalar (along a path).

 There is a definite tendency to unconsciously try to reproduce all
 these possibilities in a numerical library, by adding new features one
 by one. After all, it is always easy enough to support just one more
-feature.... so why not?
+feature @dots{} so why not?

 Looking at the big picture, no-one would start out by saying "I want
 to be able to represent every possible mathematical object and operation
@@ -660,7 +660,7 @@
 should be to Knuth, references concerning statistics should be to
 Kendall & Stuart, references concerning special functions should be to
 Abramowitz & Stegun (Handbook of Mathematical Functions AMS-55), etc.
-Whereever possible refer to Abramowitz & Stegun rather than other
+Wherever possible refer to Abramowitz & Stegun rather than other
 reference books because it is a public domain work, so it is
 inexpensive and freely redistributable.
@@ -711,16 +711,16 @@
 and Texinfo. This is a problem if you want to write something like
 @address@hidden@}}.

-To work around it you can preceed the math command with a special
+To work around it you can precede the math command with a special
 macro @code{@@c} which contains the explicit TeX commands you want
 to use (no restrictions), and put an ASCII approximation into
 the @code{@@math} command (you can write @code{@@@{} and @code{@@@}}
 there for the left and right braces). The explicit TeX
-commands are used in the TeX ouput and the argument of @code{@@math}
+commands are used in the TeX output and the argument of @code{@@math}
 in the plain info output.

 Note that the @code{@@address@hidden@}} macro must go at the end of the
-preceeding line, because everything else after it is ignored---as far
+preceding line, because everything else after it is ignored---as far
 as texinfo is concerned it's actually a 'comment'. The comment command
 @@c has been modified to capture a TeX expression which is output by
 the next @@math command. For ordinary comments use the @@comment
@@ -763,7 +763,7 @@
 Any installed executables (utility programs etc) should have the prefix
 @code{gsl-} (with a hyphen, not an underscore).

-All function names, variables, etc should be in lower case. Macros and
+All function names, variables, etc.@: should be in lower case. Macros and
 preprocessor variables should be in upper case. Some common conventions
 in variable and function names:
@@ -816,12 +816,12 @@
 Note: it is possible to define an abstract base class easily in C,
 using function pointers. See the rng directory for an example.

-When reimplementing public domain fortran code, please try to introduce
+When reimplementing public domain Fortran code, please try to introduce
 the appropriate object concepts as structs, rather than translating the
 code literally in terms of arrays. The structs can be useful just
 within the file, you don't need to export them to the user.

-For example, if a fortran program repeatedly uses a subroutine like,
+For example, if a Fortran program repeatedly uses a subroutine like,

 @example
 SUBROUTINE RESIZE (X, K, ND, K1)
@@ -954,10 +954,10 @@
 @section Error estimates

 In the special functions error bounds are given as twice the expected
-``gaussian'' error. i.e. 2-sigma, so the result is inside the error
+``Gaussian'' error, i.e.@: 2-sigma, so the result is inside the error
 98% of the time. People expect the true value to be within +/- the
 quoted error (this wouldn't be the case 32% of the time for 1 sigma).
-Obviously the errors are not gaussian but a factor of two works well
+Obviously the errors are not Gaussian but a factor of two works well
 in practice.

 @node Exceptions and Error handling, Persistence, Error estimates, Design
@@ -1293,7 +1293,7 @@
 significant or not.

 The only place where it is acceptable to use constants like
address@hidden is in function approximations, (e.g. taylor
address@hidden is in function approximations, (e.g.@: Taylor
 series, asymptotic expansions, etc). In these cases it is not an
 arbitrary constant, but an inherent part of the algorithm.
@@ -1406,7 +1406,7 @@
 @smallexample
 Yoyodyne, Inc., hereby disclaims all copyright interest in the
 software `GNU Scientific Library - Legendre Functions' (routines for computing
-legendre functions numerically in C) written by James Hacker.
+Legendre functions numerically in C) written by James Hacker.