Tentative schedule
(abstracts are given below; see also the pdf)

Thursday, June 22, 2006

9h00-9h30     Welcome of participants
9h30-10h20    L. Reichel (Kent), "Iterative Methods for Large Ill-Posed Problems"
10h20-11h00   Coffee break
11h00-11h30   S. Serra-Capizzano (Como), "Jordan canonical form of the Google matrix: a potential contribution to the PageRank computation and to a general matrix theoretic result"
11h30-12h00   P. Graves-Morris (Bradford), "BiCGStab, VPAStab and an adaptation to mildly nonlinear systems"
12h00-12h30   D. Bini (Pisa), "Computing curve intersection by means of simultaneous iterations"
12h30-14h30   Lunch (Restaurant universitaire Charles Barrois)
14h30-15h20   N. Higham (Manchester), "Computing Matrix Functions by Iteration: Convergence, Stability and the Role of Padé Approximants"
15h20-15h50   G. Meurant (CEA), "Estimates of the error in the Preconditioned Conjugate Gradient algorithm"
15h50-16h10   Coffee break
16h10-17h00   Y. Saad (Minnesota), "New trends in large scale numerical linear algebra"
17h00-17h30   J. Erhel (Rennes), "Nonlinear methods for reactive transport simulations"
17h30-18h00   M. Eiermann (Freiberg), "On the Convergence of Krylov Subspace Methods for the Evaluation of Matrix Functions"
20h00-??      Banquet

Friday, June 23, 2006

9h00-9h30     M. Gasca (Zaragoza), "On multivariate polynomial interpolation and related topics"
9h30-10h20    A. Cohen (Paris), "Adaptive Approximation by Greedy Algorithms"
10h20-11h00   Coffee break
11h00-11h30   P. Sablonnière (Rennes), "B-splines and Hermite-Padé approximants to the exponential function"
11h30-12h00   G. Mühlbach (Hannover), "On ECT-splines: knot insertion and variation diminution"
12h00-12h30   W. Gautschi (Purdue), "On Euler's (failed) attempt to compute logarithms by interpolation"
12h30-14h30   Lunch (Restaurant universitaire Charles Barrois)
14h30-15h20   W. Van Assche (Leuven), "Unicity of certain solutions of discrete Painlevé I and II"
15h20-15h50   F. Marcellán (Madrid), "Non-standard Orthogonal Polynomials. Applications in Numerical Linear Algebra and Approximation Theory"
15h50-16h10   Coffee break
16h10-17h00   M. Raydan (Venezuela), "A derivative-free tolerant line search method for unconstrained optimization"
17h00-17h30   C. Brezinski (Lille)


Abstracts

[Return to schedule]

L. Reichel, Kent State University (USA)
Iterative Methods for Large Ill-Posed Problems

Large-scale ill-posed problems arise in many applications, for instance in image deblurring. The solution of these problems has recently received considerable attention. This talk surveys several kinds of iterative methods for large ill-posed problems, including methods for constrained problems.

[Return to schedule]

S. Serra-Capizzano, Università dell'Insubria (Italy)
Jordan canonical form of the Google matrix: a potential contribution to the PageRank computation and to a general matrix theoretic result

We consider the web hyperlink matrix used by Google for computing the PageRank, whose form is given by $ A(c)=[cP+(1-c)E]^T$, where $ P$ is a row stochastic matrix, $ E$ is a row stochastic rank-one matrix and $ c\in [0,1]$. We determine the analytic expression of the Jordan form of $ A(c)$ and in particular a rational formula for the PageRank in terms of $ c$ [3]. The implications of the result are twofold. From a theoretical viewpoint, a general theorem on standard canonical forms of special rank-one perturbations is suggested [2]. From a practical viewpoint, the use of extrapolation procedures [1] could be the key point for the efficient computation of the PageRank when $ c$ is close to or equal to $ 1$.
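For readers who want to experiment, here is a minimal power-iteration sketch for the dominant eigenvector of $ A(c)$ on a small hypothetical link graph (the matrix P, the teleportation vector v and the damping value below are illustrative, not data from the talk); it only requires products with $ P^T$, exploiting the rank-one structure of $ E$.

import numpy as np

def pagerank(P, v, c=0.85, tol=1e-12, maxit=1000):
    """Power iteration for the dominant eigenvector of A(c) = [c*P + (1-c)*E]^T,
    where P is row stochastic and E = e v^T (every row equal to the teleportation
    vector v).  Returns the PageRank vector normalized to sum to 1."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(maxit):
        x_new = c * (P.T @ x) + (1.0 - c) * v * x.sum()   # rank-one term needs no matrix
        x_new /= x_new.sum()
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Hypothetical 3-page web encoded as a row-stochastic link matrix P.
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
v = np.full(3, 1.0 / 3.0)          # uniform teleportation vector
print(pagerank(P, v, c=0.85))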


[1]
C. Brezinski, M. Redivo Zaglia, Extrapolation Methods. Theory and Practice, North-Holland, Amsterdam, 1991.
[2]
R. Horn, C. Johnson, Matrix Analysis, Second Edition, to appear (2006).
[3]
S. Serra-Capizzano, Jordan canonical form of the Google matrix: a potential contribution to the PageRank computation, SIAM J. Matrix Anal. Appl., 27-2 (2005), pp. 305-312.
[Return to schedule]

P. Graves-Morris, University of Bradford, UK
BiCGStab, VPAStab and an adaptation to mildly nonlinear systems

BiCGStab, the stabilised biconjugate gradient algorithm [8], is Van der Vorst's popular and successful method for the solution of a large sparse system of linear equations. Usually, the system is expressed as

$\displaystyle A\mathbf{x}=\mathbf{b}$ (1)

with $ \mathbf{x}, \mathbf{b} \in \mathbb{R}^{d}$, and $ A \in \mathbb{R}^{d\times d}$ is nonsingular but not necessarily symmetric.

Brezinski and Redivo-Zaglia [3] have found extensions of BiCGStab which avoid serious breakdowns; other possible modifications of BiCGStab have been reviewed by Gutknecht, [6].

There is another, closely related algorithm called VPAStab (stabilised vector-Padé approximation) which can also be applied to the solution of large sparse systems of linear equations, [4,9]. VPAStab was devised to provide a staircase sequence of vector-valued rational approximants for partial sums of the series in powers of $ \lambda$ of

$\displaystyle \mathbf{s}(\lambda):=(I - \lambda G)^{-1}\mathbf{b}$ (2)

with $ \mathbf{s}(\lambda) \in \mathbb{R}^{d}$ and $ G \in \mathbb{R}^{d\times d}$, with greater accuracy of approximation for $ \lambda \approx 1$. The partial sums of the Maclaurin series of $ \mathbf{s}(\lambda)$ are initialised by $ \mathbf{s}_{0}(\lambda):=\mathbf{b}$ and generated by iteration of

$\displaystyle \mathbf{s}_{i+1}(\lambda)=\mathbf{b} + \lambda G\mathbf{s}_{i}(\lambda), \ \ \ i=0,1,2,\ldots$ (3)

We notice that (2) is equivalent to (1) when $ \lambda = 1$, $ A=I - \lambda G$ and $ \mathbf{x}=\mathbf{s}(1)$. As a consequence of comparing VPAStab with Newton iterative methods for solving nonlinear systems, [7, Sec 6.2], it was noticed that VPAStab has a very natural generalisation for the solution of mildly nonlinear systems, and this algorithm is called NonLinIterSolve [5]. We will see that NonLinIterSolve is a fast and accurate solver for the Bratu equation in 2D, for example.
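As a concrete illustration of (1)-(3), the sketch below (with an illustrative contractive matrix G, not data from the talk) generates the partial sums (3) at $ \lambda = 1$ and checks that they converge to the solution of (1) with $ A = I - G$; VPAStab itself accelerates such vector sequences with vector-Padé approximants, which this plain iteration does not attempt.

import numpy as np

# Illustrative data: a small contractive G so that the series (3) converges at lambda = 1.
rng = np.random.default_rng(0)
d = 5
G = 0.1 * rng.standard_normal((d, d))          # ||G|| < 1, so I - G is nonsingular
b = rng.standard_normal(d)

x_direct = np.linalg.solve(np.eye(d) - G, b)   # s(1) = (I - G)^{-1} b, i.e. (1) with A = I - G

s = b.copy()                                   # s_0(lambda) := b
for i in range(50):
    s = b + G @ s                              # iteration (3) at lambda = 1
print(np.linalg.norm(s - x_direct))            # the partial sums converge to s(1)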

To end with a familiar theme, [1,2], we will recall that BiCGStab developed from Lanczos' BiCG 3-term recursion relations, whereas VPAStab developed from Frobenius' 3-term recursion relations. The similarities between these identities are well known here in Lille, and their derivations follow from the determinantal formulas for the polynomials in question.

[1]
C. Brezinski, Padé-type Approximation and General Orthogonal Polynomials, Birkhäuser, 1980.

[2]
C. Brezinski and H. Sadok, Lanczos-type algorithms for solving systems of linear equations, Appl. Numer. Math. 11 (1993) 443-473.

[3]
C. Brezinski and M. Redivo-Zaglia, Look-ahead in BiCGStab and other product methods for linear systems, BIT 35 (1995) 169-201.

[4]
P.R. Graves-Morris, VPAStab: stabilised vector-Padé approximation with application to linear systems, Numer. Algorithms 33 (2003) 293-304.

[5]
P.R. Graves-Morris, BiCGStab, VPAStab and an adaptation to nonlinear systems, J. Comput. Appl. Math., submitted.

[6]
M.H. Gutknecht, Lanczos-type solvers for non-symmetric linear systems of equations, Acta Numerica 6 (1997) 271-397.

[7]
C.T. Kelley, Iterative Methods for Linear and Nonlinear Equations SIAM, (1995).

[8]
H.A. van der Vorst, Bi-CGStab: A fast and smoothly convergent variant of Bi-CG for the solution of non-symmetric linear systems, SIAM J. Sci. Statist. Comput. 13 (1992) 631-644.

[9]
E. Venturino, P.R. Graves-Morris and A. De Rossi, An adaptation of BiCGStab for nonlinear systems and application to biological models, 22nd IFIP Conference on System Modeling and Optimization, July 2005, eds A. Dontchev, K. Marti, H. Furuta and L. Pandolfi, to appear.

[Return to schedule]



N. Higham, University of Manchester, UK
Computing Matrix Functions by Iteration:
Convergence, Stability and the Role of Padé Approximants

A variety of iterations exist for computing matrix functions such as the sign function, matrix roots, and the unitary polar factor. Underlying many of these iterations are Padé approximants to an appropriate scalar function. We review the role of Padé approximation in iterations for computing the matrix sign function and the matrix square root. We show how questions of convergence and stability of iterations for these functions can often be reduced to consideration of whether the powers of a certain matrix converge to zero, or are bounded. The analysis of stability is formalized in a new way in terms of Fréchet derivatives. Our approach permits short, general, unifying analyses of convergence and stability that avoid consideration of the Jordan canonical form. We also analyze some linearly convergent iterations for the matrix square root. A different approach to analyzing convergence is now necessary, and in one case an unexpected connection is found with the Mandelbrot set.
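As background, here is a minimal sketch of one of the best-known iterations of this kind, the Newton iteration for the matrix sign function (the stopping rule and the test matrix are illustrative, not the examples of the talk).

import numpy as np

def sign_newton(A, maxit=50, tol=1e-14):
    """Newton iteration X_{k+1} = (X_k + X_k^{-1})/2 for the matrix sign function.
    Converges quadratically when A has no purely imaginary eigenvalues."""
    X = A.astype(float).copy()
    for _ in range(maxit):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X, 1) <= tol * np.linalg.norm(X_new, 1):
            return X_new
        X = X_new
    return X

A = np.array([[3.0, 1.0],
              [0.0, -2.0]])                 # eigenvalues 3 and -2
S = sign_newton(A)
print(np.round(S, 12))                      # sign(A): eigenvalues mapped to +1 and -1
print(np.linalg.norm(S @ S - np.eye(2)))    # check that sign(A)^2 = I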
[Return to schedule]



G. Meurant, CEA (France)
Estimates of the error in the Preconditioned Conjugate Gradient algorithm

During the last ten years there has been renewed interest in computing bounds or estimates of norms of the error in the Conjugate Gradient algorithm for solving linear systems with symmetric positive definite matrices, see [3], [6], [8], and in iterative methods for more general systems [2]. The techniques used in [4], [6] are based on writing the norms of the error (A or l2 norms) as Riemann-Stieltjes integrals and obtaining bounds or estimates of these integrals with Gauss quadrature rules. The Gauss rule gives a lower bound. If we have estimates of the smallest and largest eigenvalues of A, we can use the Gauss-Radau and Gauss-Lobatto quadrature rules, which can provide upper bounds. Such estimates can be used to devise reliable criteria for stopping the iterations, particularly in problems arising from the finite element method, see [1]. It has been shown in [5], [9] that these techniques can be used efficiently in finite precision computations despite the rounding errors that delay CG convergence.
In this lecture we shall summarize these techniques, describe their relations with orthogonal polynomials and show how they can be extended to the Preconditioned Conjugate Gradient, see [7], [10]. We shall provide several numerical examples of the effectiveness of the estimates of the error norms.
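A hedged sketch of the simplest such estimate: in exact arithmetic $ \Vert x - x_k \Vert_A^2 = \sum_{j\ge k} \gamma_j \Vert r_j\Vert_2^2$, where $ \gamma_j$ is the CG step length, so truncating the sum after d terms gives a computable lower bound with a delay of d iterations, which is essentially the Gauss-rule bound discussed above. The matrix, right-hand side and delay below are illustrative, and no preconditioning is used.

import numpy as np

def cg_with_error_estimate(A, b, maxit=200, tol=1e-12):
    """Plain CG that records gamma_j and ||r_j||^2 so that the lower bound
    ||x - x_k||_A^2 >= sum_{j=k}^{k+d-1} gamma_j ||r_j||^2 can be formed
    afterwards (a sketch of the idea, not the preconditioned algorithm)."""
    x = np.zeros(len(b))
    r = b.copy()
    p = r.copy()
    rr = r @ r
    gammas, rnorms2, iterates = [], [], []
    for _ in range(maxit):
        Ap = A @ p
        gamma = rr / (p @ Ap)
        gammas.append(gamma)
        rnorms2.append(rr)
        iterates.append(x.copy())
        x = x + gamma * p
        r = r - gamma * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, gammas, rnorms2, iterates

# Illustrative SPD test matrix (not from the talk).
n = 100
A = np.diag(np.linspace(1.0, 100.0, n))
b = np.ones(n)
x, gammas, rnorms2, iterates = cg_with_error_estimate(A, b)
x_exact = np.linalg.solve(A, b)
d = 4                                                            # delay
for k in range(0, len(iterates) - d, 10):
    est = sum(gammas[k + j] * rnorms2[k + j] for j in range(d))  # computable lower bound
    err = x_exact - iterates[k]
    print(k, est, err @ (A @ err))                               # compare with ||e_k||_A^2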

[1]   M. Arioli, Stopping criterion for the conjugate gradient algorithm in a finite element method framework, Numer. Math., v 97, (2004), pp 1-24.

[2]   C. Brezinski, Error estimates in the solution of linear systems, SIAM J. Sci. Comput., v 21 n 2, (1999), pp 764-781.

[3]   G.H. Golub and G. Meurant, Matrices, moments and quadrature, in Numerical Analysis 1993, D.F. Griffiths & G.A. Watson Eds., Pitman Research Notes in Mathematics, v 303, (1994), pp 105-156.

[4]   G.H. Golub and G. Meurant, Matrices, moments and quadrature II or how to compute the norm of the error in iterative methods, BIT, v 37 n 3, (1997), pp 687-705.

[5]   G.H. Golub and Z. Strakoš, Estimates in quadratic formulas, Numerical Algorithms, v 8 no II-IV, (1994).

[6]   G. Meurant, The computation of bounds for the norm of the error in the conjugate gradient algorithm, Numerical Algorithms, v 16, (1997), pp 77-87.

[7]   G. Meurant, Numerical experiments in computing bounds for the norm of the error in the preconditioned conjugate gradient algorithm, Numerical Algorithms, v 22, (1999), pp 353-365.

[8]   G. Meurant, The Lanczos and Conjugate Gradient algorithms, from theory to finite precision computations, to be published by SIAM, (2006).

[9]   Z. Strakoš and P. Tichý, On error estimation in the conjugate gradient method and why it works in finite precision computation, Electr. Trans. Numer. Anal., v 13, (2002), pp 56-80.

[10]   Z. Strakoš and P. Tichý, Error estimation in preconditioned conjugate gradient, accepted in BIT Numerical Mathematics, (2005).

[Return to schedule]

D. Bini, University of Pisa (Italy)
Computing curve intersection by means of simultaneous iterations

A new algorithm is proposed for computing the intersection of two plane curves given in rational parametric form. It relies on the Ehrlich-Aberth iteration, complemented with computational tools such as the properties of Sylvester and Bézout matrices, a stopping criterion based on the concept of pseudo-zero, an inclusion result, and a choice of initial approximations based on the Newton polygon. The algorithm is implemented as a Fortran 95 module. Numerical experiments performed on a wide set of test problems show its superiority over the Manocha-Demmel approach based on eigenvalue computation: the algorithm provides better approximations in terms of the relative error and performs successfully in many critical cases where the eigenvalue computation fails.
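For illustration, here is a minimal sketch of the Ehrlich-Aberth simultaneous iteration applied to a single polynomial; the talk's algorithm applies it to the resultant of the two curves and uses Newton-polygon initial guesses and Bézout/Sylvester structure, none of which is reproduced here.

import numpy as np

def ehrlich_aberth(coeffs, maxit=100, tol=1e-13):
    """Ehrlich-Aberth simultaneous iteration for all roots of a polynomial
    given by its coefficients (highest degree first, numpy convention).
    Initial approximations are simply placed on a circle."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    n = len(coeffs) - 1
    radius = 1.0 + np.max(np.abs(np.asarray(coeffs[1:], dtype=complex))) / abs(coeffs[0])
    z = radius * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)
    for _ in range(maxit):
        newton = p(z) / dp(z)                       # Newton corrections N_i = p(z_i)/p'(z_i)
        corr = np.empty_like(z)
        for i in range(n):
            s = np.sum(1.0 / (z[i] - np.delete(z, i)))
            corr[i] = newton[i] / (1.0 - newton[i] * s)
        z = z - corr
        if np.max(np.abs(corr)) < tol:
            break
    return z

# roots of z^4 - 1 (illustrative example, not a curve-intersection resultant)
print(np.sort_complex(ehrlich_aberth([1.0, 0.0, 0.0, 0.0, -1.0])))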


[Return to schedule]

Y. Saad, University of Minnesota (USA)
New trends in large scale numerical linear algebra

Research in numerical linear algebra is changing rapidly because of the new trends in the sciences and the world economy. For example, the current information revolution has given rise to a large number of new challenges to computer scientists and mathematicians. Many of the numerical methods in scientific computing of the past few decades were motivated by problems arising from the solution of PDEs -- as a response to demands of now established industries, such as the aerospace industry (Navier-Stokes). A few of the new trends that are likely to shape the decades ahead are: (1) nanotechnology (e.g. materials science), (2) biology and genetics, (3) information technology. In each case there are enormously challenging problems to solve which require matrix algorithms that are far more powerful than those available today. I will discuss a few of these problems. In materials science, the fundamental equation is the Schrödinger equation and the basic problem is an eigenvalue problem with many eigenvalues. In genetics, one of the major issues is clustering, i.e., recognizing patterns in genes. In information sciences, one can mention again clustering (e.g., face recognition) but also the broad problem of dimensionality reduction (e.g., Principal Component Analysis), which is essential for handling huge data sets.


[Return to schedule]



J. Erhel, INRIA Rennes (France)
Nonlinear methods for reactive transport simulations

Many environmental studies rely on modelling geochemical reactions combined with hydrodynamic processes such as groundwater flow, transport of solutes by advection and diffusion, and heat transfer in porous media. Typical issues concern aquifer contamination, underground waste disposal, etc. Reactive transport models are complex nonlinear PDEs coupling the transport engine with the geochemical operator. We discuss efficient and robust numerical methods based on DAE solvers, combined with either a Gauss-Seidel scheme or a modified Newton method with a powerful linear solver.


[Return to schedule]

M. Eiermann, University of Freiberg (Germany)
On the Convergence of Krylov Subspace Methods for the Evaluation of Matrix Functions

The evaluation of

$\displaystyle f(A){\mathbf b},$    where $\displaystyle A \in \mathbb{C}^{n \times n}, {\mathbf b}\in \mathbb{C}^n$ (4)

and $ f:\mathbb{C} \supset D \to \mathbb{C}$ is a function for which $ f(A)$ is defined, is a common computational task. Besides the solution of linear systems of equations, which involves the reciprocal function $ f(\zeta) = 1/\zeta$, by far the most important application is the time evolution of a system under a linear operator, in which case $ f(\zeta) = f_t(\zeta) = e^{t\zeta}$ and time acts as a parameter $ t$. Other applications arising in the context of solving differential equations require the evaluation of (4) for the square root and trigonometric functions. Further applications include identification problems for semigroups involving the logarithm and lattice quantum chromodynamics simulations requiring the evaluation of the matrix sign function.

In many of the applications mentioned above the matrix $ A$ is large and sparse or structured, typically resulting from discretization of an infinite-dimensional operator. In this case evaluating (4) by first computing $ f(A)$ is usually unfeasible, so that most of the algorithms for the latter task cannot be used. The standard approach for approximating (4) directly is based on a Krylov subspace $ {\cal K}_m(A,{\mathbf b})$ of $ A$ with initial vector $ {\mathbf b}$ or, more precisely, on its Arnoldi decomposition

$\displaystyle A V_m = V_m H_m + \eta_{m+1,m} {\mathbf v}_{m+1} {\mathbf e}_m^T.$

The most popular approximation to $ f(A){\mathbf b}$ from $ {\cal K}_m(A,{\mathbf b})$ is then given by

$\displaystyle {\mathbf f}_m := \Vert {\mathbf b}\Vert V_m f(H_m) {\mathbf e}_1.$ (5)

The advantage of this approach is that it requires $ A$ only for computing matrix-vector products (in the course of generating the Arnoldi decomposition).
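A minimal sketch of this approach for $ f = \exp$, with an illustrative matrix (scipy's expm is used both to evaluate $ f$ on the small Hessenberg matrix and to provide a reference value):

import numpy as np
from scipy.linalg import expm

def arnoldi(A, b, m):
    """Arnoldi decomposition A V_m = V_m H_m + eta_{m+1,m} v_{m+1} e_m^T."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0:                       # happy breakdown
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

def krylov_fAb(A, b, m, f=expm):
    """Approximation (5): f_m = ||b|| V_m f(H_m) e_1, here with f = exp."""
    V, H = arnoldi(A, b, m)
    e1 = np.zeros(H.shape[0]); e1[0] = 1.0
    return np.linalg.norm(b) * (V @ (f(H) @ e1))

# Illustrative example (not from the talk): exp(A) b for a moderate-size matrix.
rng = np.random.default_rng(1)
n = 200
A = -np.diag(np.linspace(0.1, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
exact = expm(A) @ b
for m in (5, 10, 20):
    print(m, np.linalg.norm(krylov_fAb(A, b, m) - exact) / np.linalg.norm(exact))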

It is well known that the vector $ {\mathbf f}_m$ of (5) can also be represented as

$\displaystyle {\mathbf f}_m = p_{m-1}(A) {\mathbf b},$

where $ p_{m-1}$ is the unique polynomial of degree (at most) $ m-1$ that interpolates $ f$ (in Hermite's sense) at the eigenvalues of $ H_m$, i.e., at the Ritz values of $ A$ with respect to $ {\cal K}_m(A,{\mathbf b})$.

To judge the quality of $ {\mathbf f}_m$ as an approximation to $ f(A){\mathbf b}$ we thus consider the following two problems:

The classical approach to answer the first question is based on potential theory and, as recent results of Kuijlaars and Beckermann show, the latter theory also helps to attack the second problem (at least if $ A$ is a normal matrix). Based on these observations we present a qualitative analysis of the convergence properties of Krylov subspace approximations to $ f(A){\mathbf b}$. Special emphasis is put on the question under which assumptions on $ f$ and $ A$ we can expect a superlinear convergence behavior.

[Return to schedule]

M. Gasca, University of Zaragoza (Spain)
On multivariate polynomial interpolation and related topics

Multivariate polynomial interpolation formulae which are extensions of the best-known ones for univariate polynomial interpolation are revisited. Some natural ways to obtain them show their relation with elimination techniques, totally positive matrices and other branches of mathematics.
[Return to schedule]



A. Cohen, University of Paris VI (France)
Adaptive Approximation by Greedy Algorithms

This talk will discuss computational algorithms that deal with the following general task: given a function f and a dictionary of functions D in a Hilbert space, extract a linear combination of N functions in D which best approximates f. We shall review the properties of existing algorithms and establish some new convergence properties. This work is motivated by applications as varied as data compression, adaptive numerical simulation of PDEs in high space dimension, and statistical learning theory.
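As a small illustration of the task, here is a sketch of the pure greedy algorithm (matching pursuit) on a finite-dimensional dictionary; the dictionary and target are synthetic, and the orthogonal and relaxed variants discussed in the talk are not implemented.

import numpy as np

def pure_greedy(f, D, N):
    """Pure greedy algorithm (matching pursuit): at each step pick the dictionary
    element most correlated with the residual and remove its projection.
    D is assumed to have unit-norm columns."""
    r = f.copy()
    approx = np.zeros_like(f)
    for _ in range(N):
        k = np.argmax(np.abs(D.T @ r))        # best-matching dictionary element
        c = D[:, k] @ r
        approx += c * D[:, k]
        r -= c * D[:, k]
    return approx, r

# Illustrative finite-dimensional example: a random over-complete dictionary.
rng = np.random.default_rng(2)
n, p = 64, 256
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)                # normalize the columns
f = D[:, :5] @ rng.standard_normal(5)         # target: sparse in the dictionary
for N in (1, 5, 20):
    _, r = pure_greedy(f, D, N)
    print(N, np.linalg.norm(r) / np.linalg.norm(f))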
[Return to schedule]



P. Sablonnière, INSA & IRMAR Rennes (France)
B-splines and Hermite-Padé approximants to the exponential function

This talk continues work initiated in [3] on the computation of Hermite-Padé forms (HPF) and associated Hermite-Padé approximants (HPA) to the exponential function. We present an alternative algorithm, also based on the representation of HPF, which are divided differences of the function $ t\to \exp(xt)$, in terms of integral remainders with B-splines as Peano kernels. This algorithm uses the nice properties of discrete B-splines, giving rise to a great variety of representations of HPF of higher orders in terms of HPF of low orders, and in particular of classical Padé forms. We give some illustrations of this algorithm; for example, we recover the quadratic HPF of the exponential function already described in [1] and [2]. Extensions to other types of functions are also possible.

[1] P. Borwein: Quadratic HPA to the exponential function. Constr. Approx. 2 (1986) 291-302.

[2] K. A. Driver: Nondiagonal quadratic HPA to the exponential function. J. Comput. Appl. Math. 65 (1995) 125-134.

[3] P. Sablonnière: An algorithm for the computation of HPA to the exponential function: divided differences and HPF. Numer. Algorithms 33 (2003) 443-452.
[Return to schedule]



G. Mühlbach, University of Hannover (Germany)
On ECT-splines: knot insertion and variation diminution

See the pdf file.


[Return to schedule]

W. Van Assche, University of Leuven (Belgium)
Unicity of certain solutions of discrete Painlevé I and II

The recurrence coefficients of certain semi-classical orthogonal polynomials satisfy discrete Painlevé equations. The Freud equation for the recurrence coefficients of the orthogonal polynomials for the weight $ \exp(-x^4 + \lambda x^2)$ is in fact a special case of discrete Painlevé I, the Verblunsky coefficients of orthogonal polynomials on the unit circle with weight $ \exp(\lambda \cos \theta)$ satisfy discrete Painlevé II, the recurrence coefficients of generalized Charlier polynomials can be written in terms of a solution of discrete Painlevé II, and a $ q$-deformation of the Freud polynomials on the exponential lattice $ \{\pm q^n, n=0,1,2,3,\ldots\}$ has recurrence coefficients that satisfy a $ q$-discrete Painlevé I equation. Unfortunately, these non-linear recurrence relations are not suited for computing the recurrence coefficients starting from two initial conditions, since minor deviations from the correct initial values quickly lead to major deviations from the correct value. For the Freud equation for the weight $ \exp(-x^4 + \lambda x^2)$, Lew and Quarles showed that there is a unique solution of the Painlevé I equation which starts at 0 and remains positive for all $ n > 0$. This positive solution is in fact a fixed point in a metric space of sequences, and it can be found by successive iterations of a contractive mapping. This procedure gives a numerically stable way to compute the recurrence coefficients. We will show that a similar result is also true for the Painlevé II equation and for the $ q$-discrete Painlevé I equation. In both cases the fixed point solution is precisely the solution that gives the recurrence coefficients of the corresponding orthogonal polynomials.
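To see the instability numerically in the simplest setting, assume the standard $ \lambda = 0$ form of the Freud equation for the weight $ \exp(-x^4)$, namely $ 4a_n^2(a_{n-1}^2+a_n^2+a_{n+1}^2)=n$ with $ a_0^2=0$ and $ a_1^2=\Gamma(3/4)/\Gamma(1/4)$ (this special case is stated here only for illustration and is not taken verbatim from the abstract). Forward recursion then loses the positive solution after a few dozen steps in double precision; the contraction-mapping construction itself is not reproduced here.

from math import gamma, sqrt

def forward_freud(g1, nmax):
    """Forward recursion of the assumed lambda = 0 Freud equation for exp(-x^4):
        4 g_n (g_{n-1} + g_n + g_{n+1}) = n,   g_n = a_n^2,   g_0 = 0.
    Returns g_1, ..., stopping early once the computed value leaves (0, inf)."""
    g = [0.0, g1]
    for n in range(1, nmax):
        g_next = n / (4.0 * g[n]) - g[n] - g[n - 1]
        if not (0.0 < g_next < 1e6):
            break
        g.append(g_next)
    return g[1:]

g1_exact = gamma(0.75) / gamma(0.25)                      # a_1^2 for the weight exp(-x^4)
print(len(forward_freud(g1_exact, 200)))                  # loses positivity well before n = 200
print(len(forward_freud(g1_exact * (1 + 1e-12), 200)))    # a perturbed start typically fails sooner
print([round(sqrt(n / 12.0), 4) for n in (1, 2, 3)],
      [round(x, 4) for x in forward_freud(g1_exact, 4)[:3]])   # compare with a_n^2 ~ sqrt(n/12)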


[Return to schedule]

W. Gautschi, Purdue University (USA)
On Euler's (failed) attempt to compute logarithms by interpolation

In a letter of February 16, 1734 to his friend Daniel Bernoulli, Euler reports on his attempt to compute the common logarithm by interpolation at the successive powers of ten. He notes that for log 9 the procedure, though converging fast, yields an incorrect answer. The interpolation procedure is analyzed mathematically, and the discrepancy explained on the basis of modern function theory. Alternative procedures, also based on interpolation, are suggested that in principle yield more satisfactory answers.
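In the spirit of the abstract, the experiment is easy to repeat: interpolate $ \log_{10}$ at the nodes $ 1, 10, 100, \ldots$ and evaluate the interpolant at 9 (the nodes and the evaluation point follow the abstract; high-precision decimal arithmetic is used only to keep rounding errors out of the picture, and the printed values can be compared with $ \log_{10} 9 \approx 0.9542$).

from decimal import Decimal, getcontext

getcontext().prec = 60

def newton_interpolant_at(nodes, values, t):
    """Evaluate the Newton-form interpolant of (nodes, values) at t."""
    n = len(nodes)
    dd = list(values)                       # divided-difference table, built in place
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (nodes[i] - nodes[i - k])
    result = dd[n - 1]
    for i in range(n - 2, -1, -1):          # Horner-like evaluation of the Newton form
        result = result * (t - nodes[i]) + dd[i]
    return result

t = Decimal(9)
for degree in range(1, 13):
    nodes = [Decimal(10) ** k for k in range(degree + 1)]     # 1, 10, 100, ...
    values = [Decimal(k) for k in range(degree + 1)]          # log10 of the nodes, exactly
    print(degree, newton_interpolant_at(nodes, values, t))
print("log10(9) =", Decimal(9).ln() / Decimal(10).ln())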
[Return to schedule]



F. Marcellán, Universidad Carlos III de Madrid (Spain)
Non-standard Orthogonal Polynomials. Applications in Numerical Linear Algebra and Approximation Theory

In our talk we will survey some applications of non-standard orthogonal polynomials in Numerical Analysis and Approximation Theory.

I. Spectral transformations for Hermitian Toeplitz matrices

Let $ \mu$ be a nontrivial probability measure supported on the unit circle and denote by

$\displaystyle \langle p(z),q(z) \rangle = \int_{\mathbb{T}} p(z) \bar{q}(1/z) d \mu (z) \,$ (6)

the corresponding inner product in the linear space of polynomials with complex coefficients. There exists a sequence $ \{ \phi_n \}$ of monic polynomials such that

(i) $ \deg \phi_n = n$

(ii) $ \langle \phi_n , \phi_m \rangle = 0 $, $ n \neq m$

(iii) $ \langle \phi_n , \phi_n \rangle > 0 $, $ n \in \mathbb{N}$.

The matrix representation of the multiplication operator in terms of the above orthogonal polynomial basis, the so-called GGT representation, is a Hessenberg matrix. We will present some recent results (see [2]) concerning the LU and QR factorization of such a matrix and the connection with some canonical spectral transformations of $ \mu$, which are the counterpart on the unit circle of the Geronimus, Christoffel, and Uvarov transforms for measures supported on the real line, i.e., the spectral measures of Jacobi matrices (see [1]). On the other hand, we will analyze similar factorization problems for the five-diagonal representation of the multiplication operator, the so-called CMV representation, where an orthogonal basis of Laurent polynomials with respect to $ \mu$ is needed.

II. Sobolev orthogonal polynomials

Let $ \{\mu_k\}_{k=0}^{m}$ be a vector of positive Borel measures supported on the real line. In the linear space $ \mathbf{P}$ of polynomials with real coefficients we introduce an inner product

$\displaystyle \langle p,q \rangle_S:= \sum_{k=0}^{m} \int_{\mathbf{R}} p^{(k)}(x)\, q^{(k)}(x)\, d\mu_k(x),$ (7)

assuming all the integrals are convergent. Asymptotic properties and the distribution of zeros of polynomials orthogonal with respect to the above inner product have been studied in [3] when $ {\rm supp}(\mu_k)$, $ k=0,1,\ldots,m$, is a bounded subset of the real line, and in [5] using a different approach based on the matrix representation of the multiplication operator.
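As a concrete, deliberately simple illustration of the inner product (7), the following sketch computes the first few monic Sobolev orthogonal polynomials by Gram-Schmidt in the case $ m=1$, $ d\mu_0 = d\mu_1 = dx$ on $ [-1,1]$ (an illustrative special case, not one of those analyzed in the talk).

import numpy as np
from numpy.polynomial import polynomial as P

LAM = 1.0   # illustrative weight of the derivative term

def sobolev_inner(p, q):
    """<p,q>_S = int_{-1}^{1} p q dx + LAM * int_{-1}^{1} p' q' dx,
    i.e. (7) with m = 1 and d(mu_0) = d(mu_1) = dx on [-1,1]."""
    def integrate(r):
        R = P.polyint(r)
        return P.polyval(1.0, R) - P.polyval(-1.0, R)
    return integrate(P.polymul(p, q)) + LAM * integrate(P.polymul(P.polyder(p), P.polyder(q)))

def sobolev_monic_ops(nmax):
    """Monic Sobolev orthogonal polynomials by Gram-Schmidt on 1, x, x^2, ..."""
    ops = []
    for n in range(nmax + 1):
        q = np.zeros(n + 1); q[n] = 1.0          # monomial x^n (coefficients, low to high)
        for s in ops:                            # subtract S-orthogonal projections
            q = P.polysub(q, P.polymul([sobolev_inner(q, s) / sobolev_inner(s, s)], s))
        ops.append(q)
    return ops

ops = sobolev_monic_ops(4)
for i, q in enumerate(ops):
    print(i, np.round(q, 6))
print(round(sobolev_inner(ops[2], ops[3]), 12))   # orthogonality check, should be ~0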

An application of such Sobolev orthogonal polynomials to the solution of semilinear two-point boundary value problems using a Galerkin method will be described (see [4]).

In the case $ m=1$ with positive Borel measures $ \mu_0, \mu_1$ of unbounded support, very few examples have been considered in the literature. Nevertheless, a general approach is given in [6]. There the authors assume $ d\mu_0(x)=(\Psi W)^2dx$ and $ d\mu_1(x)=\lambda W^2 dx$.

If $ \{q_n\}$ denotes the sequence of orthonormal polynomials with respect to the Sobolev inner product and $ \{p_n\}$ is the sequence of orthonormal polynomials with respect to $ W^2 dx$, then $ q_n^{\prime}$ behaves like $ {\lambda}^{-\frac{1}{2}} \, p_{n-1}$ for fairly general weights on an unbounded interval, but some growth restrictions on $ \Psi$ are needed. If $ \Psi$ grows too fast at $ \infty$, then the first term in the inner product will swamp the term involving the derivative. As a sample, the Freud weight $ W=e^{-Q}$, where $ Q$ is a continuous function on $ \mathbf{R}$, will be studied. Assuming some extra conditions on $ Q$, the weighted estimate of $ q_n^{\prime}-\lambda^{-\frac{1}{2}}p_{n-1}$ in $ L^2(\mathbf{R})$ and $ L^{\infty}(\mathbf{R})$ as well as strong asymptotics for the re-scaled polynomials are obtained.

An interesting problem is the comparison of the zero distribution of $ \{q_n\}$ and $ \{p_n\}$. In particular, an estimate of the largest zero of $ q_n$ in terms of n will be discussed.

Finally, the behavior of the Fourier expansions of smooth functions with respect to $ \{q_n\}$ and $ \{p_n\}$ will be analyzed.

References

[1]
M.I. Bueno and F. Marcellán, Darboux Transformation and Perturbation of Linear Functionals, Linear Algebra and its Appl., 384 (2004), 215-242.

[2]
L. Daruis, J. Hernández, and F. Marcellán, Spectral Transformations for Hermitian Toeplitz Matrices, (2005). Submitted.

[3]
W. Gautschi and A. B. S. Kuijlaars, Zeros and critical points of Sobolev Orthogonal Polynomials, J. Approx. Theory, 91 (1997), 117-137.

[4]
O. Hansen, Orthogonal polynomials for the solution of semilinear two-point boundary value problems, J. Integral Eq. Appl. (2006). In press.

[5]
G. López Lagomasino and H. Pijeira Cabrera, Zero Location and n-th root asymptotics of Sobolev Orthogonal Polynomials, J. Approx. Theory 99 (1999), 30-43.

[6]
J. S. Geronimo, D. S. Lubinsky, and F. Marcellán, Asymptotics for Sobolev Orthogonal Polynomials for Exponential Weights, Constr. Approx. 22 (2005), 309-346.

[Return to schedule]

M. Raydan, Universidad Central de Venezuela, Caracas (Venezuela)
A derivative-free tolerant line search method for unconstrained optimization

A tolerant derivative-free nonmonotone line search technique is proposed and analyzed. As a globalization strategy, it allows several consecutive increases in the objective function and also non-descent directions for unconstrained minimization. To illustrate the power of this new line search we describe a direct search algorithm in which the directions are chosen randomly. The convergence properties of this random method rely exclusively on the line search technique. We also present numerical experiments with the spectral gradient method and with the inverse SR1 update, which can produce non-descent directions. In both cases we report results using the exact gradient vector and also a local-variation finite difference approximation to the gradient.


May 20, 2006