The Numerical Analysis exam covers topics from both Numerical Analysis (Math 620) and Numerical Linear Algebra (Math 630) courses.
The two syllabi listed below outline the basic material the student is expected to master for these exams. The exam will be written by a committee of faculty, not necessarily by the faculty member who taught your course.
Syllabus: Math 620
- Floating point numbers and roundoff errors using the IEEE standard (the "Marc-32" representation); loss of significance.
- Methods for solving nonlinear equations: bisection method; Newton’s method; secant method; definition of order of convergence; fixed point method and contraction principle; homotopy and continuation methods; Newton’s method and fixed point methods for systems of nonlinear equations.
- Interpolation theory: polynomial interpolation including Lagrange and Newton interpolation; existence and uniqueness of interpolating polynomials; Hermite interpolation (including generalized Hermite interpolation); error estimation for polynomial interpolation; spline interpolation.
- Function approximation: the min-max approximation problem; best approximation and the least-squares theory; orthogonal polynomials (Legendre, Chebyshev).
- Numerical differentiation and integration: numerical integration based on interpolation: Newton-Cotes integration formulas; adaptive quadrature; Gaussian quadrature; numerical differentiation and Richardson extrapolation; error estimation for numerical integration and differentiation.
- Numerical solution of ODEs: Taylor series methods including Euler and its convergence; Runge-Kutta methods; multistep methods; the midpoint method; the trapezoidal method; convergence (local truncation error, global error, and linear stability analysis) and stability of multistep methods; stiff ODEs.
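A minimal sketch of Newton's method from the root-finding topic above; the test equation x^2 - 2 = 0, the starting guess, and the tolerance are illustrative choices, not part of the syllabus:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for f(x) = 0; converges quadratically near a simple root."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# Approximate sqrt(2) as the positive root of x^2 - 2 = 0.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Replacing fprime with a difference quotient of successive iterates gives the secant method, which trades the derivative for a lower order of convergence (about 1.618).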
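The Lagrange form of the interpolating polynomial can be evaluated directly from its definition. This sketch (nodes and evaluation point chosen arbitrarily) also illustrates uniqueness: interpolating x^2 at three nodes reproduces x^2 exactly, since the interpolant of degree at most 2 is unique:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs[i], ys[i]) at x,
    using the Lagrange basis l_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Interpolate x^2 at nodes 0, 1, 2, then evaluate at 1.5 (exact value 2.25).
val = lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```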
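One Richardson-extrapolation step applied to the composite trapezoid rule, as in the numerical integration and differentiation topic above, cancels the O(h^2) error term and reproduces composite Simpson; the integrand and interval here are illustrative:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n subintervals; error is O(h^2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def richardson(f, a, b, n):
    """Combine T(h) and T(h/2) to cancel the O(h^2) term; the result
    (4*T(h/2) - T(h)) / 3 equals composite Simpson."""
    t1, t2 = trapezoid(f, a, b, n), trapezoid(f, a, b, 2 * n)
    return t2 + (t2 - t1) / 3.0

# Integral of sin over [0, pi] is exactly 2.
approx = richardson(math.sin, 0.0, math.pi, 8)
```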
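Forward Euler, the simplest of the Taylor series methods above, sketched on the test problem y' = y (an illustrative choice with exact solution e^t); halving the step size roughly halves the error, consistent with first-order convergence:

```python
import math

def euler(f, t0, y0, h, steps):
    """Forward Euler: y_{n+1} = y_n + h * f(t_n, y_n); first-order accurate."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# Solve y' = y, y(0) = 1 on [0, 1]; the exact answer is e.
err_h  = abs(euler(lambda t, y: y, 0.0, 1.0, 0.1,  10) - math.e)
err_h2 = abs(euler(lambda t, y: y, 0.0, 1.0, 0.05, 20) - math.e)
```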
- Kendall Atkinson, An Introduction to Numerical Analysis, 2nd Edition, 1989.
- Kincaid and Cheney, Mathematics of Scientific Computing, 3rd Edition, 2002.
Syllabus: Math 630
- Matrix Multiplication (cost, algorithms)
- Direct Methods for Solving Linear Systems: Cholesky Decomposition, Gaussian Elimination (with and without pivoting) and LU Decomposition, Cost of GE
- Perturbing the Coefficient Matrix and the RHS of a Linear System: Vector and Matrix Norms, Condition Numbers
- Backward Stability of Algorithms: Backward Error Analysis of GE
- Discrete Least Squares Problem: Solution of the Least Squares Problem via the Normal Equations
- Orthogonal Matrices, Rotators, and Reflectors
- Gram-Schmidt Process
- The Singular Value Decomposition: Applications, Definition, and Properties of SVD, Use of the SVD for solving Least Squares Problems
- Methods for Finding Eigenvalues and Eigenvectors (Why Iterative Methods Are Needed): the Power Method, Reduction to Hessenberg and Tridiagonal Forms, the QR Algorithm, Use of the QR Algorithm to Calculate Eigenvectors, Cost of QR and Hessenberg Reduction
- Eigenvalues of Large, Sparse Matrices: Lanczos Algorithm
- Iterative Methods for Solving Linear Systems: Classical Iterative Methods including Jacobi, Gauss Seidel, SOR, SSOR; Convergence of Iterative Methods
- Iterative Methods for Solving Linear Systems 2: Steepest Descent and Conjugate Gradient Method (Derivation and Convergence)
- Idea of preconditioning for linear systems, reduction of condition number, SSOR preconditioning of CG.
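For the direct-methods topic above, Gaussian elimination without pivoting written as an LU factorization; the 2x2 example is chosen for illustration, and pivoting would be needed whenever a zero or very small pivot arises:

```python
def lu_decompose(A):
    """LU factorization by Gaussian elimination without pivoting (~n^3/3 flops).
    A is a list of rows; returns (L, U) with A = L*U; assumes nonzero pivots."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]   # multiplier stored in L
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
```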
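A sketch of the modified variant of the Gram-Schmidt process, which is the numerically preferable form in floating point; the input vectors here are arbitrary:

```python
import math

def modified_gram_schmidt(cols):
    """Orthonormalize a list of column vectors by modified Gram-Schmidt:
    subtract each projection as soon as the new basis vector is formed."""
    Q = []
    for v in cols:
        w = v[:]
        for q in Q:
            dot = sum(qi * wi for qi, wi in zip(q, w))
            w = [wi - dot * qi for qi, wi in zip(q, w)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        Q.append([wi / norm for wi in w])
    return Q

Q = modified_gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```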
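Assuming NumPy is available, the SVD solves the least-squares problem through the pseudoinverse; inverting only the singular values above a tolerance also handles rank deficiency, giving the minimum-norm solution. The line-fitting data below are illustrative:

```python
import numpy as np

def lstsq_svd(A, b, tol=1e-12):
    """Minimum-norm least-squares solution x = V * Sigma^+ * U^T * b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.array([1.0 / si if si > tol else 0.0 for si in s])
    return Vt.T @ (s_inv * (U.T @ b))

# Fit a line y = c0 + c1*x to points lying exactly on y = 1 + 2x.
x = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(x), x])
b = 1.0 + 2.0 * x
c = lstsq_svd(A, b)
```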
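A pure-Python sketch of the power method for the dominant eigenpair; the 2x2 symmetric matrix is an illustrative example with eigenvalues 3 and 1, so the iteration converges at the rate |1/3|^k:

```python
def power_method(A, x0, iters=100):
    """Power method: repeated multiplication by A converges to the dominant
    eigenvector (assumes a unique eigenvalue of largest modulus)."""
    x = x0[:]
    lam = 0.0
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        lam = max(abs(v) for v in y)   # magnitude estimate from normalization
        x = [v / lam for v in y]
    return lam, x

# Dominant eigenvalue of [[2, 1], [1, 2]] is 3, with eigenvector [1, 1].
lam, v = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```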
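The Jacobi iteration from the classical iterative methods above, applied to a strictly diagonally dominant system (a standard sufficient condition for convergence); the system itself is an illustrative example:

```python
def jacobi(A, b, iters=50):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k), where D = diag(A).
    Converges whenever A is strictly diagonally dominant."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Diagonally dominant 2x2 system with exact solution [1, 1].
x = jacobi([[4.0, 1.0], [2.0, 5.0]], [5.0, 7.0])
```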
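A sketch of the conjugate gradient method for a symmetric positive definite system; in exact arithmetic it terminates in at most n steps, so this 2x2 illustration converges in two:

```python
def conjugate_gradient(A, b, tol=1e-12):
    """CG for SPD A: minimizes the A-norm of the error over growing Krylov
    subspaces; exact (in exact arithmetic) in at most n iterations."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(n):
        Ap = [sum(a * pi for a, pi in zip(row, p)) for row in A]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# SPD system [[4, 1], [1, 3]] x = [1, 2]; exact solution [1/11, 7/11].
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```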
- Watkins, Fundamentals of Matrix Computations, (2nd or 3rd editions).
- Golub and Van Loan, Matrix Computations, (any recent edition).
- Trefethen and Bau, Numerical Linear Algebra, 1997.