These programs are supplied for students of MAT342. Others may use them as they see fit. Note that most of them are based on material from several different textbooks—see the individual programs for more information.
Instructions
Click on the links below to access the Python source code.
Python Programs vs. Python Functions
Some of these files contain only Python functions, while others are complete Python programs. The easiest way to tell is to look for the line #!/usr/bin/env python at the top of the file. To run a Python program on a Linux computer you can either type
python <program name>
or mark the program as executable by typing
chmod u+x <program name>
(this need be done only once for each downloaded program) and then run it with
./<program name>
Demos
- Visualizing Taylor polynomials
-
This program does essentially the same thing as the sequence of MATLAB commands given on page 20 of "Numerical Mathematics and Computing", 5th edition, by Cheney and Kincaid, Brooks/Cole, 2004.
- Visualizing Taylor polynomials one-by-one
-
An animated version of the sineplot program.
- Base Conversion
-
This function accepts a decimal number and returns the string representation of that number in the specified base. It only converts to bases between 2 and 36 (inclusive), and it follows the hexadecimal convention of using alphabetic letters to represent digit values larger than 9.
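In outline, the conversion might look like this (the name `to_base` and the restriction to non-negative integers are simplifying assumptions of this sketch, not the interface of the downloadable function):

```python
def to_base(n, base):
    """Return the string form of the non-negative integer n in the given
    base (2..36), using letters A-Z for digit values above 9."""
    if not 2 <= base <= 36:
        raise ValueError("base must be between 2 and 36")
    digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)   # peel off the least significant digit
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(255, 16))   # FF
print(to_base(10, 2))     # 1010
```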
- Estimate π with a Monte Carlo Method
-
Procedure to estimate π as outlined in the University of Nebraska-Lincoln Physical Chemistry lab's Basics of Monte Carlo Simulations.
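The idea can be sketched in a few lines (the name `estimate_pi` and the fixed seed are illustrative choices for this sketch, not part of the downloadable program):

```python
import random

def estimate_pi(n, seed=0):
    """Estimate pi: the fraction of random points in the unit square that
    fall inside the quarter circle x**2 + y**2 <= 1 approaches pi/4."""
    rng = random.Random(seed)   # fixed seed so a run is reproducible
    inside = sum(1 for _ in range(n)
                 if rng.random()**2 + rng.random()**2 <= 1.0)
    return 4.0 * inside / n
```

The error of such an estimate shrinks only like 1/sqrt(n), which is why Monte Carlo needs many samples for even a few digits of π.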
- Loss of significance
-
This program demonstrates the loss of precision that occurs when two numbers very close to each other are subtracted. This example uses x and sin(x) for a small value of x.
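The effect is easy to reproduce; this sketch (not the program itself) compares the direct subtraction with the Taylor series of x − sin(x):

```python
import math

x = 1.0e-8
naive = x - math.sin(x)        # nearly equal numbers: the subtraction
                               # cancels essentially every correct digit
series = x**3/6 - x**5/120     # the same quantity from the Taylor series
                               # of sin(x), accurate to full precision
```

For x = 1e-8 the true value is about x³/6 ≈ 1.67e-25, yet the direct subtraction returns (essentially) zero, because sin(x) and x agree to more digits than a double can hold.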
- Root Finding Algorithms
-
This program solves f(x) = 0 using several different methods, reporting the rates of convergence of each method. The methods used are bisection, Newton-Raphson, secant, fixed-point, and Steffensen's.
The programs good_newton.py and bad_newton.py illustrate that while Newton's method normally has a quadratic rate of convergence when finding a root of f(x), the rate of convergence is only linear at a root where the derivative f'(x) vanishes (a multiple root).
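The contrast can be reproduced in a few lines; `newton` below is a generic sketch, not the course's good_newton.py or bad_newton.py:

```python
def newton(f, fprime, x0, tol=1e-12, maxit=100):
    """Newton-Raphson iteration for f(x) = 0."""
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Simple root of f(x) = x**2 - 2: quadratic convergence to sqrt(2).
simple = newton(lambda x: x*x - 2, lambda x: 2*x, 1.0)

# Double root of f(x) = (x - 1)**2, where f'(1) = 0: the error is only
# halved each step (linear convergence), so many more iterations are needed.
double = newton(lambda x: (x - 1)**2, lambda x: 2*(x - 1), 2.0, maxit=200)
```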
- Divided Differences
-
This function computes the divided differences that can be used to construct an interpolating polynomial for the given set of data.
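A compact sketch of the computation (the names `divided_differences` and `newton_eval` are illustrative, not those of the downloadable function):

```python
def divided_differences(x, y):
    """Coefficients c of the Newton form of the interpolating polynomial:
    p(t) = c[0] + c[1](t-x[0]) + c[2](t-x[0])(t-x[1]) + ..."""
    n = len(x)
    c = list(y)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):   # update in place, bottom up
            c[i] = (c[i] - c[i-1]) / (x[i] - x[i-j])
    return c

def newton_eval(c, x, t):
    """Evaluate the Newton-form polynomial by nested multiplication."""
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = p * (t - x[i]) + c[i]
    return p
```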
- Richardson's Extrapolation
-
This function implements Richardson's Extrapolation to determine an approximation of f'(x) at a particular value of x. Based on an algorithm on page 185 in "Numerical Mathematics and Computing" 5th Edition, by Cheney and Kincaid, Brooks-Cole, 2004.
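A sketch of the scheme, under the assumption of a triangular extrapolation table (the name `richardson_derivative` and the defaults are illustrative):

```python
import math

def richardson_derivative(f, x, h=0.5, levels=5):
    """Centered differences with step h, h/2, h/4, ... combined by
    Richardson extrapolation to cancel successive error terms."""
    d = [[0.0] * levels for _ in range(levels)]
    for i in range(levels):
        d[i][0] = (f(x + h) - f(x - h)) / (2 * h)   # O(h**2) estimate
        h /= 2
        for j in range(1, i + 1):                    # kill the h**(2j) term
            d[i][j] = d[i][j-1] + (d[i][j-1] - d[i-1][j-1]) / (4**j - 1)
    return d[levels-1][levels-1]
```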
- Romberg Integration
-
Computes an estimate of the integral of f(x) on the interval [a,b] using Romberg integration. This approach recursively improves trapezoid rule estimates over successive interval subdivisions.
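The structure can be sketched as follows (`romberg` is an illustrative name; the first column holds trapezoid estimates, later columns their extrapolations):

```python
import math

def romberg(f, a, b, levels=6):
    """Romberg integration: trapezoid estimates on successively halved
    intervals, combined by Richardson extrapolation."""
    r = [[0.0] * levels for _ in range(levels)]
    h = b - a
    r[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2
        # trapezoid rule with 2**i subintervals, reusing previous values:
        # only the new midpoints a + (2k-1)h need to be evaluated
        total = sum(f(a + (2*k - 1) * h) for k in range(1, 2**(i-1) + 1))
        r[i][0] = 0.5 * r[i-1][0] + h * total
        for j in range(1, i + 1):
            r[i][j] = r[i][j-1] + (r[i][j-1] - r[i-1][j-1]) / (4**j - 1)
    return r[levels-1][levels-1]
```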
- Adaptive Simpson's Rule
-
This function uses an adaptive Simpson's rule to approximate the value of the integral. Notice that this is not very efficient; it is recursive and recomputes many function values that have already been computed. The code in this function is meant to be clear and explain the ideas behind adaptive schemes rather than to be an efficient implementation of one. The program verify_adaptive_simpson.py demonstrates that the error is usually within the specified tolerance, but is not guaranteed to be.
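The adaptive idea can be sketched in the same clear-but-inefficient style (this is a generic sketch, not the downloadable function):

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8):
    """Adaptive Simpson's rule: one Simpson estimate over [a,b] is compared
    with the sum of two estimates over the halves; where they disagree by
    more than (roughly) the tolerance, the halves are refined recursively.
    Function values are recomputed freely, as in the program above."""
    def simpson(a, b):
        c = 0.5 * (a + b)
        return (b - a) / 6.0 * (f(a) + 4 * f(c) + f(b))

    def refine(a, b, whole, tol):
        c = 0.5 * (a + b)
        left, right = simpson(a, c), simpson(c, b)
        if abs(left + right - whole) < 15 * tol:
            # accept, with the standard O(h**6) correction term
            return left + right + (left + right - whole) / 15.0
        return refine(a, c, left, tol / 2) + refine(c, b, right, tol / 2)

    return refine(a, b, simpson(a, b), tol)
```

The factor 15 comes from comparing the error expansions of the coarse and refined Simpson estimates, which is also why the tolerance is only "usually" met.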
- Gaussian Quadrature
-
Estimates an integral using Gaussian quadrature. Needs grule.py to compute the nodes and weights from Legendre polynomials.
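The overall shape of the method, with NumPy's `leggauss` standing in for grule.py (a sketch, not the downloadable code):

```python
import numpy as np

def gauss_quad(f, a, b, n=8):
    """n-point Gauss-Legendre quadrature: nodes and weights on [-1, 1]
    come from the roots of the degree-n Legendre polynomial and are then
    mapped to [a, b].  Exact for polynomials of degree <= 2n - 1."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)    # map nodes from [-1,1] to [a,b]
    return 0.5 * (b - a) * np.sum(w * f(t))  # scale weights by the Jacobian
```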
- Gaussian Elimination
-
Provides the routine lu to perform LU factorization of a NumPy matrix, returning a permutation vector that indicates how the rows of the matrix were rearranged during factorization. Also provides the routine solve to perform forward and backward solves using the factored matrix A to solve Ax=b.
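In outline, the pair of routines might look like this (a sketch with partial pivoting; the actual interfaces in the downloadable code may differ):

```python
import numpy as np

def lu(A):
    """LU factorization with partial pivoting, packed into one matrix:
    the unit lower triangle holds the multipliers, the upper triangle U.
    Returns the packed factors and the row-permutation vector p."""
    A = A.astype(float).copy()
    n = A.shape[0]
    p = np.arange(n)
    for k in range(n - 1):
        m = k + np.argmax(np.abs(A[k:, k]))   # pivot: largest entry in column
        if m != k:
            A[[k, m]] = A[[m, k]]
            p[[k, m]] = p[[m, k]]
        A[k+1:, k] /= A[k, k]                 # store multipliers below pivot
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return A, p

def solve(LU, p, b):
    """Forward and backward substitution with the packed LU factors."""
    n = LU.shape[0]
    x = np.asarray(b, dtype=float)[p]         # apply the row permutation to b
    for i in range(1, n):                     # forward solve (unit lower)
        x[i] -= LU[i, :i] @ x[:i]
    for i in range(n - 1, -1, -1):            # back solve (upper)
        x[i] = (x[i] - LU[i, i+1:] @ x[i+1:]) / LU[i, i]
    return x
```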
- Tridiagonal Solver
-
Solves Ax=b where A is a tridiagonal matrix stored in three vectors.
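The standard approach is the Thomas algorithm; a sketch with an assumed three-vector storage convention (the downloadable solver's convention may differ):

```python
def tridiag_solve(a, b, c, d):
    """Thomas algorithm for Ax = d, with A stored in three vectors:
    sub-diagonal a (a[0] unused), diagonal b, super-diagonal c (c[-1]
    unused).  One forward elimination sweep, then back substitution."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i-1]            # eliminate the sub-diagonal
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i-1]) / m
    x = dp[:]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] -= cp[i] * x[i+1]
    return x
```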
- Numerical Solution to Ordinary Differential Equations: Taylor series method
-
This program demonstrates the use of Taylor series to solve the initial value problem x'(t) = t sin(t), x(0) = 1.
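A minimal order-4 sketch of the idea (the name `taylor4` is illustrative; the derivatives of the right-hand side are worked out by hand, as the method requires):

```python
import math

def taylor4(t_end, n):
    """Order-4 Taylor method for x'(t) = t sin(t), x(0) = 1.  Because the
    right-hand side depends only on t, the higher derivatives of x are
    just successive derivatives of t sin(t)."""
    h = t_end / n
    t, x = 0.0, 1.0
    for _ in range(n):
        d1 = t * math.sin(t)                      # x'
        d2 = math.sin(t) + t * math.cos(t)        # x''
        d3 = 2 * math.cos(t) - t * math.sin(t)    # x'''
        d4 = -3 * math.sin(t) - t * math.cos(t)   # x''''
        # nested form of x + h*d1 + h^2/2*d2 + h^3/6*d3 + h^4/24*d4
        x += h * (d1 + h/2 * (d2 + h/3 * (d3 + h/4 * d4)))
        t += h
    return x
```

Since x' here does not involve x, the exact solution 1 + sin(t) − t cos(t) is available for checking the error.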
- Numerical Solution of Ordinary Differential Equations: Single-step and Multistep methods
-
Demonstrates various methods to solve the initial value problem
x'(t) = x sin(t), subject to x(0) = −1.
The following methods are implemented, but only a subset of them is demonstrated by the program.
- Euler's method
- Heun's method
- 2nd order Runge-Kutta (two versions)
- 4th order Runge-Kutta
- 4th order Runge-Kutta with error estimate
- Runge-Kutta-Fehlberg (adaptive step size)
- 4th order Adams-Bashforth-Moulton Predictor-Corrector multistep method
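As an illustration of the single-step pattern, here is a compact classical RK4 sketch applied to the same IVP (the names are illustrative, not those used in diffeq.py):

```python
import math

def rk4(f, t0, x0, t_end, n):
    """Classical 4th-order Runge-Kutta for the IVP x'(t) = f(t, x)."""
    h = (t_end - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2 * k1)
        k3 = f(t + h/2, x + h/2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h/6 * (k1 + 2*k2 + 2*k3 + k4)   # weighted average of slopes
        t += h
    return x

# The IVP above, x' = x sin(t), x(0) = -1, has exact solution
# x(t) = -exp(1 - cos t), so the global error is easy to check.
approx = rk4(lambda t, x: x * math.sin(t), 0.0, -1.0, 2.0, 200)
```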
- Numerical Solution to 2nd Order Boundary Value Problems
-
Demonstrates the shooting method and the method of finite differences. The shooting method is more general and works for linear and nonlinear problems while the implementation of the finite difference method only handles linear problems.
The shooting method function assumes that the second order equation has been converted to a first order system of two equations and uses the 4th order Runge-Kutta routine from diffeq.py to solve the necessary initial value problems.
The finite difference method function solves linear second order equations that are written in the form
x'' = u(t) + v(t)x + w(t)x'
x(t(0)) = a, x(t(n)) = b
on the interval [t(0), t(n)].
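The finite-difference discretization can be sketched as follows (a generic sketch, not the downloadable function; np.linalg.solve stands in for the tridiagonal solver):

```python
import numpy as np

def bvp_fd(u, v, w, t0, tn, a, b, n):
    """Centered finite differences for x'' = u(t) + v(t)x + w(t)x' with
    x(t0) = a, x(tn) = b.  Replacing x'' and x' by centered differences
    gives a tridiagonal linear system for the interior values."""
    h = (tn - t0) / n
    t = t0 + h * np.arange(1, n)              # interior grid points
    A = np.zeros((n - 1, n - 1))
    rhs = h * h * u(t)
    for i in range(n - 1):
        A[i, i] = -(2 + h * h * v(t[i]))
        if i > 0:
            A[i, i - 1] = 1 + 0.5 * h * w(t[i])
        if i < n - 2:
            A[i, i + 1] = 1 - 0.5 * h * w(t[i])
    rhs[0] -= (1 + 0.5 * h * w(t[0])) * a     # fold the boundary values
    rhs[-1] -= (1 - 0.5 * h * w(t[-1])) * b   # into the right-hand side
    x = np.linalg.solve(A, rhs)
    return np.concatenate(([a], x, [b]))
```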
- Forward Time, Centered Space (FTCS) scheme applied to the Heat Equation
-
Uses the FTCS scheme to solve the heat equation in a thin rod.
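The update rule itself fits in a few lines (a sketch with assumed Dirichlet boundary conditions, not the downloadable program):

```python
def ftcs_heat(u0, alpha, dx, dt, steps):
    """Explicit FTCS update for u_t = alpha * u_xx on a rod whose endpoint
    temperatures are held fixed.  Stable only if alpha*dt/dx**2 <= 1/2."""
    r = alpha * dt / dx**2
    u = list(u0)
    for _ in range(steps):
        u = ([u[0]]                                   # left boundary fixed
             + [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
                for i in range(1, len(u) - 1)]
             + [u[-1]])                               # right boundary fixed
    return u
```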
- Crank Nicolson applied to the Heat Equation
-
Uses the Crank-Nicolson scheme to solve the heat equation in a thin rod. Needs the tridiagonal solver.
- Finite Difference Solution of Wave Equation
-
Solves the one-dimensional wave equation.
- Steepest Descent Animation
-
Demonstrates the method of Steepest Descent to find the minimum of a function z = f(x,y).
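Stripped of the animation, the iteration is short; this sketch uses a fixed step size, whereas a line search along the negative gradient is the usual refinement:

```python
def steepest_descent(grad, x0, lr=0.1, tol=1e-8, maxit=10000):
    """Minimize by repeatedly stepping opposite the gradient with a
    fixed step size lr, stopping when the gradient is nearly zero."""
    x = list(x0)
    for _ in range(maxit):
        g = grad(x)
        if max(abs(gi) for gi in g) < tol:
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Quadratic bowl f(x, y) = (x-1)**2 + (y+2)**2 with minimum at (1, -2).
xmin = steepest_descent(lambda p: [2*(p[0] - 1), 2*(p[1] + 2)], [4.0, 1.0])
```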
- Nelder-Mead Animation
-
Demonstrates the Nelder-Mead Simplex Method to find the minimum of a function z = f(x,y).
- Simulated Annealing in One Dimension
-
Demonstrates the use of Simulated Annealing to find the minimum of the function x e^(−0.5x) sin(55x) cos(12x) on the interval [0,5]. Needs the simulated annealing solver.
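A one-dimensional sketch of the technique, with an assumed linear cooling schedule and move width (both are design choices, not those of the downloadable solver); it is demonstrated here on a simple bowl rather than the oscillatory function above:

```python
import math
import random

def anneal(f, lo, hi, steps=20000, t0=1.0, seed=1):
    """Simulated annealing on [lo, hi]: random moves, always accepted when
    downhill and accepted with probability exp(-delta/T) when uphill,
    while the temperature T cools toward zero."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    best, fbest = x, fx
    for k in range(steps):
        T = t0 * (1 - k / steps)               # linear cooling schedule
        width = (hi - lo) * 0.1 * (T + 0.01)   # moves shrink as T drops
        y = min(hi, max(lo, x + rng.uniform(-width, width)))
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(T, 1e-12)):
            x, fx = y, fy                      # accept the move
            if fx < fbest:
                best, fbest = x, fx            # remember the best point seen
    return best, fbest
```

The occasional acceptance of uphill moves is what lets the method escape the many local minima of a function like the one above.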