9.6   Optimization


This section under major construction.

Root finding.

Goal: given a function f(x), find x* such that f(x*) = 0. Nonlinear equations can have any number of solutions, as the following examples illustrate.
x^2 + y^2 = -1      has no real solutions
e^(-x) = 17         has one real solution
x^2 - 4x + 3 = 0    has two real solutions (1 and 3)
sin(x) = 0          has infinitely many solutions

Unconstrained optimization. Goal: given a function f(x), find x* that maximizes or minimizes f(x). If f(x) is differentiable, then we are looking for an x* such that f'(x*) = 0. However, such a point may be a local minimum, a local maximum, or a saddle point.

Bisection method. Goal: given a function f(x), find x* such that f(x*) = 0. Assume you know an interval [a, b] such that f(a) < 0 and f(b) > 0. Evaluate f at the midpoint m = (a + b)/2 and keep whichever half-interval still brackets a sign change. Each iteration halves the bracketing interval, so after k iterations the root is pinned down to within (b - a)/2^k.
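
Here is a minimal sketch of the idea for f(x) = x^2 - 4x + 3 from the example above (the tolerance 1e-10 and the class name are illustrative choices):

public class Bisection {
    static double f(double x) { return x*x - 4*x + 3; }   // f(2) = -1 < 0, f(4) = 3 > 0

    public static void main(String[] args) {
        double a = 2.0, b = 4.0;            // bracket with f(a) < 0 < f(b)
        while (b - a > 1e-10) {
            double m = (a + b) / 2;         // midpoint halves the bracket
            if (f(m) < 0) a = m;            // sign change now lies in [m, b]
            else          b = m;            // sign change now lies in [a, m]
        }
        System.out.println((a + b) / 2);    // prints the root 3 to high accuracy
    }
}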

Newton's method. Uses a local linear (or, for optimization, quadratic) approximation. Converges very fast if started close enough to the answer. The update formulas below find a root of f(x) (root finding) and a root of f'(x) (optimization).

root finding:   x_{k+1} = x_k - f'(x_k)^{-1} f(x_k)
optimization:   x_{k+1} = x_k - f''(x_k)^{-1} f'(x_k)

Newton's method is reliable only if it is started "close enough" to the solution. A bad example (due to Smale): f(x) = x^3 - 2x + 2. If you start in the interval [-0.1, 0.1], Newton's method reaches a stable 2-cycle. If started to the left of the negative real root, it converges to that root.
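
Concretely, the update for this f is x_{k+1} = x_k - (x_k^3 - 2x_k + 2) / (3x_k^2 - 2). Starting at x_0 = 0 gives x_1 = 0 - 2/(-2) = 1, then x_2 = 1 - 1/1 = 0, so the iterates bounce between 0 and 1 forever.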

To handle general differentiable or twice differentiable functions of one variable, we might declare an interface

public interface Function {
    public double eval(double x);    // returns f(x)
    public double deriv(double x);   // returns f'(x)
}

Program Newton.java runs Newton's method on a differentiable function to compute points x* where f(x*) = 0 and f'(x*) = 0.
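
The core iteration of such a program might look like the following sketch (the tolerance and the starting-point argument are illustrative choices; the actual Newton.java may differ):

public class NewtonSketch {
    // find x near x0 with f(x) = 0, assuming the iteration converges
    public static double root(Function f, double x0) {
        double x = x0;
        while (Math.abs(f.eval(x)) > 1e-12)
            x = x - f.eval(x) / f.deriv(x);   // x_{k+1} = x_k - f(x_k) / f'(x_k)
        return x;
    }
}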

The probability of finding an electron in the 4s excited state of hydrogen at radius r is given by f(x) = (1 - 3x/4 + x^2/8 - x^3/192)^2 e^(-x/2), where x is the radius in units of the Bohr radius (0.529173E-8 cm). Program BohrRadius.java contains formulas for f(x), f'(x), and f''(x). By starting Newton's method at 0, 4, 5, 13, and 22, we obtain all three roots and all five local minima and maxima.
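
As an illustration, f(x) and f'(x) might be coded against the Function interface above as follows (the class name is mine, and f'(x) comes from the product rule as p(x)(2p'(x) - p(x)/2)e^(-x/2), where p(x) is the polynomial factor):

public class Bohr implements Function {
    private static double p(double x)  { return 1 - 3*x/4 + x*x/8 - x*x*x/192; }   // polynomial factor
    private static double dp(double x) { return -3.0/4 + x/4 - x*x/64; }           // p'(x)
    public double eval(double x)  { return p(x)*p(x) * Math.exp(-x/2); }           // f(x)
    public double deriv(double x) { return p(x) * (2*dp(x) - p(x)/2) * Math.exp(-x/2); }   // f'(x)
}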

Newton's method in higher dimensions. [probably omit or leave as an exercise] Newton's method extends to systems of nonlinear equations. In general, there are no good general-purpose methods for solving such systems, but Newton's method generalizes naturally: iterate

x_{k+1} = x_k - J(x_k)^{-1} f(x_k)

where J is the Jacobian matrix of partial derivatives. In practice, we don't explicitly compute the inverse: instead of computing y = J^{-1} f, we solve the linear system of equations Jy = f.

To illustrate the method, suppose we want to find a solution (x, y) to the following system of two nonlinear equations.

x^3 - 3xy^2 - 1 = 0
3x^2 y - y^3 = 0

In this example, the Jacobian is given by

J  = [ 3x^2 - 3y^2     -6xy         ]
     [ 6xy              3x^2 - 3y^2 ]

If we start Newton's method at the point (-0.6, 0.6), we quickly obtain one of the roots (-1/2, sqrt(3)/2) up to machine accuracy. The other roots are (-1/2, -sqrt(3)/2) and (1, 0). Program TestEquations.java uses the interface Equations.java and EquationSolver.java to solve the system of equations. We use the Jama matrix library to do the matrix computations.
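
A self-contained sketch of the iteration for this particular system, solving Jy = f with Jama at each step (the class name and the fixed iteration count are illustrative choices):

import Jama.Matrix;

public class NewtonSystemSketch {
    public static void main(String[] args) {
        double x = -0.6, y = 0.6;                    // starting point from the text
        for (int k = 0; k < 20; k++) {
            Matrix f = new Matrix(new double[][] {   // left-hand sides of the system
                { x*x*x - 3*x*y*y - 1 },
                { 3*x*x*y - y*y*y     }
            });
            Matrix J = new Matrix(new double[][] {   // Jacobian from above
                { 3*x*x - 3*y*y, -6*x*y         },
                { 6*x*y,          3*x*x - 3*y*y }
            });
            Matrix dx = J.solve(f);                  // solve J dx = f, no explicit inverse
            x -= dx.get(0, 0);
            y -= dx.get(1, 0);
        }
        System.out.println(x + " " + y);             // approaches (-1/2, sqrt(3)/2)
    }
}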

Optimization. The same method can be used to optimize a function of several variables. Good methods exist if the multivariate function is sufficiently smooth.

x_{k+1} = x_k - H(x_k)^{-1} g(x_k)

We need the gradient g(x) = ∇f(x) and the Hessian H(x) = ∇^2 f(x). The method finds an x* where g(x*) = 0, but such a point can be a minimum, a maximum, or a saddle point. If the Hessian at x* is positive definite (all eigenvalues positive), then x* is a minimum; if all eigenvalues are negative, then it is a maximum; if some eigenvalues are positive and some are negative, then it is a saddle point.
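
A sketch in the same style as before (Jama again; the test function, Rosenbrock's function f(x, y) = (1 - x)^2 + 100(y - x^2)^2, and the starting point are illustrative choices, not from the text):

import Jama.Matrix;

public class NewtonMinSketch {
    public static void main(String[] args) {
        double x = -1.2, y = 1.0;                    // illustrative starting point
        for (int k = 0; k < 25; k++) {
            Matrix g = new Matrix(new double[][] {   // gradient of f
                { -2*(1 - x) - 400*x*(y - x*x) },
                {  200*(y - x*x)               }
            });
            Matrix H = new Matrix(new double[][] {   // Hessian of f
                { 2 + 1200*x*x - 400*y, -400*x },
                { -400*x,                200.0 }
            });
            Matrix dx = H.solve(g);                  // solve H dx = g, no explicit inverse
            x -= dx.get(0, 0);
            y -= dx.get(1, 0);
        }
        System.out.println(x + " " + y);             // approaches the minimum (1, 1)
    }
}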

Also, second derivatives change slowly, so it may not be necessary to recompute the Hessian (or its LU decomposition) at each step. In practice, it is expensive to compute the Hessian exactly, so so-called quasi-Newton methods, including the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update rule, are preferred.

Linear programming. Create matrix interface. Linear programming generalizes two-person zero-sum games and many problems in combinatorial optimization, .... Run AMPL from the web.

Programming = planning. Give some history. The decision problem was not known to be in P for a long time. In 1979, Khachian resolved the question in the affirmative and made headlines in the New York Times with a geometric divide-and-conquer algorithm known as the ellipsoid algorithm. It requires O(N^4 L) bit operations, where N is the number of variables and L is the number of bits in the input. Although this was a landmark in optimization, it did not immediately lead to a practical algorithm. In 1984, Karmarkar proposed a projective scaling algorithm that takes O(N^3.5 L) time. It opened the door to efficient implementations because it typically performs much better than its worst-case guarantee. Various interior point methods were proposed in the 1990s, and the best known complexity bound is O(N^3 L). More importantly, these algorithms are practical and competitive with the simplex method. They also extend to handle even more general problems.

Simplex method.

Linear programming solvers. In 1947, George Dantzig proposed the simplex algorithm for linear programming. It is one of the greatest and most successful algorithms of all time. The solver here handles linear programming, but it is not industrial strength. Program LPDemo.java illustrates how to use it. The classes MPSReader and MPSWriter can parse input files and write output files in the standard MPS format. Test LP data files are available in MPS format.

More applications. OR-Objects also includes graph coloring, the traveling salesman problem, vehicle routing, and shortest paths.

Exercises

  1. Use Newton's method to find an x (in radians) such that x = cos(x).
  2. Use Newton's method to find an x (in radians) such that x^2 = 4 sin(x).
  3. Use Newton's method to find an x (in radians) that minimizes f(x) = sin(x) + x - exp(x).
  4. Use Newton's method to find (x, y) that solve
    x + y - xy  = -2
    x exp(-y) = 1
    

    Start at the point (0.1, -0.2). The true root is near (0.09777, -2.325).

  5. Use Newton's method to find (x, y) that solve
    x + 2y = 2
    x^2 + 4y^2 = 4
    

    Start at the point (1, 2), which is near the true root (0, 1).

  6. Use Newton's method to find (x, y) that solve
    x + y  = 3
    x^2 + y^2 = 9
    

    Start at the point (2, 7).

  7. Use Newton's method to minimize f(x) = 1/2 - x e^(-x^2). Hint: f'(x) = (2x^2 - 1) e^(-x^2), f''(x) = 2x(3 - 2x^2) e^(-x^2).
  8. Use Newton's method to find all minima, maxima, and saddle points of the following function of two variables.
    f(x, y) = 3(1 - x)^2 exp(-x^2 - (y+1)^2) - 10(y/6 - y^3) exp(-x^2 - y^2)
    

Creative Exercises

  1. Bernoulli numbers. Bernoulli numbers appear in the Taylor expansion of the tangent function, the Euler-MacLaurin summation formula, and the Riemann zeta function. They can be defined recursively by B_0 = 1 and the fact that, for N >= 1, the sum from j = 0 to N of binomial(N+1, j) B_j equals 0. For example, B_1 = -1/2, B_2 = 1/6, B_3 = 0, and B_12 = -691/2730. Bernoulli computed the first 10 Bernoulli numbers by hand; Euler computed the first 30. In 1842, Ada Lovelace suggested to Charles Babbage that he devise an algorithm for computing Bernoulli numbers using his Analytical Engine. Write a program Bernoulli.java that takes a command-line argument N and prints out the first N Bernoulli numbers. Use the BigRational.java data type from Section 9.2.
  2. Eccentric anomaly. (Cleve Moler) The eccentric anomaly arises in Kepler's model of planetary motion and satisfies M = E - e sin E, where M is the mean anomaly (24.851090) and e is the orbit eccentricity (0.1). Solve for E.
  3. Newton's method with complex numbers. Implement Newton's method for finding a complex root of an equation. Use Complex.java and implement Newton's method exactly as when finding real roots, but use complex numbers.