
Block truncated-Newton methods for parallel optimization

Published in: Mathematical Programming

Abstract

Truncated-Newton methods are a class of optimization methods suitable for large-scale problems. At each iteration, a search direction is obtained by approximately solving the Newton equations using an iterative method. In this way, matrix costs and second-derivative calculations are avoided, hence removing the major drawbacks of Newton's method. In this form, the algorithms are well suited for vectorization. Further improvements in performance are sought by using block iterative methods for computing the search direction. In particular, conjugate-gradient-type methods are considered. Computational experience on a hypercube computer is reported, indicating that on some problems the improvements in performance can be greater than those attributable to parallelism alone.
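The inner/outer structure described above can be illustrated with a minimal serial sketch (not the paper's block or parallel variant): the Newton equations H d = -g are solved approximately by conjugate gradients, Hessian-vector products are formed by finite differences of the gradient so that no second derivatives or matrices are ever stored, and the CG iteration is truncated early. All names and tolerances here are illustrative choices, not the authors' implementation.

```python
import numpy as np

def hessvec(grad, x, v, eps=1e-6):
    """Approximate the Hessian-vector product H(x) @ v by a forward
    finite difference of the gradient -- this avoids explicit
    second-derivative calculations and matrix storage."""
    return (grad(x + eps * v) - grad(x)) / eps

def truncated_newton_step(grad, x, max_cg=20, tol=0.5):
    """Approximately solve the Newton equations H d = -g with the
    conjugate-gradient method, truncating the inner iteration early."""
    g = grad(x)
    d = np.zeros_like(x)
    r = -g.copy()                 # residual of H d + g = 0
    p = r.copy()
    rs = r @ r
    for _ in range(max_cg):
        Hp = hessvec(grad, x, p)
        curv = p @ Hp
        if curv <= 0:             # safeguard: stop on nonpositive curvature
            break
        alpha = rs / curv
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        # truncation test: accept an inexact solution of the Newton equations
        if np.sqrt(rs_new) <= tol * np.linalg.norm(g):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    # fall back to steepest descent if no descent direction was produced
    return d if d @ g < 0 else -g

def minimize(f, grad, x0, iters=50):
    """Outer truncated-Newton iteration with a simple Armijo backtrack."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:
            break
        d = truncated_newton_step(grad, x)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-10:
            t *= 0.5
        x = x + t * d
    return x
```

The block methods studied in the paper replace the single-vector CG recurrence with a block recurrence carrying several directions at once, so that the Hessian-vector products within a block can be evaluated in parallel.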




Additional information

Partially supported by Air Force Office of Scientific Research grant AFOSR-85-0222.

Partially supported by National Science Foundation grant ECS-8709795, co-funded by the U.S. Air Force Office of Scientific Research.


Cite this article

Nash, S.G., Sofer, A. Block truncated-Newton methods for parallel optimization. Mathematical Programming 45, 529–546 (1989). https://doi.org/10.1007/BF01589117
