ISSN:
1573-2878
Keywords:
Unconstrained optimization; quasi-Newton equations; quasi-Newton methods
Source:
Springer Online Journal Archives 1860-2000
Topics:
Mathematics
Notes:
Abstract: In unconstrained optimization, the usual quasi-Newton equation is $B_{k+1} s_k = y_k$, where $y_k$ is the difference of the gradients at the last two iterates. In this paper, we propose a new quasi-Newton equation, $B_{k+1} s_k = \tilde y_k$, in which $\tilde y_k$ is based on both the function values and the gradients at the last two iterates. The new equation is superior to the old one in the sense that $\tilde y_k$ approximates $\nabla^2 f(x_{k+1}) s_k$ better than $y_k$ does. Modified quasi-Newton methods based on the new equation are locally and superlinearly convergent. Extensive numerical experiments show that the new quasi-Newton methods are encouraging.
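The comparison made in the abstract can be checked numerically. The sketch below is an illustration only: the specific correction term `theta` (built from function values and gradients at both iterates, with the step `s_k` itself as the correction direction) is an assumed form of such a modified secant vector, not quoted verbatim from the paper. It shows that a vector $\tilde y_k$ of this type approximates the Hessian action $\nabla^2 f(x_{k+1}) s_k$ more closely than the plain gradient difference $y_k$.

```python
# Sketch: compare the standard secant vector y_k with a modified vector
# y~_k that also uses function values, in the spirit of the new
# quasi-Newton equation B_{k+1} s_k = y~_k. The formula for theta and
# the choice u_k = s_k are assumptions for illustration.
import numpy as np

def f(x):          # simple smooth test function with a nonconstant Hessian
    return np.sum(x ** 4)

def grad(x):
    return 4.0 * x ** 3

def hess(x):       # exact Hessian (diagonal, since f is separable)
    return np.diag(12.0 * x ** 2)

def secant_errors(x, s):
    """Return ||y - H(x+s)s|| and ||y~ - H(x+s)s|| for step s from x."""
    xp = x + s
    y = grad(xp) - grad(x)                            # standard secant vector
    # Correction built from function values and gradients at both iterates
    # (assumed form; scales the step direction s).
    theta = 6.0 * (f(x) - f(xp)) + 3.0 * (grad(x) + grad(xp)) @ s
    y_mod = y + (theta / (s @ s)) * s                 # modified secant vector
    target = hess(xp) @ s                             # Hessian action both approximate
    return np.linalg.norm(y - target), np.linalg.norm(y_mod - target)

x = np.array([1.0, 2.0])
s = 0.01 * np.array([1.0, -1.0])
err_std, err_mod = secant_errors(x, s)
print(err_mod < err_std)   # the modified vector is the closer approximation
```

For small steps, a Taylor expansion shows the plain difference $y_k$ matches $\nabla^2 f(x_{k+1}) s_k$ only to second order in $\|s_k\|$, while the function-value correction cancels the leading error term, which is the sense of "better approximation" in the abstract.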
Type of Medium:
Electronic Resource
URL:
http://dx.doi.org/10.1023/A:1021898630001