  • 1
    Electronic Resource
    Springer
    Mathematical programming 13 (1977), pp. 111-115
    ISSN: 1436-4646
    Keywords: Unconstrained minimization ; Step size procedures ; Negative curvature ; Second order convergence ; Line search ; Newton's method
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Armijo's step-size procedure for function minimization is modified to include second derivative information. Accumulation points using this procedure are shown to be stationary points with positive semi-definite Hessian matrices.
    Type of Medium: Electronic Resource
  • 2
    Electronic Resource
    Springer
    Mathematical programming 3 (1972), pp. 101-116
    ISSN: 1436-4646
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract It is shown that algorithms for minimizing an unconstrained function F(x), x ∈ E^n, which are solely methods of conjugate directions can be expected to exhibit only an n or (n−1) step superlinear rate of convergence to an isolated local minimizer. This is contrasted with quasi-Newton methods, which can be expected to exhibit every-step superlinear convergence. Similar statements about a quadratic rate of convergence hold when a Lipschitz condition is placed on the second derivatives of F(x).
    Type of Medium: Electronic Resource
  • 3
    Electronic Resource
    Springer
    Mathematical programming 10 (1976), pp. 147-175
    ISSN: 1436-4646
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract For nonlinear programming problems which are factorable, a computable procedure for obtaining tight underestimating convex programs is presented. This is used to exclude from consideration regions where the global minimizer cannot exist.
    Type of Medium: Electronic Resource
  • 4
    Electronic Resource
    Springer
    Mathematical programming 41 (1988), pp. 1-27
    ISSN: 1436-4646
    Keywords: Second-order sensitivity analysis ; high-order methods ; Nth-order derivatives ; polyads ; tensors ; factorable functions ; nonlinear optimization ; Lagrangian analysis
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Second-order sensitivity analysis methods are developed for analyzing the behavior of a local solution to a constrained nonlinear optimization problem when the problem functions are perturbed slightly. Specifically, formulas involving third-order tensors are given to compute second derivatives of components of the local solution with respect to the problem parameters. When in addition, the problem functions are factorable, it is shown that the resulting tensors are polyadic in nature.
    Type of Medium: Electronic Resource
  • 5
    Electronic Resource
    Springer
    Mathematical programming 1 (1971), pp. 217-238
    ISSN: 1436-4646
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract The relative merits of using sequential unconstrained methods for solving: minimize f(x) subject to g_i(x) ⩾ 0, i = 1, ⋯, m, h_j(x) = 0, j = 1, ⋯, p, versus methods which handle the constraints directly are explored. Nonlinearly constrained problems are emphasized. Both classes of methods are analyzed as to parameter selection requirements, convergence to first- and second-order Kuhn-Tucker points, rate of convergence, matrix conditioning problems, and computations required.
    Type of Medium: Electronic Resource
  • 6
    Electronic Resource
    Springer
    Mathematical programming 52 (1991), pp. 167-178
    ISSN: 1436-4646
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract Most nonlinear programming problems consist of functions which are sums of unary functions of linear functions. Advantage can be taken of this form to calculate second and higher order derivatives easily and at little cost. Using these, high order optimization techniques such as Halley's method can be utilized to accelerate the rate of convergence to the solution. These higher order derivatives can also be used to compute second order sensitivity information. These techniques are applied to the solution of the classical chemical equilibrium problem.
    Type of Medium: Electronic Resource
  • 7
    Electronic Resource
    Springer
    Annals of operations research 34 (1992), pp. 107-124
    ISSN: 1572-9338
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics , Economics
    Notes: Abstract This paper considers the solution of the problem: inf f[y, x(y)] s.t. y ∈ R̄[y, x(y)] ⊆ E^k, where x(y) solves: min F(x, y) s.t. x ∈ R(x, y) ⊆ E^n. In order to obtain local solutions, a first-order algorithm, which uses dx(y)/dy for solving a special case of the implicitly defined y-problem, is given. The derivative is obtained from dx(y, r)/dy, where r is a penalty function parameter and x(y, r) are approximations to the solution of the x-problem given by a sequential minimization algorithm. Conditions are stated under which x(y, r) and dx(y, r)/dy exist. The computation of dx(y, r)/dy requires the availability of ∇_y F(x, y) and the partial derivatives of the other functions defining the set R(x, y) with respect to the parameters y.
    Type of Medium: Electronic Resource
  • 8
    Title: Nonlinear programming : theory, algorithms, and applications
    Author: McCormick, Garth P.
    Publisher: New York et al.: Wiley
    Year of publication: 1983
    Pages: 444
    Type of Medium: Book
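
Record 1 above summarizes a modification of Armijo's step-size rule that incorporates second-derivative (negative-curvature) information. As a point of reference only, here is a minimal Python sketch of the plain Armijo backtracking rule; the curvature-aware modification described in the paper is not reproduced, and the function and parameter names (armijo_step, sigma, beta) are illustrative assumptions.

```python
# Minimal sketch of plain Armijo backtracking (the paper's second-derivative
# modification is NOT reproduced here); names and constants are illustrative.
import numpy as np

def armijo_step(f, grad_f, x, d, alpha0=1.0, beta=0.5, sigma=1e-4, max_iter=50):
    """Backtrack along descent direction d until the Armijo condition holds:
    f(x + a*d) <= f(x) + sigma * a * grad_f(x) . d
    """
    fx = f(x)
    slope = grad_f(x) @ d          # directional derivative; must be negative
    a = alpha0
    for _ in range(max_iter):
        if f(x + a * d) <= fx + sigma * a * slope:
            return a
        a *= beta                  # shrink the step and try again
    return a

# Tiny usage example on a convex quadratic, stepping along the negative gradient.
if __name__ == "__main__":
    f = lambda x: 0.5 * x @ x
    g = lambda x: x
    x = np.array([3.0, -4.0])
    print("accepted step:", armijo_step(f, g, x, -g(x)))
```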
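Record 3 concerns tight convex underestimators for factorable programs. A standard building block of such relaxations is the envelope of a bilinear term w = x·y over a box, widely known as the McCormick envelope; the sketch below merely evaluates those envelope bounds at a point. The function name and interval data are illustrative, and this is only one ingredient of the procedure the abstract describes.

```python
# Evaluate the McCormick envelope bounds of the bilinear term w = x*y on a box.
# Purely illustrative; not the paper's full underestimating construction.
def mccormick_bilinear_bounds(x, y, xL, xU, yL, yU):
    """Return (lower, upper) envelope values at (x, y) for w = x*y
    with x in [xL, xU] and y in [yL, yU]."""
    lower = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    upper = min(xU * y + x * yL - xU * yL,
                xL * y + x * yU - xL * yU)
    return lower, upper

# Example: at (0.5, 0.5) on [0, 1] x [0, 1], the true product 0.25 lies
# between the envelope values (0.0, 0.5).
if __name__ == "__main__":
    print(mccormick_bilinear_bounds(0.5, 0.5, 0.0, 1.0, 0.0, 1.0))
```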
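Record 5 compares sequential unconstrained methods with methods that handle constraints directly. The following is a minimal sketch of one such sequential scheme, a quadratic-penalty loop; the use of scipy.optimize.minimize for the inner subproblem and the parameter names (r0, growth, outer_iters) are assumptions made for illustration, not the paper's algorithm.

```python
# Minimal quadratic-penalty loop: solve a sequence of unconstrained problems
# with an increasing penalty parameter r. Illustrative sketch only.
import numpy as np
from scipy.optimize import minimize

def penalty_solve(f, g_list, h_list, x0, r0=1.0, growth=10.0, outer_iters=6):
    """Approximately minimize f(x) s.t. g_i(x) >= 0, h_j(x) = 0 by repeatedly
    minimizing f(x) + r * (sum max(0, -g_i(x))^2 + sum h_j(x)^2)."""
    x, r = np.asarray(x0, float), r0
    for _ in range(outer_iters):
        def phi(x):
            pen = sum(max(0.0, -g(x)) ** 2 for g in g_list)
            pen += sum(h(x) ** 2 for h in h_list)
            return f(x) + r * pen
        x = minimize(phi, x).x     # unconstrained subproblem
        r *= growth                # tighten the penalty
    return x

# Example: minimize x1^2 + x2^2 subject to x1 + x2 = 1; solution tends to (0.5, 0.5).
if __name__ == "__main__":
    print(penalty_solve(lambda x: x @ x, [], [lambda x: x[0] + x[1] - 1.0],
                        x0=[0.0, 0.0]))
```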
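Record 6 exploits cheap higher-order derivatives to apply high-order iterations such as Halley's method. Below is a minimal one-dimensional sketch of Halley's iteration for a scalar equation g(x) = 0; the multivariate, chemical-equilibrium setting of the paper is not attempted, and the names g, dg, d2g are illustrative.

```python
# One-dimensional Halley iteration for g(x) = 0, using first and second
# derivatives; converges cubically near a simple root. Illustrative sketch.
def halley(g, dg, d2g, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - 2*g*g' / (2*g'^2 - g*g'') until |g(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        gx, dgx, d2gx = g(x), dg(x), d2g(x)
        if abs(gx) < tol:
            break
        x -= 2.0 * gx * dgx / (2.0 * dgx * dgx - gx * d2gx)
    return x

# Example: cube root of 2 via g(x) = x**3 - 2.
if __name__ == "__main__":
    print(halley(lambda x: x**3 - 2, lambda x: 3*x**2, lambda x: 6*x, x0=1.0))
```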