Library

  • 1
    Electronic Resource
    Springer
    Numerische Mathematik 69 (1994), pp. 1-15
    ISSN: 0945-3245
    Keywords: Mathematics Subject Classification (1991): 65H10; 65F10
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Summary. The Generalized Conjugate Gradient method (see [1]) is an iterative method for nonsymmetric linear systems. We obtain generalizations of this method for nonlinear systems with nonsymmetric Jacobians. We prove global convergence results.
    Type of Medium: Electronic Resource
  • 2
    Electronic Resource
    s.l. : American Chemical Society
    The Journal of Physical Chemistry 〈Washington, DC〉 97 (1993), pp. 7652-7659
    Source: ACS Legacy Archives
    Topics: Chemistry and Pharmacology , Physics
    Type of Medium: Electronic Resource
  • 3
    Electronic Resource
    Springer
    Computing 46 (1991), pp. 233-252
    ISSN: 1436-5057
    Keywords: Preconditioned iterative methods ; generalized SSOR methods ; wavefront methods ; 15-point difference methods ; mesh-connected computer architectures
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Description / Table of Contents: Summary. Classical wavefront-preconditioned iterative methods for difference matrices use wavefronts based on diagonal (line or plane) orderings of the grid points. Since such wavefronts do not have constant width, they cannot be executed efficiently on parallel computer architectures. We discuss various methods for obtaining wavefronts of constant width for second-order elliptic problems. In particular, we discuss the application of these methods to nine-point (2D) and fifteen-point (3D) difference approximations of the Laplace operator, which are of fourth order for suitable choices of the coefficients. We obtain preconditioned methods with wavefronts in the form of vertical or horizontal lines, both in 2D and in 3D, which have condition number O(h⁻¹). The methods use only connections to neighbouring nodes. Consequently, they can be executed efficiently not only on shared-memory computer architectures but also on distributed systems, e.g. mesh-connected parallel computer architectures.
    Notes: Abstract. Classical wavefront preconditioned iteration methods for difference matrices on a rectangular or on a rectangular parallelepipedal domain use wavefronts based on diagonal (line or plane, respectively) orderings of the mesh points. Since such wavefronts do not have constant widths, they cannot be implemented efficiently on parallel computers. We discuss various methods to get wavefronts with constant width for difference matrices for second order elliptic problems. In particular, we discuss their applications for the nine-point (2D) and 15-point (3D) difference approximations for the Laplacian, which are fourth order accurate for proper choices of the coefficients. It turns out that we can easily get preconditioning methods with wavefronts in the form of vertical or horizontal lines both in 2D and 3D, which have condition number O(h⁻¹), but for general three space dimensional problems no simple ordering leading to constant plane wavefronts seems to exist for which the corresponding preconditioner has such a small condition number. A crucial property we make use of in the methods is the spectral equivalence between the nine-point and the standard five-point difference matrices and between the 15-point and the standard seven-point difference matrices in two and three space dimensions, respectively. The methods use only nearest neighbor connections and can therefore be implemented efficiently not only on shared memory computers but also on distributed memory computer architectures, such as mesh-connected computer architectures. (A small numerical check of the five-/nine-point spectral equivalence is sketched after this list.)
    Type of Medium: Electronic Resource
  • 4
    Electronic Resource
    New York, NY [etc.] : Wiley-Blackwell
    Numerical Linear Algebra with Applications 1 (1994), pp. 75-101
    ISSN: 1070-5325
    Keywords: Variable-step preconditioners ; Nonlinear preconditioning ; Generalized conjugate gradient method ; Engineering ; Engineering General
    Source: Wiley InterScience Backfile Collection 1832-2000
    Topics: Mathematics
    Notes: When solving large systems of equations by preconditioned iterative solution methods, one normally uses a fixed preconditioner, which may be defined by some eigenvalue information, such as in a Chebyshev iteration method. In many problems, however, it may be more effective to use variable preconditioners, in particular when the eigenvalue information is not available. In the present paper, a recursive way of constructing variable-step or, in general, nonlinear multilevel preconditioners for selfadjoint and coercive second-order elliptic problems, discretized by the finite element method, is proposed. The preconditioner is constructed recursively from the coarsest to finer and finer levels. Each preconditioning step requires only block-diagonal solvers at all levels, except at every k0-th level, k0 ≥ 1, where we perform a sufficient number ν, ν ≥ 1, of GCG-type variable-step iterations that again involve a variable-step preconditioning for that level. It turns out that for any sufficiently large value of k0 and, asymptotically, for ν sufficiently large, but not too large, the method has both an optimal rate of convergence and an optimal order of computational complexity, for both two- and three-dimensional problem domains. The method requires no parameter estimates and the convergence results do not depend on the regularity of the elliptic problem. (A flexible-preconditioner CG sketch illustrating the variable-step idea follows this list.)
    Additional Material: 8 Tab.
    Type of Medium: Electronic Resource
  • 5
    Electronic Resource
    New York, NY [etc.] : Wiley-Blackwell
    Numerical Linear Algebra with Applications 1 (1994), pp. 155-177
    ISSN: 1070-5325
    Keywords: Preconditioning ; Diagonal compensation ; Eigenvalue bounds ; Engineering ; Engineering General
    Source: Wiley InterScience Backfile Collection 1832-2000
    Topics: Mathematics
    Notes: When solving linear algebraic equations with large and sparse coefficient matrices, arising, for instance, from the discretization of partial differential equations, it is quite common to use preconditioning to accelerate the convergence of a basic iterative scheme. Incomplete factorizations and sparse approximate inverses can provide efficient preconditioning methods, but their existence and convergence theory is based mostly on M-matrices (H-matrices). In some application areas, however, the arising coefficient matrices are not H-matrices. This is the case, for instance, when higher-order finite element approximations are used, which is typical for structural mechanics problems. We show that modification of a symmetric, positive definite matrix by reduction of positive off-diagonal entries and diagonal compensation for them leads to an M-matrix. This diagonally compensated reduction can take place in the whole matrix or only at the current pivot block in a recursive incomplete factorization method. Applications to constructing preconditioning matrices for finite element matrices are described. (A short sketch of the diagonally compensated reduction follows this list.)
    Type of Medium: Electronic Resource
  • 6
    Electronic Resource
    Electronic Resource
    New York, NY [etc.] : Wiley-Blackwell
    Numerical Linear Algebra with Applications 1 (1994), pp. 213-236
    ISSN: 1070-5325
    Keywords: Optimal order preconditioners ; Algebraic multilevel ; Chebyshev polynomial approximation ; Diagonal compensation ; Approximate inverses ; Engineering ; Engineering General
    Source: Wiley InterScience Backfile Collection 1832-2000
    Topics: Mathematics
    Notes: The numerical solution of elliptic selfadjoint second-order boundary value problems leads to a class of linear systems of equations with symmetric, positive definite, large and sparse matrices, which can be solved iteratively using a preconditioned version of some algorithm. Such differential equations originate from various applications such as heat conduction and electromagnetics. Systems of equations of a similar type can also arise in the finite element analysis of structures. We discuss a recursive method for constructing preconditioners for a symmetric, positive definite matrix. An algebraic multilevel technique is considered, based on partitioning the matrix in two-by-two block form, approximating some of the blocks by other matrices with a simpler sparsity structure, and using the corresponding Schur complement as the matrix on the next lower level. The quality of the preconditioners is improved by special matrix polynomials which recursively connect the preconditioners on every two adjoining levels. Upper and lower bounds for the degree of the polynomials are derived as conditions for a computational complexity of optimal order at each level and for an optimal rate of convergence, respectively. The method is an extended and more accurate algebraic formulation of a method for nine-point and mixed five- and nine-point difference matrices presented in some previous papers. (A one-level sketch of the block-factorized construction follows this list.)
    Additional Material: 9 Tab.
    Type of Medium: Electronic Resource
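
The spectral equivalence mentioned in record 3, between the nine-point and the standard five-point difference matrices, can be checked numerically. The following is a minimal sketch under my own assumptions (grid size, Dirichlet boundary conditions on the unit square, SciPy assembly via Kronecker products); it is not code from the paper. The generalized eigenvalues of the nine-point matrix with respect to the five-point matrix should stay in a fixed interval, roughly [2/3, 1], as the mesh is refined.

import numpy as np
import scipy.sparse as sp
from scipy.linalg import eigh

def shift(n, k):
    # Shift matrix on an n-point grid line; zero padding corresponds to
    # homogeneous Dirichlet boundary values.
    return sp.eye(n, n, k=k, format="csr")

def from_stencil(n, stencil, h):
    # Assemble the difference matrix on the n*n interior points of the unit
    # square from a 3x3 stencil (entries are divided by h^2).
    A = sp.csr_matrix((n * n, n * n))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            c = stencil[di + 1][dj + 1]
            if c != 0:
                A = A + (c / h**2) * sp.kron(shift(n, di), shift(n, dj))
    return A

n = 15                      # illustrative grid size (assumption)
h = 1.0 / (n + 1)
five_pt = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]                         # standard second order
nine_pt = [[-1/6, -4/6, -1/6], [-4/6, 20/6, -4/6], [-1/6, -4/6, -1/6]]  # compact fourth order
A5 = from_stencil(n, five_pt, h).toarray()
A9 = from_stencil(n, nine_pt, h).toarray()

# Generalized eigenvalues of A9 x = lambda * A5 x; spectral equivalence means
# they remain in a mesh-independent interval (here roughly [2/3, 1]).
lam = eigh(A9, A5, eigvals_only=True)
print(f"lambda_min = {lam.min():.4f}, lambda_max = {lam.max():.4f}")

Repeating this for several values of n shows that the interval does not widen as h decreases, which is the property the constant-width wavefront preconditioners exploit.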
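
Record 4 concerns preconditioners that change from one iteration to the next. The paper's method is a recursive multilevel construction; the sketch below only illustrates the surrounding idea with a flexible (variable-preconditioner) conjugate gradient iteration of Polak-Ribiere type, in which the "preconditioner" is an inner iteration whose length varies with the outer step. The function names, the test matrix, and the inner smoother are my own illustrative choices, not the paper's algorithm.

import numpy as np

def flexible_pcg(A, b, apply_prec, tol=1e-8, maxit=300):
    # Preconditioned CG with the Polak-Ribiere beta, which remains valid when
    # apply_prec(r, k) is a different operator at every step.
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_prec(r, 0)
    p = z.copy()
    rz_old = r @ z
    r_old = r.copy()
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = apply_prec(r, k)                    # preconditioner may change here
        beta = (z @ (r - r_old)) / rz_old       # Polak-Ribiere (flexible) update
        rz_old = r @ z
        r_old = r.copy()
        p = z + beta * p
    return x, maxit

# Toy problem: 1D Laplacian; the variable-step "preconditioner" is a varying
# number of damped Jacobi sweeps applied to A z = r.
n = 40
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

def variable_prec(r, k):
    z = np.zeros_like(r)
    for _ in range(1 + k % 3):                  # 1, 2 or 3 inner sweeps
        z += (2.0 / 3.0) * (r - A @ z) / 2.0    # damped Jacobi, omega = 2/3
    return z

x, its = flexible_pcg(A, b, variable_prec)
print(its, np.linalg.norm(A @ x - b))

The Polak-Ribiere form of beta coincides with the usual fixed-preconditioner formula in exact arithmetic, which is why it is the standard choice when the preconditioner is allowed to vary.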
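
The procedure in record 5 is concrete enough to state in a few lines. The sketch below (my own formulation, not code from the paper) removes the positive off-diagonal entries of a symmetric positive definite matrix and compensates them on the diagonal; each removed pair adds a positive semidefinite rank-one term a_ij (e_i - e_j)(e_i - e_j)^T, so the result dominates the original matrix in the SPD ordering and has only nonpositive off-diagonal entries.

import numpy as np
import scipy.sparse as sp

def diagonally_compensated_reduction(A):
    # Drop positive off-diagonal entries and add each one to the diagonal of
    # its own row; for a symmetric A this compensates every removed pair on
    # both of its diagonal positions.
    A = sp.coo_matrix(A)
    rows, cols, vals = A.row, A.col, A.data
    pos = (rows != cols) & (vals > 0)
    diag_add = np.bincount(rows[pos], weights=vals[pos], minlength=A.shape[0])
    M = sp.coo_matrix((vals[~pos], (rows[~pos], cols[~pos])), shape=A.shape).tocsr()
    return M + sp.diags(diag_add)

# Small example: one positive coupling between rows 0 and 1 is moved onto the
# diagonal, giving [[5, 0, -1], [0, 5, -2], [-1, -2, 4]].
A = np.array([[ 4.0,  1.0, -1.0],
              [ 1.0,  4.0, -2.0],
              [-1.0, -2.0,  4.0]])
print(diagonally_compensated_reduction(A).toarray())

The resulting matrix is a symmetric M-matrix, so the usual existence theory for incomplete factorizations applies to it, which is the point of the construction described in the abstract.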
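
Record 6 builds preconditioners recursively from a two-by-two block partitioning. The sketch below shows only a single level of that idea, with approximations chosen by me for illustration: the (1,1) block is replaced by its diagonal, and the corresponding approximate Schur complement serves as the coarse-level matrix. The paper's method applies this recursively and couples the levels with Chebyshev-type matrix polynomials, which is not reproduced here.

import numpy as np

def two_level_preconditioner(A, fine, coarse):
    # One level of a block factorized preconditioner
    #   M = [D, 0; A21, S] [I, D^{-1} A12; 0, I],
    # where D = diag(A11) and S = A22 - A21 D^{-1} A12.
    A11 = A[np.ix_(fine, fine)]
    A12 = A[np.ix_(fine, coarse)]
    A21 = A[np.ix_(coarse, fine)]
    A22 = A[np.ix_(coarse, coarse)]
    d = np.diag(A11).copy()
    S = A22 - A21 @ (A12 / d[:, None])
    def apply(r):
        y1 = r[fine] / d                                  # forward solve, fine block
        y2 = np.linalg.solve(S, r[coarse] - A21 @ y1)     # forward solve, coarse block
        z = np.empty_like(r)
        z[coarse] = y2
        z[fine] = y1 - (A12 @ y2) / d                     # back substitution
        return z
    return apply

# Red-black splitting of a 1D Laplacian: here the (1,1) block happens to be
# exactly diagonal, so M reproduces A and the preconditioned spectrum is {1};
# the multilevel method recurses on S instead of solving with it exactly.
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
fine, coarse = np.arange(0, n, 2), np.arange(1, n, 2)
apply_M_inv = two_level_preconditioner(A, fine, coarse)
MinvA = np.column_stack([apply_M_inv(col) for col in A.T])
print(np.linalg.eigvals(MinvA).real.round(6))

In the multilevel setting the exact solve with S is what gets replaced by a recursive application of the same construction, and the matrix polynomials mentioned in the abstract control how much accuracy is lost per level.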