Library

8 results (1995-1999)
  • 1
    Title: Designing and building parallel programs: concepts & tools for parallel software engineering
    Author: Foster, Ian
    Publisher: Reading, MA [et al.]: Addison-Wesley
    Year of publication: 1995
    Pages: 379 pp.
    Type of Medium: Book
  • 2
    Publication Date: 2022-07-19
    Language: English
    Type: Conference Object
  • 3
    ISSN: 1573-7543
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract: A cost-effective secondary storage architecture for parallel computers is to distribute storage across all processors, which then engage in either computation or I/O, depending on the demands of the moment. A difficulty associated with this architecture is that access to storage on another processor typically requires the cooperation of that processor, which can be hard to arrange if the processor is engaged in other computation. One partial solution to this problem is to require that remote I/O operations occur only via collective calls. In this paper, we describe an alternative approach based on the use of single-sided communication operations such as Active Messages. We present an implementation of this basic approach called Distant I/O and report experimental results that quantify the low-level performance of DIO mechanisms. This technique is exploited to support a noncollective parallel shared-file model for a large out-of-core scientific application with very high I/O bandwidth requirements. The achieved performance exceeds by a wide margin that of a well-equipped PIOFS parallel file system on the IBM SP. (A minimal one-sided remote-access sketch in MPI appears after the results list.)
    Type of Medium: Electronic Resource
  • 4
    Publisher: Springer
    In: Cluster computing 1 (1998), pp. 95-107
    ISSN: 1573-7543
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract: We describe a software infrastructure designed to support the development of applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments. Such applications may need to operate over open networks and access valuable resources, and hence can require mechanisms for ensuring the integrity and confidentiality of communications and for authenticating both users and resources. Yet security solutions developed for traditional client-server applications do not provide direct support for the distinctive program structures, programming tools, and performance requirements encountered in these applications. To address these requirements, we are developing a security-enhanced version of a communication library called Nexus, which is then used to provide secure versions of various parallel libraries and languages, including the popular Message Passing Interface. These tools support the wide range of process creation mechanisms and communication structures used in high-performance computing. They also provide a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. We present experimental results that quantify the performance of our infrastructure. (A per-channel security-policy sketch appears after the results list.)
    Type of Medium: Electronic Resource
  • 5
    Publisher: Springer
    In: Cluster computing 2 (1999), p. 105
    ISSN: 1573-7543
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Type of Medium: Electronic Resource
  • 6
    ISSN: 1573-7543
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract: The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against the false-positive rate. We describe the architecture of this service, report experimental results that quantify its cost and accuracy, and describe its use in two applications: monitoring the status of system components of the GUSTO computational grid testbed, and serving as part of the NetSolve network-enabled numerical solver. (A minimal timeout-based fault-detector sketch appears after the results list.)
    Type of Medium: Electronic Resource
  • 7
    ISSN: 0192-8651
    Keywords: Chemistry ; Theoretical, Physical and Computational Chemistry
    Source: Wiley InterScience Backfile Collection 1832-2000
    Topics: Chemistry and Pharmacology , Computer Science
    Notes: We discuss issues in developing scalable parallel algorithms and focus on the distribution, as opposed to the replication, of key data structures. Replication of large data structures limits the maximum calculation size by imposing a low ratio of processors to memory. Only applications that distribute both data and computation across processors are truly scalable. The use of shared data structures that may be accessed independently by each process, even in a distributed-memory environment, greatly simplifies development and provides a significant performance enhancement. We describe tools we have developed to support this programming paradigm. These tools are used to develop a highly efficient and scalable algorithm to perform self-consistent field calculations on molecular systems. A simple and classical strip-mining algorithm suffices to achieve an efficient and scalable Fock matrix construction in which all matrices are fully distributed. By strip-mining over atoms, we also exploit all available sparsity and pave the way to adopting more sophisticated methods for summation of the Coulomb and exchange interactions. © 1996 by John Wiley & Sons, Inc. (A strip-mined, owner-computes Fock-build skeleton appears after the results list.)
    Additional Material: 5 Ill.
    Type of Medium: Electronic Resource
  • 8
    ISSN: 0192-8651
    Keywords: Chemistry ; Theoretical, Physical and Computational Chemistry
    Source: Wiley InterScience Backfile Collection 1832-2000
    Topics: Chemistry and Pharmacology , Computer Science
    Notes: Several parallel algorithms for Fock matrix construction are described. The algorithms calculate only the unique integrals, distribute the Fock and density matrices over the processors of a massively parallel computer, use blocking techniques to construct the distributed data structures, and use clustering techniques on each processor to maximize data reuse. Algorithms based on both square and row-blocked distributions of the Fock and density matrices are described and evaluated. Variants of the algorithms are discussed that use either triple-sort or canonical ordering of integrals, and dynamic or static task-clustering schemes. The algorithms are shown to adapt to screening, with communication volume scaling down with computation costs. Modeling techniques are used to characterize algorithm performance. Given the characteristics of existing massively parallel computers, all the algorithms are shown to be highly efficient for problems of moderate size. The algorithms using the row-blocked data distribution are the most efficient. © 1996 by John Wiley & Sons, Inc. (A row-blocked ownership-mapping sketch appears after the results list.)
    Additional Material: 8 Ill.
    Type of Medium: Electronic Resource
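
The Distant I/O entry (item 3 above) rests on single-sided communication: a process reads data held by another processor without that processor's active cooperation. The fragment below is a minimal sketch of that access pattern using standard MPI one-sided operations, not the Distant I/O or Active Messages implementation itself; buffer names and sizes are assumptions.

    /* Sketch: one-sided remote access in the spirit of Distant I/O.
     * Each rank exposes a buffer (standing in for data staged from its
     * local disk); any rank can read a block of it without the owner
     * posting a matching receive.  Sizes and names are illustrative. */
    #include <mpi.h>
    #include <stdio.h>

    #define BLOCK 1024                 /* doubles exposed per rank (assumed) */

    int main(int argc, char **argv)
    {
        int rank, nprocs, target;
        double *local;
        double remote[BLOCK];
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Expose a local buffer for passive remote access. */
        MPI_Alloc_mem(BLOCK * sizeof(double), MPI_INFO_NULL, &local);
        for (int i = 0; i < BLOCK; i++)
            local[i] = rank;                          /* fake "file" contents */
        MPI_Win_create(local, BLOCK * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* Noncollective, one-sided read: pull a block from the right-hand
         * neighbour while that neighbour may be busy computing. */
        target = (rank + 1) % nprocs;
        MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
        MPI_Get(remote, BLOCK, MPI_DOUBLE, target, 0, BLOCK, MPI_DOUBLE, win);
        MPI_Win_unlock(target, win);

        printf("rank %d read %g from rank %d\n", rank, remote[0], target);

        MPI_Win_free(&win);
        MPI_Free_mem(local);
        MPI_Finalize();
        return 0;
    }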
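
Item 4 describes letting a single application mix secure and nonsecure communication so the cryptographic cost is paid only where it matters. The sketch below illustrates the per-channel policy idea with hypothetical types and stub transports; it is not the Nexus or MPI API.

    /* Sketch of per-channel security policy: each channel records whether
     * its traffic is protected, and the send path dispatches accordingly.
     * All types and functions here are hypothetical illustrations. */
    #include <stdio.h>
    #include <stddef.h>

    typedef enum { CHAN_PLAIN, CHAN_ENCRYPTED } chan_policy;

    typedef struct {
        int         peer;      /* destination endpoint (illustrative) */
        chan_policy policy;    /* fixed when the channel is created   */
    } channel;

    /* Stub transports: a real system would hand CHAN_ENCRYPTED traffic to
     * an authenticating/encrypting layer and send CHAN_PLAIN directly. */
    static void raw_send(int peer, const void *buf, size_t len)
    {
        printf("plain send of %zu bytes to %d\n", len, peer);
        (void)buf;
    }

    static void secure_send(int peer, const void *buf, size_t len)
    {
        printf("encrypted send of %zu bytes to %d\n", len, peer);
        (void)buf;
    }

    static void channel_send(const channel *ch, const void *buf, size_t len)
    {
        if (ch->policy == CHAN_ENCRYPTED)
            secure_send(ch->peer, buf, len);  /* pay the crypto cost here   */
        else
            raw_send(ch->peer, buf, len);     /* latency-critical traffic   */
    }

    int main(void)
    {
        channel control = { 1, CHAN_ENCRYPTED };  /* e.g. authentication     */
        channel bulk    = { 1, CHAN_PLAIN };      /* e.g. inner-loop messages */
        channel_send(&control, "token", 5);
        channel_send(&bulk, "data", 4);
        return 0;
    }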
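
Item 6 builds on unreliable fault detectors: a component is suspected once its heartbeats stop arriving within a configurable timeout, and lengthening the timeout trades slower reporting for fewer false positives. The sketch below is a minimal illustration of that mechanism; the component count, timeout value, and function names are assumptions.

    /* Sketch of a timeout-based (unreliable) fault detector: record the
     * last heartbeat per component, suspect any component whose silence
     * exceeds TIMEOUT.  A larger TIMEOUT lowers the false-positive rate
     * at the cost of slower reporting. */
    #include <stdio.h>
    #include <time.h>

    #define NCOMP   4          /* monitored components (assumed)       */
    #define TIMEOUT 10.0       /* seconds of silence before suspecting */

    static time_t last_beat[NCOMP];

    /* Called whenever a heartbeat message arrives from component `id`. */
    void heartbeat(int id)
    {
        last_beat[id] = time(NULL);
    }

    /* Periodic sweep: report components whose heartbeat is overdue. */
    void check(void)
    {
        time_t now = time(NULL);
        for (int id = 0; id < NCOMP; id++) {
            if (difftime(now, last_beat[id]) > TIMEOUT)
                printf("component %d suspected failed\n", id);
        }
    }

    int main(void)
    {
        for (int id = 0; id < NCOMP; id++) heartbeat(id);  /* initial beats */
        check();                                /* nothing overdue yet      */
        return 0;
    }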
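
Item 7 argues for fully distributing the Fock and density matrices and strip-mining the construction over blocks of atoms. The skeleton below shows only the work-distribution pattern (owner-computes over strip pairs) under assumed sizes; plain local arrays stand in for the distributed arrays, and the integral kernel is a toy placeholder rather than real chemistry.

    /* Skeleton of a strip-mined, owner-computes Fock build: strip pairs
     * are dealt out to processes, each process touches only the blocks
     * its strips need and accumulates its partial contribution.
     * In a real code D and F would be distributed arrays accessed with
     * one-sided get/accumulate calls. */
    #include <stdio.h>

    #define NATOMS 8     /* atoms, grouped into strips (assumed) */
    #define STRIP  2     /* atoms per strip (assumed)            */
    #define NPROC  4     /* number of processes (assumed)        */

    static double D[NATOMS][NATOMS];   /* density matrix (stand-in) */
    static double F[NATOMS][NATOMS];   /* Fock matrix (stand-in)    */

    /* Toy stand-in for the integral contribution of one block pair. */
    static double block_term(int a, int b) { return 0.01 * (a + 1) * (b + 1); }

    int main(void)
    {
        int nstrips = NATOMS / STRIP;
        int me = 0;      /* this process's rank (assumed; set by the runtime) */
        int task = 0;

        for (int a = 0; a < NATOMS; a++)
            for (int b = 0; b < NATOMS; b++)
                D[a][b] = 1.0;                     /* placeholder density */

        /* Owner-computes loop over strip pairs. */
        for (int si = 0; si < nstrips; si++) {
            for (int sj = 0; sj < nstrips; sj++, task++) {
                if (task % NPROC != me)
                    continue;                      /* not this process's pair */
                for (int a = si * STRIP; a < (si + 1) * STRIP; a++)
                    for (int b = sj * STRIP; b < (sj + 1) * STRIP; b++)
                        F[a][b] += D[a][b] * block_term(a, b);  /* toy update */
            }
        }
        printf("F[0][0] = %g\n", F[0][0]);
        return 0;
    }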
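
Item 8 finds the row-blocked distribution of the Fock and density matrices to be the most efficient variant. The sketch below shows one plausible such mapping, a block-cyclic assignment of row blocks to processes, so any process can locate the owner and local position of a row; block size and process count are assumptions.

    /* Sketch of a row-blocked (block-cyclic) distribution: rows are grouped
     * into blocks, blocks are dealt to processes round-robin, and any
     * process can compute a row's owner and local index. */
    #include <stdio.h>

    #define N     12     /* matrix dimension (assumed)    */
    #define BLOCK  3     /* rows per block (assumed)      */
    #define NPROC  4     /* number of processes (assumed) */

    /* Map a global row index to (owner rank, local row index). */
    static void row_owner(int row, int *rank, int *local)
    {
        int blk = row / BLOCK;              /* which row block           */
        *rank   = blk % NPROC;              /* round-robin block owner   */
        *local  = (blk / NPROC) * BLOCK     /* preceding local blocks ...*/
                + row % BLOCK;              /* ... plus offset in block  */
    }

    int main(void)
    {
        for (int row = 0; row < N; row++) {
            int rank, local;
            row_owner(row, &rank, &local);
            printf("row %2d -> rank %d, local row %d\n", row, rank, local);
        }
        return 0;
    }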