Molecular dynamics on a distributed‐memory multiprocessor

S. L. Lin, J. Mellor‐Crummey, B. M. Pettitt, G. N. Phillips

Research output: Contribution to journal › Article › peer-review

32 Scopus citations

Abstract

Dynamics simulations of molecular systems are notoriously computationally intensive. Using parallel computers for these simulations is important for reducing their turnaround time. In this article we describe a parallelization of the simulation program CHARMM for the Intel iPSC/860, a distributed memory multiprocessor. In the parallelization, the computational work is partitioned among the processors for core calculations including the calculation of forces, the integration of equations of motion, the correction of atomic coordinates by constraint, and the generation and update of data structures used to compute nonbonded interactions. Processors coordinate their activity using synchronous communication to exchange data values. Key data structures used are partitioned among the processors in nearly equal pieces, reducing the memory requirement per node and making it possible to simulate larger molecular systems. We examine the effectiveness of the parallelization in the context of a case study of a realistic molecular system. While effective speedup was achieved for many of the dynamics calculations, other calculations fared less well due to growing communication costs for exchanging data among processors. The strategies we used are applicable to parallelization of similar molecular mechanics and dynamics programs for distributed memory multiprocessors. © 1992 by John Wiley & Sons, Inc.
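The abstract's strategy of partitioning per-atom work across processors and combining partial results through synchronous communication can be illustrated with a short sketch. The code below is not taken from the paper: it uses modern MPI in place of the iPSC/860's native message-passing primitives, replicates the coordinate and force arrays on every node for simplicity (whereas the paper partitions its key data structures to reduce per-node memory), and the contiguous atom-block decomposition is an illustrative assumption rather than CHARMM's actual scheme.

/*
 * Sketch of atom-block work partitioning with a global force
 * exchange per timestep.  MPI stands in for the iPSC/860
 * message-passing calls; the decomposition and replicated data
 * layout are illustrative assumptions, not the paper's method.
 */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define N_ATOMS 4096   /* illustrative system size */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each processor owns a nearly equal contiguous block of atoms. */
    int chunk = (N_ATOMS + nprocs - 1) / nprocs;
    int lo = rank * chunk;
    int hi = (lo + chunk < N_ATOMS) ? lo + chunk : N_ATOMS;

    double *coords  = malloc(3 * N_ATOMS * sizeof(double)); /* replicated coordinates */
    double *f_local = calloc(3 * N_ATOMS, sizeof(double));  /* this node's force contributions */
    double *f_total = malloc(3 * N_ATOMS * sizeof(double)); /* globally summed forces */

    /* ... coordinate initialization (e.g. from a restart file) omitted ... */

    for (int step = 0; step < 10; ++step) {
        memset(f_local, 0, 3 * N_ATOMS * sizeof(double));

        /* Compute force contributions only for locally owned atoms. */
        for (int i = lo; i < hi; ++i) {
            /* bonded and nonbonded terms for atom i would go here */
        }

        /* Synchronous exchange: sum partial forces across all nodes. */
        MPI_Allreduce(f_local, f_total, 3 * N_ATOMS,
                      MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        /* Each node integrates the equations of motion for its own
           atoms; constraint correction and coordinate updates would
           then be exchanged in the same synchronous fashion. */
        for (int i = lo; i < hi; ++i) {
            /* leapfrog or velocity-Verlet update using f_total (omitted) */
        }
    }

    free(coords);
    free(f_local);
    free(f_total);
    MPI_Finalize();
    return 0;
}

The global reduction at each step is the kind of synchronous, all-node exchange whose cost grows with processor count, consistent with the communication overhead the abstract identifies as limiting speedup for some of the calculations.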

Original language: English (US)
Pages (from-to): 1022-1035
Number of pages: 14
Journal: Journal of Computational Chemistry
Volume: 13
Issue number: 8
DOIs
State: Published - Oct 1992
Externally published: Yes

ASJC Scopus subject areas

  • General Chemistry
  • Computational Mathematics
