World Scientific

DERIVING AND SCHEDULING COMMUNICATION OPERATIONS FOR GENERIC SKELETON IMPLEMENTATIONS

    Data distributions are an abstract notion for describing parallel programs in terms of overlapping data structures. A generic data distribution layer serves as a basis for implementing specific data distributions over arbitrary algebraic data types and arrays, as well as generic skeletons. The communication operations needed to exchange overlapping data elements are derived automatically from the specification of the overlaps. This paper describes how the communication operations used internally by the generic skeletons are derived, with particular attention to asynchronous and synchronous communication scheduling. As a case study, we discuss the iterative solution of PDEs and compare a hand-coded MPI version with a skeletal one based on overlapping data distributions.
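    To make the idea concrete, the following is a minimal sketch (not the paper's implementation; all names are hypothetical) of how communication operations can be derived from an overlap specification. For a 1-D block distribution with a one-element overlap on each side, the set of required exchanges follows directly from which ghost elements each block needs, and the derived operations then drive a Jacobi-style relaxation sweep of the kind used in iterative PDE solvers:

    ```python
    def derive_exchanges(num_blocks):
        """Derive, from the overlap specification (one ghost element per
        side), which neighbour each block must receive data from.
        Each entry is (source_block, destination_block, edge_sent)."""
        exchanges = []
        for b in range(num_blocks):
            if b > 0:
                # left neighbour sends its rightmost element
                exchanges.append((b - 1, b, "right_edge"))
            if b < num_blocks - 1:
                # right neighbour sends its leftmost element
                exchanges.append((b + 1, b, "left_edge"))
        return exchanges

    def exchange(blocks, exchanges):
        """Execute the derived operations: copy edge elements of the
        source blocks into the destination blocks' ghost cells."""
        ghosts = {b: {} for b in range(len(blocks))}
        for src, dst, edge in exchanges:
            value = blocks[src][-1] if edge == "right_edge" else blocks[src][0]
            ghosts[dst][edge] = value
        return ghosts

    def jacobi_step(blocks, ghosts):
        """One Jacobi relaxation sweep over the distributed data,
        reading neighbour values from the filled ghost cells
        (global boundaries simply reuse the local edge value)."""
        new_blocks = []
        for b, block in enumerate(blocks):
            left = [ghosts[b].get("right_edge", block[0])] + block[:-1]
            right = block[1:] + [ghosts[b].get("left_edge", block[-1])]
            new_blocks.append([(l + r) / 2 for l, r in zip(left, right)])
        return new_blocks
    ```

    In a skeleton library the user would only write the overlap specification and the local update function; the exchange schedule above is exactly the part a generic layer can derive and, with asynchronous communication, overlap with local computation.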
