World Scientific
Integrating Amdahl-like Laws and Divisible Load Theory

    A simple means of integrating the characteristics of networked processors under divisible loads into Amdahl’s Law is presented. Classical Amdahl’s Law serves as an upper bound on the resulting speedups. In turn, Amdahl’s Law with divisible load processing characteristics included serves as an upper bound on speedup for any model that takes into consideration more detailed peculiarities of real systems, such as the overhead of task creation, synchronization, resource contention, and memory issues.

    Communicated by Andrew Adamatzky
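
    The upper bound referred to above is the classical Amdahl's Law speedup formula. As a minimal illustrative sketch (the function name and the chosen serial fraction are assumptions for illustration, not from the paper), the bound can be computed as:

    ```python
    def amdahl_speedup(serial_fraction, n_processors):
        """Classical Amdahl's Law: the achievable speedup on n_processors
        when serial_fraction of the workload cannot be parallelized.
        Speedup(n) = 1 / (f + (1 - f)/n), bounded above by 1/f."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

    # Example: with a 10% serial fraction, no number of processors
    # can push the speedup past 1/0.1 = 10.
    for n in (1, 10, 100, 1000):
        print(n, amdahl_speedup(0.1, n))
    ```

    Any refinement that adds further costs (communication delays under divisible loads, task-creation overhead, synchronization, contention) can only lower the achievable speedup, which is why the unmodified law remains an upper bound.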
