Object-oriented design for massively parallel computing

11/22/2018
by Edward Givelberg, et al.

We define an abstract framework for object-oriented programming and show that object-oriented languages, such as C++, can be interpreted as parallel programming languages. Parallel C++ code is typically more than ten times shorter than the equivalent C++ code with MPI. This large reduction in the number of lines of code arises primarily because the coordination of concurrency and the communication instructions, including the packing and unpacking of messages, are generated automatically in the implementation of object operations. We implemented a prototype compiler and runtime system for parallel C++ and used them to create complex data-intensive and HPC applications. These results indicate that adoption of the parallel object-oriented framework has the potential to drastically reduce the cost of parallel programming. We also show that standard sequential object-oriented programs can be ported to parallel architectures, parallelized automatically, and potentially sped up. The parallel object-oriented framework enables an implementation of a compiler with a dedicated backend for the interconnect fabric, which exposes the network hardware features directly to the application. We discuss the potential implications for computer architecture.
