Collective Vector Clocks: Low-Overhead Transparent Checkpointing for MPI
MPI is the de facto standard for parallel computation on a cluster of computers. Yet resilience for MPI continues to be an issue for large-scale computations, especially for long-running computations that exceed the maximum time allocated to a job by a resource manager. Transparent checkpointing (with no modification of the underlying binary executable) is an important component in any strategy for software resilience and for chaining of resource allocations. However, achieving low runtime overhead is critical for community acceptance of a transparent checkpointing solution. ("Runtime overhead" is the overhead in time when running an application with no checkpoints, both with and without the checkpointing package.) A collective-vector-clock algorithm for transparent checkpointing of MPI is presented. The algorithm is built using the software of the mature MANA project for transparent checkpointing of MPI. MANA's existing two-phase-commit algorithm produces very high runtime overhead as compared to "native" execution. For example, MANA was found to result in runtime overheads as high as 37% on micro-benchmarks, especially on workloads that intensively use collective communication. The new algorithm replaces two-phase commit. It is a novel variation on vector clock algorithms. It uses a vector of logical clocks, with an individual clock for each distinct group of MPI processes underlying the MPI communicators in the application. This contrasts with the traditional vector of logical clocks across individual processes. Micro-benchmarks show a runtime overhead of essentially zero for many MPI processes. And two real-world applications, VASP and GROMACS, show a runtime overhead ranging mostly from 0% to 7%, leaving room for future optimization of other sources of overhead.
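The central idea, a logical clock per distinct process group rather than per individual process, can be sketched as follows. This is a minimal illustration in plain Python, not code from MANA; the class and method names are hypothetical, and a real implementation would interpose on the MPI collective-communication calls themselves:

```python
class CollectiveVectorClock:
    """Sketch of a collective vector clock: one logical clock per
    distinct group of MPI ranks (the group underlying a communicator),
    in contrast to the traditional one-clock-per-process vector."""

    def __init__(self):
        # Map from a group of ranks (as a frozenset) to its logical clock.
        self.clocks = {}

    def record_collective(self, ranks):
        """Advance the clock of the group on which a collective
        operation (e.g., a barrier or all-reduce) is executing."""
        group = frozenset(ranks)
        self.clocks[group] = self.clocks.get(group, 0) + 1
        return self.clocks[group]


# Usage: two communicators built over the same underlying group of
# ranks share a single clock entry, so the vector's length tracks the
# number of distinct groups, not the number of processes.
cvc = CollectiveVectorClock()
cvc.record_collective([0, 1, 2, 3])  # e.g., a 4-rank MPI_COMM_WORLD
cvc.record_collective([0, 1])        # a sub-communicator's group
cvc.record_collective([0, 1, 2, 3])  # world group again: its clock is now 2
```

Under this (assumed) formulation, the vector has one entry per distinct group, which is typically far smaller than one entry per process, which is consistent with the low overhead reported above for large process counts.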