Each week on the InSilico World and CompBioMed Slack Scalability channel, one of our research groups outlines the work they have done to scale their code. We are happy that 60 experts have already joined the channel, and we would like to see more interaction and discussion there around the codes being described, as well as more general help and advice for our members. Here we give a short outline of each of these codes, and we hope this encourages people to read the full posts on the channel and join the discussions in this field.
In the first post of the new year, Jon McCullough from University College London shared information on the HemeLB code, its large-scale scaling capabilities, and some of the challenges of maintaining performance on future exascale machines.
HemeLB [1, 2] is an open-source fluid dynamics solver used for the study of 3D blood flow in human-scale arterial and venous geometries. Originally developed in C++ with MPI parallelism for use on CPU-based machines, it now has a GPU version under development to take advantage of the accelerators that are becoming commonplace on the world’s fastest supercomputers. HemeLB has been specifically optimised to enable efficient and very large-scale simulation of the sparse geometries that are characteristic of vascular networks. Like the Palabos code discussed previously on this channel, HemeLB solves fluid flow problems using a parallelised implementation of the lattice Boltzmann method (LBM). Parallelising the LBM for bulk flow problems is relatively straightforward; however, for the complex and highly individualised geometries of interest in human blood flow, a more involved indirect addressing algorithm must be used, for which parallelisation is significantly more complicated [1]. HemeLB has a long record of demonstrating excellent strong scaling performance on state-of-the-art computing facilities [3, 4, 5]…
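To give a flavour of what indirect addressing means for a sparse lattice, the toy sketch below (not HemeLB's actual implementation, and in Python rather than C++) stores only the fluid sites of a 2D geometry in a compact array and precomputes a neighbour table for the streaming step, with bounce-back at solid walls. HemeLB works in 3D with MPI-distributed domains and larger velocity sets (e.g. D3Q19); the velocity set, function names, and bounce-back treatment here are illustrative assumptions chosen for brevity.

```python
import numpy as np

# Toy D2Q5 velocity set (rest + 4 axis-aligned directions); a simplified
# stand-in for the 3D sets (e.g. D3Q19) used in blood-flow LBM codes.
VELOCITIES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
OPPOSITE = [0, 2, 1, 4, 3]  # index of the opposite velocity, for bounce-back

def build_indirect_lattice(mask):
    """Map a sparse fluid mask onto a compact site list plus neighbour table.

    mask: 2D boolean array, True where a site is fluid.
    Returns (sites, neighbours): sites[i] holds the (x, y) of fluid site i,
    and neighbours[i, q] is the compact index of its neighbour in direction q,
    or -1 if that neighbour is solid or outside the domain.
    """
    sites = np.argwhere(mask)                       # only fluid sites stored
    index_of = {tuple(c): i for i, c in enumerate(sites)}
    nx, ny = mask.shape
    neighbours = np.full((len(sites), len(VELOCITIES)), -1, dtype=np.int64)
    for i, (x, y) in enumerate(sites):
        for q, (cx, cy) in enumerate(VELOCITIES):
            nb = (x + cx, y + cy)
            if 0 <= nb[0] < nx and 0 <= nb[1] < ny and mask[nb]:
                neighbours[i, q] = index_of[nb]
    return sites, neighbours

def stream(f, neighbours):
    """One streaming step over the compact arrays.

    f[i, q] is the distribution at fluid site i in direction q. Distributions
    move to the neighbour site via the precomputed table; where the neighbour
    is solid (-1), the distribution bounces back into the opposite direction.
    """
    f_new = np.zeros_like(f)
    for q in range(f.shape[1]):
        for i in range(f.shape[0]):
            nb = neighbours[i, q]
            if nb >= 0:
                f_new[nb, q] = f[i, q]              # stream to neighbour
            else:
                f_new[i, OPPOSITE[q]] = f[i, q]     # bounce-back at wall
    return f_new
```

The point of the indirection is that memory and work scale with the number of fluid sites rather than with the bounding box of the geometry, which is what makes sparse vascular networks tractable; the price is that neighbour lookups go through a table instead of simple array strides, which is what complicates parallel decomposition.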
For the full report and all references, please join and access the InSilico World and CompBioMed Slack Scalability channel by following the link.