Description of problem: From https://bugzilla.redhat.com/show_bug.cgi?id=1432288

--- Comment #6 from Florian Weimer <fweimer> ---

(In reply to Ben Woodard from comment #5)
> They really need consistent library load addresses across compute nodes
> participating in a compute job.

I'm not sure if we can actually support that in the long run. Everything is moving towards adding more randomness.

------------

My response is: This will ultimately end up becoming a dynamic linker feature request, so please take that into account as you think about that section of code. Remember that supercomputers are sort of like the Borg or a beehive, and one of the things that keeps them operable is a high degree of fractal-like symmetry. In the same way that genetic diversity protects a species from pathogens, randomization protects applications from exploits. But as we can see from nature, while most species maintain diversity at the individual level, there are still a few, such as social bees and ants, which maintain diversity at the colony or hive level as superorganisms.

There needs to be some mechanism within the dynamic linker that allows all processes participating in an MPI job to load their libraries in a consistent way. This is important for at least two reasons:

1) It greatly simplifies debugging and massively reduces the memory footprint used by MPI debuggers.

2) When using RDMA, being able to pass data and actual pointers directly between the ranks of an MPI job greatly simplifies the job and improves its performance. When libraries load at random offsets, every data and function pointer must be marshaled into an offset and then translated into the other rank's address space. This extra step slows down interactions between the compute nodes participating in the job.
“setarch -R” already supports this: https://manpages.debian.org/setarch