Releases: tskisner/pshmem
Small updates to tests and CI workflows
- Bump Python versions for tests and requirements.
- Fix use of dtype np.int_ in tests.
- Use concurrency in GitHub Actions rather than the cancel-workflow action.
- Update versioneer for Python 3.12 compatibility.
Use Numpy Arrays for Single Process per Node
Even in the MPI case, if there is only one process in the node shared communicator, use a simple numpy array. This reduces the total number of shared memory segments, which the kernel limits via the maximum number of open files (shared memory segments appear as files to userspace).
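For illustration, a minimal sketch of this fallback logic (the function name and structure here are hypothetical, not the actual pshmem internals):

```python
import numpy as np

def allocate_node_buffer(shape, dtype, nodecomm):
    # Hypothetical sketch: when this process is alone on the node, a plain
    # numpy array suffices, so no shared memory segment (and no kernel
    # file handle) is consumed.
    if nodecomm is None or nodecomm.size == 1:
        return np.zeros(shape, dtype=dtype)
    # Multiple processes per node would take the shared-memory path.
    raise NotImplementedError("shared-memory path not shown in this sketch")
```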
Use POSIX shared memory instead of MPI shared windows
This converts the MPIShared class to use POSIX shared memory underneath. We had been using MPI shared windows for this even though we manage write access to the memory ourselves. Most MPI implementations limit the number of global shared memory buffers that can be allocated, and that limit is insufficient for many use cases. The new code is limited only by the number of shared memory segments per node supported by the kernel. The unit tests were run successfully at up to 8192 processes.
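Python's standard library exposes POSIX shared memory directly; a minimal sketch of the general technique (illustrative only, not the actual pshmem implementation):

```python
import numpy as np
from multiprocessing import shared_memory

shape = (1000, 1000)
dtype = np.float64
nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize

# One process per node creates the segment; the others would attach with
# shared_memory.SharedMemory(name=seg.name).  On Linux the segment appears
# under /dev/shm, which is why kernel open-file limits apply.
seg = shared_memory.SharedMemory(create=True, size=nbytes)

# Wrap the raw buffer in a numpy view, analogous to the MPIShared data member.
view = np.ndarray(shape, dtype=dtype, buffer=seg.buf)
view[:] = 0.0

# Drop the view before closing, then unlink from the creating process.
del view
seg.close()
seg.unlink()
```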
Fix the array interface
This just adds pass-through methods to the underlying data view.
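The pass-through pattern looks roughly like this (a hypothetical sketch; the real class forwards to its internal shared-memory view):

```python
import numpy as np

class Passthrough:
    # Hypothetical sketch of forwarding array operations to a wrapped view.
    def __init__(self, data):
        self.data = data

    def __getitem__(self, key):
        return self.data[key]

    def __len__(self):
        return len(self.data)

p = Passthrough(np.arange(6).reshape(2, 3))
print(p[1, 2], len(p))  # both forwarded to the underlying numpy view
```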
Add array interface to MPIShared
This adds the __array__() method to the MPIShared class, which exposes the data member when the object is wrapped in a numpy array.
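The __array__() protocol is the standard numpy hook for this; a minimal sketch (the wrapper class here is illustrative):

```python
import numpy as np

class SharedWrapper:
    # Illustrative stand-in for MPIShared, holding a numpy view in .data.
    def __init__(self, data):
        self.data = data

    def __array__(self, dtype=None):
        # numpy invokes this when the object is passed to np.asarray(),
        # np.array(), ufuncs, and similar APIs.
        if dtype is None:
            return self.data
        return self.data.astype(dtype)

wrapper = SharedWrapper(np.arange(10, dtype=np.float64))
arr = np.asarray(wrapper)  # calls wrapper.__array__()
```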
Fix shared memory allocation on some MPI implementations
On some MPI implementations, allocating zero bytes of shared memory on some processes produces an error. Allocate a small dummy buffer on those processes as a workaround.
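A sketch of the workaround pattern with mpi4py shared windows (which pshmem used at the time of this release; the sizes are illustrative):

```python
from mpi4py import MPI
import numpy as np

nodecomm = MPI.COMM_WORLD.Split_type(MPI.COMM_TYPE_SHARED)

# In this sketch only the first process on each node holds real data.
nbytes = 1024 if nodecomm.rank == 0 else 0

# Workaround: some MPI implementations error out on a zero-byte request,
# so every process asks for at least one dummy byte.
alloc_bytes = max(1, nbytes)

win = MPI.Win.Allocate_shared(alloc_bytes, 1, comm=nodecomm)
buf, disp_unit = win.Shared_query(0)  # all processes map rank 0's segment
data = np.ndarray(shape=(1024,), dtype=np.uint8, buffer=buf)

# ... use the shared buffer collectively, then free the window ...
win.Free()
```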
Fix RMA error
- Fix RMA sequence error with some MPI implementations
- Add Python 3.9 and 3.10 to the test matrix
Support zero-size data
- Support size-zero MPIShared objects
- Small fixes to Bcast for mpi4py 3.1.1
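A brief usage sketch (assuming the documented MPIShared(shape, dtype, comm) constructor, and that the data member remains a valid empty view):

```python
import numpy as np
from mpi4py import MPI
from pshmem import MPIShared

comm = MPI.COMM_WORLD

# Size-zero shapes are now accepted rather than raising an error.
with MPIShared((0,), np.float64, comm) as shm:
    assert shm.data.size == 0
```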
Support Re-use of Communicators
The MPIShared class now accepts pre-existing node and node-rank communicators in the constructor.
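A usage sketch (the keyword argument names here are assumptions; consult the pshmem documentation for the exact signature):

```python
import numpy as np
from mpi4py import MPI
from pshmem import MPIShared

comm = MPI.COMM_WORLD

# Node communicator: the processes that can share memory.
nodecomm = comm.Split_type(MPI.COMM_TYPE_SHARED)
# Node-rank communicator: one process per node, grouped by local rank.
noderankcomm = comm.Split(nodecomm.rank)

# Reusing these avoids re-splitting the world communicator for every
# MPIShared object.  Keyword names are assumed for illustration.
shm = MPIShared(
    (100,), np.float64, comm, comm_node=nodecomm, comm_node_rank=noderankcomm
)
```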
Remove use of MPI Abort
When encountering an error, print a message and raise or re-raise an exception. This makes it easier to see problems that occur in the underlying mpi4py package. It does mean that calling applications must be diligent about handling exceptions on all processes and taking appropriate action to avoid deadlocks at future barriers and collective calls.
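One possible pattern for a calling application (illustrative, not prescribed by pshmem):

```python
from mpi4py import MPI
import traceback

comm = MPI.COMM_WORLD

try:
    # ... application work using pshmem objects ...
    pass
except Exception:
    # With no MPI_Abort inside the library, each process must handle the
    # exception itself; otherwise surviving ranks deadlock at the next
    # barrier or collective.  Log the traceback and abort explicitly.
    traceback.print_exc()
    comm.Abort(1)
```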