Conversation

joezuntz

Hi there,

This PR adds the ability to use MPI to parallelize SOM training, by adding a reduction step in merge_updates. It should make no difference to non-MPI usage, while allowing people to scale training across clusters, including GPU clusters.
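
For concreteness, here is a minimal sketch of the kind of reduction this adds; the accumulator names (`numerator`, `denominator`) and the exact structure of `merge_updates` are illustrative assumptions, not the code in the diff. Each rank accumulates partial batch-update sums over its own shard of the data, and an in-place `Allreduce` combines them so every rank applies the same global weight update:

```python
# Illustrative sketch only: the variable names and structure are
# assumptions, not the exact code in this PR.
import numpy as np
from mpi4py import MPI

def merge_updates(numerator, denominator, comm=None):
    """Combine per-rank partial batch-SOM update sums.

    numerator:   (n_nodes, n_features) sum of neighborhood-weighted
                 samples accumulated by this rank.
    denominator: (n_nodes, 1) sum of the neighborhood weights
                 accumulated by this rank.
    comm:        optional mpi4py communicator; if None, this is a
                 no-op and serial behaviour is unchanged.
    """
    if comm is not None:
        # Sum the partial accumulators across all ranks in place,
        # so every rank ends up holding the global totals.
        comm.Allreduce(MPI.IN_PLACE, numerator, op=MPI.SUM)
        comm.Allreduce(MPI.IN_PLACE, denominator, op=MPI.SUM)
    # The weight update then proceeds exactly as in the serial case,
    # e.g. new_weights = numerator / denominator (guarding empty nodes).
    return numerator, denominator
```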

To make use of it, users pass their mpi4py communicator object, e.g. MPI.COMM_WORLD, to the relevant functions. I've added some tests, which require the MPI-mocking library mockmpi.
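
As a rough illustration of how the mockmpi-based tests can drive this without a real `mpiexec` launch (the SOM constructor, `comm` keyword, and `get_weights` call below are hypothetical stand-ins for the real interface, which is in the diff):

```python
# Hypothetical test sketch: mockmpi spawns N processes and hands each
# one a mock communicator implementing the mpi4py interface.
import numpy as np
from mockmpi import mock_mpiexec

def check_som_training(comm):
    # Each mock rank trains on its own shard of the data.
    rng = np.random.default_rng(seed=comm.rank)
    local_data = rng.normal(size=(1000, 8))
    som = SOM(10, 10, 8)               # hypothetical constructor
    som.train(local_data, comm=comm)   # hypothetical `comm` keyword
    # The reduction should keep all ranks in sync, so every rank
    # ends up with identical weights after training.
    weights = comm.bcast(som.get_weights() if comm.rank == 0 else None)
    assert np.allclose(weights, som.get_weights())

mock_mpiexec(4, check_som_training)  # run on 4 mock ranks
```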

No worries if this isn't a feature you're interested in having in the code - I can always maintain it separately.

Cheers,
Joe
