Labels: cat:enhancement (New feature or request)
Description
pytorch-pfn-extras/pytorch_pfn_extras/distributed/_initialize.py, lines 63 to 67 at c5b4d58:
```python
if world_size > 1 and not torch.distributed.is_initialized():  # type: ignore
    torch.distributed.init_process_group(  # type: ignore
        backend, init_method=init_method, world_size=world_size, rank=rank
    )
    torch.distributed.barrier()  # type: ignore
```
I think torch.distributed.init_process_group() can also be executed when world_size=1, since it raises no error in that state.
If initialization were allowed even with world_size=1, it would be possible to test code paths that assume torch.distributed.is_initialized() is True without having to launch the program under MPI.
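As a minimal sketch of the suggestion (not the library's actual code): a single-process group can be initialized with world_size=1 and rank=0, after which torch.distributed.is_initialized() returns True. The gloo backend and the loopback TCP init_method here are illustrative assumptions; any backend and rendezvous method supported by the environment would do.

```python
import torch.distributed as dist

# Single-process initialization: no MPI launcher is needed because the
# lone rank rendezvous only with itself. The backend and address below
# are assumptions chosen so the sketch runs on a CPU-only machine.
if not dist.is_initialized():
    dist.init_process_group(
        backend="gloo",
        init_method="tcp://127.0.0.1:29500",
        world_size=1,
        rank=0,
    )

assert dist.is_initialized()
assert dist.get_world_size() == 1
dist.barrier()  # with a single rank, the barrier returns immediately
dist.destroy_process_group()
```

With this, code that branches on torch.distributed.is_initialized() can be exercised in an ordinary single-process test run.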