Conversation

@Quentin-Anthony commented on Jun 13, 2022

@awan-10 -- FYI

DeepSpeed no longer calls torch.distributed directly (deepspeedai/DeepSpeed#1985). This commit updates Megatron-DeepSpeed to adhere to the new deepspeed.comm interface.

@Quentin-Anthony (Author) commented

One thing I'm uncertain about is whether we should wrap calls in checks of whether args.deepspeed is set, as in https://github.com/microsoft/Megatron-DeepSpeed/blob/main/megatron/training.py#L114

E.g.

import deepspeed
import torch.distributed

if args.deepspeed:
    rank = deepspeed.comm.get_rank()
else:
    rank = torch.distributed.get_rank()

However, this would require that args be propagated to all files where comms are used, which may get a bit messy. Thoughts?
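One way to avoid propagating args would be a thin wrapper module that records the backend choice once at startup. A minimal sketch, assuming a hypothetical megatron/comm.py (all names here are illustrative, not existing code):

# hypothetical megatron/comm.py -- illustrative sketch only
import deepspeed
import torch.distributed

_USE_DEEPSPEED = False  # backend choice, recorded once at startup


def init(use_deepspeed):
    # Capture args.deepspeed here so later call sites never need args.
    global _USE_DEEPSPEED
    _USE_DEEPSPEED = use_deepspeed


def get_rank():
    # Dispatch to the backend selected in init().
    if _USE_DEEPSPEED:
        return deepspeed.comm.get_rank()
    return torch.distributed.get_rank()

With something like this, comm.init(args.deepspeed) would run once during initialization, and every other file would import the wrapper instead of receiving args.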
