Support distributed training with torch.nn.DataParallel() #850

@lxqpku

Description

🐝 Expected behavior

Support multi-GPU training.

How do we set multiple GPUs as the device and train the model on them?
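For reference, here is a minimal sketch of the standard nn.DataParallel pattern. This is not project code; the model, batch shapes, and optimizer are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the project's actual network.
model = nn.Linear(128, 10)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# DataParallel splits each input batch along dim 0 across the visible
# GPUs, replicates the model on each, and gathers outputs on device 0.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # or device_ids=[0, 1] to restrict

model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# One placeholder training step: inputs live on the primary device;
# DataParallel handles the per-GPU scatter/gather internally.
inputs = torch.randn(64, 128, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()
```

Note that the PyTorch docs recommend torch.nn.parallel.DistributedDataParallel over DataParallel even on a single machine, since DDP avoids the GIL bottleneck and the per-step model replication overhead.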

Metadata

Labels

core (Core avl functionalities and assets)
