src/blog/2020-09-10-pytorch-ignite.md (3 additions, 3 deletions)
@@ -698,7 +698,7 @@ PyTorch-Ignite provides an ensemble of metrics dedicated to many Deep Learning t
 
 - For classification : `Precision`, `Recall`, `Accuracy`, `ConfusionMatrix` and more!
 - For segmentation : `DiceCoefficient`, `IoU`, `mIOU` and more!
-- ~20 regression metrics, e.g. MSE, MAE, MedianAbsoluteError, etc
+- ~20 regression metrics, e.g. MSE, MAE, MedianAbsoluteError, etc
 - Metrics that store the entire output history per epoch
 - Possible to use with `scikit-learn` metrics, e.g. `EpochMetric`, `AveragePrecision`, `ROC_AUC`, etc
 - Easily composable to assemble a custom metric
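As context for the listed metrics (not part of the diff), a minimal sketch of attaching a few of them to an Ignite evaluator; `model` and `val_loader` are assumed to be defined elsewhere:

```python
from ignite.engine import create_supervised_evaluator
from ignite.metrics import Accuracy, Precision, Recall

# Attach a few of the listed classification metrics to an evaluator;
# `model` and `val_loader` are placeholders defined elsewhere.
evaluator = create_supervised_evaluator(
    model,
    metrics={"accuracy": Accuracy(), "precision": Precision(), "recall": Recall()},
)
state = evaluator.run(val_loader)
print(state.metrics)  # e.g. {'accuracy': 0.91, 'precision': ..., 'recall': ...}
```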
@@ -942,7 +942,7 @@ with idist.Parallel(backend=backend, **dist_configs) as parallel:
 ```
 
 2020-08-31 11:27:07,128 ignite.distributed.launcher.Parallel INFO: Initialized distributed launcher with backend: 'gloo'
-2020-08-31 11:27:07,128 ignite.distributed.launcher.Parallel INFO: - Parameters to spawn processes:
+2020-08-31 11:27:07,128 ignite.distributed.launcher.Parallel INFO: - Parameters to spawn processes:
     nproc_per_node: 2
     nnodes: 1
     node_rank: 0
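The log lines above come from the `idist.Parallel` snippet named in the hunk header; a minimal sketch of that pattern, reconstructed from the hunk header and the logged output (the `training` function body is a placeholder):

```python
import ignite.distributed as idist

def training(local_rank, config):
    # Runs once per process; `local_rank` identifies the process on this node.
    print(local_rank, ": run with config:", config, "- backend=", idist.backend())

# Spawn 2 processes on this node and initialize a 'gloo' process group.
backend = "gloo"
dist_configs = {"nproc_per_node": 2}
with idist.Parallel(backend=backend, **dist_configs) as parallel:
    parallel.run(training, {"c": 12345})
```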
@@ -951,7 +951,7 @@ with idist.Parallel(backend=backend, **dist_configs) as parallel:
 1 : run with config: {'c': 12345} - backend= gloo
 2020-08-31 11:27:09,959 ignite.distributed.launcher.Parallel INFO: End of run
 
-The above code with a single modification can run on a GPU, single-node multiple GPUs, single or multiple TPUs etc. It can be executed with the `torch.distributed.launch` tool or by Python and spawning the required number of processes. For more details, see [the documentation](https://pytorch.org/ignite/distributed.html).
+The above code, with a single modification, can run on a GPU, single-node multiple GPUs, single or multiple TPUs, etc. It can be executed with `torchrun` or from Python by spawning the required number of processes. For more details, see [the documentation](https://pytorch.org/ignite/distributed.html).
 
 In addition, methods like `auto_model()`, `auto_optim()` and `auto_dataloader()` help to transparently adapt the provided model, optimizer and data loaders to an existing configuration:
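A minimal sketch of those helpers (the model, optimizer and dataset below are toy placeholders):

```python
import torch
import ignite.distributed as idist

# Toy placeholders; the auto_* helpers adapt them to the current
# distributed configuration without changing the training code.
model = idist.auto_model(torch.nn.Linear(10, 2))   # moves to device, wraps in DDP if needed
optimizer = idist.auto_optim(torch.optim.SGD(model.parameters(), lr=0.01))
dataset = torch.utils.data.TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))
train_loader = idist.auto_dataloader(dataset, batch_size=16, shuffle=True)  # adds a DistributedSampler if needed
```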
@@ -186 +186 @@
-This context manager has the capability to either spawn `nproc_per_node` (passed as a script argument) child processes and initialize a processing group according to the provided backend or use tools like `torch.distributed.launch`, `slurm`, `horovodrun` by initializing the processing group given the `backend` argument only
+This context manager has the capability to either spawn `nproc_per_node` (passed as a script argument) child processes and initialize a processing group according to the provided backend or use tools like `torchrun`, `slurm`, `horovodrun` by initializing the processing group given the `backend` argument only
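The changed line describes two launch modes. A hedged sketch of the external-launcher mode, where only the `backend` is passed because the processes already exist (the `torchrun` command below is illustrative, not taken from the diff):

```python
import ignite.distributed as idist

def training(local_rank, config):
    print(local_rank, ": run with config:", config)

# External-launcher mode: processes are created beforehand by a tool such as
#   torchrun --nproc_per_node=2 main.py
# so idist.Parallel only initializes the process group for the given backend
# and does not spawn child processes itself (no nproc_per_node argument).
with idist.Parallel(backend="gloo") as parallel:
    parallel.run(training, {"c": 12345})
```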