
Commit ef0ac7d

Fix misspelled words (#2003)
Fixes #2001.

### Description

Prompt used in codex:

```
Please try to find 5 more documentation bugs in Jupyter notebooks
```

### Checks

- [x] Avoid including large-size files in the PR.
- [x] Clean up long text outputs from code cells in the notebook.
- [x] For security purposes, please check the contents and remove any sensitive info such as user names and private key.
- [x] Ensure (1) hyperlinks and markdown anchors are working (2) use relative paths for tutorial repo files (3) put figure and graphs in the `./figure` folder
- [x] Notebook runs automatically `./runner.sh -t <path to .ipynb file>`

Signed-off-by: Mingxin Zheng <[email protected]>
Parent: 5f12844

File tree

10 files changed (+11, -11 lines)


3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -455,7 +455,7 @@
    "source": [
     "## Create Swin UNETR model\n",
     "\n",
-    "In this scetion, we create Swin UNETR model for the 3-class brain tumor semantic segmentation. We use a feature size of 48. We also use gradient checkpointing (use_checkpoint) for more memory-efficient training. However, use_checkpoint for faster training if enough GPU memory is available. "
+    "In this section, we create Swin UNETR model for the 3-class brain tumor semantic segmentation. We use a feature size of 48. We also use gradient checkpointing (use_checkpoint) for more memory-efficient training. However, use_checkpoint for faster training if enough GPU memory is available. "
    ]
   },
   {
```
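For context, the cell this hunk edits builds the model along these lines. A minimal sketch using MONAI's `SwinUNETR`; the image size and channel counts below are assumptions for illustration, not values copied from the notebook:

```python
import torch
from monai.networks.nets import SwinUNETR

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Swin UNETR for 3-class brain tumor segmentation with feature size 48.
# use_checkpoint=True enables gradient checkpointing (less GPU memory, slower);
# set it to False for faster training when enough GPU memory is available.
model = SwinUNETR(
    img_size=(128, 128, 128),  # assumption: ROI size used for training crops
    in_channels=4,             # assumption: four MRI modalities per subject
    out_channels=3,            # the three tumor sub-region channels
    feature_size=48,
    use_checkpoint=True,
).to(device)
```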

3d_segmentation/swin_unetr_btcv_segmentation_3d.ipynb

Lines changed: 2 additions & 2 deletions
```diff
@@ -96,7 +96,7 @@
    "source": [
     "# Pre-trained Swin UNETR Encoder\n",
     "\n",
-    "We use weights from self-supervised pre-training of Swin UNETR encoder (3D Swin Tranformer) on a cohort of 5050 CT scans from publicly available datasets. The encoder is pre-trained using reconstructin, rotation prediction and contrastive learning pre-text tasks as shown below. For more details, please refer to [1] (CVPR paper) and see this [repository](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/Pretrain). \n",
+    "We use weights from self-supervised pre-training of Swin UNETR encoder (3D Swin Transformer) on a cohort of 5050 CT scans from publicly available datasets. The encoder is pre-trained using reconstructin, rotation prediction and contrastive learning pre-text tasks as shown below. For more details, please refer to [1] (CVPR paper) and see this [repository](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/Pretrain). \n",
     "\n",
     "Please download the pre-trained weights from this [link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) and place it in the root directory of this tutorial. \n",
     "\n",
@@ -452,7 +452,7 @@
    "source": [
     "### Initialize Swin UNETR encoder from self-supervised pre-trained weights\n",
     "\n",
-    "In this section, we intialize the Swin UNETR encoder from pre-trained weights. The weights can be downloaded using the wget command below, or by following [this link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) to GitHub. If training from scratch is desired, please skip this section."
+    "In this section, we initialize the Swin UNETR encoder from pre-trained weights. The weights can be downloaded using the wget command below, or by following [this link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) to GitHub. If training from scratch is desired, please skip this section."
    ]
   },
   {
```
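The surrounding cells load the checkpoint roughly as follows. A sketch assuming `model_swinvit.pt` sits in the tutorial root and that the installed MONAI version provides `SwinUNETR.load_from`; the constructor arguments are illustrative assumptions:

```python
import torch
from monai.networks.nets import SwinUNETR

# Build the segmentation model, then copy in the self-supervised
# pre-trained Swin ViT encoder weights downloaded above.
model = SwinUNETR(
    img_size=(96, 96, 96),  # assumption: patch size used in the tutorial
    in_channels=1,          # assumption: single-channel CT input
    out_channels=14,        # assumption: BTCV organ classes
    feature_size=48,
)
weight = torch.load("./model_swinvit.pt")  # fetched via the wget command below
model.load_from(weights=weight)            # initializes only the encoder layers
```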

auto3dseg/docs/gpu_opt.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -8,7 +8,7 @@ Sometimes the low GPU utilization is because that GPU capacities is not fully ut
 Our proposed solution is capable to automatically estimate hyper-parameters in model training configurations maximizing utilities of the available GPU capacities.
 The solution is leveraging hyper-parameter optimization algorithms to search for optimital hyper-parameters with any given GPU devices.
 
-The following hyper-paramters in model training configurations are optimized in the process.
+The following hyper-parameters in model training configurations are optimized in the process.
 
 1. **num_images_per_batch:** Batch size determines how many images are in each mini-batch and how many training iterations per epoch. Large batch size can reduce training time per epoch and increase GPU memory usage with decent CPU capacities for I/O;
 2. **num_sw_batch_size:** Batch size in sliding-window inference directly relates to how many patches are in one pass of model feedforward operation. Large batch size in sliding-window inference can reduce overall inference time and increase GPU memory usage;
```
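Since both knobs trade GPU memory for speed, a short illustration may help. The sketch below is hypothetical (toy network, toy volume, assumed ROI size) and only shows where `num_sw_batch_size` enters MONAI's `sliding_window_inference`:

```python
import torch
from monai.inferers import sliding_window_inference

# Values are illustrative assumptions, not Auto3DSeg output.
num_images_per_batch = 2  # mini-batch size per training iteration
num_sw_batch_size = 4     # patches per forward pass during sliding-window inference

model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)  # toy stand-in network
val_image = torch.rand(1, 1, 160, 160, 160)              # toy validation volume

# sw_batch_size trades GPU memory for inference speed, as described above.
val_pred = sliding_window_inference(
    inputs=val_image,
    roi_size=(96, 96, 96),  # assumption: patch size from the training config
    sw_batch_size=num_sw_batch_size,
    predictor=model,
)
```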

deployment/Triton/models/mednist_class/1/model.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -89,7 +89,7 @@ def initialize(self, args):
         """
         `initialize` is called only once when the model is being loaded.
         Implementing `initialize` function is optional. This function allows
-        the model to intialize any state associated with this model.
+        the model to initialize any state associated with this model.
         """
 
         # Pull model from google drive
```
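For readers unfamiliar with the Triton Python backend: `initialize` receives an `args` dict whose `model_config` entry is a JSON string. A minimal sketch; the output tensor name `OUTPUT0` is an assumption, and the Google Drive download mentioned in the comment above is omitted:

```python
import json

import triton_python_backend_utils as pb_utils  # provided by the Triton runtime


class TritonPythonModel:
    def initialize(self, args):
        """Called once when the model is loaded; set up any per-model state here."""
        # args["model_config"] is a JSON string describing the model configuration.
        self.model_config = json.loads(args["model_config"])
        # Example: look up the datatype of an output tensor named "OUTPUT0"
        # (the tensor name is an assumption for illustration).
        output_config = pb_utils.get_output_config_by_name(self.model_config, "OUTPUT0")
        self.output_dtype = pb_utils.triton_string_to_numpy(output_config["data_type"])
```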

deployment/Triton/models/monai_covid/1/model.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -79,7 +79,7 @@ def initialize(self, args):
         """
         `initialize` is called only once when the model is being loaded.
         Implementing `initialize` function is optional. This function allows
-        the model to intialize any state associated with this model.
+        the model to initialize any state associated with this model.
         """
 
         # Pull model from google drive
```

generation/anomaly_detection/anomaly_detection_with_transformers.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -1313,7 +1313,7 @@
    "# loop over tokens\n",
    "for i in range(1, latent.shape[1]):\n",
    "    if mask_flattened[i - 1]:\n",
-   "        # if token is low probability, replace with tranformer's most likely token\n",
+   "        # if token is low probability, replace with Transformer's most likely token\n",
    "        logits = transformer_model(latent_healed[:, :i])\n",
    "        probs = F.softmax(logits, dim=-1)\n",
    "        # don't sample beginning of sequence token\n",
```

generation/maisi/scripts/sample.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -597,7 +597,7 @@ def __init__(
             label_dict = json.load(f)
         self.all_anatomy_size_condtions_json = all_anatomy_size_condtions_json
 
-        # intialize variables
+        # initialize variables
         self.body_region = body_region
         self.anatomy_list = [label_dict[organ] for organ in anatomy_list]
         self.all_mask_files_json = all_mask_files_json
```

monailabel/monailabel_HelloWorld_radiology_3dslicer.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -19,7 +19,7 @@
    "\n",
    "***The Active Learning Process with MONAI Label***\n",
    "\n",
-   "In this notebook, we provide a hello world example of MONAI Label use case. Using Spleen segmentation in Radiology app as the demonstration, 3D Slicer as the client viewer, we show how MONAI Label workflow serves as interacitve AI-Assisted tool for labeling CT scans. \n",
+   "In this notebook, we provide a hello world example of MONAI Label use case. Using Spleen segmentation in Radiology app as the demonstration, 3D Slicer as the client viewer, we show how MONAI Label workflow serves as interactive AI-Assisted tool for labeling CT scans. \n",
    "\n",
    "![workflow](./figures/monailabel_radiology_3dslicer/teaser_img.png)\n",
    "\n",
```

multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -28,7 +28,7 @@
    "An example of images and corresponding reports in Open-I dataset is presented as follows [2]:\n",
    "![image](../../figures/openi_sample.png)\n",
    "\n",
-   "In this tutorial, we use the TransCheX model with 2 layers for each of vision, language mixed modality encoders respectively. As an input to the TransCheX, we use the patient **report** and corresponding **chest X-ray image**. The image itself will be divided into non-overlapping patches with a specified patch resolution and projected into an embedding space. Similarly the reports are tokenized and projected into their respective embedding space. The language and vision encoders seperately encode their respective features from the projected embeddings in each modality. Furthmore, the output of vision and language encoders are fed into a mixed modality encoder which extraxts mutual information. The output of the mixed modality encoder is then utilized for the classification application. \n",
+   "In this tutorial, we use the TransCheX model with 2 layers for each of vision, language mixed modality encoders respectively. As an input to the TransCheX, we use the patient **report** and corresponding **chest X-ray image**. The image itself will be divided into non-overlapping patches with a specified patch resolution and projected into an embedding space. Similarly the reports are tokenized and projected into their respective embedding space. The language and vision encoders seperately encode their respective features from the projected embeddings in each modality. Furthmore, the output of vision and language encoders are fed into a mixed modality encoder which extracts mutual information. The output of the mixed modality encoder is then utilized for the classification application. \n",
    "\n",
    "[1] : \"Hatamizadeh et al.,TransCheX: Self-Supervised Pretraining of Vision-Language Transformers for Chest X-ray Analysis\"\n",
    "\n",
```

self_supervised_pretraining/vit_unetr_ssl/ssl_train.ipynb

Lines changed: 1 addition & 1 deletion
```diff
@@ -260,7 +260,7 @@
    "\n",
    "model = model.to(device)\n",
    "\n",
-   "# Define Hyper-paramters for training loop\n",
+   "# Define Hyper-parameters for training loop\n",
    "max_epochs = 500\n",
    "val_interval = 2\n",
    "batch_size = 4\n",
```
