Commit 2cd9c5b

Update/readme (#284)
1 parent bab5cce commit 2cd9c5b

File tree

1 file changed (+82, -41 lines)


README.md

Lines changed: 82 additions & 41 deletions
@@ -11,11 +11,11 @@ Segmentation based on [PyTorch](https://pytorch.org/).**
 The main features of this library are:
 
  - High-level API (just two lines to create a neural network)
- - 7 model architectures for binary and multi-class segmentation (including the legendary Unet)
+ - 8 model architectures for binary and multi-class segmentation (including the legendary Unet)
  - 57 available encoders for each architecture
  - All encoders have pre-trained weights for faster and better convergence
 
-### Table of contents
+### 📋 Table of contents
 1. [Quick start](#start)
 2. [Examples](#examples)
 3. [Models](#models)
@@ -31,16 +31,22 @@ The main features of this library are:
 8. [Citing](#citing)
 9. [License](#license)
 
-### Quick start <a name="start"></a>
-Since the library is built on the PyTorch framework, the created segmentation model is just a PyTorch nn.Module, which can be created as easily as:
-```python
-import segmentation_models_pytorch as smp
+### ⏳ Quick start <a name="start"></a>
 
-model = smp.Unet()
-```
-Depending on the task, you can change the network architecture by choosing backbones with fewer or more parameters and use pretrained weights to initialize it:
+#### 1. Create your first Segmentation model with SMP
+
+A segmentation model is just a PyTorch nn.Module, which can be created as easily as:
 
 ```python
-model = smp.Unet('resnet34', encoder_weights='imagenet')
+import segmentation_models_pytorch as smp
+
+model = smp.Unet(
+    encoder_name="resnet34",        # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
+    encoder_weights="imagenet",     # use `imagenet` pretrained weights for encoder initialization
+    in_channels=1,                  # model input channels (1 for grayscale images, 3 for RGB, etc.)
+    classes=3,                      # model output channels (number of classes in your dataset)
+)
 ```
+- see the [table](#architectures) of available model architectures
+- see the [table](#encoders) of available encoders and their corresponding weights
 
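For readers skimming the diff, a minimal smoke-test sketch of the model configured in the new snippet above; the batch shape and image size are illustrative assumptions (spatial dims are kept divisible by 32, as the default encoder depth requires):

```python
import torch
import segmentation_models_pytorch as smp

# Build the model exactly as in the README snippet above.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=1,
    classes=3,
)

# Hypothetical batch of two 1-channel 256x256 images (NCHW layout).
images = torch.randn(2, 1, 256, 256)

with torch.no_grad():
    masks = model(images)

print(masks.shape)  # torch.Size([2, 3, 256, 256]) - one logit map per class
```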

@@ -47,20 +53,20 @@
-Change number of output classes in the model:
+#### 2. Configure data preprocessing
 
-```python
-model = smp.Unet('resnet34', classes=3, activation='softmax')
-```
+All encoders have pretrained weights. Preparing your data the same way as during weights pretraining may give you better results (a higher metric score and faster convergence), but it is relevant only for 1-, 2-, and 3-channel images, and is **not necessary** if you train the whole model rather than only the decoder.
 
-All models have pretrained encoders, so you have to prepare your data the same way as during weights pretraining:
 ```python
 from segmentation_models_pytorch.encoders import get_preprocessing_fn
 
 preprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')
 ```
-### Examples <a name="examples"></a>
+
+Congratulations! You are done! Now you can train your model with your favorite framework!
+
+### 💡 Examples <a name="examples"></a>
 - Training model for cars segmentation on CamVid dataset [here](https://github.com/qubvel/segmentation_models.pytorch/blob/master/examples/cars%20segmentation%20(camvid).ipynb).
-- Training SMP model with [Catalyst](https://github.com/catalyst-team/catalyst) (high-level framework for PyTorch), [Ttach](https://github.com/qubvel/ttach) (TTA library for PyTorch) and [Albumentations](https://github.com/albu/albumentations) (fast image augmentation library) - [here](https://github.com/catalyst-team/catalyst/blob/master/examples/notebooks/segmentation-tutorial.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/segmentation-tutorial.ipynb)
+- Training SMP model with [Catalyst](https://github.com/catalyst-team/catalyst) (high-level framework for PyTorch), [TTAch](https://github.com/qubvel/ttach) (TTA library for PyTorch) and [Albumentations](https://github.com/albu/albumentations) (fast image augmentation library) - [here](https://github.com/catalyst-team/catalyst/blob/master/examples/notebooks/segmentation-tutorial.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/segmentation-tutorial.ipynb)
 
-### Models <a name="models"></a>
+### 📦 Models <a name="models"></a>
 
 #### Architectures <a name="architectures"></a>
 - [Unet](https://arxiv.org/abs/1505.04597) and [Unet++](https://arxiv.org/pdf/1807.10165.pdf)
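A sketch of how the preprocessing function above might feed a bare-bones PyTorch training step; the random arrays, optimizer, and loss are placeholder assumptions, not part of the README:

```python
import numpy as np
import torch
import segmentation_models_pytorch as smp
from segmentation_models_pytorch.encoders import get_preprocessing_fn

model = smp.Unet(encoder_name="resnet18", encoder_weights="imagenet", classes=3)
preprocess_input = get_preprocessing_fn("resnet18", pretrained="imagenet")

# Placeholder data: four RGB uint8 images (HWC) and integer class masks.
images = np.random.randint(0, 255, (4, 256, 256, 3), dtype=np.uint8)
masks = torch.randint(0, 3, (4, 256, 256))

# Normalize with the encoder's pretraining statistics, then go NHWC -> NCHW.
x = np.stack([preprocess_input(img) for img in images])
x = torch.from_numpy(x).permute(0, 3, 1, 2).float()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

optimizer.zero_grad()
logits = model(x)              # (4, 3, 256, 256) raw class scores
loss = loss_fn(logits, masks)
loss.backward()
optimizer.step()
print(float(loss))
```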
@@ -72,17 +78,20 @@ preprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')
 
 #### Encoders <a name="encoders"></a>
 
+<details>
+<summary>Table with ALL available encoders (click to expand)</summary>
+
 |Encoder |Weights |Params, M |
 |--------------------------------|:------------------------------:|:------------------------------:|
-|resnet18 |imagenet<br>ssl*<br>swsl* |11M |
+|resnet18 |imagenet / ssl / swsl |11M |
 |resnet34 |imagenet |21M |
-|resnet50 |imagenet<br>ssl*<br>swsl* |23M |
+|resnet50 |imagenet / ssl / swsl |23M |
 |resnet101 |imagenet |42M |
 |resnet152 |imagenet |58M |
-|resnext50_32x4d |imagenet<br>ssl*<br>swsl* |22M |
-|resnext101_32x4d |ssl<br>swsl |42M |
-|resnext101_32x8d |imagenet<br>instagram<br>ssl*<br>swsl*|86M |
-|resnext101_32x16d |instagram<br>ssl*<br>swsl* |191M |
+|resnext50_32x4d |imagenet / ssl / swsl |22M |
+|resnext101_32x4d |ssl / swsl |42M |
+|resnext101_32x8d |imagenet / instagram / ssl / swsl|86M |
+|resnext101_32x16d |instagram / ssl / swsl |191M |
 |resnext101_32x32d |instagram |466M |
 |resnext101_32x48d |instagram |826M |
 |dpn68 |imagenet |11M |
@@ -109,8 +118,8 @@ preprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')
 |densenet169 |imagenet |12M |
 |densenet201 |imagenet |18M |
 |densenet161 |imagenet |26M |
-|inceptionresnetv2 |imagenet<br>imagenet+background |54M |
-|inceptionv4 |imagenet<br>imagenet+background |41M |
+|inceptionresnetv2 |imagenet / imagenet+background |54M |
+|inceptionv4 |imagenet / imagenet+background |41M |
 |efficientnet-b0 |imagenet |4M |
 |efficientnet-b1 |imagenet |6M |
 |efficientnet-b2 |imagenet |7M |
@@ -121,20 +130,52 @@ preprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')
 |efficientnet-b7 |imagenet |63M |
 |mobilenet_v2 |imagenet |2M |
 |xception |imagenet |22M |
-|timm-efficientnet-b0 |imagenet<br>advprop<br>noisy-student|4M |
-|timm-efficientnet-b1 |imagenet<br>advprop<br>noisy-student|6M |
-|timm-efficientnet-b2 |imagenet<br>advprop<br>noisy-student|7M |
-|timm-efficientnet-b3 |imagenet<br>advprop<br>noisy-student|10M |
-|timm-efficientnet-b4 |imagenet<br>advprop<br>noisy-student|17M |
-|timm-efficientnet-b5 |imagenet<br>advprop<br>noisy-student|28M |
-|timm-efficientnet-b6 |imagenet<br>advprop<br>noisy-student|40M |
-|timm-efficientnet-b7 |imagenet<br>advprop<br>noisy-student|63M |
-|timm-efficientnet-b8 |imagenet<br>advprop |84M |
+|timm-efficientnet-b0 |imagenet / advprop / noisy-student|4M |
+|timm-efficientnet-b1 |imagenet / advprop / noisy-student|6M |
+|timm-efficientnet-b2 |imagenet / advprop / noisy-student|7M |
+|timm-efficientnet-b3 |imagenet / advprop / noisy-student|10M |
+|timm-efficientnet-b4 |imagenet / advprop / noisy-student|17M |
+|timm-efficientnet-b5 |imagenet / advprop / noisy-student|28M |
+|timm-efficientnet-b6 |imagenet / advprop / noisy-student|40M |
+|timm-efficientnet-b7 |imagenet / advprop / noisy-student|63M |
+|timm-efficientnet-b8 |imagenet / advprop |84M |
 |timm-efficientnet-l2 |noisy-student |474M |
 
 \* `ssl`, `swsl` - semi-supervised and weakly-supervised learning on ImageNet ([repo](https://github.com/facebookresearch/semi-supervised-ImageNet1K-models)).
 
-### Models API <a name="api"></a>
+</details>
+
+The most commonly used encoders:
+
+|Encoder |Weights |Params, M |
+|--------------------------------|:------------------------------:|:------------------------------:|
+|resnet18 |imagenet / ssl / swsl |11M |
+|resnet34 |imagenet |21M |
+|resnet50 |imagenet / ssl / swsl |23M |
+|resnet101 |imagenet |42M |
+|resnext50_32x4d |imagenet / ssl / swsl |22M |
+|resnext101_32x4d |ssl / swsl |42M |
+|resnext101_32x8d |imagenet / instagram / ssl / swsl|86M |
+|senet154 |imagenet |113M |
+|se_resnext50_32x4d |imagenet |25M |
+|se_resnext101_32x4d |imagenet |46M |
+|densenet121 |imagenet |6M |
+|densenet169 |imagenet |12M |
+|densenet201 |imagenet |18M |
+|inceptionresnetv2 |imagenet / imagenet+background |54M |
+|inceptionv4 |imagenet / imagenet+background |41M |
+|mobilenet_v2 |imagenet |2M |
+|timm-efficientnet-b0 |imagenet / advprop / noisy-student|4M |
+|timm-efficientnet-b1 |imagenet / advprop / noisy-student|6M |
+|timm-efficientnet-b2 |imagenet / advprop / noisy-student|7M |
+|timm-efficientnet-b3 |imagenet / advprop / noisy-student|10M |
+|timm-efficientnet-b4 |imagenet / advprop / noisy-student|17M |
+|timm-efficientnet-b5 |imagenet / advprop / noisy-student|28M |
+|timm-efficientnet-b6 |imagenet / advprop / noisy-student|40M |
+|timm-efficientnet-b7 |imagenet / advprop / noisy-student|63M |
+
+
+### 🔁 Models API <a name="api"></a>
 
 - `model.encoder` - pretrained backbone to extract features of different spatial resolution
 - `model.decoder` - depends on model architecture (`Unet`/`Linknet`/`PSPNet`/`FPN`)
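A small inspection sketch of the attributes listed in this hunk; treat the list-of-feature-maps return type of `model.encoder` as an assumption that may vary between library versions:

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet("resnet34", encoder_weights="imagenet")  # classes defaults to 1
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    # model.encoder: the pretrained backbone; here assumed to return feature
    # maps at progressively coarser spatial resolutions.
    for feature in model.encoder(x):
        print(feature.shape)

    # Calling the model runs encoder, decoder, and segmentation head together.
    mask = model(x)

print(mask.shape)  # torch.Size([1, 1, 224, 224])
```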
@@ -176,7 +217,7 @@ model = smp.Unet('resnet34', encoder_depth=4)
 ```
 
 
-### Installation <a name="installation"></a>
+### 🛠 Installation <a name="installation"></a>
 PyPI version:
 ```bash
 $ pip install segmentation-models-pytorch
@@ -186,12 +227,12 @@ Latest version from source:
 $ pip install git+https://github.com/qubvel/segmentation_models.pytorch
 ```
 
-### Competitions won with the library
+### 🏆 Competitions won with the library
 
 The `Segmentation Models` package is widely used in image segmentation competitions.
 [Here](https://github.com/qubvel/segmentation_models.pytorch/blob/master/HALLOFFAME.md) you can find competitions, names of the winners and links to their solutions.
 
-### Contributing
+### 🤝 Contributing
 
 ##### Run tests
 ```bash
@@ -202,7 +243,7 @@ $ docker build -f docker/Dockerfile.dev -t smp:dev . && docker run --rm smp:dev
 $ docker build -f docker/Dockerfile.dev -t smp:dev . && docker run --rm smp:dev python misc/generate_table.py
 ```
 
-### Citing
+### 📝 Citing
 ```
 @misc{Yakubovskiy:2019,
   Author = {Pavel Yakubovskiy},
@@ -214,5 +255,5 @@ $ docker build -f docker/Dockerfile.dev -t smp:dev . && docker run --rm smp:dev
 }
 ```
 
-### License <a name="license"></a>
+### 🛡️ License <a name="license"></a>
 Project is distributed under [MIT License](https://github.com/qubvel/segmentation_models.pytorch/blob/master/LICENSE)
