**Python library with Neural Networks for Image Segmentation based on [PyTorch](https://pytorch.org/).**
The main features of this library are:
- High level API (just two lines to create a neural network)
- 8 model architectures for binary and multi-class segmentation (including the legendary Unet)
- 57 available encoders for each architecture
- All encoders have pre-trained weights for faster and better convergence
### 📋 Table of contents
1. [Quick start](#start)
2. [Examples](#examples)
3. [Models](#models)
8. [Citing](#citing)
9. [License](#license)
### ⏳ Quick start <a name="start"></a>

#### 1. Create your first segmentation model with SMP

Since the library is built on top of PyTorch, the created segmentation model is just a PyTorch nn.Module, which can be created as easily as:
```python
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",      # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
    encoder_weights="imagenet",   # use `imagenet` pre-trained weights for encoder initialization
    in_channels=1,                # model input channels (1 for grayscale images, 3 for RGB, etc.)
    classes=3,                    # model output channels (number of classes in your dataset)
)
```
- see [table](#architectires) with available model architectures
- see [table](#encoders) with available encoders and their corresponding weights
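As a quick sanity check (a minimal sketch; the batch size and the 256×256 input resolution are only illustrative), you can pass a dummy batch through the freshly created model and verify that the output mask has one channel per class:

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet("resnet34", encoder_weights="imagenet", in_channels=1, classes=3)

# dummy batch of 4 grayscale images; the spatial size should be divisible by 32,
# because the encoder downsamples the input 5 times
images = torch.randn(4, 1, 256, 256)

with torch.no_grad():
    masks = model(images)

print(masks.shape)  # expected: torch.Size([4, 3, 256, 256])
```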
#### 2. Configure data preprocessing

All encoders have pre-trained weights. Preparing your data the same way as during weights pre-training may give you better results (a higher metric score and faster convergence), but it is relevant only for 1-, 2-, or 3-channel images and is **not necessary** if you train the whole model rather than only the decoder.
```python
from segmentation_models_pytorch.encoders import get_preprocessing_fn

preprocess_input = get_preprocessing_fn('resnet34', pretrained='imagenet')
```
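For example (a minimal sketch; the random array is only a stand-in for a real image loaded from disk), the returned function takes an `H x W x C` array and normalizes it with the encoder's pre-training statistics:

```python
import numpy as np
from segmentation_models_pytorch.encoders import get_preprocessing_fn

preprocess_input = get_preprocessing_fn('resnet34', pretrained='imagenet')

# stand-in for a real RGB image loaded with e.g. PIL or OpenCV (H x W x C, uint8)
image = np.random.randint(0, 255, size=(384, 480, 3), dtype=np.uint8)

x = preprocess_input(image)     # scale and normalize with the encoder's ImageNet mean/std
x = np.transpose(x, (2, 0, 1))  # HWC -> CHW, the layout PyTorch models expect
```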
Congratulations! You are done! Now you can train your model with your favorite framework!
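If that framework is plain PyTorch, the training loop is the usual one (a minimal sketch; the dummy `train_loader`, the batch shapes, and the cross-entropy objective are only illustrative stand-ins for your own data pipeline and loss):

```python
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp

model = smp.Unet("resnet34", encoder_weights="imagenet", in_channels=1, classes=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()  # multi-class masks with integer class labels

# stand-in for a real DataLoader yielding (image, mask) batches
train_loader = [(torch.randn(4, 1, 256, 256), torch.randint(0, 3, (4, 256, 256)))]

model.train()
for images, masks in train_loader:
    optimizer.zero_grad()
    logits = model(images)           # (N, classes, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
```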
### 💡 Examples <a name="examples"></a>
- Training a model for car segmentation on the CamVid dataset [here](https://github.com/qubvel/segmentation_models.pytorch/blob/master/examples/cars%20segmentation%20(camvid).ipynb).
- Training an SMP model with [Catalyst](https://github.com/catalyst-team/catalyst) (a high-level framework for PyTorch), [TTAch](https://github.com/qubvel/ttach) (a TTA library for PyTorch), and [Albumentations](https://github.com/albu/albumentations) (a fast image augmentation library) - [here](https://github.com/catalyst-team/catalyst/blob/master/examples/notebooks/segmentation-tutorial.ipynb) (also available on [Google Colab](https://colab.research.google.com/github/catalyst-team/catalyst/blob/master/examples/notebooks/segmentation-tutorial.ipynb)).
### 📦 Models <a name="models"></a>
#### Architectures <a name="architectires"></a>
- [Unet](https://arxiv.org/abs/1505.04597) and [Unet++](https://arxiv.org/pdf/1807.10165.pdf)
\*`ssl`, `wsl` - semi-supervised and weakly-supervised learning on ImageNet ([repo](https://github.com/facebookresearch/semi-supervised-ImageNet1K-models)).
The `Segmentation Models` package is widely used in image segmentation competitions.
[Here](https://github.com/qubvel/segmentation_models.pytorch/blob/master/HALLOFFAME.md) you can find competitions, names of the winners and links to their solutions.