
Commit 9e67c6c

Jingxu10/111 tutorials (#613)
* updated installation guide for 1.11
* add known issues
* Add load_state_dict + ipex.optimize() to optimize docstring
* update doc for 1.11 release
* update doc for 1.11 release
* update docs for 1.11 release
* update docs for 1.11 release
1 parent b087cec commit 9e67c6c

File tree

5 files changed (+61, -22 lines)


docs/tutorials/features.rst

Lines changed: 2 additions & 0 deletions

@@ -117,6 +117,8 @@ Intel® Extension for PyTorch* has built-in quantization recipes to deliver good

 Check more detailed information for `INT8 <features/int8.html>`_.

+oneDNN provides an evaluation feature called `oneDNN Graph Compiler <https://github.com/oneapi-src/oneDNN/tree/dev-graph-preview4/doc#onednn-graph-compiler>`_. Please refer to the `oneDNN build instructions <https://github.com/oneapi-src/oneDNN/blob/dev-graph-preview4/doc/build/build_options.md#build-graph-compiler>`_ to try this feature.
+
 .. toctree::
    :hidden:
    :maxdepth: 1

docs/tutorials/installation.md

Lines changed: 40 additions & 20 deletions

@@ -5,16 +5,17 @@ Installation Guide

 |Category|Content|
 |--|--|
-|Compiler|Verified with GCC 9|
+|Compiler|GCC 9 is recommended|
 |Operating System|CentOS 7, RHEL 8, Ubuntu newer than 18.04|
-|Python|3.6, 3.7, 3.8, 3.9|
+|Python|See the prebuilt wheel files availability matrix below|

 ## Install PyTorch

 You need to make sure PyTorch is installed in order to get the extension working properly. For each PyTorch release, we have a corresponding release of the extension. Here are the PyTorch versions that we support and the mapping relationship:

 |PyTorch Version|Extension Version|
 |--|--|
+|[v1.11.\*](https://github.com/pytorch/pytorch/tree/v1.11.0 "v1.11.0")|[v1.11.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.11.0)|
 |[v1.10.\*](https://github.com/pytorch/pytorch/tree/v1.10.0 "v1.10.0")|[v1.10.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.10.100)|
 |[v1.9.0](https://github.com/pytorch/pytorch/tree/v1.9.0 "v1.9.0")|[v1.9.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.9.0)|
 |[v1.8.0](https://github.com/pytorch/pytorch/tree/v1.8.0 "v1.8.0")|[v1.8.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.8.0)|

@@ -24,7 +25,7 @@ You need to make sure PyTorch is installed in order to get the extension working

 |[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|[v1.0.1](https://github.com/intel/intel-extension-for-pytorch/tree/v1.0.1)|
 |[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|[v1.0.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.0.0)|

-Here is an example showing how to install PyTorch (1.10.0). For more details, please refer to [pytorch.org](https://pytorch.org/get-started/locally/)
+Here is an example showing how to install PyTorch. For more details, please refer to [pytorch.org](https://pytorch.org/get-started/locally/).

 ---

@@ -38,48 +39,67 @@ From 1.8.0, compiling PyTorch from source is not required. If you still want to

 ## Install via wheel file

-Prebuilt wheel files are available starting from 1.8.0 release. We recommend you to install the latest version with the following commands:
+Prebuilt wheel files availability matrix for Python versions:
+
+| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 |
+| :--: | :--: | :--: | :--: | :--: | :--: |
+| 1.11.0 | | ✔️ | ✔️ | ✔️ | ✔️ |
+| 1.10.100 | ✔️ | ✔️ | ✔️ | ✔️ | |
+| 1.10.0 | ✔️ | ✔️ | ✔️ | ✔️ | |
+| 1.9.0 | ✔️ | ✔️ | ✔️ | ✔️ | |
+| 1.8.0 | | ✔️ | | | |
+
+Starting from 1.11.0, you can use the normal pip command to install the package.

 ```
-python -m pip install intel_extension_for_pytorch==1.10.100 -f https://software.intel.com/ipex-whl-stable
-python -m pip install psutil
+python -m pip install intel_extension_for_pytorch
 ```

-**Note:** Wheel files availability for Python versions
+Alternatively, you can install the latest version with the following command:

-| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 |
-| :--: | :--: | :--: | :--: | :--: |
-| 1.10.100 | ✔️ | ✔️ | ✔️ | ✔️ |
-| 1.10.0 | ✔️ | ✔️ | ✔️ | ✔️ |
-| 1.9.0 | ✔️ | ✔️ | ✔️ | ✔️ |
-| 1.8.0 | | ✔️ | | |
-
-**Note:** The wheel files released are compiled with AVX-512 instruction set support only. They cannot be running on hardware platforms that don't support AVX-512 instruction set. Please compile from source with AVX2 support in this case.
+```
+python -m pip install intel_extension_for_pytorch -f https://software.intel.com/ipex-whl-stable
+```

 **Note:** For versions prior to 1.10.0, please use the package name `torch_ipex`, rather than `intel_extension_for_pytorch`.

+**Note:** To install a package with a specific version, use the standard pip syntax.
+
+```
+python -m pip install <package_name>==<version_name> -f https://software.intel.com/ipex-whl-stable
+```
+
 ## Install via source compilation

 ```bash
 git clone --recursive https://github.com/intel/intel-extension-for-pytorch
 cd intel-extension-for-pytorch
-git checkout v1.10.100
+git checkout v1.11.0

 # if you are updating an existing checkout
 git submodule sync
 git submodule update --init --recursive

-# run setup.py to compile and install the binaries
-# if you need to compile from source with AVX2 support, please uncomment the following line.
-# export AVX2=1
 python setup.py install
 ```

 ## Install C++ SDK

 |Version|Pre-cxx11 ABI|cxx11 ABI|
 |--|--|--|
+| 1.11.0 | [libintel-ext-pt-shared-with-deps-1.11.0+cpu.run](http://) | [libintel-ext-pt-cxx11-abi-shared-with-deps-1.11.0+cpu.run](http://) |
 | 1.10.100 | [libtorch-shared-with-deps-1.10.0%2Bcpu-intel-ext-pt-cpu-1.10.100.zip](http://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/wheels/v1.10/libtorch-shared-with-deps-1.10.0%2Bcpu-intel-ext-pt-cpu-1.10.100.zip) | [libtorch-cxx11-abi-shared-with-deps-1.10.0%2Bcpu-intel-ext-pt-cpu-1.10.100.zip](http://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/wheels/v1.10/libtorch-cxx11-abi-shared-with-deps-1.10.0%2Bcpu-intel-ext-pt-cpu-1.10.100.zip) |
 | 1.10.0 | [intel-ext-pt-cpu-libtorch-shared-with-deps-1.10.0+cpu.zip](https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/wheels/v1.10/intel-ext-pt-cpu-libtorch-shared-with-deps-1.10.0%2Bcpu.zip) | [intel-ext-pt-cpu-libtorch-cxx11-abi-shared-with-deps-1.10.0+cpu.zip](https://intel-optimized-pytorch.s3.cn-north-1.amazonaws.com.cn/wheels/v1.10/intel-ext-pt-cpu-libtorch-cxx11-abi-shared-with-deps-1.10.0%2Bcpu.zip) |

-**Usage:** Donwload one zip file above according to your scenario, unzip it and follow the [C++ example](./examples.html#c).
+**Usage:** For version 1.11.0 and newer, download one run file above according to your scenario, run the following command to install it, and follow the [C++ example](./examples.html#c).
+
+```
+bash <libintel-ext-pt-name>.run install <libtorch_path>
+```
+
+You can get the full usage help message by running the run file alone, as in the following command.
+
+```
+bash <libintel-ext-pt-name>.run
+```
+
+**Usage:** For versions prior to 1.11.0, download one zip file above according to your scenario, unzip it, and follow the [C++ example](./examples.html#c).
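Once the wheel is installed by any of the methods above, a quick import check confirms that the extension loads and that its version matches the PyTorch build. This is a minimal sketch, not an official step from the guide; it assumes only the `torch` and `intel_extension_for_pytorch` packages installed above.

```python
# Post-install sanity check (a sketch): verifies both packages import
# and prints their versions for comparison against the mapping table.
import torch
import intel_extension_for_pytorch as ipex

print(torch.__version__)  # expect a 1.11.* PyTorch build for extension 1.11.0
print(ipex.__version__)   # expect 1.11.0 if the latest wheel was installed
```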

docs/tutorials/performance.md

Lines changed: 1 addition & 1 deletion

@@ -68,7 +68,7 @@ This page shows performance boost with Intel® Extension for PyTorch\* on severa

       <td style="text-align: center; vertical-align: middle" scope="col">Input shape<br />[3, 224, 224]</td>
     </tr>
     <tr>
-      <td style="text-align: center; vertical-align: middle" scope="col">Fast R-CNN ResNet50 FPN</td>
+      <td style="text-align: center; vertical-align: middle" scope="col">Faster R-CNN ResNet50 FPN</td>
       <td style="text-align: center; vertical-align: middle" scope="col">Float32</td>
       <td style="text-align: center; vertical-align: middle" scope="col">80</td>
       <td style="text-align: center; vertical-align: middle" scope="col">1.71x</td>

docs/tutorials/performance_tuning/known_issues.md

Lines changed: 11 additions & 0 deletions

@@ -1,6 +1,17 @@

 Known Issues
 ============

+- BFloat16 is currently only supported natively on platforms with the following instruction sets. Support will be expanded gradually to more platforms in future releases.
+
+  | Instruction Set | Description |
+  | --- | --- |
+  | AVX512\_CORE | Intel AVX-512 with AVX512BW, AVX512VL, and AVX512DQ extensions |
+  | AVX512\_CORE\_VNNI | Intel AVX-512 with Intel DL Boost |
+  | AVX512\_CORE\_BF16 | Intel AVX-512 with Intel DL Boost and bfloat16 support |
+  | AVX512\_CORE\_AMX | Intel AVX-512 with Intel DL Boost, bfloat16 support, and Intel Advanced Matrix Extensions (Intel AMX) with 8-bit integer and bfloat16 support |
+
+- INT8 performance of EfficientNet and DenseNet with Intel® Extension for PyTorch\* is slower than that of FP32.
+
 - `omp_set_num_threads` function failed to change OpenMP threads number of oneDNN operators if it was set before.

   `omp_set_num_threads` function is provided in Intel® Extension for PyTorch\* to change number of threads used with openmp. However, it failed to change number of OpenMP threads if it was set before.
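As context for the BFloat16 limitation added above, the instruction-set tiers in the table can be probed from userspace. The sketch below is illustrative only: it maps Linux `/proc/cpuinfo` flags onto the tier names from the table, using the kernel's flag naming; the tier mapping is this sketch's own construction, not an API provided by Intel® Extension for PyTorch\*.

```python
# A Linux-only sketch: report the highest ISA tier from the table above
# that the host CPU advertises in /proc/cpuinfo. Flag names follow the
# Linux kernel; tier labels mirror the table and are not an IPEX API.
def detect_isa_tier(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break
    if not flags:
        return None
    if {"amx_tile", "amx_bf16", "amx_int8"} <= flags:
        return "AVX512_CORE_AMX"
    if "avx512_bf16" in flags:
        return "AVX512_CORE_BF16"
    if "avx512_vnni" in flags:
        return "AVX512_CORE_VNNI"
    if {"avx512bw", "avx512vl", "avx512dq"} <= flags:
        return "AVX512_CORE"
    return None

print(detect_isa_tier())  # e.g. "AVX512_CORE_BF16" on bfloat16-capable Xeons
```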

intel_extension_for_pytorch/frontend.py

Lines changed: 7 additions & 1 deletion

@@ -172,7 +172,12 @@ def optimize(

     .. warning::

-        Please invoke ``optimize`` function before invoking DDP in distributed
+        Please invoke ``optimize`` function AFTER loading weights to model via
+        ``model.load_state_dict(torch.load(PATH))``.
+
+    .. warning::
+
+        Please invoke ``optimize`` function BEFORE invoking DDP in distributed
         training scenario.

     The ``optimize`` function deepcopys the original model. If DDP is invoked

@@ -185,6 +190,7 @@ def optimize(

         >>> # bfloat16 inference case.
         >>> model = ...
+        >>> model.load_state_dict(torch.load(PATH))
         >>> model.eval()
         >>> optimized_model = ipex.optimize(model, dtype=torch.bfloat16)
         >>> # running evaluation step.
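Taken together, the two warnings and the updated docstring example pin down one calling order: load weights, switch the model mode, call `ipex.optimize`, and only then wrap with DDP. The following is a minimal sketch of that order, assuming a hypothetical checkpoint path and a torchvision ResNet-50 as a stand-in for the docstring's `model = ...`.

```python
# Sketch of the call order the docstring prescribes; checkpoint path
# and model choice are illustrative assumptions.
import torch
import intel_extension_for_pytorch as ipex
import torchvision.models as models

PATH = "checkpoint.pt"  # hypothetical checkpoint path

model = models.resnet50()
model.load_state_dict(torch.load(PATH))  # 1. load weights BEFORE optimize
model.eval()
optimized_model = ipex.optimize(model, dtype=torch.bfloat16)  # 2. optimize

# 3. In a distributed-training scenario, DDP would wrap the model returned
#    by optimize (which deep-copies its input), after process-group init:
# ddp_model = torch.nn.parallel.DistributedDataParallel(optimized_model)
```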
