Intel Extension for PyTorch is a Python package that extends the official PyTorch. It is designed to improve the out-of-the-box user experience of PyTorch on CPU while achieving good performance. The extension also serves as a PR (Pull Request) buffer for the Intel PyTorch framework dev team. The PR buffer contains not only functions, but also optimizations (for example, taking advantage of Intel's new hardware features).

- [Installation](#installation)
  - [Install PyTorch](#install-pytorch)
  - [Install Intel Extension for PyTorch from Source](#install-intel-extension-for-pytorch-from-source)
- [Getting Started](#getting-started)
- [Automatically Mix Precision](#automatically-mix-precision)
  - [BFloat16](#BFloat16)
  - [INT8](#int8-quantization)
- [Supported Customized Operators](#supported-customized-operators)
- [Supported Fusion Patterns](#supported-fusion-patterns)
- [Tutorials](#tutorials)
- [Joint blogs](#joint-blogs)
- [Contribution](#contribution)
- [License](#license)

## Installation

### Install PyTorch

|IPEX Version|PyTorch Version|
|--|--|
|[v1.8.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.8.0)|[v1.8.0](https://github.com/pytorch/pytorch/tree/v1.8.0 "v1.8.0")|
|[v1.2.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.2.0)|[v1.7.0](https://github.com/pytorch/pytorch/tree/v1.7.0 "v1.7.0")|
|[v1.1.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.1.0)|[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|
|[v1.0.2](https://github.com/intel/intel-extension-for-pytorch/tree/v1.0.2)|[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|
|[v1.0.1](https://github.com/intel/intel-extension-for-pytorch/tree/v1.0.1)|[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|
|[v1.0.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.0.0)|[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|

Take Intel-Extension-for-Pytorch v1.8.0 as an example.

1. Install PyTorch from binaries

   ```bash
   conda install pytorch torchvision torchaudio cpuonly -c pytorch
   ```

2. Install PyTorch from source

   Get the PyTorch v1.8.0 source (refer to the [PyTorch guide](https://github.com/pytorch/pytorch#get-the-pytorch-source) for more details)

   ```bash
   git clone --recursive https://github.com/pytorch/pytorch
   ```

   Check out the source code at the specified version

   ```bash
   cd pytorch
   git checkout v1.8.0
   ```

   Update the submodules for the specified PyTorch version

   ```bash
   git submodule sync
   git submodule update --init --recursive
   ```

   Build and install PyTorch (refer to the [PyTorch guide](https://github.com/pytorch/pytorch#install-pytorch) for more details)

   ```bash
   python setup.py install
   ```

### Install Intel Extension for PyTorch from Source

Get the source code of Intel Extension for PyTorch

```bash
git clone --recursive https://github.com/intel/intel-extension-for-pytorch
cd intel-extension-for-pytorch

# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
```

Install dependencies

```bash
pip install lark-parser hypothesis
```

Supported Quantization Operators:
- ```convolution + BatchNorm```

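The `convolution + BatchNorm` fusion folds the BatchNorm statistics into the convolution's weight and bias at inference time, so only a single op runs. A minimal single-channel sketch of the folding arithmetic (illustrative only, with scalar stand-ins for the per-channel parameters; this is not the extension's implementation):

```python
import math

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    # BN(conv(x)) = gamma * ((w*x + b) - mean) / sqrt(var + eps) + beta,
    # which equals w'*x + b' with the folded parameters returned below.
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# single-channel check: model the convolution as y = w*x + b
w, b = 2.0, 0.5
gamma, beta, mean, var = 1.5, -0.25, 0.8, 4.0
w_f, b_f = fold_bn_into_conv(w, b, gamma, beta, mean, var)

x = 3.0
y_bn = gamma * ((w * x + b) - mean) / math.sqrt(var + 1e-5) + beta
assert abs(y_bn - (w_f * x + b_f)) < 1e-9  # folded conv matches conv + BN
```

In a real network the same per-channel scale and shift are applied across the convolution's output channels, which is why the fusion is exact for inference.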
### Supported Customized Operators
* ROIAlign
* NMS
* BatchScoreNMS
* MLP
* Interaction
* FrozenBatchNorm2d

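For reference, NMS (and its batched variant BatchScoreNMS) greedily keeps the highest-scoring detection boxes and suppresses heavily overlapping ones. A plain-Python sketch of what the operator computes (illustrative only; the extension provides an optimized native kernel):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedily keep the highest-scoring boxes, dropping overlapping ones."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: box 1 overlaps box 0 and is suppressed
```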
### Supported Fusion Patterns
* Conv2D + ReLU
* Conv2D + SUM
* Conv2D + SUM + ReLU
* Conv2D + Sigmoid
* Conv2D + Sigmoid + MUL
* Conv2D + HardTanh
* Conv2D + ELU
* Conv3D + ReLU
* Conv3D + SUM
* Conv3D + SUM + ReLU
* Linear + ReLU
* Linear + GELU
* View + Transpose + Contiguous + View

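Each pattern above replaces a short sequence of ops in the model graph with one fused kernel. A toy sketch of the pattern-matching step over a flat op list (the extension actually rewrites the TorchScript graph; the op and kernel names here are made up for illustration):

```python
# Longer patterns come first so the greedy matcher prefers them.
FUSION_PATTERNS = [
    (("conv2d", "sum", "relu"), "conv2d_sum_relu"),
    (("conv2d", "relu"), "conv2d_relu"),
    (("linear", "gelu"), "linear_gelu"),
]

def fuse(ops):
    """Greedily rewrite op sequences that match a known fusion pattern."""
    out, i = [], 0
    while i < len(ops):
        for pattern, fused in FUSION_PATTERNS:
            if tuple(ops[i:i + len(pattern)]) == pattern:
                out.append(fused)       # emit the fused kernel
                i += len(pattern)       # skip the ops it replaced
                break
        else:
            out.append(ops[i])          # no pattern matched; keep the op
            i += 1
    return out

print(fuse(["conv2d", "relu", "linear", "gelu", "max_pool2d"]))
# → ['conv2d_relu', 'linear_gelu', 'max_pool2d']
```

The fused kernel computes the same result while avoiding intermediate memory traffic, which is where the speedup comes from.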
## Tutorials
* [Performance Tuning](tutorials/Performance_Tuning.md)

## Joint blogs
* [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel® Xeon® Processors and Intel® Deep Learning Boost’s new BFloat16 capability](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/intel-facebook-boost-bfloat16.html)
* [Accelerate PyTorch with IPEX and oneDNN using Intel BF16 Technology](https://medium.com/pytorch/accelerate-pytorch-with-ipex-and-onednn-using-intel-bf16-technology-dca5b8e6b58f)
* [Scaling up BERT-like model Inference on modern CPU - Part 1 by IPEX launcher](https://huggingface.co/blog/bert-cpu-scaling-part-1)

## Contribution

Please submit a PR or an issue to communicate with us or to contribute code.