
Commit 383aedd

update docs for 2.1.0 release (#2121)
* update/adapt new installation guide page
* add llm api to api_doc
* update notes in examples page
* add torch.compile ipex backend inference examples
* update torch.cpu.amp.autocast to the new API torch.autocast
* add known issues
* add llm docs
* Updates to address DX feedback and align with the Doc guidelines (#2130)
  * Updates to address DX feedback and align with the Doc guidelines
  * Implemented comments, added the Troubleshooting section based on Known Issues
  * Edited the newly added descriptions
  * Edited the note
  * Updated the examples topic
  * Removed the Code Changes Highlight heading, rewrote the foreword for the Training examples
* update format issues
* Add details about iakv in llm_overview.md
* Update llm_overview.md
* Update llm_overview.md
* Add LLM example code doc for fp32/bf16 and quantizations (#2148)
  * Create optimize_transformers_woq.py
  * Update optimize_transformers.py
  * Create optimize_transformers_smoothquant.py
  * Update optimize_transformers_smoothquant.py
  * Update optimize_transformers_smoothquant.py
  * Update optimize_transformers_smoothquant.py
  * Update optimize_transformers_woq.py
  * Update optimize_transformers.py
* update for LLM
* update codeowners
* restatement in index
* update llm pseudocode
* update transformers version for fast_bert feature
* add LLM demo gif images
* add release notes
* format auto correction by linter
1 parent 539db23 commit 383aedd

65 files changed (+2022 / -711 lines)


.github/CODEOWNERS

Lines changed: 1 addition & 0 deletions
```diff
@@ -2,3 +2,4 @@
 # Each line is a file pattern followed by one or more owners.
 
 /intel_extension_for_pytorch/ @zejun-chen
+requirements.txt @blzheng
```

docs/_templates/footer.html

Lines changed: 4 additions & 0 deletions
```diff
@@ -0,0 +1,4 @@
+{% extends '!footer.html' %} {% block extrafooter %} {{super}}
+<p>*Other names and brands may be claimed as the property of others. <a href="http://www.intel.com/content/www/us/en/legal/trademarks.html">Trademarks</a></p>
+<p></p><div><a href='https://www.intel.com/content/www/us/en/privacy/intel-cookie-notice.html' data-cookie-notice='true'>Cookies</a> <a href='https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html'>| Privacy</a></div>
+{% endblock %}
```

docs/_templates/layout.html

Lines changed: 16 additions & 0 deletions
```diff
@@ -0,0 +1,16 @@
+{%- extends "!layout.html" %}
+{% block scripts %}
+<script type="text/javascript">
+// Configure TMS settings
+window.wapProfile = 'profile-microsite'; // This is mapped by WAP authorize value
+window.wapLocalCode = 'us-en'; // Dynamically set per localized site, see mapping table for values
+window.wapSection = "intel-extension-for-pytorch"; // WAP team will give you a unique section for your site
+window.wapEnv = 'prod'; // environment to be used in Adobe Tags.
+// Load TMS
+(() => {
+let url = 'https://www.intel.com/content/dam/www/global/wap/main/wap-microsite.js';
+let po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = url;
+let s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s);
+}) ();
+</script>
+{% endblock %}
```

docs/conf.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -18,7 +18,7 @@
 
 # -- Project information -----------------------------------------------------
 
-project = 'intel_extension_for_pytorch'
+project = 'Intel&#174 Extension for PyTorch*'
 copyright = 'Intel(R)'
 author = ''
 
```

docs/index.rst

Lines changed: 66 additions & 16 deletions
```diff
@@ -2,14 +2,30 @@
 :description: This website introduces Intel® Extension for PyTorch*
 :keywords: Intel optimization, PyTorch, Intel® Extension for PyTorch*, GPU, discrete GPU, Intel discrete GPU
 
-Welcome to Intel® Extension for PyTorch* Documentation
-######################################################
+Intel® Extension for PyTorch*
+#############################
 
-Intel® Extension for PyTorch* extends PyTorch* with up-to-date features optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X\ :sup:`e`\ Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through PyTorch* `xpu` device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs with PyTorch*.
+Intel® Extension for PyTorch* extends PyTorch* with the latest performance optimizations for Intel hardware.
+Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X\ :sup:`e`\ Matrix Extensions (XMX) AI engines on Intel discrete GPUs.
+Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* ``xpu`` device.
 
-Intel® Extension for PyTorch* provides optimizations for both eager mode and graph mode, however, compared to eager mode, graph mode in PyTorch* normally yields better performance from optimization techniques, such as operation fusion. Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. Therefore we recommend you to take advantage of Intel® Extension for PyTorch* with `TorchScript <https://pytorch.org/docs/stable/jit.html>`_ whenever your workload supports it. You could choose to run with `torch.jit.trace()` function or `torch.jit.script()` function, but based on our evaluation, `torch.jit.trace()` supports more workloads so we recommend you to use `torch.jit.trace()` as your first choice.
+The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts, users can enable it dynamically by importing ``intel_extension_for_pytorch``.
 
-The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing `intel_extension_for_pytorch`.
+.. note::
+
+   - GPU features are not included in CPU-only packages.
+   - Optimizations for CPU-only may have a newer code base due to different development schedules.
+
+In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting from 2.1.0, specific optimizations for certain LLM models are introduced in the Intel® Extension for PyTorch*.
+
+Intel® Extension for PyTorch* has been released as an open-source project at `Github <https://github.com/intel/intel-extension-for-pytorch>`_. You can find the source code and instructions on how to get started at:
+
+- **CPU**: `CPU master branch <https://github.com/intel/intel-extension-for-pytorch/tree/master>`_ | `Get Started <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/getting_started>`_
+- **XPU**: `XPU master branch <https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master>`_ | `Get Started <https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/getting_started>`_
+
+
+Architecture
+------------
 
 Intel® Extension for PyTorch* is structured as shown in the following figure:
 
@@ -18,26 +34,60 @@ Intel® Extension for PyTorch* is structured as shown in the following figure:
    :align: center
    :alt: Architecture of Intel® Extension for PyTorch*
 
-|
+- **Eager Mode**: In the eager mode, the PyTorch frontend is extended with custom Python modules (such as fusion modules), optimal optimizers, and INT8 quantization APIs. Further performance improvement is achieved by converting eager-mode models into graph mode using extended graph fusion passes.
+- **Graph Mode**: In the graph mode, fusions reduce operator/kernel invocation overhead, resulting in improved performance. Compared to the eager mode, the graph mode in PyTorch* normally yields better performance from optimization techniques like operation fusion. Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. Both PyTorch ``Torchscript`` and ``TorchDynamo`` graph modes are supported. With ``Torchscript``, we recommend ``torch.jit.trace()`` as the preferred option, as it generally supports a wider range of workloads than ``torch.jit.script()``. With ``TorchDynamo``, the ``ipex`` backend is available to provide good performance.
+- **CPU Optimization**: On CPU, Intel® Extension for PyTorch* automatically dispatches operators to underlying kernels based on the detected ISA. The extension leverages vectorization and matrix acceleration units available on Intel hardware. The runtime extension offers finer-grained thread runtime control and weight sharing for increased efficiency.
+- **GPU Optimization**: On GPU, optimized operators and kernels are implemented and registered through the PyTorch dispatching mechanism. These operators and kernels are accelerated by the native vectorization and matrix calculation features of Intel GPU hardware. Intel® Extension for PyTorch* for GPU utilizes the `DPC++ <https://github.com/intel/llvm#oneapi-dpc-compiler>`_ compiler that supports the latest `SYCL* <https://registry.khronos.org/SYCL/specs/sycl-2020/html/sycl-2020.html>`_ standard and also a number of extensions to the SYCL* standard, which can be found in the `sycl/doc/extensions <https://github.com/intel/llvm/tree/sycl/sycl/doc/extensions>`_ directory.
 
-Optimizations for both eager mode and graph mode contribute to extra performance accelerations with the extension. In eager mode, the PyTorch frontend is extended with custom Python modules (such as fusion modules), optimal optimizers, and INT8 quantization APIs. Further performance boost is available by converting the eager-mode model into graph mode via extended graph fusion passes. In the graph mode, the fusions reduce operator/kernel invocation overheads, and thus increase performance. On CPU, Intel® Extension for PyTorch* dispatches the operators into their underlying kernels automatically based on ISA that it detects and leverages vectorization and matrix acceleration units available on Intel hardware. Intel® Extension for PyTorch* runtime extension brings better efficiency with finer-grained thread runtime control and weight sharing. On GPU, optimized operators and kernels are implemented and registered through PyTorch dispatching mechanism. These operators and kernels are accelerated from native vectorization feature and matrix calculation feature of Intel GPU hardware. Intel® Extension for PyTorch* for GPU utilizes the `DPC++ <https://github.com/intel/llvm#oneapi-dpc-compiler>`_ compiler that supports the latest `SYCL* <https://registry.khronos.org/SYCL/specs/sycl-2020/html/sycl-2020.html>`_ standard and also a number of extensions to the SYCL* standard, which can be found in the `sycl/doc/extensions <https://github.com/intel/llvm/tree/sycl/sycl/doc/extensions>`_ directory.
 
-.. note:: GPU features are not included in CPU only packages.
+Support
+-------
+The team tracks bugs and enhancement requests using `GitHub issues <https://github.com/intel/intel-extension-for-pytorch/issues/>`_. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.
 
-Intel® Extension for PyTorch* has been released as an open-source project at `Github <https://github.com/intel/intel-extension-for-pytorch>`_. Source code is available at `xpu-master branch <https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master>`_. Check `the tutorial <https://intel.github.io/intel-extension-for-pytorch/xpu/latest/>`_ for detailed information. Due to different development schedule, optimizations for CPU only might have a newer code base. Source code is available at `master branch <https://github.com/intel/intel-extension-for-pytorch/tree/master>`_. Check `the CPU tutorial <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/>`_ for detailed information on the CPU side.
+.. toctree::
+   :caption: ABOUT
+   :maxdepth: 3
+   :hidden:
+
+   tutorials/introduction
+   tutorials/performance
+   tutorials/releases
+   tutorials/known_issues
+   tutorials/blogs_publications
+   tutorials/license
 
 .. toctree::
+   :maxdepth: 3
+   :caption: GET STARTED
    :hidden:
-   :maxdepth: 1
 
-   tutorials/getting_started
    tutorials/features
-   tutorials/releases
+   LLM<tutorials/llm>
    tutorials/installation
+   tutorials/getting_started
    tutorials/examples
+   tutorials/cheat_sheet
+
+.. toctree::
+   :maxdepth: 3
+   :caption: DEVELOPER REFERENCE
+   :hidden:
+
    tutorials/api_doc
-   tutorials/performance_tuning
-   tutorials/performance
-   tutorials/blogs_publications
+
+.. toctree::
+   :maxdepth: 3
+   :caption: PERFORMANCE TUNING
+   :hidden:
+
+   tutorials/performance_tuning/tuning_guide
+   tutorials/performance_tuning/launch_script
+   tutorials/performance_tuning/torchserve
+
+.. toctree::
+   :maxdepth: 3
+   :caption: CONTRIBUTING GUIDE
+   :hidden:
+
    tutorials/contribution
-   tutorials/license
+
```

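The **Eager Mode** and **Graph Mode** bullets in the new Architecture section describe two routes into graph mode. A minimal sketch of both routes follows; the torchvision ResNet-50 model, input shape, and warm-up pattern are illustrative assumptions, not taken from this commit:

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights=None).eval()
data = torch.rand(1, 3, 224, 224)

# Route 1: TorchScript. torch.jit.trace() is recommended over torch.jit.script()
# because it generally supports a wider range of workloads.
optimized = ipex.optimize(model)
with torch.no_grad():
    traced = torch.jit.trace(optimized, data)
    traced = torch.jit.freeze(traced)
    traced(data)  # fusion passes run during the first iterations

# Route 2: TorchDynamo. Importing intel_extension_for_pytorch registers the
# "ipex" backend for torch.compile.
compiled = torch.compile(model, backend="ipex")
with torch.no_grad():
    compiled(data)
```

With TorchScript, the first couple of iterations trigger the fusion passes, so benchmark timings should exclude warm-up runs.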
docs/tutorials/api_doc.rst

Lines changed: 2 additions & 0 deletions
```diff
@@ -6,6 +6,7 @@ General
 
 .. currentmodule:: intel_extension_for_pytorch
 .. autofunction:: optimize
+.. autofunction:: optimize_transformers
 .. autoclass:: verbose
 
 Fast Bert (Experimental)
@@ -24,6 +25,7 @@ Quantization
 ************
 
 .. automodule:: intel_extension_for_pytorch.quantization
+.. autofunction:: get_smooth_quant_qconfig_mapping
 .. autofunction:: prepare
 .. autofunction:: convert
 
```

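The two `autofunction` entries added here are the LLM-facing APIs called out in the commit message. A hedged sketch of how they might be invoked; the model name, `dtype`, and `alpha` arguments are assumptions for illustration, and the authoritative signatures are the ones rendered from these autodoc entries:

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM

# Assumed example model; any supported transformer decoder would do.
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# LLM-specific counterpart of ipex.optimize(), introduced in 2.1.0.
model = ipex.optimize_transformers(model, dtype=torch.bfloat16)

# SmoothQuant INT8 recipe: the returned mapping feeds the existing
# prepare()/convert() flow documented under Quantization.
qconfig = ipex.quantization.get_smooth_quant_qconfig_mapping(alpha=0.5)
```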
docs/tutorials/blogs_publications.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -1,6 +1,8 @@
 Blogs & Publications
 ====================
 
+* [Accelerate Llama 2 with Intel AI Hardware and Software Optimizations, Jul 2023](https://www.intel.com/content/www/us/en/developer/articles/news/llama2.html)
+* [Accelerate PyTorch\* Training and Inference Performance using Intel® AMX, Jul 2023](https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-training-inference-on-amx.html)
 * [Intel® Deep Learning Boost (Intel® DL Boost) - Improve Inference Performance of Hugging Face BERT Base Model in Google Cloud Platform (GCP) Technology Guide, Apr 2023](https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-intel-dl-boost-improve-inference-performance-of-hugging-face-bert-base-model-in-google-cloud-platform-gcp-technology-guide)
 * [Get Started with Intel® Extension for PyTorch\* on GPU | Intel Software, Mar 2023](https://www.youtube.com/watch?v=Id-rE2Q7xZ0&t=1s)
 * [Accelerate PyTorch\* INT8 Inference with New “X86” Quantization Backend on X86 CPUs, Mar 2023](https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-int8-inf-with-new-x86-backend.html)
```

docs/tutorials/cheat_sheet.md

Lines changed: 21 additions & 0 deletions
```diff
@@ -0,0 +1,21 @@
+Cheat Sheet
+===========
+
+Get started with Intel® Extension for PyTorch\* using the following commands:
+
+|Description | Command |
+| -------- | ------- |
+| Basic CPU Installation | `python -m pip install intel_extension_for_pytorch` |
+| Import Intel® Extension for PyTorch\* | `import intel_extension_for_pytorch as ipex`|
+| Capture a Verbose Log (Command Prompt) | `export ONEDNN_VERBOSE=1` |
+| Optimization During Training | `model = ...`<br>`optimizer = ...`<br>`model.train()`<br>`model, optimizer = ipex.optimize(model, optimizer=optimizer)`|
+| Optimization During Inference | `model = ...`<br>`model.eval()`<br>`model = ipex.optimize(model)` |
+| Optimization Using the Low-Precision Data Type bfloat16 <br>During Training (Default FP32) | `model = ...`<br>`optimizer = ...`<br>`model.train()`<br/><br/>`model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)`<br/><br/>`with torch.no_grad():`<br>`    with torch.cpu.amp.autocast():`<br>`        model(data)` |
+| Optimization Using the Low-Precision Data Type bfloat16 <br>During Inference (Default FP32) | `model = ...`<br>`model.eval()`<br/><br/>`model = ipex.optimize(model, dtype=torch.bfloat16)`<br/><br/>`with torch.cpu.amp.autocast():`<br>`    model(data)` |
+| [Experimental] Fast BERT Optimization | `from transformers import BertModel`<br>`model = BertModel.from_pretrained("bert-base-uncased")`<br>`model.eval()`<br/><br/>`model = ipex.fast_bert(model, dtype=torch.bfloat16)`|
+| Run CPU Launch Script (Command Prompt): <br>Automate Configuration Settings for Performance | `ipexrun [knobs] <your_pytorch_script> [args]`|
+| [Experimental] Run HyperTune to perform hyperparameter/execution configuration search | `python -m intel_extension_for_pytorch.cpu.hypertune --conf-file <your_conf_file> <your_python_script> [args]`|
+| [Experimental] Enable Graph capture | `model = …`<br>`model.eval()`<br>`model = ipex.optimize(model, graph_mode=True)`|
+| Post-Training INT8 Quantization (Static) | `model = …`<br>`model.eval()`<br>`data = …`<br/><br/>`qconfig = ipex.quantization.default_static_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data, inplace=False)`<br/><br/>`for d in calibration_data_loader():`<br>`    prepared_model(d)`<br/><br/>`converted_model = ipex.quantization.convert(prepared_model)`|
+| Post-Training INT8 Quantization (Dynamic) | `model = …`<br>`model.eval()`<br>`data = …`<br/><br/>`qconfig = ipex.quantization.default_dynamic_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data)`<br/><br/>`converted_model = ipex.quantization.convert(prepared_model)` |
+| [Experimental] Post-Training INT8 Quantization (Tuning Recipe) | `model = …`<br>`model.eval()`<br>`data = …`<br/><br/>`qconfig = ipex.quantization.default_static_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data, inplace=False)`<br/><br/>`tuned_model = ipex.quantization.autotune(prepared_model, calibration_data_loader, eval_function, sampling_sizes=[100],`<br>`    accuracy_criterion={'relative': .01}, tuning_time=0)`<br/><br/>`convert_model = ipex.quantization.convert(tuned_model)`|
```

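The two bfloat16 rows in the cheat sheet still spell the autocast context as `torch.cpu.amp.autocast`; the commit message notes that examples elsewhere were moved to the newer `torch.autocast` API, which is equivalent on CPU. A runnable end-to-end version of the bfloat16 inference row, assuming a torchvision ResNet-50 as the model:

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights=None).eval()
data = torch.rand(1, 3, 224, 224)

# Prepare the model once, then run under bfloat16 autocast.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model(data)
```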