Commit 67feb83

Merge remote-tracking branch 'origin' into kylesayrs/gptq-actorder-default

2 parents 155d120 + 6af0778 commit 67feb83

File tree

108 files changed: +2453 −762 lines
.github/ISSUE_TEMPLATE/bug_report.md

Lines changed: 0 additions & 31 deletions
This file was deleted.

Lines changed: 47 additions & 0 deletions

````yaml
name: 🐛 Bug report
description: Raise an issue here if you find a bug.
labels: bug
title: "[Bug]: "

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/vllm-project/llm-compressor/issues?q=is%3Aissue+sort%3Acreated-desc+).

      #### ⚠️ For any issues related to vLLM which are not related to quantization or compressed models, please create an issue in [vllm-project/vllm](https://github.com/vllm-project/vllm/issues).
- type: textarea
  attributes:
    label: ⚙️ Your current environment
    description: |
      Please run the following and paste the output below.
      ```bash
      wget https://raw.githubusercontent.com/vllm-project/llm-compressor/main/tools/collect_env.py
      # For security purposes, please feel free to check the contents of collect_env.py before running it.
      python collect_env.py
      ```
    value: |
      <details>
      <summary>The output of <code>python collect_env.py</code></summary>

      ```text
      Your output of `python collect_env.py` here
      ```

      </details>
  validations:
    required: true
- type: textarea
  attributes:
    label: 🐛 Describe the bug
    description: |
      Please provide a clear and concise description of what the bug is.
  validations:
    required: true
- type: textarea
  attributes:
    label: 🛠️ Steps to reproduce
    description: |
      If applicable, please describe any steps required to reproduce. If you can share an applicable huggingface model stub, please do so here.
  validations:
    required: false
````

.gitignore

Lines changed: 1 addition & 3 deletions

```diff
@@ -93,9 +93,6 @@ instance/
 # Scrapy stuff:
 .scrapy
 
-# Sphinx documentation
-docs/_build/
-
 # PyBuilder
 target/
@@ -129,6 +126,7 @@ venv.bak/
 
 # mkdocs documentation
 /site
+docs/.cache/
 
 # mypy
 .mypy_cache/
```
README.md

Lines changed: 5 additions & 3 deletions

```diff
@@ -18,10 +18,11 @@ Big updates have landed in LLM Compressor! To get a more in-depth look, check ou
 
 Some of the exciting new features include:
 
+* **QuIP and SpinQuant-style Transforms**: The newly added [`QuIPModifier`](examples/transform/quip_example.py) and [`SpinQuantModifier`](examples/transform/spinquant_example.py) allow users to quantize their models after injecting hadamard weights into the computation graph, reducing quantization error and greatly improving accuracy recovery for low bit weight and activation quantization.
+* **DeepSeekV3-style Block Quantization Support**: This allows for more efficient compression of large language models without needing a calibration dataset. Quantize a Qwen3 model to [W8A8](examples/quantization_w8a8_fp8/fp8_block_example.py).
 * **Llama4 Quantization Support**: Quantize a Llama4 model to [W4A16](examples/multimodal_vision/llama4_example.py) or [NVFP4](examples/quantization_w4a4_fp4/llama4_example.py). The checkpoint produced can seamlessly run in vLLM.
+* **FP4 Quantization - now with MoE and non-uniform support:** Quantize weights and activations to FP4 and seamlessly run the compressed model in vLLM. Model weights and activations are quantized following the NVFP4 [configuration](https://github.com/neuralmagic/compressed-tensors/blob/f5dbfc336b9c9c361b9fe7ae085d5cb0673e56eb/src/compressed_tensors/quantization/quant_scheme.py#L104). See examples of [fp4 activation support](examples/quantization_w4a4_fp4/llama3_example.py), [MoE support](examples/quantization_w4a4_fp4/qwen_30b_a3b.py), and [Non-uniform quantization support](examples/quantization_non_uniform) where some layers are selectively quantized to fp8 for better recovery. You can also mix other quantization schemes, such as int8 and int4.
 * **Large Model Support with Sequential Onloading**: As of llm-compressor>=0.6.0, you can now quantize very large language models on a single GPU. Models are broken into disjoint layers which are then onloaded to the GPU one layer at a time. For more information on sequential onloading, see [Big Modeling with Sequential Onloading](examples/big_models_with_sequential_onloading/README.md) as well as the [DeepSeek-R1 Example](examples/quantizing_moe/deepseek_r1_example.py).
-* **Preliminary FP4 Quantization Support:** Quantize weights and activations to FP4 and seamlessly run the compressed model in vLLM. Model weights and activations are quantized following the NVFP4 [configuration](https://github.com/neuralmagic/compressed-tensors/blob/f5dbfc336b9c9c361b9fe7ae085d5cb0673e56eb/src/compressed_tensors/quantization/quant_scheme.py#L104). See examples of [weight-only quantization](examples/quantization_w4a16_fp4/llama3_example.py) and [fp4 activation support](examples/quantization_w4a4_fp4/llama3_example.py). Support is currently preliminary and additional support will be added for MoEs.
-* **Updated AWQ Support:** Improved support for MoEs with better handling of larger models
 * **Axolotl Sparse Finetuning Integration:** Seamlessly finetune sparse LLMs with our Axolotl integration. Learn how to create [fast sparse open-source models with Axolotl and LLM Compressor](https://developers.redhat.com/articles/2025/06/17/axolotl-meets-llm-compressor-fast-sparse-open). See also the [Axolotl integration docs](https://docs.axolotl.ai/docs/custom_integrations.html#llmcompressor).
 
 ### Supported Formats
@@ -38,7 +39,7 @@ Some of the exciting new features include:
 
 ### When to Use Which Optimization
 
-Please refer to [docs/schemes.md](./docs/schemes.md) for detailed information about available optimization schemes and their use cases.
+Please refer to [compression_schemes.md](./docs/guides/compression_schemes.md) for detailed information about available optimization schemes and their use cases.
 
 
 ## Installation
@@ -61,6 +62,7 @@ Applying quantization with `llmcompressor`:
 * [Quantizing MoE LLMs](examples/quantizing_moe/README.md)
 * [Quantizing Vision-Language Models](examples/multimodal_vision/README.md)
 * [Quantizing Audio-Language Models](examples/multimodal_audio/README.md)
+* [Quantizing Models Non-uniformly](examples/quantization_non_uniform/README.md)
 
 ### User Guides
 Deep dives into advanced usage of `llmcompressor`:
```
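The block-quantization idea mentioned in the README changes above (one scale per fixed-size block of weights, requiring no calibration dataset) can be illustrated with a toy sketch. This is a simplified plain-Python illustration, not llm-compressor's actual implementation; the block size of 4 and the signed 4-bit range are arbitrary choices for the example.

```python
def quantize_blockwise(weights, block_size=4, n_bits=4):
    """Quantize a flat list of floats to signed ints with one scale per block."""
    qmax = 2 ** (n_bits - 1) - 1  # 7 for signed 4-bit
    q, scales = [], []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        # Per-block scale derived from the block's largest magnitude.
        scale = max(abs(w) for w in block) / qmax
        if scale == 0.0:
            scale = 1.0  # all-zero block: any scale works
        scales.append(scale)
        # Round to the nearest representable integer, clamped to [-qmax-1, qmax].
        q.extend(max(-qmax - 1, min(qmax, round(w / scale))) for w in block)
    return q, scales


def dequantize_blockwise(q, scales, block_size=4):
    """Recover approximate floats by multiplying each value by its block's scale."""
    return [q[i] * scales[i // block_size] for i in range(len(q))]
```

Because the scale is chosen per block rather than per tensor, an outlier in one block does not inflate the quantization step of every other block; the round-trip error for each element is bounded by half of its own block's scale.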

docs/Makefile

Lines changed: 26 additions & 0 deletions

```makefile
# Minimal mkdocs makefile

PYTHON := python3
MKDOCS_CMD := mkdocs
MKDOCS_CONF := ../mkdocs.yml

.PHONY: help install serve build clean

help:
	@echo "Available targets:"
	@echo "  install   Install dependencies globally"
	@echo "  serve     Serve docs locally"
	@echo "  build     Build static site"
	@echo "  clean     Remove build artifacts"

install:
	pip install -e "../[dev]"

serve:
	$(MKDOCS_CMD) serve --livereload -f $(MKDOCS_CONF)

build:
	$(MKDOCS_CMD) build -f $(MKDOCS_CONF)

clean:
	rm -rf site/ .cache/
```

docs/README.md

Lines changed: 25 additions & 0 deletions

````markdown
# Getting started with LLM Compressor docs

```bash
cd docs
```

- Install the dependencies:

  ```bash
  make install
  ```

- Clean the previous build (optional but recommended):

  ```bash
  make clean
  ```

- Serve the docs:

  ```bash
  make serve
  ```

This will start a local server at http://localhost:8000. You can now open your browser and view the documentation.
````

docs/developer/code-of-conduct.md

Lines changed: 82 additions & 0 deletions

```markdown
---
title: Code of Conduct
weight: -10
---

# LLM Compressor Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

- The use of sexualized language or imagery, and sexual attention or advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement through GitHub, Slack, or Email. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
```

docs/developer/contributing.md

Lines changed: 58 additions & 0 deletions

````markdown
---
title: Contributing Guide
weight: -8
---

# Contributing to LLM Compressor

Thank you for your interest in contributing to LLM Compressor!
Our community is open to everyone and welcomes all kinds of contributions, no matter how small or large.
There are several ways you can contribute to the project:

- Identify and report any issues or bugs.
- Request or add new compression methods or research.
- Suggest or implement new features.

However, remember that contributions aren't just about code.
We believe in the power of community support; thus, answering queries, assisting others, and enhancing the documentation are highly regarded and beneficial contributions.

Finally, one of the most impactful ways to support us is by raising awareness about LLM Compressor and the vLLM community.
Talk about it in your blog posts, highlighting how it's driving your incredible projects.
Express your support on Twitter if vLLM aids you, or simply offer your appreciation by starring our repository.

## Setup for development

### Install from source

```bash
pip install -e ./[dev]
```

### Code Styling and Formatting checks

```bash
make style
make quality
```

### Testing

```bash
make test
```

## Contributing Guidelines

### Issue Reporting

If you encounter a bug or have a feature request, please check our issues page first to see if someone else has already reported it.
If not, please file a new issue, providing as much relevant information as possible.

### Pull Requests & Code Reviews

Please check the PR checklist in the [PR template](.github/PULL_REQUEST_TEMPLATE.md) for a detailed contribution guide.

### Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to LLM Compressor.
Your contributions make LLM Compressor a great tool for everyone!
````
