Commit 755288c

add tests, remove dead code
Signed-off-by: Kyle Sayers <[email protected]>
1 parent 45da803 commit 755288c

File tree

4 files changed: +29 -3 lines changed


src/llmcompressor/modifiers/quantization/gptq/base.py

Lines changed: 2 additions & 3 deletions

```diff
@@ -137,6 +137,8 @@ def resolve_actorder(existing):
             # user-provided value always attempts to override
             if existing is None or self.actorder == existing:
                 return self.actorder
+
+            # if existing provided and conflicts
             raise ValueError(
                 "Cannot resolve activation ordering when both "
                 "`GPTQModifier.actorder` and `QuantizationScheme.actorder` "
@@ -145,9 +147,6 @@ def resolve_actorder(existing):
                 "remove `actorder` from config groups."
             )

-        # setting `GPTQModifier.actorder = None` does nothing
-        return existing
-
         for scheme in config.config_groups.values():
             assert isinstance(scheme, QuantizationScheme)
             if scheme.weights is not None:
```
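The resolution rule in the diff above can be sketched as a standalone function. This is an illustrative reconstruction, not the actual `GPTQModifier` method: the modifier-level value wins when the scheme-level value is absent or agrees, and any genuine conflict raises rather than being silently ignored (the removed `return existing` branch was the dead code the commit message refers to).

```python
from typing import Optional


def resolve_actorder(modifier_actorder: Optional[str],
                     existing: Optional[str]) -> Optional[str]:
    """Illustrative sketch: resolve a modifier-level actorder setting
    against a scheme-level one, mirroring the post-commit logic."""
    # user-provided value always attempts to override
    if existing is None or modifier_actorder == existing:
        return modifier_actorder

    # if existing provided and conflicts, fail loudly instead of
    # silently keeping the scheme-level value
    raise ValueError(
        "Cannot resolve activation ordering when both "
        "`GPTQModifier.actorder` and `QuantizationScheme.actorder` "
        "are provided and differ; remove `actorder` from config groups."
    )
```

Note the behavioral consequence of the deletion: a scheme-level value that differs from the modifier-level value now always raises, rather than falling through to the scheme-level value.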
Lines changed: 9 additions & 0 deletions

```diff
@@ -0,0 +1,9 @@
+cadence: "nightly"
+test_type: "regression"
+model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
+recipe: tests/e2e/vLLM/recipes/actorder/recipe_w4a16_actorder_none.yaml
+dataset_id: openai/gsm8k
+dataset_config: main
+dataset_split: train
+scheme: W4A16_actorder_none
+save_dir: TinyLlama-1.1B-Chat-v1.0-actorder-group
```
Lines changed: 8 additions & 0 deletions

```diff
@@ -0,0 +1,8 @@
+cadence: "nightly"
+test_type: "regression"
+model: Qwen/Qwen2.5-0.5B
+recipe: tests/e2e/vLLM/recipes/actorder/recipe_w4a16_actorder_none.yaml
+dataset_id: neuralmagic/LLM_compression_calibration
+dataset_split: train
+scheme: W4A16_actorder_none
+save_dir: Qwen2.5-0.5B-actorder-none
```
Lines changed: 10 additions & 0 deletions

```diff
@@ -0,0 +1,10 @@
+cadence: "weekly"
+model: meta-llama/Meta-Llama-3-8B-Instruct
+scheme: W4A16_actorder_none
+recipe: tests/e2e/vLLM/recipes/actorder/recipe_w4a16_actorder_none.yaml
+dataset_id: HuggingFaceH4/ultrachat_200k
+dataset_split: train_sft
+lmeval:
+  metrics:
+    exact_match,flexible-extract: 0.72
+    exact_match,strict-match: 0.72
```
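The `lmeval.metrics` block above declares accuracy floors for the regression run. A minimal sketch of how a harness might compare measured lm-eval scores against those floors follows; the `meets_thresholds` helper and the inlined config dict are hypothetical illustrations, not the actual test-suite API.

```python
# Config fields mirror the weekly e2e YAML above, inlined as a dict
# (a real harness would load the YAML file instead).
config = {
    "cadence": "weekly",
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "scheme": "W4A16_actorder_none",
    "lmeval": {
        "metrics": {
            "exact_match,flexible-extract": 0.72,
            "exact_match,strict-match": 0.72,
        }
    },
}


def meets_thresholds(measured: dict, expected: dict) -> bool:
    """True when every declared metric floor is met or exceeded.

    A metric missing from `measured` counts as 0.0 and fails the check.
    """
    return all(measured.get(name, 0.0) >= floor
               for name, floor in expected.items())


expected = config["lmeval"]["metrics"]
print(meets_thresholds(
    {"exact_match,flexible-extract": 0.74, "exact_match,strict-match": 0.73},
    expected,
))  # True: both measured scores clear the 0.72 floors
```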
