Remove mamba-ssm package #22409

Closed · wants to merge 2 commits
1 change: 0 additions & 1 deletion docker/Dockerfile.cpu
@@ -113,7 +113,6 @@ WORKDIR /workspace/vllm

RUN --mount=type=bind,src=requirements/test.in,target=requirements/test.in \
cp requirements/test.in requirements/cpu-test.in && \
-sed -i '/mamba_ssm/d' requirements/cpu-test.in && \
sed -i 's/^torch==.*/torch==2.6.0/g' requirements/cpu-test.in && \
sed -i 's/torchaudio.*/torchaudio/g' requirements/cpu-test.in && \
sed -i 's/torchvision.*/torchvision/g' requirements/cpu-test.in && \
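With mamba_ssm no longer listed in requirements/test.in, the `sed -i '/mamba_ssm/d'` step above becomes redundant, which is why it is dropped; the remaining visible steps only rewrite the torch-family pins for the CPU image. A minimal sketch of that effect, assuming GNU sed and using illustrative input lines rather than the real requirements/test.in:

```bash
# Illustrative only: replay the remaining cpu-test.in rewrite on a throwaway file.
# The three input lines below are assumptions standing in for requirements/test.in.
printf 'torch==2.7.1\ntorchaudio==2.7.1\ntorchvision==0.22.1\n' > /tmp/cpu-test.in
sed -i 's/^torch==.*/torch==2.6.0/g' /tmp/cpu-test.in    # pin torch to the CPU build version
sed -i 's/torchaudio.*/torchaudio/g' /tmp/cpu-test.in    # drop the torchaudio pin
sed -i 's/torchvision.*/torchvision/g' /tmp/cpu-test.in  # drop the torchvision pin
cat /tmp/cpu-test.in   # -> torch==2.6.0 / torchaudio / torchvision
```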
13 changes: 0 additions & 13 deletions docs/contributing/ci/update_pytorch_version.md
@@ -131,19 +131,6 @@ MAX_JOBS=16 uv pip install --system \
--no-build-isolation "git+https://github.com/facebookresearch/[email protected]"
```

-### Mamba
-
-```bash
-uv pip install --system \
-    --no-build-isolation "git+https://github.com/state-spaces/[email protected]"
-```
-
-### causal-conv1d
-
-```bash
-uv pip install 'git+https://github.com/Dao-AILab/[email protected]'
-```

## Update all the different vLLM platforms

Rather than attempting to update all vLLM platforms in a single pull request, it's more manageable
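For the from-source builds that remain in this document (such as the xformers example in the hunk above), a quick import check against the freshly installed torch is a reasonable smoke test; a minimal sketch:

```bash
# Sanity-check that the from-source build imports cleanly against the updated torch.
python -c "import torch, xformers; print(torch.__version__, xformers.__version__)"
```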
3 changes: 1 addition & 2 deletions requirements/test.in
@@ -26,7 +26,6 @@ torch==2.7.1
torchaudio==2.7.1
torchvision==0.22.1
transformers_stream_generator # required for qwen-vl test
-mamba_ssm==2.2.5 # required for plamo2 test
matplotlib # required for qwen-vl test
mistral_common[image,audio] >= 1.8.2 # required for voxtral test
num2words # required for smolvlm test
@@ -54,4 +53,4 @@ runai-model-streamer==0.11.0
runai-model-streamer-s3==0.11.0
fastsafetensors>=0.1.10
pydantic>=2.10 # 2.9 leads to error on python 3.10
-terratorch==1.1rc2 # required for PrithviMAE test
+terratorch==1.1rc2 # required for PrithviMAE test
13 changes: 1 addition & 12 deletions requirements/test.txt
@@ -178,7 +178,6 @@ einops==0.8.1
# via
# -r requirements/test.in
# encodec
-# mamba-ssm
# terratorch
# torchgeo
# vector-quantize-pytorch
@@ -418,8 +417,6 @@ lxml==5.3.0
# sacrebleu
mako==1.3.10
# via alembic
-mamba-ssm==2.2.5
-# via -r requirements/test.in
markdown==3.8.2
# via mlflow
markdown-it-py==3.0.0
@@ -476,8 +473,6 @@ networkx==3.2.1
# via
# scikit-image
# torch
-ninja==1.11.1.3
-# via mamba-ssm
nltk==3.9.1
# via rouge-score
num2words==0.5.14
@@ -630,7 +625,6 @@ packaging==24.2
# lazy-loader
# lightning
# lightning-utilities
-# mamba-ssm
# matplotlib
# mlflow-skinny
# peft
@@ -974,7 +968,6 @@ sentencepiece==0.2.0
setuptools==77.0.3
# via
# lightning-utilities
-# mamba-ssm
# pytablewriter
# torch
# triton
@@ -1086,7 +1079,6 @@ torch==2.7.1+cu128
# lightly
# lightning
# lm-eval
-# mamba-ssm
# mteb
# open-clip-torch
# peft
@@ -1153,16 +1145,13 @@ transformers==4.55.0
# -r requirements/test.in
# genai-perf
# lm-eval
-# mamba-ssm
# peft
# sentence-transformers
# transformers-stream-generator
transformers-stream-generator==0.0.5
# via -r requirements/test.in
triton==3.3.1
-# via
-# mamba-ssm
-# torch
+# via torch
tritonclient==2.51.0
# via
# -r requirements/test.in
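requirements/test.txt is the compiled counterpart of requirements/test.in, so the mamba-ssm entry and its transitive pins (ninja, the extra triton annotations) fall out once the lock file is regenerated. A hedged sketch of that regeneration step; the exact command and flags used by vLLM's tooling are an assumption here, so check the header comment of requirements/test.txt for the real invocation:

```bash
# Regenerate the compiled lock file after editing requirements/test.in.
# Command shape is an assumption; the project's CI may use different flags.
uv pip compile requirements/test.in -o requirements/test.txt
```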
7 changes: 4 additions & 3 deletions tests/models/language/generation/test_hybrid.py
@@ -25,9 +25,6 @@

HYBRID_MODELS = [
"ai21labs/Jamba-tiny-dev",
-# NOTE: Running Plamo2 in transformers implementation requires to install
-# causal-conv1d package, which is not listed as a test dependency as it's
-# not compatible with pip-compile.
"pfnet/plamo-2-1b",
"Zyphra/Zamba2-1.2B-instruct",
"hmellor/tiny-random-BambaForCausalLM",
@@ -50,6 +47,10 @@
# https://github.com/huggingface/transformers/pull/39033
# We will enable vLLM test for Granite after next HF transformers release.
"ibm-granite/granite-4.0-tiny-preview",
+# NOTE: Plamo2 requires both mamba_ssm and causal-conv1d libraries
+# (see https://huggingface.co/pfnet/plamo-2-1b/blob/main/modeling_plamo.py),
+# Don't compare it to HF, to avoid managing the dependency.
+"pfnet/plamo-2-1b",
]

V1_SUPPORTED_MODELS = [
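The NOTE added above means plamo-2-1b is no longer compared against its HF reference implementation in CI, since that path needs mamba_ssm and causal-conv1d. Anyone who wants to run the reference path locally can install the two libraries by hand; a sketch that reuses the commands removed from update_pytorch_version.md in this PR (versions may need bumping):

```bash
# Local-only: optional deps for pfnet/plamo-2-1b's reference implementation.
# These mirror the commands removed from the docs in this PR.
uv pip install --system \
    --no-build-isolation "git+https://github.com/state-spaces/[email protected]"
uv pip install 'git+https://github.com/Dao-AILab/[email protected]'
```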