
Conversation

@ParagEkbote
Contributor

@ParagEkbote ParagEkbote commented Oct 24, 2025

Description

As described in the issue, this PR pins the torchao version to 0.12.0. If we want to support a version higher, we will need to bump the torch version as well. After pinning to this version, the warning does not appear. Could you please review?
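For reference, a minimal sketch of the intended change, assuming the standard [project] dependencies list layout shown in the review snippets below (surrounding entries are illustrative):

```toml
[project]
dependencies = [
    # ...
    "hqq==0.2.7.post1",
    # Pinning to 0.12.0 avoids the PyTorch ABI warning (#411); supporting a newer
    # torchao would also require bumping the torch version.
    "torchao==0.12.0",
    # ...
]
```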

cc: @davidberenstein1957

Related Issue

Fixes #411

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Additional Notes


Note

Pins torchao to version 0.12.0 to avoid the PyTorch ABI warning.

Written by Cursor Bugbot for commit 474202c.

@github-actions

github-actions bot commented Nov 4, 2025

This PR has been inactive for 10 days and is now marked as stale.

@github-actions github-actions bot added the stale label Nov 4, 2025
@ParagEkbote
Contributor Author

Not stale.

@ParagEkbote
Contributor Author

ParagEkbote commented Nov 8, 2025

I've pinned the sphinx version to avoid a conflict with jinja. Currently, the Janus test is failing because the transformers version that supports it is not being installed during testing. Can I add a step like pip install --upgrade transformers in the CI?

Secondly, the linting with ty seems to be failing due to updates in the newer version of ty; could we pin this package as well? WDYT?

cc: @davidberenstein1957

@github-actions github-actions bot removed the stale label Nov 9, 2025
@ParagEkbote
Contributor Author

Could you please review?

cc: @davidberenstein1957

Member

@davidberenstein1957 davidberenstein1957 left a comment

@ParagEkbote, could you rebase on main? Also, it seems one of the tests is failing because the Janus architecture is not being recognised. Perhaps we should check whether this is caused by a wrong transformers version.

pyproject.toml Outdated
"hqq==0.2.7.post1",
"torchao",
"torchao==0.12.0",
"Sphinx>=4.5,<7.0",


why are we doing this?

Contributor Author

@ParagEkbote ParagEkbote Nov 12, 2025

We need to pin the sphinx version to prevent a version conflict with jinja. The GH Action logs for this: https://github.com/PrunaAI/pruna/actions/runs/18781274712/job/53587851292

Contributor Author

Furthermore, I have also pinned the ty and transformers versions, so the test and linting checks are passing. Could you please review?

cc: @davidberenstein1957

Collaborator

Sphinx is only a dev dependency, so we shouldn't pin it here. I don't understand why we're pinning it in the first place, and the logs are not available anymore (sorry for the delay); could you re-explain / re-run the logs please?

Member

@davidberenstein1957 davidberenstein1957 left a comment

hi @ParagEkbote sorry for the outcome of this PR. I feel that the benefit of suppressing the warning message doesn't outweigh the versioning restrictions in this case. WDYT @johannaSommer?

@ParagEkbote
Contributor Author

ParagEkbote commented Nov 13, 2025

> hi @ParagEkbote sorry for the outcome of this PR. I feel that the benefit of suppressing the warning message doesn't outweigh the versioning restrictions in this case. WDYT @johannaSommer?

Can we consider updating the torch version so that a newer version of torchao can be used?

cc: @davidberenstein1957

@johannaSommer
Member

> hi @ParagEkbote sorry for the outcome of this PR. I feel that the benefit of suppressing the warning message doesn't outweigh the versioning restrictions in this case. WDYT @johannaSommer?
>
> Can we consider updating the torch version so that a newer version of torchao can be used?
>
> cc: @davidberenstein1957

Hey @ParagEkbote @davidberenstein1957 thanks for the progress so far! Only catching up now, but definitely yes, we can upgrade the torch version and then we should also be able to support newer versions of torchao. In fact, @gsprochette is currently working on this torch update; maybe he can give a quick heads-up as to when it will be merged.

@ParagEkbote ParagEkbote changed the title Pin torchao version to avoid warning of PyTorch ABI Pin torchao version to avoid warning of PyTorch ABI, update transformers version and pin some deps Nov 18, 2025
@github-actions

This PR has been inactive for 10 days and is now marked as stale.

@github-actions github-actions bot added the stale label Nov 29, 2025
@ParagEkbote
Contributor Author

Not Stale.

@github-actions github-actions bot removed the stale label Dec 1, 2025
Collaborator

@gsprochette gsprochette left a comment

Thanks a lot for fixing the torchao version. I think all the pinned dependencies require a justification. We should try not to restrict the environment too much; for transformers in particular, >=4.55.0 seems restrictive.

pyproject.toml Outdated
"coverage",
"docutils",
"ty",
"ty==0.0.1a20",
Collaborator

Is there a reason for pinning the specific version of ty?

Contributor Author

Since ty is a pre-release package, the number of rules and their enforcement levels vary between releases. The pinned version helps pass the majority of the linting checks, in line with the configuration defined in pyproject.toml.

Collaborator

Makes sense, can we pin it to 0.0.1a21 then? This is the version already in the current uv.lock
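A sketch of what the agreed pin might look like in the dev dependency list (the extra name "dev" is an assumption; the neighbouring entries are taken from the snippet above):

```toml
[project.optional-dependencies]
# The extra name is illustrative; pruna's actual group may differ.
dev = [
    "coverage",
    "docutils",
    # ty is still pre-release, so rule sets and enforcement levels change between
    # alpha versions; pin to the version already resolved in uv.lock.
    "ty==0.0.1a21",
]
```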

pyproject.toml Outdated
"torchmetrics[image]==1.7.4",
"requests>=2.31.0",
"transformers",
"transformers>=4.55.0",
Collaborator

why are we constraining the transformers version this much?

Contributor Author

As seen in this pipdeptree output, older versions of gliner had a minimum requirement of transformers>=4.51.0, but newer versions require a higher minimum. Should I update it and exclude transformers v5 for now?

Collaborator

I understand that pruna is compatible with transformers>=4.55.0, but that is not the side being constrained here: you're declaring it is not compatible with transformers<4.55.0, and I don't understand why.
In the issue you linked, it sounds like you want to use a more recent model architecture, but that is a dependency external to pruna, right? I don't think this should justify restricting pruna's compatibility with other packages.


@ParagEkbote
Contributor Author

ParagEkbote commented Dec 3, 2025

The logs for tests if we don't pin Sphinx are as follows:

=========================== short test summary info ============================
ERROR tests/algorithms/test_algorithms.py - sphinx.errors.VersionRequirementError: 
Sphinx<4.0.2 is incompatible with Jinja2>=3.1.
If you wish to continue using sphinx<4.0.2 you need to pin Jinja2<3.1.
ERROR tests/algorithms/test_algorithms.py - sphinx.errors.VersionRequirementError: 
Sphinx<4.0.2 is incompatible with Jinja2>=3.1.
If you wish to continue using sphinx<4.0.2 you need to pin Jinja2<3.1.
ERROR tests/algorithms/test_algorithms.py - sphinx.errors.VersionRequirementError: 
Sphinx<4.0.2 is incompatible with Jinja2>=3.1.
If you wish to continue using sphinx<4.0.2 you need to pin Jinja2<3.1.
ERROR tests/algorithms/test_algorithms.py - sphinx.errors.VersionRequirementError: 
Sphinx<4.0.2 is incompatible with Jinja2>=3.1.
If you wish to continue using sphinx<4.0.2 you need to pin Jinja2<3.1.
ERROR tests/algorithms/test_algorithms.py - sphinx.errors.VersionRequirementError: 
Sphinx<4.0.2 is incompatible with Jinja2>=3.1.
If you wish to continue using sphinx<4.0.2 you need to pin Jinja2<3.1.
ERROR tests/algorithms/test_algorithms.py - sphinx.errors.VersionRequirementError: 
Sphinx<4.0.2 is incompatible with Jinja2>=3.1.
If you wish to continue using sphinx<4.0.2 you need to pin Jinja2<3.1.
ERROR tests/algorithms/test_algorithms.py - sphinx.errors.VersionRequirementError: 
Sphinx<4.0.2 is incompatible with Jinja2>=3.1.
If you wish to continue using sphinx<4.0.2 you need to pin Jinja2<3.1.
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 7 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!! xdist.dsession.Interrupted: stopping after 1 failures !!!!!!!!!!!!!
======================== 8 warnings, 7 errors in 28.94s ========================

Could we pin sphinx as a dev dependency instead? WDYT?

cc: @gsprochette

Collaborator

@gsprochette gsprochette left a comment

For the Sphinx and Jinja2 issue, I would be in favor of pinning numpydoc>=1.6.0, which seems to be the first version with a pyproject.toml. The CI should already be using something above that in the uv.lock, and numpydoc>=1.6.0 relies on Sphinx>=5. If you can leave a quick comment stating that this is fixing dependencies in numpydoc-validation, that would be great :) Thanks a lot!
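A sketch of how that pin and the requested comment might look in pyproject.toml (which dependency group numpydoc lives in is an assumption here; adjust to the actual layout):

```toml
[project.optional-dependencies]
# Group placement is illustrative.
dev = [
    # ...
    # Fixes dependency resolution for numpydoc-validation: numpydoc>=1.6.0 is the
    # first release with a pyproject.toml and relies on Sphinx>=5, avoiding the
    # "Sphinx<4.0.2 is incompatible with Jinja2>=3.1" error from the test logs.
    "numpydoc>=1.6.0",
]
```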

@ParagEkbote
Contributor Author

> For the Sphinx and Jinja2 issue, I would be in favor of pinning numpydoc>=1.6.0, which seems to be the first version with a pyproject.toml. The CI should already be using something above that in the uv.lock, and numpydoc>=1.6.0 relies on Sphinx>=5. If you can leave a quick comment stating that this is fixing dependencies in numpydoc-validation, that would be great :) Thanks a lot!

Pinning the numpydoc version has fixed the jinja version conflict and the CI; thanks for getting back to me quickly 👍

cc: @gsprochette

@ParagEkbote
Contributor Author

> I understand that pruna is compatible with transformers>=4.55.0, but that is not the side being constrained here: you're declaring it is not compatible with transformers<4.55.0, and I don't understand why.
> In the issue you linked, it sounds like you want to use a more recent model architecture, but that is a dependency external to pruna, right? I don't think this should justify restricting pruna's compatibility with other packages.

Based on the pipdeptree graph for the given deps, we could bump the minimum version to 4.57.3 (the minimum requirement of gliner) and restrict installation of v5, something like:

transformers >= 4.57.3, <5.0

pipdeptree graph for the minimum version constraints:

transformers==4.57.3
├── llmcompressor==0.6.0 [requires: transformers>4.0,<5.0]
│   └── pruna==0.3.0 [requires: llmcompressor]
├── hqq==0.2.7.post1 [requires: transformers>=4.36.1]
│   └── pruna==0.3.0 [requires: hqq==0.2.7.post1]
├── compressed-tensors==0.10.2 [requires: transformers]
│   └── llmcompressor==0.6.0 [requires: compressed-tensors==0.10.2]
│       └── pruna==0.3.0 [requires: llmcompressor]
├── whisper-s2t==1.3.1 [requires: transformers]
│   └── pruna==0.3.0 [requires: whisper-s2t==1.3.1]
├── gliner==0.2.24 [requires: transformers>=4.57.3]
│   └── pruna==0.3.0 [requires: gliner]
├── pruna==0.3.0 [requires: transformers]
├── DeepCache==0.1.1 [requires: transformers]
│   └── pruna==0.3.0 [requires: DeepCache]
├── optimum==2.0.0 [requires: transformers>=4.29]
│   └── whisper-s2t==1.3.1 [requires: optimum]
│       └── pruna==0.3.0 [requires: whisper-s2t==1.3.1]
└── vbench-pruna==0.0.1 [requires: transformers]
    └── pruna==0.3.0 [requires: vbench-pruna]

WDYT?

cc: @gsprochette
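For reference, the proposed constraint would read roughly like this in pyproject.toml (a sketch only; as the follow-up discussion concludes, the restriction was ultimately not adopted):

```toml
[project]
dependencies = [
    # ...
    # Proposed: raise the floor to gliner 0.2.24's requirement and exclude a
    # future transformers v5 (dropped after review).
    "transformers>=4.57.3,<5.0",
    # ...
]
```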

@gsprochette
Collaborator

I think you only see this requirement of transformers>=4.57.3 because gliner resolved to 0.2.24 on your side, but lower versions are compatible with the requirements of pruna. Running uv pip install --dry-run transformers==4.37 pruna to check compatibility resolves, meaning the transformers version can go much lower than you expect.

We should not constrain the transformers version unless the code in pruna is incompatible with these earlier versions, so I'm in favor of not touching the transformers version in this PR.

@ParagEkbote
Contributor Author

ParagEkbote commented Dec 4, 2025

> I think you only see this requirement of transformers>=4.57.3 because gliner resolved to 0.2.24 on your side, but lower versions are compatible with the requirements of pruna. Running uv pip install --dry-run transformers==4.37 pruna to check compatibility resolves, meaning the transformers version can go much lower than you expect.
>
> We should not constrain the transformers version unless the code in pruna is incompatible with these earlier versions, so I'm in favor of not touching the transformers version in this PR.

Do we need to guard against a future transformers v5 for breaking changes, or should we not touch that either?

cc: @gsprochette

@gsprochette
Collaborator

I don't think it's necessary for now to guard against future transformers versions; we'll restrict compatibility if necessary when the version comes out.

@ParagEkbote ParagEkbote changed the title Pin torchao version to avoid warning of PyTorch ABI, update transformers version and pin some deps Pin torchao==0.12.0 to avoid PyTorch ABI warnings, also pin numpydoc>=1.6.0 and ty==0.0.1a21 for compatibility. Dec 5, 2025
@ParagEkbote
Contributor Author

I have unpinned transformers as suggested. Could you please review?

cc: @gsprochette

Collaborator

@gsprochette gsprochette left a comment

Looks good to me, thanks for your work :) @johannaSommer it would be cool to merge this soon to stabilize the CI
