[BUG] Isolation of tests in pet #882
Conversation
Force-pushed from 6c5f766 to 060e9ae
Now that the tests are isolated, they fail 😆 If someone knows how to fix them, please go ahead.
I did a little bit of digging into this... FWIW there are only two tests which need to be run in strict order; the rest can be run independently:

tox -e pet-tests -- src/metatrain/pet/tests/test_{finetuning,regression}.py -vvv -s

So maybe we need a more targeted fix than scattering deep copies, because that's a major performance hit.
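As a sketch of what such a targeted fix could look like (the import path and fixture names below are assumptions for illustration, not the actual repo layout): expose the hypers through pytest fixtures, so only the fixture pays the deep-copy cost and no test can mutate shared state.

```python
# conftest.py sketch for the PET test directory. DEFAULT_HYPERS and
# MODEL_HYPERS are the module-level dicts discussed in this thread;
# the import below is a hypothetical location for them.
import copy

import pytest

from metatrain.pet.tests.utils import DEFAULT_HYPERS, MODEL_HYPERS  # hypothetical


@pytest.fixture
def default_hypers():
    # Each test receives its own deep copy, so in-place edits
    # cannot leak into later tests.
    return copy.deepcopy(DEFAULT_HYPERS)


@pytest.fixture
def model_hypers():
    return copy.deepcopy(MODEL_HYPERS)
```

Tests would then take `default_hypers` as an argument instead of importing the global directly.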
Is deepcopying a very shallow dict really a "major performance hit"? I don't see how. I think the fact that one unit test depends on having run some other tests in some other file in a given order is much more worrying...
Oh, definitely, the order dependence is much worse. I just meant the fix should probably be targeted at the two tests I suggested. Edit: can't remember what OTOH, but we should use a test order randomizer.
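On the randomizer idea: the pytest-randomly plugin shuffles test order automatically once installed. A minimal hand-rolled sketch of the same idea (the TEST_SHUFFLE_SEED variable is a made-up name) could also live in a conftest.py:

```python
# conftest.py sketch: shuffle the collected tests so hidden order
# dependencies surface immediately instead of by accident.
import os
import random


def pytest_collection_modifyitems(config, items):
    # Take the seed from the environment when replaying a failing order,
    # otherwise draw a fresh one and report it.
    seed = int(os.environ.get("TEST_SHUFFLE_SEED", random.randrange(2**32)))
    print(f"shuffling tests with TEST_SHUFFLE_SEED={seed}")
    random.Random(seed).shuffle(items)
```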
@abmazitov can you try to fix the PET tests (while keeping them isolated)? I don't know whether the PET tests were designed to be run in a given sequence (in which case we would need to rethink how we define that order) or whether it was just pure chance that the tests were passing.
What do you mean by "keeping them isolated"?
That the fact that a test passes does not depend on having run some other test previously, unless this order is specified somehow. The isolation is already done; we just needed to do deep copies of the hypers.
For full context: I was modifying a functionality that is tested by a unit test. After the modification, this test continues to pass. But if I run the full suite of tests it doesn't, because there is leakage from one test to another (through this shared global state).
The tests for PET were all using, and some of them modifying, global variables (DEFAULT_HYPERS and MODEL_HYPERS), which is of course very bad. I realised this because running the tests of a single file was passing but running the tests of all files was failing (in a branch). I just took the approach that some of the tests were already taking, which is to deepcopy the hypers before modifying them.
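For reference, the copy-before-mutate pattern described above looks like this (the import path and the hypers key are hypothetical, for illustration only):

```python
import copy

from metatrain.pet.tests.utils import DEFAULT_HYPERS  # hypothetical path


def test_with_modified_hypers():
    # Copy first: the shared global must never be mutated in place.
    hypers = copy.deepcopy(DEFAULT_HYPERS)
    hypers["training"]["num_epochs"] = 1  # hypothetical key, for illustration
    # ... build the model and run the assertions against `hypers` ...
```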
📚 Documentation preview 📚: https://metatrain--882.org.readthedocs.build/en/882/