Implement specified timeout for slow doctests #39746
Conversation
Documentation preview for this PR (built with commit ae9f5a7; changes) is ready! 🎉
Note that https://doc.sagemath.org/html/en/developer/coding_basics.html#special-markup-to-influence-doctests requires
I just left a comment on #39569, but will summarize it here by saying that I think we should prefer to fix these tests rather than hide the (accurate) warnings. According to our documentation, even long tests should complete in about 5s. If we have a test that takes 100s, that's a problem and should be dealt with. In #36226 I switched the runtime calculation to use CPU time for a more objective measure, and we lowered the warning threshold, though it is still far more lenient than the 1s and 5s recommendations in the developer guide. The whole point was to shine light upon the tests that are too slow, rather than me having to find them one at a time on my laptop when the doctests time out and create false positives. (The PR was started before we moved to GitHub, but now the CI has the same problem with random timeouts.) Adding the framework for and exceptions to all of these tests will just revert us back to where we were -- with no warnings for tests that are in gross violation of our policy -- while requiring more code to do it. If there really are tests that require something like 100s to complete and there's no faster way to exercise the same code paths, then some other solution (pre-release tests?) is called for.
In theory, that's obviously the right thing to do. In practice, suppose that it takes one year to clean up all these pull requests. Within that time, let's say we have to collectively review 1000 pull requests. The overhead of repeatedly looking at 1000 × 40 warnings can easily lead to people not looking at the warnings newly introduced by a pull request, which makes the problem worse. Adding (The alternative is that you volunteer to quickly fix them; then yes, problem solved.) For comparison, there are 51
The problem with this analogy is that 40 is not a huge number. I was regularly fixing them, but it was futile before the CPU time branch was merged, because if you make it possible to ignore the warnings, people ignore the warnings. Most slow tests were added because the author had a fast CPU, was testing on an unloaded system, and simply didn't realize the test was slow. In those cases a smaller
Many random tests are slow -- I was just looking at one of these in
sure, nice.
sounds like we have a problem... still, I suppose you can fix the test first and we'll see what to do later. Worst case
isn't this quite trivial, i.e. the ratio of speeds of any two CPUs in existence at a given time is Θ(1)?
what's the relation here? or do you mean the pull request also has the extra feature of raising the warning(?)
we're agreeing on this (the current situation is that there are too many warnings, which leads to people ignoring them)
What about simply not adding the GitHub annotation for the 40 known too-long tests and tracking those in a new issue instead? That would prevent people from adding new too-long tests without annoying anyone with too many unrelated warnings.
this is exactly the same as
Not quite, since this "long time with specified limit" marker is quite easy to add -- and it's not clear to an average dev that it should not be used. On the other hand, a hard-coded list somewhere in the doctester with a big warning header is less obvious and does a better job of educating contributors. Alternatively, we could also just tag these tests as "known bug".
@DaveWitteMorris complains about this option in #39569.
Like the baseline failure JSON? The disadvantage of having the list far from the code is that when the code is fixed, nobody remembers to remove the entry from the list. If it's just for educational purposes, we can either
Yes, but perhaps just a hard-coded array in the doctester would be sufficient. I don't think it would be that bad in this case if an entry is not immediately removed from the list. I share the sentiment that these warnings are a bit annoying and thus people just ignore them. But I don't have a strong opinion about the best way forward.
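For concreteness, here is a minimal sketch of what such a hard-coded list could look like, assuming it lives right next to the warning logic in the doctester. The constant names, file paths, and threshold below are hypothetical, not the actual `sage.doctest` implementation:

```python
# Hypothetical sketch only -- not the actual sage.doctest implementation.
# A hard-coded list of doctest files known to exceed the slow-test warning
# threshold, kept next to the code that emits the warning.
#
# WARNING TO CONTRIBUTORS: do not add entries here to silence the warning
# for new code -- speed the test up instead.  Entries only track
# pre-existing slow tests until someone fixes them.

KNOWN_SLOW_DOCTESTS = {
    # "path of the offending file": "reason / tracking issue",
    "sage/foo/slow_example.py": "constant graph construction, see tracking issue",
    "sage/bar/random_thing.py": "random test, hard to shrink",
}

WARNING_THRESHOLD_CPU_SECONDS = 5.0  # hypothetical value


def should_warn(filename: str, cpu_time: float) -> bool:
    """Return True if a slow-doctest warning should be emitted for this file."""
    if filename in KNOWN_SLOW_DOCTESTS:
        return False
    return cpu_time > WARNING_THRESHOLD_CPU_SECONDS
```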
Can we at least eliminate the low-hanging fruit first? If a random test is slow,
In fact, now that I type it, moving any tests that are hard to fix into pytest would be a nice interim solution. It eliminates the warnings, sidesteps the timeout issues, and we could add comments like "if you want this back in the documentation, you have to speed it up first." But it's not even clear yet which ones would be hard to fix.
Easier said than done, though (it looks like most of the low-hanging fruit is addressed now). Sometimes there's just a single function that takes no parameters whatsoever (so you cannot reduce it) and is still slow, for example this one:
(in reality it sometimes takes a little more than 30s). It's an example, not a test, so you can't just "move it to pytest" either. Cf. #40443
This function is a constant. I can construct the same graph in much less time:
(and can similarly speed up most functions that take no arguments). We can pickle the vertices/edges as Python ints, lists, and tuples -- and then load them if the user wants to construct this graph. For a test (suitable for pytest) we could then verify that the algorithm produces a graph isomorphic to the pickled one. This leaves the doctest example untouched, but running 25x faster.
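Roughly, the pytest version could look like the sketch below. The pickle file name and the generator import are hypothetical stand-ins for whichever slow constructor is being tested; only `Graph` and `is_isomorphic` are real Sage API here:

```python
import pickle

from sage.graphs.graph import Graph


def test_slow_generator_matches_pickled_graph():
    # The edge list was pickled once, as plain Python ints/tuples,
    # from a known-good run of the slow constructor.
    with open("slow_graph_edges.pickle", "rb") as f:  # hypothetical data file
        edges = pickle.load(f)
    expected = Graph(edges)

    # Re-run the expensive construction and check that it still produces
    # the same graph up to isomorphism.  The import below is a placeholder
    # for the actual slow generator.
    from sage.graphs.generators.families import some_slow_graph  # hypothetical
    assert some_slow_graph().is_isomorphic(expected)
```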
sagemathgh-40558: Add long time marker to several slow tests

This gets rid of about half of the warnings, until someone figures out whether they're intended to be slow, or how they can be sped up.

Reference: sagemath#39569, sagemath#39746

URL: sagemath#40558
Reported by: user202729
Reviewer(s): Michael Orlitzky, user202729
Fixes #39569. Now a single doctest may add

    # long time (limit 100s)

to set the time limit; if the actual time taken stays below that limit, no warning will be raised. I have added a few such comments as a demonstration, but not to every slow test.
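As an illustration of the intended usage (the function and its docstring below are made up; only the marker syntax comes from this PR):

```python
def slow_function():
    r"""
    Return the answer, slowly.  (Hypothetical example function.)

    EXAMPLES::

        sage: slow_function()          # long time (limit 100s)
        42
    """
    return 42
```

No warning is raised for this doctest as long as its actual runtime stays below the specified 100s limit.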
Also adds some doctests and shows the time taken in the GitHub annotation, for convenience. (Hopefully someone will look at it once the false positives/noise are dealt with…)
📝 Checklist
⌛ Dependencies