add has_init
#1157
Conversation
Force-pushed from dd769b5 to d4b5ce2
@ChrisRackauckas given that …
I don't see the difference. You can …
Ah, sorry, typo; I was referring to stepping with …
Optimization caches should also have …
That would dramatically change how we use the cache, though. I think only the optimizers from OptimizationOptimisers allow stepping; I'm not aware of any others that do. Right now the Optimization caches are closer to the LinearSolve caches: they mainly cache AD function generation results, as far as I understand.
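A minimal sketch of the cache-reuse pattern this comment describes: expensive setup (e.g. AD function generation) happens once at cache construction, and subsequent solves reuse it even when the problem data is mutated in place. All names here (`WorkspaceCache`, `init_cache`, `solve_cached!`) are illustrative, not the actual LinearSolve/Optimization API.

```julia
mutable struct WorkspaceCache
    setups::Int     # how many times the expensive setup ran
    data::Float64   # problem data that may be updated between solves
end

# Stand-in for the expensive one-time setup (e.g. AD codegen).
init_cache(data) = WorkspaceCache(1, data)

# Reuses the cached setup; does not re-run it.
solve_cached!(cache::WorkspaceCache) = cache.data * 2

cache = init_cache(3.0)
x1 = solve_cached!(cache)   # 6.0
cache.data = 4.0            # mutate the problem data in place
x2 = solve_cached!(cache)   # 8.0, and cache.setups is still 1
```

The point is that updating `cache.data` and calling `solve_cached!` again skips the setup entirely, which is the value of the cache even for solvers that cannot `step!`.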
So should we get rid of https://github.com/SciML/Optimization.jl/blob/d201417e12a5ad7f47b12c15028c8e2ff5afb09b/lib/OptimizationBase/src/solve.jl#L95 and just call …
We should have a trait for …
No, not to the user. To the user it's just a cache where …
So the trait introduced in this PR should be renamed to …
No, those are two different things. Any …
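One way to read the `has_init` trait being discussed is as a dispatch switch: algorithms default to not advertising an `init`/`solve!` split, opt in via the trait, and the top-level solve branches on it. This is only a hedged sketch under that assumption; the type and function names (`my_solve`, `CachedAlg`, etc.) are hypothetical stand-ins, not the PR's actual implementation.

```julia
abstract type AbstractAlg end

# Default: an algorithm does not support the init/solve! split.
has_init(::AbstractAlg) = false

struct CachedAlg <: AbstractAlg end
has_init(::CachedAlg) = true     # opt in for algorithms with a real cache

struct DirectAlg <: AbstractAlg end

# Hypothetical stand-ins for the two solve paths.
my_init(prob, alg::CachedAlg) = (prob = prob, alg = alg)
my_solve!(cache) = :via_cache
direct_solve(prob, ::DirectAlg) = :direct

# Top-level entry point branches on the trait instead of assuming
# every algorithm supports init(prob, alg) + solve!(cache).
function my_solve(prob, alg::AbstractAlg)
    if has_init(alg)
        my_solve!(my_init(prob, alg))
    else
        direct_solve(prob, alg)
    end
end
```

With this shape, `my_solve(prob, CachedAlg())` goes through the cache path and `my_solve(prob, DirectAlg())` does not, without either algorithm knowing about the other's interface.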
Checklist
- contributor guidelines, in particular the SciML Style Guide and COLPRAC.