Type stability in ADNLPModels #352
Conversation
src/ADNLPModels.jl (Outdated)
```diff
 args = []
 for field in fieldnames(ADNLPModels.ADModelBackend)
   push!(args, if field in keys(kwargs) && typeof(kwargs[field]) <: ADBackend
     kwargs[field]
-  elseif field in keys(kwargs) && typeof(kwargs[field]) <: DataType
+  elseif field in keys(kwargs) && typeof(kwargs[field]) <: Union{DataType, UnionAll}
```
@MaxenceGollier Why do you need `UnionAll`?
Because the types `ReverseDiffADGradient` and `ForwardDiffADGradient` are now parametric, it is no longer true that `typeof(ReverseDiffADGradient) <: DataType`...
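For context, a minimal illustration of the distinction (the struct `MyGradient` is made up for illustration, not from the package): the bare name of a parametric type is a `UnionAll`, not a `DataType`, so the old check rejects it:

```julia
# A parametric struct: the bare name `MyGradient` is a UnionAll,
# while a concrete instantiation such as MyGradient{Float64} is a DataType.
struct MyGradient{T}
    buffer::Vector{T}
end

@show typeof(MyGradient)                               # UnionAll
@show typeof(MyGradient{Float64})                      # DataType
@show typeof(MyGradient) <: DataType                   # false: the old check rejects it
@show typeof(MyGradient) <: Union{DataType, UnionAll}  # true: the new check accepts it
```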
@tmigot Do you remember why you tested `typeof(kwargs[field]) <: DataType`? The condition `typeof(kwargs[field]) <: Type` is always true, so I don't know why we are testing that.
Yup, it doesn't make sense as is. Maybe it should be `kwargs[field] <: Type` to make sure the constructor call that follows makes sense.
Why not `kwargs[field] <: ADBackend`? This seems to work locally; we are really trying to see whether we can create a backend just from `kwargs[field]`, as per the next line... It would also be more consistent with the previous check, where we test whether `kwargs[field]` is already an `ADBackend` by using `typeof`...
Co-authored-by: Alexis Montoison <[email protected]>
@tmigot Are you fine with the new

I will check that today
test/utils.jl (Outdated)
```julia
  @test typeof(get_adbackend(newer_nlp).hessian_backend) <: ADNLPModels.ReverseDiffADHessian
end

function test_allocations(nlp)
```
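For reference, a self-contained sketch of what such an allocation test usually looks like (`my_grad!` and `measure_alloc` are hypothetical stand-ins, not the package's API):

```julia
using Test

my_grad!(g, x) = (g .= 2 .* x; g)   # hypothetical in-place gradient

# Measure allocations behind a function barrier, after a warm-up call,
# so compilation and global-scope dispatch do not pollute the count.
function measure_alloc(g, x)
    my_grad!(g, x)                  # warm-up
    return @allocated my_grad!(g, x)
end

x = ones(10)
g = similar(x)
@test measure_alloc(g, x) == 0      # a type-stable in-place call allocates nothing
```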
Can you create an issue in NLPModelsTest.jl to add this to all models?
@MaxenceGollier can you put the link here once it's done, thanks!
Why are the tests failing? Is it just because the NLS `grad!` function uses other functions behind the scenes and should not be tested, or is there some typing issue in NLPModels? Your benchmarks are indeed surprising @MaxenceGollier; what were the test problems?
@MaxenceGollier @amontoison I moved the PR from draft to review to run the package benchmark, out of curiosity.
That's what I have been trying to figure out. Some Hessian tests (with JET, I mean) also seem to fail for some reason, so there might be other instabilities hidden in this repo...

What do you mean? As I said, I still have to figure things out. I have a lot of urgent work to do on something else next week; I will be available to work this out the week after that...
Yes, good idea, I found similar issues on

What do you think of this version of
I think tests keep failing due to inherent type instabilities in

The remaining type instabilities should be solved by taking a closer look at those of
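For context, this kind of instability can be reproduced in miniature with an abstractly typed field and caught with `Test.@inferred` (the toy types below are illustrative, not from the package):

```julia
using Test

struct UnstableModel
    backend::Any              # abstract field: accesses are not inferable
end
struct StableModel{B}
    backend::B                # parametric field: fully inferable
end

get_backend(m) = m.backend

@inferred get_backend(StableModel(1.0))   # passes: return type is inferred as Float64
@test_throws ErrorException @inferred get_backend(UnstableModel(1.0))
```

Parameterizing the struct on the backend type is the standard fix: the compiler then knows the concrete type of `m.backend` at every call site.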
@tmigot
Thanks a lot @MaxenceGollier for the great work! I made some comments, but we are going in a good direction.
```julia
function ADNLPModel(nlp::ADNLPModel, new_adbackend::ADModelBackend)
  return _set_adbackend(nlp, new_adbackend)
end

function ADNLPModel(nlp::ADNLPModel; kwargs...)
  return _set_adbackend(nlp; kwargs...)
end

function ADNLSModel(nlp::ADNLSModel; kwargs...)
  return _set_adbackend(nlp; kwargs...)
end

function ADNLSModel(nlp::ADNLSModel, new_adbackend::ADModelBackend)
  return _set_adbackend(nlp, new_adbackend)
end
```
Good idea, but it doesn't work with this (ADNLPModels.jl/src/ADNLPModels.jl, line 35 in fd0d248):

```julia
function ADNLPModel!(model::AbstractNLPModel; kwargs...)
```

Because we are already building mixed models. Do we really need the `ADNLPModel(nlp::ADNLPModel; kwargs...)` and `ADNLSModel(nlp::ADNLSModel; kwargs...)` constructors?
No, perhaps we don't. We would then need to remove the `test_getter_setter` test, which basically tests these functions (these are causing the tests to fail on my branch). I am not sure, though, whether it is a good idea to remove the functions; let me know what you think. If we do remove them, I will just remove the `test_getter_setter` test and tests should pass.
I am unfortunately not too surprised by this. Thanks to your analysis we will improve our part of the code.

bump

I agree. Perhaps the best we can do is to call the
Co-authored-by: Tangi Migot <[email protected]>
#347
@amontoison @tmigot This is where I am so far.
Attached are my benchmark results...
grad_benchmarks.zip