Uniform returned type in the NLPModel API #454
Comments
Oh sorry, I fixed my suggestion. I liked ii) better as it allows having a model and still evaluating in another precision. Is there a particular reason, @amontoison?
grad(nlp::AbstractNLPModel{T, S}, x::S) where {T, S} # -> return a value of type S
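A minimal sketch of the two signatures under discussion, using a hypothetical `ToyModel` (the names `grad_i`/`grad_ii` are illustrative, not the real NLPModels API):

```julia
# Toy stand-in for an NLPModel: T is the data type, S the storage type.
struct ToyModel{T, S}
    c::S   # storage-typed coefficients
end

# Option i): x must match the model's storage type S.
grad_i(nlp::ToyModel{T, S}, x::S) where {T, S} = 2 .* nlp.c .* x

# Option ii): x can be any vector type V, so mixed-precision
# evaluation is possible (the result eltype follows promotion).
grad_ii(nlp::ToyModel{T, S}, x::V) where {T, S, V} = 2 .* nlp.c .* x

nlp = ToyModel{Float64, Vector{Float64}}([1.0, 2.0])
grad_ii(nlp, Float32[1, 1])   # works: evaluation in another precision
# grad_i(nlp, Float32[1, 1])  # MethodError: Float32 input does not match S
```

With option i), evaluating a `Float64`-storage model at a `Float32` point requires converting `x` first; option ii) dispatches directly.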
EDIT: No, never mind. Now that I think about it, we allowed
The views are a good point. The current implementation of … to keep the type of … However, I suspect the compiler will prefer what you both suggested.
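A short sketch of the "views" point (type and function names are hypothetical): constraining `x::S` rejects a `SubArray` view of the storage type, while `x::V` accepts it.

```julia
# Toy model with storage type S.
struct ViewDemoModel{T, S}
    data::S
end

# Strict signature: x must be exactly the storage type S.
g_strict(m::ViewDemoModel{T, S}, x::S) where {T, S} = sum(m.data .+ x)

# Loose signature: any AbstractVector is accepted, including views.
g_loose(m::ViewDemoModel{T, S}, x::V) where {T, S, V <: AbstractVector} = sum(m.data .+ x)

m = ViewDemoModel{Float64, Vector{Float64}}([1.0, 2.0])
y = [1.0, 2.0, 3.0]
xv = view(y, 1:2)    # a SubArray, not a Vector{Float64}
g_loose(m, xv)       # works
# g_strict(m, xv)    # MethodError: SubArray does not match S
```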
What makes you say that the compiler would prefer …? Just to think things through, let's say …
From https://docs.julialang.org/en/v1/manual/performance-tips/#Be-aware-of-when-Julia-avoids-specializing, I think using … I think in general choosing … This example sort of illustrates it: the @btime results are exactly the same; it is just compilation that changes.
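A sketch of the specialization point from the linked performance tips (my reading of that section, with hypothetical names): Julia may avoid specializing on `Function`, `Type`, and `Vararg` arguments, but an explicit `where` type parameter forces specialization. Runtime results are identical; only compilation behavior differs.

```julia
# Without a type parameter, Julia may avoid specializing on the function argument f.
h1(f, x) = f(x)

# The `where {F}` annotation forces specialization on the concrete type of f.
h2(f::F, x) where {F} = f(x)

x = [1.0, 2.0, 3.0]
h1(sum, x) == h2(sum, x)   # same result either way; only compilation differs
```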
I think the conclusion I am trying to reach is … Beyond that, I think it's a choice. The following table is testing …,
but it's definitely worth having this discussion and documenting the solution.
This seems related to JuliaSmoothOptimizers/JSOSolvers.jl#135. @paraynaud (in case you have some feedback too)
From what I read, option iii) should satisfy everyone.
PR #455 applies what we are discussing here.
I think we should clarify the returned type in the API.
Essentially, when calling
grad(nlp::AbstractNLPModel{T, S}, x::V) where {T, S, V}
we have two options. In general, I think assuming S == V is too restrictive.
Personally, I prefer option ii) as it makes multi-precision easier to implement.
@amontoison @abelsiqueira @dpo What do you think? i) or ii) or ... ?
Illustration
There are some inconsistencies right now; for instance, the functions
grad(nlp, x)
and
residual(nls, x)
do not have the same behavior. This is connected to JuliaSmoothOptimizers/NLPModelsTest.jl#105