From b377aedab43dc5ecfaf9b690f9962fad8fe17b26 Mon Sep 17 00:00:00 2001 From: Abel Date: Sat, 13 Mar 2021 18:04:17 -0300 Subject: [PATCH] Update the documentation Change Documenter version and add push_preview --- .github/workflows/TagBot.yml | 8 +- docs/Project.toml | 2 +- docs/make.jl | 6 +- docs/src/api.md | 9 - docs/src/guidelines.md | 42 +---- docs/src/index.md | 47 +---- docs/src/models.md | 139 +++------------ docs/src/tools.md | 26 +-- docs/src/tutorial.md | 330 ----------------------------------- 9 files changed, 56 insertions(+), 553 deletions(-) delete mode 100644 docs/src/tutorial.md diff --git a/.github/workflows/TagBot.yml b/.github/workflows/TagBot.yml index d77d3a0c..f49313b6 100644 --- a/.github/workflows/TagBot.yml +++ b/.github/workflows/TagBot.yml @@ -1,11 +1,15 @@ name: TagBot on: - schedule: - - cron: 0 * * * * + issue_comment: + types: + - created + workflow_dispatch: jobs: TagBot: + if: github.event_name == 'workflow_dispatch' || github.actor == 'JuliaTagBot' runs-on: ubuntu-latest steps: - uses: JuliaRegistries/TagBot@v1 with: token: ${{ secrets.GITHUB_TOKEN }} + ssh: ${{ secrets.DOCUMENTER_KEY }} diff --git a/docs/Project.toml b/docs/Project.toml index ed025f5a..0d8fbc73 100644 --- a/docs/Project.toml +++ b/docs/Project.toml @@ -2,4 +2,4 @@ Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4" [compat] -Documenter = "~0.25" +Documenter = "~0.26" diff --git a/docs/make.jl b/docs/make.jl index 112d8856..9f0b6eb3 100644 --- a/docs/make.jl +++ b/docs/make.jl @@ -11,10 +11,12 @@ makedocs( "Models" => "models.md", "Guidelines" => "guidelines.md", "Tools" => "tools.md", - "Tutorial" => "tutorial.md", "API" => "api.md", "Reference" => "reference.md" ] ) -deploydocs(repo = "github.com/JuliaSmoothOptimizers/NLPModels.jl.git") +deploydocs( + repo = "github.com/JuliaSmoothOptimizers/NLPModels.jl.git", + push_preview = true +) diff --git a/docs/src/api.md b/docs/src/api.md index 524e94ad..a16b13e9 100644 --- a/docs/src/api.md +++ b/docs/src/api.md @@ -126,15 +126,6 @@ hess_op_residual hess_op_residual! ``` -## Derivative Checker - -```@docs -gradient_check -jacobian_check -hessian_check -hessian_check_from_grad -``` - ## Internal ```@docs diff --git a/docs/src/guidelines.md b/docs/src/guidelines.md index 74d562c1..d085bdac 100644 --- a/docs/src/guidelines.md +++ b/docs/src/guidelines.md @@ -132,43 +132,5 @@ Furthermore, the `show` method has to be updated with the correct direction of ` ## [Advanced tests](@id advanced-tests) -To test your model, in addition to writing specific test functions, it is also advised to write consistency checks. -If your model can implement general problems, you can use the 6 problems in our `test/problems` folder implemented both as `ADNLPModel` and by explicitly defining these problem as models. -These can be used to verify that the implementation of your model is correct through the `consistent_nlps` function. -The simplest way to use these would be something like -```julia -for problem in ["BROWNDEN", "HS5", "HS6", "HS10", "HS11", "HS14"] - @printf("Checking problem %-20s", problem) - nlp_ad = eval(Meta.parse(lowercase(problem) * "_autodiff"))() # e.g. hs5_autodiff() - nlp_man = eval(Meta.parse(problem))() # e.g. HS5() - nlp_your = ... - nlps = [nlp_ad, nlp_man, nlp_your] - consistent_nlps(nlps) -end -``` - -Models with specific purposes can make use of the consistency checks by defining equivalent problems with `ADNLPModel` and testing them. 
-For instance, the following model is a regularization model defined by an existing model `inner`, a regularization parameter `ρ`, and a fixed point `z`: -```julia -mutable struct RegNLP <: AbstractNLPModel - meta :: NLPModelMeta - inner :: AbstractNLPModel - ρ - z -end -``` -Assuming that all unconstrained functions are defined, the following tests will make sure that `RegNLP` is consistent with a specific `ADNLPModel`. -```julia -include(joinpath(dirname(pathof(NLPModels)), "..", "test", "consistency.jl")) - -f(x) = (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2 -nlp = ADNLPModel(f, [-1.2; 1.0]) -ρ = rand() -z = rand(2) -rnlp = RegNLP(nlp, ρ, z) -manual = ADNLPModel(x -> f(x) + ρ * norm(x - z)^2 / 2, [-1.2; 1.0]) - -consistent_nlps([rnlp, manual]) -``` -The complete example is available in the repository [RegularizationModel.jl](https://github.com/JuliaSmoothOptimizers/RegularizationModel.jl). - +We have created the package [NLPModelsTest.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsTest.jl) which defines test functions and problems. +To make sure that your model is robust, we recommend using that package. \ No newline at end of file diff --git a/docs/src/index.md b/docs/src/index.md index 928536b7..b9583760 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -66,51 +66,14 @@ Install NLPModels.jl with the following command. ```julia pkg> add NLPModels ``` -This will enable a simple model and a model with automatic differentiation using -`ForwardDiff`. For models using JuMP see -[NLPModelsJuMP.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl). + +This will enable the use of the API and the tools described here, and it allows the creation of a manually written model. +Look into [Models](@ref) for more information on that subject, and on a list of packages implementing ready-to-use models. ## Usage -See the [Models](@ref), the [Tools](@ref tools-section), the [Tutorial](@ref), or the [API](@ref). - -## Internal Interfaces - - - [`ADNLPModel`](@ref): Uses - [`ForwardDiff`](https://github.com/JuliaDiff/ForwardDiff.jl) to compute the - derivatives. It has a very simple interface, though it isn't very efficient - for larger problems. - - [`SlackModel`](@ref): Creates an equality constrained problem with bounds - on the variables using an existing NLPModel. - - [`LBFGSModel`](@ref): Creates a model using a LBFGS approximation to - the Hessian using an existing NLPModel. - - [`LSR1Model`](@ref): Creates a model using a LSR1 approximation to - the Hessian using an existing NLPModel. - - [`ADNLSModel`](@ref): Similar to `ADNLPModel`, but for nonlinear - least squares. - - [`FeasibilityResidual`](@ref): Creates a nonlinear least squares - model from an equality constrained problem in which the residual - function is the constraints function. - - [`LLSModel`](@ref): Creates a linear least squares model. - - [`SlackNLSModel`](@ref): Creates an equality constrained nonlinear least squares - problem with bounds on the variables using an existing NLSModel. - - [`FeasibilityFormNLS`](@ref): Creates residual variables and constraints, so that the residual - is linear. - -## External Interfaces - - - `AmplModel`: Defined in - [`AmplNLReader.jl`](https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl) - for problems modeled using [AMPL](https://ampl.com) - - `CUTEstModel`: Defined in - [`CUTEst.jl`](https://github.com/JuliaSmoothOptimizers/CUTEst.jl) for - problems from [CUTEst](https://github.com/ralna/CUTEst/wiki). 
- - [`MathOptNLPModel`](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl) and [`MathOptNLSModel`](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl) - for problems modeled using [JuMP.jl](https://github.com/jump-dev/JuMP.jl) and [MathOptInterface.jl](https://github.com/jump-dev/MathOptInterface.jl). - -If you want your interface here, open a PR. - -If you want to create your own interface, check these [Guidelines](@ref). +See the [Models](@ref), the [Tools](@ref tools-section), or the [API](@ref). + ## Attributes diff --git a/docs/src/models.md b/docs/src/models.md index 78dfac3c..b4e3110e 100644 --- a/docs/src/models.md +++ b/docs/src/models.md @@ -1,117 +1,26 @@ # Models -The following general models are implemented in this package: -- [ADNLPModel](@ref) -- [Derived Models](@ref) - - [SlackModel](@ref) - - [LBFGSModel](@ref) - - [LSR1Model](@ref) - -In addition, the following nonlinear least squares models are -implemented in this package: -- [ADNLSModel](@ref) -- [FeasibilityResidual](@ref) -- [LLSModel](@ref) -- [SlackNLSModel](@ref) -- [FeasibilityFormNLS](@ref) - -There are other external models implemented. In particular, -- [AmplModel](https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl) -- [CUTEstModel](https://github.com/JuliaSmoothOptimizers/CUTEst.jl) -- [MathOptNLPModel](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl) and [MathOptNLSModel](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl) - using `JuMP/MOI`. - -There are currently two models implemented in this package, besides the -external ones. - -# NLPModels - -## ADNLPModel - -```@docs -NLPModels.ADNLPModel -``` - -### Example - -```@example -using NLPModels -f(x) = sum(x.^4) -x = [1.0; 0.5; 0.25; 0.125] -nlp = ADNLPModel(f, x) -grad(nlp, x) -``` - -## Derived Models - -The following models are created from any given model, making some -modification to that model. - -### SlackModel - -```@docs -NLPModels.SlackModel -``` - -### Example - -```@example -using NLPModels -f(x) = x[1]^2 + 4x[2]^2 -c(x) = [x[1]*x[2] - 1] -x = [2.0; 2.0] -nlp = ADNLPModel(f, x, c, [0.0], [0.0]) -nlp_slack = SlackModel(nlp) -nlp_slack.meta.lvar -``` - -### LBFGSModel - -```@docs -NLPModels.LBFGSModel -``` - -### LSR1Model - -```@docs -NLPModels.LSR1Model -``` - -# NLSModels - -## ADNLSModel - -```@docs -NLPModels.ADNLSModel -``` - -```@example -using NLPModels -F(x) = [x[1] - 1; 10*(x[2] - x[1]^2)] -nlp = ADNLSModel(F, [-1.2; 1.0], 2) -residual(nlp, nlp.meta.x0) -``` - -## FeasibilityResidual - -```@docs -NLPModels.FeasibilityResidual -``` - -## LLSModel - -```@docs -NLPModels.LLSModel -``` - -## SlackNLSModel - -```@docs -NLPModels.SlackNLSModel -``` - -## FeasibilityFormNLS - -```@docs -NLPModels.FeasibilityFormNLS -``` +The following is a list of packages implement the NLPModels API. + +If you want your package listed here, open a Pull Request. + +If you want to create your own interface, check these [Guidelines](@ref). +## Packages + +- [NLPModelsModifiers.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsModifiers.jl): + Models that modify existing models. + For instance, creating slack variables, or moving constraints into the objective functions, or using Quasi-Newton LBFSG approximations to the Hessian. +- [ADNLPModels.jl](https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl): + Models with automatic differentiation. It has a very simple interface, although it isn't very efficient for larger problems. 
+- [CUTEst.jl](https://github.com/JuliaSmoothOptimizers/CUTEst.jl): + For problems from [CUTEst](https://github.com/ralna/CUTEst/wiki). +- [AmplNLReader.jl](https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl): + For problems modeled using [AMPL](https://ampl.com) +- [NLPModelsJuMP.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl): + For problems modeled using [JuMP.jl](https://github.com/jump-dev/JuMP.jl). +- [QuadraticModels.jl](https://github.com/JuliaSmoothOptimizers/QuadraticModels.jl): + For problems with quadratic and linear structure. +- [LLSModels.jl](https://github.com/JuliaSmoothOptimizers/LLSModels.jl): + Creates a linear least squares model. +- [PDENLPModels.jl](https://github.com/JuliaSmoothOptimizers/PDENLPModels.jl): + For PDE problems. diff --git a/docs/src/tools.md b/docs/src/tools.md index 999bbe27..b181c855 100644 --- a/docs/src/tools.md +++ b/docs/src/tools.md @@ -7,12 +7,13 @@ number of times that function was called is stored inside the `NLPModel`. For instance ```@example -using NLPModels, LinearAlgebra -nlp = ADNLPModel(x -> dot(x, x), zeros(2)) -for i = 1:100 - obj(nlp, rand(2)) -end -neval_obj(nlp) +# TODO: Reenable this example +# using NLPModels, ADNLPModels, LinearAlgebra +# nlp = ADNLPModel(x -> dot(x, x), zeros(2)) +# for i = 1:100 +# obj(nlp, rand(2)) +# end +# neval_obj(nlp) ``` Some counters are available for all models, some are specific. In @@ -44,23 +45,24 @@ To get the sum of all counters called for a problem, use [`sum_counters`](@ref). ```@example -using NLPModels, LinearAlgebra -nlp = ADNLPModel(x -> dot(x, x), zeros(2)) -obj(nlp, rand(2)) -grad(nlp, rand(2)) -sum_counters(nlp) +# TODO: Reenable this example +# using NLPModels, LinearAlgebra +# nlp = ADNLPModel(x -> dot(x, x), zeros(2)) +# obj(nlp, rand(2)) +# grad(nlp, rand(2)) +# sum_counters(nlp) ``` ## Querying problem type There are some variable for querying the problem type: +- [`has_bounds`](@ref): True when not all variables are free. - [`bound_constrained`](@ref): True for problems with bounded variables and no other constraints. - [`equality_constrained`](@ref): True when problem is constrained only by equalities. - [`has_equalities`](@ref): True when problem has at least one equality constraint. -- [`has_bounds`](@ref): True when not all variables are free. - [`inequality_constrained`](@ref): True when problem is constrained by inequalities. - [`has_inequalities`](@ref): True when problem has at least one inequality constraint that isn't a bound. diff --git a/docs/src/tutorial.md b/docs/src/tutorial.md deleted file mode 100644 index c97ef05a..00000000 --- a/docs/src/tutorial.md +++ /dev/null @@ -1,330 +0,0 @@ -# Tutorial - -```@contents -Pages = ["tutorial.md"] -``` - -NLPModels.jl was created for two purposes: - - - Allow users to access problem databases in an unified way. - Mainly, this means - [CUTEst.jl](https://github.com/JuliaSmoothOptimizers/CUTEst.jl), - but it also gives access to [AMPL - problems](https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl), - as well as JuMP defined problems (e.g. as in - [OptimizationProblems.jl](https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl)). - - Allow users to create their own problems in the same way. - As a consequence, optimization methods designed according to the NLPModels API - will accept NLPModels of any provenance. - See, for instance, - [JSOSolvers.jl](https://github.com/JuliaSmoothOptimizers/JSOSolvers.jl) and - [NLPModelsIpopt.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsIpopt.jl). 
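Because solvers interact with a model only through this API, the same generic calls work on any `AbstractNLPModel`, whatever its provenance. A minimal sketch, assuming the ADNLPModels.jl package listed above is installed and exports an `ADNLPModel(f, x0)` constructor; the counter calls mirror the examples currently commented out in `docs/src/tools.md`:

```julia
using NLPModels, ADNLPModels, LinearAlgebra

# A small model built with automatic differentiation (assumed constructor).
nlp = ADNLPModel(x -> dot(x, x), zeros(2))

for i = 1:100
  obj(nlp, rand(2))      # each objective evaluation is recorded by the model
end

neval_obj(nlp)           # expected to report 100
grad(nlp, nlp.meta.x0)   # gradient through the same generic API
sum_counters(nlp)        # total number of evaluations of any kind so far
```

The evaluation calls run unchanged on, say, a `CUTEstModel` or a hand-written model, which is what makes solvers such as JSOSolvers.jl model-agnostic.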
- -The main interface for user defined problems is [ADNLPModel](@ref), which defines a -model easily, using automatic differentiation. - -## ADNLPModel Tutorial - -ADNLPModel is simple to use and is useful for classrooms. -It only needs the objective function ``f`` and a starting point ``x^0`` to be -well-defined. -For constrained problems, you'll also need the constraints function ``c``, and -the constraints vectors ``c_L`` and ``c_U``, such that ``c_L \leq c(x) \leq c_U``. -Equality constraints will be automatically identified as those indices ``i`` for -which ``c_{L_i} = c_{U_i}``. - -Let's define the famous Rosenbrock function -```math -f(x) = (x_1 - 1)^2 + 100(x_2 - x_1^2)^2, -``` -with starting point ``x^0 = (-1.2,1.0)``. - -```@example adnlp -using NLPModels - -nlp = ADNLPModel(x->(x[1] - 1.0)^2 + 100*(x[2] - x[1]^2)^2 , [-1.2; 1.0]) -``` - -This is enough to define the model. -Let's get the objective function value at ``x^0``, using only `nlp`. - -```@example adnlp -fx = obj(nlp, nlp.meta.x0) -println("fx = $fx") -``` - -Done. -Let's try the gradient and Hessian. - -```@example adnlp -gx = grad(nlp, nlp.meta.x0) -Hx = hess(nlp, nlp.meta.x0) -println("gx = $gx") -println("Hx = $Hx") -``` - -Notice how only the lower triangle of the Hessian is stored. -Also notice that it is *dense*. This is a current limitation of this model. It -doesn't return sparse matrices, so use it with care. - -Let's do something a little more complex here, defining a function to try to -solve this problem through steepest descent method with Armijo search. -Namely, the method - -1. Given ``x^0``, ``\varepsilon > 0``, and ``\eta \in (0,1)``. Set ``k = 0``; -2. If ``\Vert \nabla f(x^k) \Vert < \varepsilon`` STOP with ``x^* = x^k``; -3. Compute ``d^k = -\nabla f(x^k)``; -4. Compute ``\alpha_k \in (0,1]`` such that ``f(x^k + \alpha_kd^k) < f(x^k) + \alpha_k\eta \nabla f(x^k)^Td^k`` -5. Define ``x^{k+1} = x^k + \alpha_kx^k`` -6. Update ``k = k + 1`` and go to step 2. - -```@example adnlp -using LinearAlgebra - -function steepest(nlp; itmax=100000, eta=1e-4, eps=1e-6, sigma=0.66) - x = nlp.meta.x0 - fx = obj(nlp, x) - ∇fx = grad(nlp, x) - slope = dot(∇fx, ∇fx) - ∇f_norm = sqrt(slope) - iter = 0 - while ∇f_norm > eps && iter < itmax - t = 1.0 - x_trial = x - t * ∇fx - f_trial = obj(nlp, x_trial) - while f_trial > fx - eta * t * slope - t *= sigma - x_trial = x - t * ∇fx - f_trial = obj(nlp, x_trial) - end - x = x_trial - fx = f_trial - ∇fx = grad(nlp, x) - slope = dot(∇fx, ∇fx) - ∇f_norm = sqrt(slope) - iter += 1 - end - optimal = ∇f_norm <= eps - return x, fx, ∇f_norm, optimal, iter -end - -x, fx, ngx, optimal, iter = steepest(nlp) -println("x = $x") -println("fx = $fx") -println("ngx = $ngx") -println("optimal = $optimal") -println("iter = $iter") -``` - -Maybe this code is too complicated? If you're in a class you just want to show a -Newton step. - -```@example adnlp -g(x) = grad(nlp, x) -H(x) = Symmetric(hess(nlp, x), :L) -x = nlp.meta.x0 -d = -H(x)\g(x) -``` - -or a few - -```@example adnlp -for i = 1:5 - global x - x = x - H(x)\g(x) - println("x = $x") -end -``` - -Also, notice how we can reuse the method. - -```@example adnlp -f(x) = (x[1]^2 + x[2]^2 - 5)^2 + (x[1]*x[2] - 2)^2 -x0 = [3.0; 2.0] -nlp = ADNLPModel(f, x0) - -x, fx, ngx, optimal, iter = steepest(nlp) -``` - -External models can be tested with `steepest` as well, as long as they implement `obj` and `grad`. - -For constrained minimization, you need the constraints vector and bounds too. -Bounds on the variables can be passed through a new vector. 
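Putting the pieces together, the constrained problems handled here have the general form

```math
\begin{aligned}
\min \quad & f(x) \\
& c_L \leq c(x) \leq c_U \\
& \ell \leq x \leq u,
\end{aligned}
```

and the example below describes one by passing `lvar`, `uvar`, `c`, `lcon` and `ucon` explicitly.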
- -```@example adnlp2 -using NLPModels # hide -f(x) = (x[1] - 1.0)^2 + 100*(x[2] - x[1]^2)^2 -x0 = [-1.2; 1.0] -lvar = [-Inf; 0.1] -uvar = [0.5; 0.5] -c(x) = [x[1] + x[2] - 2; x[1]^2 + x[2]^2] -lcon = [0.0; -Inf] -ucon = [Inf; 1.0] -nlp = ADNLPModel(f, x0, lvar, uvar, c, lcon, ucon) - -println("cx = $(cons(nlp, nlp.meta.x0))") -println("Jx = $(jac(nlp, nlp.meta.x0))") -``` - -## Manual model - -Sometimes you want or need to input your derivatives by hand the easier way to do so is -to define a new model. Which functions you want to define depend on which solver you are -using. In out `test` folder, we have the files `hs5.jl`, `hs6.jl`, `hs10.jl`, `hs11.jl`, -`hs14.jl` and `brownden.jl` as examples. We present the relevant part of `hs6.jl` here -as well: - -```@example hs6 -import NLPModels: increment! -using NLPModels - -mutable struct HS6 <: AbstractNLPModel - meta :: NLPModelMeta - counters :: Counters -end - -function HS6() - meta = NLPModelMeta(2, ncon=1, nnzh=1, nnzj=2, x0=[-1.2; 1.0], lcon=[0.0], ucon=[0.0], name="hs6") - - return HS6(meta, Counters()) -end - -function NLPModels.obj(nlp :: HS6, x :: AbstractVector) - increment!(nlp, :neval_obj) - return (1 - x[1])^2 -end - -function NLPModels.grad!(nlp :: HS6, x :: AbstractVector, gx :: AbstractVector) - increment!(nlp, :neval_grad) - gx .= [2 * (x[1] - 1); 0.0] - return gx -end - -function NLPModels.hess(nlp :: HS6, x :: AbstractVector; obj_weight=1.0, y=Float64[]) - increment!(nlp, :neval_hess) - w = length(y) > 0 ? y[1] : 0.0 - return [2.0 * obj_weight - 20 * w 0.0; 0.0 0.0] -end - -function NLPModels.hess_coord(nlp :: HS6, x :: AbstractVector; obj_weight=1.0, y=Float64[]) - increment!(nlp, :neval_hess) - w = length(y) > 0 ? y[1] : 0.0 - return ([1], [1], [2.0 * obj_weight - 20 * w]) -end - -function NLPModels.hprod!(nlp :: HS6, x :: AbstractVector, v :: AbstractVector, Hv :: AbstractVector; obj_weight=1.0, y=Float64[]) - increment!(nlp, :neval_hprod) - w = length(y) > 0 ? y[1] : 0.0 - Hv .= [(2.0 * obj_weight - 20 * w) * v[1]; 0.0] - return Hv -end - -function NLPModels.cons!(nlp :: HS6, x :: AbstractVector, cx :: AbstractVector) - increment!(nlp, :neval_cons) - cx[1] = 10 * (x[2] - x[1]^2) - return cx -end - -function NLPModels.jac(nlp :: HS6, x :: AbstractVector) - increment!(nlp, :neval_jac) - return [-20 * x[1] 10.0] -end - -function NLPModels.jac_coord(nlp :: HS6, x :: AbstractVector) - increment!(nlp, :neval_jac) - return ([1, 1], [1, 2], [-20 * x[1], 10.0]) -end - -function NLPModels.jprod!(nlp :: HS6, x :: AbstractVector, v :: AbstractVector, Jv :: AbstractVector) - increment!(nlp, :neval_jprod) - Jv .= [-20 * x[1] * v[1] + 10 * v[2]] - return Jv -end - -function NLPModels.jtprod!(nlp :: HS6, x :: AbstractVector, v :: AbstractVector, Jtv :: AbstractVector) - increment!(nlp, :neval_jtprod) - Jtv .= [-20 * x[1]; 10] * v[1] - return Jtv -end -``` - -```@example hs6 -hs6 = HS6() -x = hs6.meta.x0 -(obj(hs6, x), grad(hs6, x)) -``` - -```@example hs6 -cons(hs6, x) -``` - -Notice that we did not define `grad` nor `cons`, but `grad!` and `cons!` were defined. -The default `grad` and `cons` uses the inplace version, so there's no need to redefine -them. - -## Nonlinear least squares models - -In addition to the general nonlinear model, we can define the residual function for a -nonlinear least-squares problem. In other words, the objective function of the problem -is of the form ``f(x) = \tfrac{1}{2}\|F(x)\|^2``, and we can define the function ``F`` -and its derivatives. 
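Writing ``J(x)`` for the Jacobian of ``F`` and ``m`` for the number of residual components, the derivatives of ``f`` follow the usual least-squares identities

```math
\nabla f(x) = J(x)^T F(x), \qquad
\nabla^2 f(x) = J(x)^T J(x) + \sum_{i=1}^m F_i(x) \nabla^2 F_i(x),
```

which is why the API exposes the residual and its derivatives (such as `jac_residual` below) separately from the objective.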
- -A simple way to define an NLS problem is with `ADNLSModel`, which uses automatic -differentiation. - -```@example nls -using NLPModels # hide -F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)] -x0 = [-1.2; 1.0] -nls = ADNLSModel(F, x0, 2) # 2 nonlinear equations -``` - -```@example nls -residual(nls, x0) -``` - -```@example nls -jac_residual(nls, x0) -``` - -We can also define a linear least squares by passing the matrices that define the -problem -```math -\begin{aligned} -\min \quad & \tfrac{1}{2}\|Ax - b\|^2 \\ -& c_L \leq Cx \leq c_U \\ -& \ell \leq x \leq u. -\end{aligned} -``` -```@example nls -using LinearAlgebra # hide -A = rand(10, 3) -b = rand(10) -C = rand(2, 3) -nls = LLSModel(A, b, C=C, lcon=zeros(2), ucon=zeros(2), lvar=-ones(3), uvar=ones(3)) -``` - -```@example nls -@info norm(jac_residual(nls, zeros(3)) - A) -@info norm(jac(nls, zeros(3)) - C) -``` - -Another way to define a nonlinear least squares is using `FeasibilityResidual` to -consider the constraints of a general nonlinear problem as the residual of the NLS. -```@example nls -nlp = ADNLPModel(x->0, # objective doesn't matter, - ones(2), - x->[x[1] + x[2] - 1; x[1] * x[2] - 2], # c(x) - zeros(2), zeros(2)) # lcon, ucon -nls = FeasibilityResidual(nlp) -``` - -```@example nls -s = 0.0 -for t = 1:100 - global s - x = rand(2) - s += norm(residual(nls, x) - cons(nlp, x)) -end -@info "s = $s" -```
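If the counters described in `tools.md` behave as expected for this model (an assumption, since `FeasibilityResidual` must increment them itself), the loop above is also visible in them:

```julia
# Assumes FeasibilityResidual increments the standard counters on each call.
neval_residual(nls)   # expected: 100, one per loop iteration
neval_cons(nlp)       # at least 100: the direct cons calls made in the loop
```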