Documentation tweaks (fixes #460) (#462)
* Documentation tweaks (fixes #460)

This eliminates the inconsistency in `c_i = 0` vs `c_L <= c_i <= c_U` notes in #460.
It also:
- adds Manifest.toml to `.gitignore`
- standardizes on `y` rather than `λ` for Lagrange multipliers
- clarifies or polishes wording in various places
- adds `NLPModels` to `docs/Project.toml` (this is standard and might
  allow simplification of your `workflow`s)

* Update docs/src/models.md

Co-authored-by: Tangi Migot <[email protected]>

* Use E instead of V and use calligraphic for indices

---------

Co-authored-by: Tangi Migot <[email protected]>
timholy and tmigot authored Jul 24, 2024
1 parent bfd4cea commit 584bdcb
Showing 9 changed files with 20 additions and 18 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -2,3 +2,4 @@
*.jl.mem
docs/build
docs/site
Manifest.toml
1 change: 1 addition & 0 deletions docs/Project.toml
@@ -2,6 +2,7 @@
ADNLPModels = "54578032-b7ea-4c30-94aa-7cbd1cce6c9a"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
NLPModels = "a4795742-8479-5a88-8948-cc11e1c8c1a6"

[compat]
ADNLPModels = "0.7"
4 changes: 2 additions & 2 deletions docs/src/api.md
@@ -16,8 +16,8 @@ Namely,
- ``\nabla f(x)``, the gradient of ``f`` at the point ``x``;
- ``\nabla^2 f(x)``, the Hessian of ``f`` at the point ``x``;
- ``J(x) = \nabla c(x)^T``, the Jacobian of ``c`` at the point ``x``;
- - ``\nabla^2 f(x) + \sum_{i=1}^m \lambda_i \nabla^2 c_i(x)``,
-   the Hessian of the Lagrangian function at the point ``(x,\lambda)``.
+ - ``\nabla^2 f(x) + \sum_{i=1}^m y_i \nabla^2 c_i(x)``,
+   the Hessian of the Lagrangian function at the point ``(x,y)``.

There are many ways to access some of these values, so here is a little
reference guide.
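As a quick illustration of how these quantities are obtained through the API — a sketch only, using `ADNLPModels` to build the model; the toy problem itself is invented for illustration:

```julia
using ADNLPModels, NLPModels

# A small constrained model: min x₁² + x₂²  s.t.  x₁ + x₂ = 1
nlp = ADNLPModel(
  x -> x[1]^2 + x[2]^2,   # objective f
  [0.5, 0.5],             # starting point x0
  x -> [x[1] + x[2]],     # constraint function c
  [1.0], [1.0],           # c_L = c_U = 1, i.e., an equality constraint
)

x = nlp.meta.x0
g = grad(nlp, x)                # ∇f(x)
H = hess(nlp, x)                # ∇²f(x)
J = jac(nlp, x)                 # J(x) = ∇c(x)ᵀ
y = ones(nlp.meta.ncon)
HL = hess(nlp, x, y)            # Hessian of the Lagrangian at (x, y)
```

Each of these also has an in-place variant (`grad!`, `jac_coord!`, ...) covered in the reference guide below.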
4 changes: 2 additions & 2 deletions docs/src/guidelines.md
@@ -42,7 +42,7 @@ There are about 30 functions in the NLPModels API, and a few with more than one
Luckily, many have a default implementation.
We collect here the list of functions that should be implemented for a complete API.

- Here, the following notation apply:
+ Here, the following notation applies:
- `nlp` is your instance of `MyModel <: AbstractNLPModel`
- `x` is the point where the function is evaluated
- `y` is the vector of Lagrange multipliers (for constrained problems only)
@@ -143,4 +143,4 @@ Furthermore, the `show` method has to be updated with the correct direction of `
## [Advanced tests](@id advanced-tests)

We have created the package [NLPModelsTest.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsTest.jl) which defines test functions and problems.
- To make sure that your model is robust, we recommend using that package.
+ To make sure that your model is robust, we recommend using it in the test suite of your package.
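A minimal sketch of what implementing this API can look like, following the `MyModel <: AbstractNLPModel` notation above. The objective here is a placeholder, and a complete model would implement the full list of functions, not just these two:

```julia
using NLPModels

# A hand-rolled model type holding the standard meta and counters fields
mutable struct MyModel{T, S} <: AbstractNLPModel{T, S}
  meta::NLPModelMeta{T, S}
  counters::Counters
end

MyModel(x0::AbstractVector{T}) where {T} =
  MyModel(NLPModelMeta(length(x0); x0 = x0), Counters())

# Objective: f(x) = Σ xᵢ² (placeholder), with the counter incremented
function NLPModels.obj(nlp::MyModel, x::AbstractVector)
  increment!(nlp, :neval_obj)
  return sum(x .^ 2)
end

# In-place gradient; the allocating `grad` falls back to this
function NLPModels.grad!(nlp::MyModel, x::AbstractVector, g::AbstractVector)
  increment!(nlp, :neval_grad)
  g .= 2 .* x
  return g
end
```

With this in place, `NLPModelsTest.jl`'s consistency checks can be run against `MyModel` in your test suite.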
16 changes: 8 additions & 8 deletions docs/src/index.md
@@ -11,16 +11,16 @@ The general form of the optimization problem is
```math
\begin{aligned}
\min \quad & f(x) \\
- & c_i(x) = 0, \quad i \in E, \\
- & c_{L_i} \leq c_i(x) \leq c_{U_i}, \quad i \in I, \\
+ & c_i(x) = c_{E_i}, \quad i \in {\cal E}, \\
+ & c_{L_i} \leq c_i(x) \leq c_{U_i}, \quad i \in {\cal I}, \\
& \ell \leq x \leq u,
\end{aligned}
```
where ``f:\mathbb{R}^n\rightarrow\mathbb{R}``,
``c:\mathbb{R}^n\rightarrow\mathbb{R}^m``,
- ``E\cup I = \{1,2,\dots,m\}``, ``E\cap I = \emptyset``,
+ ``{\cal E}\cup {\cal I} = \{1,2,\dots,m\}``, ``{\cal E}\cap {\cal I} = \emptyset``,
and
- ``c_{L_i}, c_{U_i}, \ell_j, u_j \in \mathbb{R}\cup\{\pm\infty\}``
+ ``c_{E_i}, c_{L_i}, c_{U_i}, \ell_j, u_j \in \mathbb{R}\cup\{\pm\infty\}``
for ``i = 1,\dots,m`` and ``j = 1,\dots,n``.

For computational reasons, we write
@@ -31,13 +31,13 @@ For computational reasons, we write
& \ell \leq x \leq u,
\end{aligned}
```
- defining ``c_{L_i} = c_{U_i}`` for all ``i \in E``.
+ defining ``c_{L_i} = c_{U_i} = c_{E_i}`` for all ``i \in {\cal E}``.
The Lagrangian of this problem is defined as
```math
- L(x,\lambda,z^L,z^U;\sigma) = \sigma f(x) + c(x)^T\lambda + \sum_{i=1}^n z_i^L(x_i-l_i) + \sum_{i=1}^nz_i^U(u_i-x_i),
+ L(x,y,z^L,z^U;\sigma) = \sigma f(x) + c(x)^T y + \sum_{i=1}^n z_{L_i}(x_i-l_i) + \sum_{i=1}^n z_{U_i}(u_i-x_i),
```
where ``\sigma`` is a scaling parameter included for computational reasons.
- Notice that, for the Hessian, the variables ``z^L`` and ``z^U`` are not used.
+ Since the final two sums are linear in ``x``, the variables ``z_L`` and ``z_U`` do not appear in the Hessian ``\nabla^2 L(x,y)``.
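Concretely, since the bound terms are affine in ``x``, differentiating ``L`` twice with respect to ``x`` gives

```math
\nabla^2 L(x,y) = \sigma \nabla^2 f(x) + \sum_{i=1}^m y_i \nabla^2 c_i(x),
```

which is the quantity referred to as the Hessian of the Lagrangian elsewhere in these docs.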

Optimization problems are represented by an instance/subtype of `AbstractNLPModel`.
Such instances are composed of
Expand All @@ -48,7 +48,7 @@ Such instances are composed of

## Nonlinear Least Squares

- A special type of `NLPModels` are the `NLSModels`, i.e., Nonlinear Least
+ A special subtype of `AbstractNLPModel` is `AbstractNLSModel`, i.e., Nonlinear Least
Squares models. In these problems, the function ``f(x)`` is given by
``\tfrac{1}{2}\Vert F(x)\Vert^2``, where ``F`` is referred to as the residual function.
The individual value of ``F``, as well as of its derivatives, is also
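To make the least-squares case concrete, here is a sketch using `ADNLSModel`; the Rosenbrock residual is chosen only as an example:

```julia
using ADNLPModels, NLPModels

# Nonlinear least squares with F(x) = [x₁ - 1, 10(x₂ - x₁²)], the Rosenbrock residual
F(x) = [x[1] - 1.0; 10.0 * (x[2] - x[1]^2)]
nls = ADNLSModel(F, [-1.2, 1.0], 2)   # 2 = number of residual equations

x = nls.meta.x0
r = residual(nls, x)        # F(x), accessed directly
Jr = jac_residual(nls, x)   # Jacobian of F
f = obj(nls, x)             # ½‖F(x)‖², computed from the residual
```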
4 changes: 2 additions & 2 deletions docs/src/models.md
@@ -1,6 +1,6 @@
# Models

- The following is a list of packages implement the NLPModels API.
+ The following is a list of packages implementing the NLPModels API.

If you want your package listed here, open a Pull Request.

@@ -19,7 +19,7 @@ If you want to create your own interface, check these [Guidelines](@ref).
- [NLPModelsJuMP.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl):
For problems modeled using [JuMP.jl](https://github.com/jump-dev/JuMP.jl).
- [QuadraticModels.jl](https://github.com/JuliaSmoothOptimizers/QuadraticModels.jl):
- For problems with quadratic and linear structure.
+ For problems with linear constraints and a quadratic objective (LCQP).
- [LLSModels.jl](https://github.com/JuliaSmoothOptimizers/LLSModels.jl):
Creates a linear least squares model.
- [PDENLPModels.jl](https://github.com/JuliaSmoothOptimizers/PDENLPModels.jl):
4 changes: 2 additions & 2 deletions docs/src/tools.md
@@ -17,7 +17,7 @@ neval_obj(nlp)

Some counters are available for all models, some are specific. In
particular, there are additional specific counters for the nonlinear
- least squares models.
+ least squares models (the ones with `residual` below).

| Counter | Description |
|---|---|
@@ -62,7 +62,7 @@ sum_counters(nlp)

## Querying problem type

- There are some variable for querying the problem type:
+ There are some utility functions for querying the problem type:

- [`has_bounds`](@ref): True when not all variables are free.
- [`bound_constrained`](@ref): True for problems with bounded variables
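A short sketch tying the counters and the query functions together; the model here is a throwaway example:

```julia
using ADNLPModels, NLPModels

# An unconstrained toy model with no bounds
nlp = ADNLPModel(x -> sum(x .^ 2), ones(3))

obj(nlp, nlp.meta.x0)    # evaluate f once
grad(nlp, nlp.meta.x0)   # evaluate ∇f once

neval_obj(nlp)           # 1, since obj was called once
neval_grad(nlp)          # 1
sum_counters(nlp)        # total across all counters

has_bounds(nlp)          # false: all variables are free
unconstrained(nlp)       # true: no constraints and no bounds
```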
2 changes: 1 addition & 1 deletion src/nlp/meta.jl
@@ -50,7 +50,7 @@ The following keyword arguments are accepted:
- `islp`: true if the problem is a linear program
- `name`: problem name
- `NLPModelMeta` also contains the following attributes:
+ `NLPModelMeta` also contains the following attributes, which are computed from the variables above:
- `nvar`: number of variables
- `ifix`: indices of fixed variables
- `ilow`: indices of variables with lower bound only
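A sketch of how these computed attributes fall out of the keyword arguments; the bound values are invented for illustration:

```julia
using NLPModels

# 3 variables: x₁ fixed at 0, x₂ free, x₃ with a lower bound only
meta = NLPModelMeta(3; lvar = [0.0, -Inf, 1.0], uvar = [0.0, Inf, Inf])

meta.ifix    # indices with lvar == uvar
meta.ilow    # indices with a lower bound only
meta.ifree   # indices of free variables
```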
2 changes: 1 addition & 1 deletion src/nls/meta.jl
@@ -16,7 +16,7 @@ The following keyword arguments are accepted:
- `nnzh`: number of elements needed to store the nonzeros of the sum of Hessians of the residuals
- `lin`: indices of linear residuals
- `NLSMeta` also contains the following attributes:
+ `NLSMeta` also contains the following attributes, which are computed from the variables above:
- `nequ`: size of the residual
- `nvar`: number of variables
- `nln`: indices of nonlinear residuals
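Similarly for `NLSMeta`, a sketch with values chosen only to illustrate the computed fields (assuming the `lin` keyword described above):

```julia
using NLPModels

# 4 residual equations over 2 variables; residuals 1 and 2 declared linear
nlsmeta = NLSMeta(4, 2; lin = [1, 2])

nlsmeta.nequ   # size of the residual: 4
nlsmeta.nln    # indices of nonlinear residuals, computed as the complement of `lin`
```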
