diff --git a/dev/index.html b/dev/index.html
index d1e44545..364d867d 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -1,2 +1,2 @@
-Settings
-This document was generated with Documenter.jl on Wednesday 21 February 2024. Using Julia version 1.10.1.
+Settings
+This document was generated with Documenter.jl on Tuesday 26 March 2024. Using Julia version 1.10.2.
RegularizedOptimization.FISTA
RegularizedOptimization.LM
RegularizedOptimization.LMTR
RegularizedOptimization.PG
RegularizedOptimization.R2
RegularizedOptimization.TR
RegularizedOptimization.TRDH
RegularizedOptimization.prox_split_1w
RegularizedOptimization.prox_split_2w
RegularizedOptimization.FISTA — Method
FISTA for min_x ϕ(x) = f(x) + g(x), with f(x) convex and β-smooth and g(x) closed and convex.
Input:
f: function handle that returns f(x) and ∇f(x)
h: function handle that returns g(x)
s: initial point
proxG: function handle that computes prox_{νg}
options: see descentopts.jl
Output:
s⁺: updated iterate
s: previous iterate s^(k-1)
his: history of objective values
feval: number of evaluations of the total objective
RegularizedOptimization.LM — Method
LM(nls, h, options; kwargs...)
A Levenberg-Marquardt method for the problem
min ½ ‖F(x)‖² + h(x)
where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
At each iteration, a step s is computed as an approximate solution of
min ½ ‖J(x) s + F(x)‖² + ½ σ ‖s‖² + ψ(s; x)
where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), and σ > 0 is a regularization parameter.
Arguments
nls::AbstractNLSModel: a smooth nonlinear least-squares problem
h: a regularizer such as those defined in ProximalOperators
options::ROSolverOptions: a structure containing algorithmic parameters
Keyword arguments
x0::AbstractVector: an initial guess (default: nls.meta.x0)
subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver
subsolver: the procedure used to compute a step (PG or R2)
subsolver_options::ROSolverOptions: default options to pass to the subsolver
selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar)
Return values
xk: the final iterate
Fobj_hist: an array with the history of values of the smooth objective
Hobj_hist: an array with the history of values of the nonsmooth objective
Complex_hist: an array with the history of number of inner iterations
RegularizedOptimization.LMTR — Method
LMTR(nls, h, χ, options; kwargs...)
A trust-region Levenberg-Marquardt method for the problem
min ½ ‖F(x)‖² + h(x)
where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous and proper.
At each iteration, a step s is computed as an approximate solution of
min ½ ‖J(x) s + F(x)‖₂² + ψ(s; x) subject to ‖s‖ ≤ Δ
where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), ‖⋅‖ is a user-defined norm and Δ > 0 is a trust-region radius.
Arguments
nls::AbstractNLSModel: a smooth nonlinear least-squares problem
h: a regularizer such as those defined in ProximalOperators
χ: a norm used to define the trust region in the form of a regularizer
options::ROSolverOptions: a structure containing algorithmic parameters
Keyword arguments
x0::AbstractVector: an initial guess (default: nls.meta.x0)
subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver
subsolver: the procedure used to compute a step (PG or R2)
subsolver_options::ROSolverOptions: default options to pass to the subsolver
selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar)
Return values
xk: the final iterate
Fobj_hist: an array with the history of values of the smooth objective
Hobj_hist: an array with the history of values of the nonsmooth objective
Complex_hist: an array with the history of number of inner iterations
RegularizedOptimization.PG — Method
Proximal Gradient Descent for
min_x ϕ(x) = f(x) + g(x), with f(x) β-smooth and g(x) closed and lower semi-continuous.
Input:
f: function handle that returns f(x) and ∇f(x)
h: function handle that returns g(x)
s: initial point
proxG: function handle that computes prox_{νg}
options: see descentopts.jl
Output:
s⁺: updated iterate
s: previous iterate s^(k-1)
his: history of objective values
feval: number of evaluations of the total objective
RegularizedOptimization.R2 — Method
R2(nlp, h, options)
-R2(f, ∇f!, h, options, x0)
A first-order quadratic regularization method for the problem
min f(x) + h(x)
where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
About each iterate xₖ, a step sₖ is computed as a solution of
min φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)
where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs is the Taylor linear approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and σₖ > 0 is the regularization parameter.
Arguments
nlp::AbstractNLPModel: a smooth optimization problem
h: a regularizer such as those defined in ProximalOperators
options::ROSolverOptions: a structure containing algorithmic parameters
x0::AbstractVector: an initial guess (in the second calling form)
Keyword Arguments
x0::AbstractVector: an initial guess (in the first calling form: default = nlp.meta.x0)
selected::AbstractVector{<:Integer}: (default 1:length(x0))
The objective and gradient of nlp will be accessed.
In the second form, instead of nlp, the user may pass in
f: a function such that f(x) returns the value of f at x
∇f!: a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) stores ∇f(x) in g
Return values
xk: the final iterate
Fobj_hist: an array with the history of values of the smooth objective
Hobj_hist: an array with the history of values of the nonsmooth objective
Complex_hist: an array with the history of number of inner iterations
RegularizedOptimization.TR — Method
TR(nlp, h, χ, options; kwargs...)
A trust-region method for the problem
min f(x) + h(x)
where f: ℝⁿ → ℝ has a Lipschitz-continuous Jacobian, and h: ℝⁿ → ℝ is lower semi-continuous and proper.
About each iterate xₖ, a step sₖ is computed as an approximate solution of
min φ(s; xₖ) + ψ(s; xₖ) subject to ‖s‖ ≤ Δₖ
where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Bₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and Δₖ > 0 is the trust-region radius. The subproblem is solved inexactly by way of a first-order method such as the proximal-gradient method or the quadratic regularization method.
Arguments
nlp::AbstractNLPModel: a smooth optimization problem
h: a regularizer such as those defined in ProximalOperators
χ: a norm used to define the trust region in the form of a regularizer
options::ROSolverOptions: a structure containing algorithmic parameters
The objective, gradient and Hessian of nlp will be accessed. The Hessian is accessed as an abstract operator and need not be the exact Hessian.
Keyword arguments
x0::AbstractVector: an initial guess (default: nlp.meta.x0)
subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver (default: the null logger)
subsolver: the procedure used to compute a step (PG or R2)
subsolver_options::ROSolverOptions: default options to pass to the subsolver (default: all default options)
selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar)
Return values
xk: the final iterate
Fobj_hist: an array with the history of values of the smooth objective
Hobj_hist: an array with the history of values of the nonsmooth objective
Complex_hist: an array with the history of number of inner iterations
RegularizedOptimization.TRDH — Method
TRDH(nlp, h, χ, options; kwargs...)
-TRDH(f, ∇f!, h, options, x0)
A trust-region method with diagonal Hessian approximation for the problem
min f(x) + h(x)
where f: ℝⁿ → ℝ has a Lipschitz-continuous Jacobian, and h: ℝⁿ → ℝ is lower semi-continuous and proper.
About each iterate xₖ, a step sₖ is computed as an approximate solution of
min φ(s; xₖ) + ψ(s; xₖ) subject to ‖s‖ ≤ Δₖ
where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Dₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm, Dₖ is a diagonal Hessian approximation and Δₖ > 0 is the trust-region radius.
Arguments
nlp::AbstractNLPModel: a smooth optimization problem
h: a regularizer such as those defined in ProximalOperators
χ: a norm used to define the trust region in the form of a regularizer
options::ROSolverOptions: a structure containing algorithmic parameters
The objective and gradient of nlp will be accessed.
In the second form, instead of nlp, the user may pass in
f: a function such that f(x) returns the value of f at x
∇f!: a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) stores ∇f(x) in g
x0::AbstractVector: an initial guess
Keyword arguments
x0::AbstractVector: an initial guess (default: nlp.meta.x0)
selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar)
Bk: initial diagonal Hessian approximation (default: (one(R) / options.ν) * I)
Return values
xk: the final iterate
Fobj_hist: an array with the history of values of the smooth objective
Hobj_hist: an array with the history of values of the nonsmooth objective
Complex_hist: an array with the history of number of inner iterations
RegularizedOptimization.prox_split_1w — Method
Solves for a descent direction s of an objective with the structure min_s qₖ(s) + ψ(x + s) subject to ‖s‖_q ≤ Δ, for a given Δ.
Arguments
proxp: prox method for the p-norm; takes in z (vector) and a (λ‖⋅‖_p), where p is presumably the norm used in ψ
s0: Vector{Float64} initial guess for the descent direction
projq: projection onto the ‖⋅‖_q ≤ Δ norm ball
options: mutable structure pparams
Returns
s: Vector{Float64} final value of the Algorithm 6.1 descent direction
w: Vector{Float64} relaxation variable of the Algorithm 6.1 descent direction
RegularizedOptimization.prox_split_2w — Method
Solves for a descent direction s of an objective with the structure min_s qₖ(s) + ψ(x + s) subject to ‖s‖_q ≤ Δ, for a given Δ.
Arguments
proxp: prox method for the p-norm; takes in z (vector) and a (λ‖⋅‖_p), where p is presumably the norm used in ψ
s0: Vector{Float64} initial guess for the descent direction
projq: projection onto the ‖⋅‖_q ≤ Δ norm ball
options: mutable structure pparams
Returns
s: Vector{Float64} final value of the Algorithm 6.2 descent direction
w: Vector{Float64} relaxation variable of the Algorithm 6.2 descent direction
Settings
This document was generated with Documenter.jl on Wednesday 21 February 2024. Using Julia version 1.10.1.
RegularizedOptimization.FISTA
RegularizedOptimization.LM
RegularizedOptimization.LMTR
RegularizedOptimization.PG
RegularizedOptimization.R2
RegularizedOptimization.TR
RegularizedOptimization.TRDH
RegularizedOptimization.prox_split_1w
RegularizedOptimization.prox_split_2w
RegularizedOptimization.FISTA — Method
FISTA for min_x ϕ(x) = f(x) + g(x), with f(x) convex and β-smooth and g(x) closed and convex.
Input:
f: function handle that returns f(x) and ∇f(x)
h: function handle that returns g(x)
s: initial point
proxG: function handle that computes prox_{νg}
options: see descentopts.jl
Output:
s⁺: updated iterate
s: previous iterate s^(k-1)
his: history of objective values
feval: number of evaluations of the total objective
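For reference, here is a generic sketch of the FISTA iteration this solver is based on. It is not the package function documented above (whose arguments are listed under Input); the step length ν is assumed to satisfy ν ≤ 1/β, and proxg is a user-supplied oracle (z, ν) -> prox_{νg}(z).

using ProximalOperators   # only used for the example prox oracle below

# Generic FISTA iteration (illustrative sketch, not the package implementation).
function fista_sketch(∇f, proxg, x0; ν = 1.0, maxiter = 100)
    x = copy(x0)
    y = copy(x0)
    t = 1.0
    for _ in 1:maxiter
        xnew = proxg(y .- ν .* ∇f(y), ν)            # forward-backward step: prox_{νg}(y − ν∇f(y))
        tnew = (1 + sqrt(1 + 4 * t^2)) / 2          # momentum parameter update
        y = xnew .+ ((t - 1) / tnew) .* (xnew .- x) # extrapolated point for the next step
        x, t = xnew, tnew
    end
    return x
end

# Example call with f(x) = ½‖x‖² (so ∇f(x) = x) and g = 0.1‖⋅‖₁ from ProximalOperators:
xsol = fista_sketch(x -> x, (z, ν) -> prox(NormL1(0.1), z, ν)[1], [1.0, -2.0, 3.0]; ν = 0.5)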
RegularizedOptimization.LM — Method
LM(nls, h, options; kwargs...)
A Levenberg-Marquardt method for the problem
min ½ ‖F(x)‖² + h(x)
where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
At each iteration, a step s is computed as an approximate solution of
min ½ ‖J(x) s + F(x)‖² + ½ σ ‖s‖² + ψ(s; x)
where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), and σ > 0 is a regularization parameter.
Arguments
nls::AbstractNLSModel: a smooth nonlinear least-squares problem
h: a regularizer such as those defined in ProximalOperators
options::ROSolverOptions: a structure containing algorithmic parameters
Keyword arguments
x0::AbstractVector: an initial guess (default: nls.meta.x0)
subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver
subsolver: the procedure used to compute a step (PG, R2 or TRDH)
subsolver_options::ROSolverOptions: default options to pass to the subsolver
selected::AbstractVector{<:Integer}: (default 1:nls.meta.nvar)
Return values
xk: the final iterate
Fobj_hist: an array with the history of values of the smooth objective
Hobj_hist: an array with the history of values of the nonsmooth objective
Complex_hist: an array with the history of number of inner iterations
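A minimal usage sketch for LM. It assumes ADNLPModels.jl is used to build the nonlinear least-squares model (any AbstractNLSModel would do), uses the ℓ₁ norm from ProximalOperators as the regularizer, relies on the default ROSolverOptions, and assumes the return values documented above come back as a tuple.

using ADNLPModels, ProximalOperators, RegularizedOptimization

# Made-up Rosenbrock-type residual F: ℝ² → ℝ²; ADNLSModel builds an
# AbstractNLSModel by automatic differentiation.
F(x) = [x[1] - 1; 10 * (x[2] - x[1]^2)]
nls = ADNLSModel(F, [-1.2; 1.0], 2)    # residual, starting point, number of equations

h = NormL1(1.0)                        # regularizer from ProximalOperators
options = ROSolverOptions()            # default algorithmic parameters

# Return values as documented above; the subsolver keyword may select PG, R2 or TRDH.
xk, Fobj_hist, Hobj_hist, Complex_hist = LM(nls, h, options)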
RegularizedOptimization.LMTR — Method
LMTR(nls, h, χ, options; kwargs...)
A trust-region Levenberg-Marquardt method for the problem
min ½ ‖F(x)‖² + h(x)
where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous and proper.
At each iteration, a step s is computed as an approximate solution of
min ½ ‖J(x) s + F(x)‖₂² + ψ(s; x) subject to ‖s‖ ≤ Δ
where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), ‖⋅‖ is a user-defined norm and Δ > 0 is a trust-region radius.
Arguments
nls::AbstractNLSModel: a smooth nonlinear least-squares problem
h: a regularizer such as those defined in ProximalOperators
χ: a norm used to define the trust region in the form of a regularizer
options::ROSolverOptions: a structure containing algorithmic parameters
Keyword arguments
x0::AbstractVector: an initial guess (default: nls.meta.x0)
subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver
subsolver: the procedure used to compute a step (PG, R2 or TRDH)
subsolver_options::ROSolverOptions: default options to pass to the subsolver
selected::AbstractVector{<:Integer}: (default 1:nls.meta.nvar)
Return values
xk: the final iterate
Fobj_hist: an array with the history of values of the smooth objective
Hobj_hist: an array with the history of values of the nonsmooth objective
Complex_hist: an array with the history of number of inner iterations
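Continuing the LM sketch above (same nls, h and options objects), a minimal LMTR call might look as follows; NormLinf from ProximalOperators is one assumed way to supply the trust-region norm χ as a regularizer.

χ = NormLinf(1.0)                      # trust-region norm, supplied as a regularizer
xk, Fobj_hist, Hobj_hist, Complex_hist = LMTR(nls, h, χ, options)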
RegularizedOptimization.PG — Method
Proximal Gradient Descent for
min_x ϕ(x) = f(x) + g(x), with f(x) β-smooth and g(x) closed and lower semi-continuous.
Input:
f: function handle that returns f(x) and ∇f(x)
h: function handle that returns g(x)
s: initial point
proxG: function handle that computes prox_{νg}
options: see descentopts.jl
Output:
s⁺: updated iterate
s: previous iterate s^(k-1)
his: history of objective values
feval: number of evaluations of the total objective
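To illustrate the proximal-gradient step this routine repeats, here is a one-step sketch using the prox oracle from ProximalOperators. It is not the package's PG function (whose argument conventions are listed above); f, h, ν and x below are made up for the example.

using ProximalOperators

∇f(x) = x                              # gradient of the smooth part f(x) = ½‖x‖²
h = NormL1(0.1)                        # nonsmooth part g = 0.1‖⋅‖₁
x = [1.0, -0.2, 0.05]                  # current iterate
ν = 0.5                                # step length, typically ν ≤ 1/β for β-smooth f
xnew, _ = prox(h, x .- ν .* ∇f(x), ν)  # one step: xnew = prox_{νg}(x − ν∇f(x))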
RegularizedOptimization.R2 — Method
R2(nlp, h, options)
+R2(f, ∇f!, h, options, x0)
A first-order quadratic regularization method for the problem
min f(x) + h(x)
where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.
About each iterate xₖ, a step sₖ is computed as a solution of
min φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)
where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs is the Taylor linear approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and σₖ > 0 is the regularization parameter.
Arguments
nlp::AbstractNLPModel: a smooth optimization problem
h: a regularizer such as those defined in ProximalOperators
options::ROSolverOptions: a structure containing algorithmic parameters
x0::AbstractVector: an initial guess (in the second calling form)
Keyword Arguments
x0::AbstractVector: an initial guess (in the first calling form: default = nlp.meta.x0)
selected::AbstractVector{<:Integer}: (default 1:length(x0))
The objective and gradient of nlp will be accessed.
In the second form, instead of nlp, the user may pass in
f: a function such that f(x) returns the value of f at x
∇f!: a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) stores ∇f(x) in g
Return values
xk: the final iterate
Fobj_hist: an array with the history of values of the smooth objective
Hobj_hist: an array with the history of values of the nonsmooth objective
Complex_hist: an array with the history of number of inner iterations
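A minimal sketch of the second calling form, with a hand-written objective and in-place gradient; the quadratic f, the ℓ₁ regularizer and the starting point are made up for the example, and the tuple return follows the Return values listed above.

using ProximalOperators, RegularizedOptimization

f(x) = sum(abs2, x) / 2            # smooth part with Lipschitz-continuous gradient
∇f!(g, x) = (g .= x; g)            # stores ∇f(x) in g, as required above
h = NormL1(0.1)                    # regularizer from ProximalOperators
options = ROSolverOptions()        # default algorithmic parameters
x0 = [1.0, -2.0, 3.0]

xk, Fobj_hist, Hobj_hist, Complex_hist = R2(f, ∇f!, h, options, x0)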
RegularizedOptimization.TR — Method
TR(nlp, h, χ, options; kwargs...)
A trust-region method for the problem
min f(x) + h(x)
where f: ℝⁿ → ℝ has a Lipschitz-continuous Jacobian, and h: ℝⁿ → ℝ is lower semi-continuous and proper.
About each iterate xₖ, a step sₖ is computed as an approximate solution of
min φ(s; xₖ) + ψ(s; xₖ) subject to ‖s‖ ≤ Δₖ
where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Bₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and Δₖ > 0 is the trust-region radius. The subproblem is solved inexactly by way of a first-order method such as the proximal-gradient method or the quadratic regularization method.
Arguments
nlp::AbstractNLPModel: a smooth optimization problem
h: a regularizer such as those defined in ProximalOperators
χ: a norm used to define the trust region in the form of a regularizer
options::ROSolverOptions: a structure containing algorithmic parameters
The objective, gradient and Hessian of nlp will be accessed. The Hessian is accessed as an abstract operator and need not be the exact Hessian.
Keyword arguments
x0::AbstractVector: an initial guess (default: nlp.meta.x0)
subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver (default: the null logger)
subsolver: the procedure used to compute a step (PG, R2 or TRDH)
subsolver_options::ROSolverOptions: default options to pass to the subsolver (default: all default options)
selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar)
Return values
xk: the final iterate
Fobj_hist: an array with the history of values of the smooth objective
Hobj_hist: an array with the history of values of the nonsmooth objective
Complex_hist: an array with the history of number of inner iterations
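A minimal usage sketch for TR on a smooth model built with ADNLPModels.jl (an assumption of this sketch, not a requirement of TR); such a model exposes its Hessian as an abstract operator, as required above, and NormLinf from ProximalOperators is one way to supply the trust-region norm.

using ADNLPModels, ProximalOperators, RegularizedOptimization

fobj(x) = (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2   # made-up smooth objective
nlp = ADNLPModel(fobj, [-1.2; 1.0])                # AbstractNLPModel via automatic differentiation

h = NormL1(0.1)                                    # regularizer from ProximalOperators
χ = NormLinf(1.0)                                  # trust-region norm
options = ROSolverOptions()                        # default algorithmic parameters

xk, Fobj_hist, Hobj_hist, Complex_hist = TR(nlp, h, χ, options)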
RegularizedOptimization.TRDH — Method
TRDH(nlp, h, χ, options; kwargs...)
+TRDH(f, ∇f!, h, options, x0)
A trust-region method with diagonal Hessian approximation for the problem
min f(x) + h(x)
where f: ℝⁿ → ℝ has a Lipschitz-continuous Jacobian, and h: ℝⁿ → ℝ is lower semi-continuous and proper.
About each iterate xₖ, a step sₖ is computed as an approximate solution of
min φ(s; xₖ) + ψ(s; xₖ) subject to ‖s‖ ≤ Δₖ
where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Dₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm, Dₖ is a diagonal Hessian approximation and Δₖ > 0 is the trust-region radius.
Arguments
nlp::AbstractDiagonalQNModel: a smooth optimization problem
h: a regularizer such as those defined in ProximalOperators
χ: a norm used to define the trust region in the form of a regularizer
options::ROSolverOptions: a structure containing algorithmic parameters
The objective and gradient of nlp will be accessed.
In the second form, instead of nlp, the user may pass in
f: a function such that f(x) returns the value of f at x
∇f!: a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) stores ∇f(x) in g
x0::AbstractVector: an initial guess
Keyword arguments
x0::AbstractVector: an initial guess (default: nlp.meta.x0)
selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar)
Return values
xk: the final iterate
Fobj_hist: an array with the history of values of the smooth objective
Hobj_hist: an array with the history of values of the nonsmooth objective
Complex_hist: an array with the history of number of inner iterations
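The second calling form sidesteps building an AbstractDiagonalQNModel and mirrors the R2 sketch above: with the same made-up f, ∇f!, h, options and x0, a sketch of the call is

xk, Fobj_hist, Hobj_hist, Complex_hist = TRDH(f, ∇f!, h, options, x0)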
RegularizedOptimization.prox_split_1w — Method
Solves for a descent direction s of an objective with the structure min_s qₖ(s) + ψ(x + s) subject to ‖s‖_q ≤ Δ, for a given Δ.
Arguments
proxp: prox method for the p-norm; takes in z (vector) and a (λ‖⋅‖_p), where p is presumably the norm used in ψ
s0: Vector{Float64} initial guess for the descent direction
projq: projection onto the ‖⋅‖_q ≤ Δ norm ball
options: mutable structure pparams
Returns
s: Vector{Float64} final value of the Algorithm 6.1 descent direction
w: Vector{Float64} relaxation variable of the Algorithm 6.1 descent direction
RegularizedOptimization.prox_split_2w — Method
Solves for a descent direction s of an objective with the structure min_s qₖ(s) + ψ(x + s) subject to ‖s‖_q ≤ Δ, for a given Δ.
Arguments
proxp: prox method for the p-norm; takes in z (vector) and a (λ‖⋅‖_p), where p is presumably the norm used in ψ
s0: Vector{Float64} initial guess for the descent direction
projq: projection onto the ‖⋅‖_q ≤ Δ norm ball
options: mutable structure pparams
Returns
s: Vector{Float64} final value of the Algorithm 6.2 descent direction
w: Vector{Float64} relaxation variable of the Algorithm 6.2 descent direction
Settings
This document was generated with Documenter.jl on Tuesday 26 March 2024. Using Julia version 1.10.2.