diff --git a/dev/index.html b/dev/index.html
index d1e44545..364d867d 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -1,2 +1,2 @@
-Home · RegularizedOptimization.jl
+Home · RegularizedOptimization.jl
diff --git a/dev/reference/index.html b/dev/reference/index.html
index 29fc8a87..873c92fa 100644
--- a/dev/reference/index.html
+++ b/dev/reference/index.html
@@ -1,4 +1,4 @@
-Reference · RegularizedOptimization.jl

Reference

Contents

Index

RegularizedOptimization.FISTA — Method

FISTA for min_x ϕ(x) = f(x) + g(x), with f(x) convex and β-smooth, and g(x) closed and convex.

Input:

  • f: function handle that returns f(x) and ∇f(x)
  • h: function handle that returns g(x)
  • s: initial point
  • proxG: function handle that calculates prox_{νg}
  • options: see descentopts.jl

Output:

  • s⁺: the updated iterate
  • s: the previous iterate s^(k-1)
  • his: function history
  • feval: number of function evaluations (total objective)

source
RegularizedOptimization.LM — Method
LM(nls, h, options; kwargs...)

A Levenberg-Marquardt method for the problem

min ½ ‖F(x)‖² + h(x)

where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

At each iteration, a step s is computed as an approximate solution of

min  ½ ‖J(x) s + F(x)‖² + ½ σ ‖s‖² + ψ(s; x)

where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), and σ > 0 is a regularization parameter.

Arguments

  • nls::AbstractNLSModel: a smooth nonlinear least-squares problem
  • h: a regularizer such as those defined in ProximalOperators
  • options::ROSolverOptions: a structure containing algorithmic parameters

Keyword arguments

  • x0::AbstractVector: an initial guess (default: nls.meta.x0)
  • subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver
  • subsolver: the procedure used to compute a step (PG or R2)
  • subsolver_options::ROSolverOptions: default options to pass to the subsolver.
  • selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar).

Return values

  • xk: the final iterate
  • Fobj_hist: an array with the history of values of the smooth objective
  • Hobj_hist: an array with the history of values of the nonsmooth objective
  • Complex_hist: an array with the history of number of inner iterations.
source
RegularizedOptimization.LMTR — Method
LMTR(nls, h, χ, options; kwargs...)

A trust-region Levenberg-Marquardt method for the problem

min ½ ‖F(x)‖² + h(x)

where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous and proper.

At each iteration, a step s is computed as an approximate solution of

min  ½ ‖J(x) s + F(x)‖₂² + ψ(s; x)  subject to  ‖s‖ ≤ Δ

where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), ‖⋅‖ is a user-defined norm and Δ > 0 is a trust-region radius.

Arguments

  • nls::AbstractNLSModel: a smooth nonlinear least-squares problem
  • h: a regularizer such as those defined in ProximalOperators
  • χ: a norm used to define the trust region in the form of a regularizer
  • options::ROSolverOptions: a structure containing algorithmic parameters

Keyword arguments

  • x0::AbstractVector: an initial guess (default: nls.meta.x0)
  • subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver
  • subsolver: the procedure used to compute a step (PG or R2)
  • subsolver_options::ROSolverOptions: default options to pass to the subsolver.
  • selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar).

Return values

  • xk: the final iterate
  • Fobj_hist: an array with the history of values of the smooth objective
  • Hobj_hist: an array with the history of values of the nonsmooth objective
  • Complex_hist: an array with the history of number of inner iterations.
source
RegularizedOptimization.PG — Method

Proximal Gradient Descent for

min_x ϕ(x) = f(x) + g(x), with f(x) β-smooth, and g(x) closed and lower semi-continuous

Input:

  • f: function handle that returns f(x) and ∇f(x)
  • h: function handle that returns g(x)
  • s: initial point
  • proxG: function handle that calculates prox_{νg}
  • options: see descentopts.jl

Output:

  • s⁺: the updated iterate
  • s: the previous iterate s^(k-1)
  • his: function history
  • feval: number of function evaluations (total objective)

source
RegularizedOptimization.R2 — Method
R2(nlp, h, options)
R2(f, ∇f!, h, options, x0)

A first-order quadratic regularization method for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

About each iterate xₖ, a step sₖ is computed as a solution of

min  φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs is the Taylor linear approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and σₖ > 0 is the regularization parameter.

Arguments

  • nlp::AbstractNLPModel: a smooth optimization problem
  • h: a regularizer such as those defined in ProximalOperators
  • options::ROSolverOptions: a structure containing algorithmic parameters
  • x0::AbstractVector: an initial guess (in the second calling form)

Keyword Arguments

  • x0::AbstractVector: an initial guess (in the first calling form: default = nlp.meta.x0)
  • selected::AbstractVector{<:Integer}: (default 1:length(x0)).

The objective and gradient of nlp will be accessed.

In the second form, instead of nlp, the user may pass in

  • f a function such that f(x) returns the value of f at x
  • ∇f! a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) stores ∇f(x) in g.

Return values

  • xk: the final iterate
  • Fobj_hist: an array with the history of values of the smooth objective
  • Hobj_hist: an array with the history of values of the nonsmooth objective
  • Complex_hist: an array with the history of number of inner iterations.
source
RegularizedOptimization.TR — Method
TR(nlp, h, χ, options; kwargs...)

A trust-region method for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous and proper.

About each iterate xₖ, a step sₖ is computed as an approximate solution of

min  φ(s; xₖ) + ψ(s; xₖ)  subject to  ‖s‖ ≤ Δₖ

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Bₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and Δₖ > 0 is the trust-region radius. The subproblem is solved inexactly by way of a first-order method such as the proximal-gradient method or the quadratic regularization method.

Arguments

  • nlp::AbstractNLPModel: a smooth optimization problem
  • h: a regularizer such as those defined in ProximalOperators
  • χ: a norm used to define the trust region in the form of a regularizer
  • options::ROSolverOptions: a structure containing algorithmic parameters

The objective, gradient and Hessian of nlp will be accessed. The Hessian is accessed as an abstract operator and need not be the exact Hessian.

Keyword arguments

  • x0::AbstractVector: an initial guess (default: nlp.meta.x0)
  • subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver (default: the null logger)
  • subsolver: the procedure used to compute a step (PG or R2)
  • subsolver_options::ROSolverOptions: default options to pass to the subsolver (default: all default options)
  • selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar).

Return values

  • xk: the final iterate
  • Fobj_hist: an array with the history of values of the smooth objective
  • Hobj_hist: an array with the history of values of the nonsmooth objective
  • Complex_hist: an array with the history of number of inner iterations.
source
RegularizedOptimization.TRDH — Method
TRDH(nlp, h, χ, options; kwargs...)
TRDH(f, ∇f!, h, options, x0)

A trust-region method with diagonal Hessian approximation for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous and proper.

About each iterate xₖ, a step sₖ is computed as an approximate solution of

min  φ(s; xₖ) + ψ(s; xₖ)  subject to  ‖s‖ ≤ Δₖ

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Dₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm, Dₖ is a diagonal Hessian approximation and Δₖ > 0 is the trust-region radius.

Arguments

  • nlp::AbstractNLPModel: a smooth optimization problem
  • h: a regularizer such as those defined in ProximalOperators
  • χ: a norm used to define the trust region in the form of a regularizer
  • options::ROSolverOptions: a structure containing algorithmic parameters

The objective and gradient of nlp will be accessed.

In the second form, instead of nlp, the user may pass in

  • f a function such that f(x) returns the value of f at x
  • ∇f! a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) stores ∇f(x) in g
  • x0::AbstractVector: an initial guess.

Keyword arguments

  • x0::AbstractVector: an initial guess (default: nlp.meta.x0)
  • selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar)
  • Bk: initial diagonal Hessian approximation (default: (one(R) / options.ν) * I).

Return values

  • xk: the final iterate
  • Fobj_hist: an array with the history of values of the smooth objective
  • Hobj_hist: an array with the history of values of the nonsmooth objective
  • Complex_hist: an array with the history of number of inner iterations.
source
RegularizedOptimization.prox_split_1w — Method

Computes a descent direction s for an objective with the structure

min_s qₖ(s) + ψ(x + s)  subject to  ‖s‖_q ≤ Δ

for a given Δ.

Arguments

  • proxp: prox method for the p-norm; takes in a vector z and a scaling a (λ‖⋅‖_p), where p is the norm used for ψ
  • s0::Vector{Float64}: initial guess for the descent direction
  • projq: generic routine that projects onto the ‖⋅‖_q ≤ Δ norm ball
  • options: mutable structure pparams

Returns

  • s::Vector{Float64}: final value of the Algorithm 6.1 descent direction
  • w::Vector{Float64}: relaxation variable of the Algorithm 6.1 descent direction

source
RegularizedOptimization.prox_split_2w — Method

Computes a descent direction s for an objective with the structure

min_s qₖ(s) + ψ(x + s)  subject to  ‖s‖_q ≤ Δ

for a given Δ.

Arguments

  • proxp: prox method for the p-norm; takes in a vector z and a scaling a (λ‖⋅‖_p), where p is the norm used for ψ
  • s0::Vector{Float64}: initial guess for the descent direction
  • projq: generic routine that projects onto the ‖⋅‖_q ≤ Δ norm ball
  • options: mutable structure pparams

Returns

  • s::Vector{Float64}: final value of the Algorithm 6.2 descent direction
  • w::Vector{Float64}: relaxation variable of the Algorithm 6.2 descent direction

source
+Reference · RegularizedOptimization.jl

Reference

Contents

Index

RegularizedOptimization.FISTA — Method

FISTA for min_x ϕ(x) = f(x) + g(x), with f(x) convex and β-smooth, and g(x) closed and convex.

Input:

  • f: function handle that returns f(x) and ∇f(x)
  • h: function handle that returns g(x)
  • s: initial point
  • proxG: function handle that calculates prox_{νg}
  • options: see descentopts.jl

Output:

  • s⁺: the updated iterate
  • s: the previous iterate s^(k-1)
  • his: function history
  • feval: number of function evaluations (total objective)

source
RegularizedOptimization.LM — Method
LM(nls, h, options; kwargs...)

A Levenberg-Marquardt method for the problem

min ½ ‖F(x)‖² + h(x)

where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

At each iteration, a step s is computed as an approximate solution of

min  ½ ‖J(x) s + F(x)‖² + ½ σ ‖s‖² + ψ(s; x)

where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), and σ > 0 is a regularization parameter.

Arguments

  • nls::AbstractNLSModel: a smooth nonlinear least-squares problem
  • h: a regularizer such as those defined in ProximalOperators
  • options::ROSolverOptions: a structure containing algorithmic parameters

Keyword arguments

  • x0::AbstractVector: an initial guess (default: nls.meta.x0)
  • subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver
  • subsolver: the procedure used to compute a step (PG, R2 or TRDH)
  • subsolver_options::ROSolverOptions: default options to pass to the subsolver.
  • selected::AbstractVector{<:Integer}: (default 1:nls.meta.nvar).

Return values

  • xk: the final iterate
  • Fobj_hist: an array with the history of values of the smooth objective
  • Hobj_hist: an array with the history of values of the nonsmooth objective
  • Complex_hist: an array with the history of number of inner iterations.
source
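
The following is a minimal calling sketch, not part of the original docstring. It assumes ADNLPModels.jl is available to build the nonlinear least-squares model and ProximalOperators.jl supplies the regularizer; the residual, starting point and λ = 1.0 are illustrative choices only.

    # Hypothetical example: ℓ₁-regularized nonlinear least squares (a sketch, assuming
    # ADNLPModels.jl and ProximalOperators.jl).
    using RegularizedOptimization, ADNLPModels, ProximalOperators

    F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]   # illustrative Rosenbrock-type residual
    nls = ADNLSModel(F, [-1.2; 1.0], 2)         # smooth nonlinear least-squares model
    h = NormL1(1.0)                             # nonsmooth regularizer from ProximalOperators
    options = ROSolverOptions()                 # default algorithmic parameters

    out = LM(nls, h, options)                   # per the docstring, the result holds the final
                                                # iterate and the objective/iteration histories
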
RegularizedOptimization.LMTR — Method
LMTR(nls, h, χ, options; kwargs...)

A trust-region Levenberg-Marquardt method for the problem

min ½ ‖F(x)‖² + h(x)

where F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous and proper.

At each iteration, a step s is computed as an approximate solution of

min  ½ ‖J(x) s + F(x)‖₂² + ψ(s; x)  subject to  ‖s‖ ≤ Δ

where F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), ‖⋅‖ is a user-defined norm and Δ > 0 is a trust-region radius.

Arguments

  • nls::AbstractNLSModel: a smooth nonlinear least-squares problem
  • h: a regularizer such as those defined in ProximalOperators
  • χ: a norm used to define the trust region in the form of a regularizer
  • options::ROSolverOptions: a structure containing algorithmic parameters

Keyword arguments

  • x0::AbstractVector: an initial guess (default: nls.meta.x0)
  • subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver
  • subsolver: the procedure used to compute a step (PG, R2 or TRDH)
  • subsolver_options::ROSolverOptions: default options to pass to the subsolver.
  • selected::AbstractVector{<:Integer}: (default 1:nls.meta.nvar).

Return values

  • xk: the final iterate
  • Fobj_hist: an array with the history of values of the smooth objective
  • Hobj_hist: an array with the history of values of the nonsmooth objective
  • Complex_hist: an array with the history of number of inner iterations.
source
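
A hypothetical sketch of the trust-region variant follows; the ℓ∞ trust-region norm χ = NormLinf(1.0) and the ℓ₀ regularizer are assumed choices, not prescribed by this docstring.

    # Sketch only, assuming ADNLPModels.jl and ProximalOperators.jl.
    using RegularizedOptimization, ADNLPModels, ProximalOperators

    F(x) = [x[1] - 1.0; 10 * (x[2] - x[1]^2)]   # illustrative residual
    nls = ADNLSModel(F, [-1.2; 1.0], 2)
    h = NormL0(0.1)                             # nonsmooth, lower semi-continuous regularizer
    χ = NormLinf(1.0)                           # norm defining the trust region, as a regularizer
    options = ROSolverOptions()

    out = LMTR(nls, h, χ, options)
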
RegularizedOptimization.PG — Method

Proximal Gradient Descent for

min_x ϕ(x) = f(x) + g(x), with f(x) β-smooth, and g(x) closed and lower semi-continuous

Input:

  • f: function handle that returns f(x) and ∇f(x)
  • h: function handle that returns g(x)
  • s: initial point
  • proxG: function handle that calculates prox_{νg}
  • options: see descentopts.jl

Output:

  • s⁺: the updated iterate
  • s: the previous iterate s^(k-1)
  • his: function history
  • feval: number of function evaluations (total objective)

source
RegularizedOptimization.R2 — Method
R2(nlp, h, options)
R2(f, ∇f!, h, options, x0)

A first-order quadratic regularization method for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.

About each iterate xₖ, a step sₖ is computed as a solution of

min  φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs is the Taylor linear approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and σₖ > 0 is the regularization parameter.

Arguments

  • nlp::AbstractNLPModel: a smooth optimization problem
  • h: a regularizer such as those defined in ProximalOperators
  • options::ROSolverOptions: a structure containing algorithmic parameters
  • x0::AbstractVector: an initial guess (in the second calling form)

Keyword Arguments

  • x0::AbstractVector: an initial guess (in the first calling form: default = nlp.meta.x0)
  • selected::AbstractVector{<:Integer}: (default 1:length(x0)).

The objective and gradient of nlp will be accessed.

In the second form, instead of nlp, the user may pass in

  • f a function such that f(x) returns the value of f at x
  • ∇f! a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) stores ∇f(x) in g.

Return values

  • xk: the final iterate
  • Fobj_hist: an array with the history of values of the smooth objective
  • Hobj_hist: an array with the history of values of the nonsmooth objective
  • Complex_hist: an array with the history of number of inner iterations.
source
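
A sketch of both calling forms, assuming a simple quadratic f and ProximalOperators.jl for h; the specific objective, gradient and λ = 1.0 are illustrative.

    # Sketch only, assuming ADNLPModels.jl and ProximalOperators.jl.
    using RegularizedOptimization, ADNLPModels, ProximalOperators

    # First form: a smooth model plus a regularizer.
    nlp = ADNLPModel(x -> (x[1] - 1)^2 + 4 * (x[2] + 2)^2, zeros(2))
    h = NormL1(1.0)
    options = ROSolverOptions()
    out = R2(nlp, h, options)

    # Second form: pass f and an in-place gradient directly, plus an initial guess x0.
    f(x) = (x[1] - 1)^2 + 4 * (x[2] + 2)^2
    function ∇f!(g, x)
      g[1] = 2 * (x[1] - 1)
      g[2] = 8 * (x[2] + 2)
      return g
    end
    out2 = R2(f, ∇f!, h, options, zeros(2))
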
RegularizedOptimization.TR — Method
TR(nlp, h, χ, options; kwargs...)

A trust-region method for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous and proper.

About each iterate xₖ, a step sₖ is computed as an approximate solution of

min  φ(s; xₖ) + ψ(s; xₖ)  subject to  ‖s‖ ≤ Δₖ

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Bₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and Δₖ > 0 is the trust-region radius. The subproblem is solved inexactly by way of a first-order method such as the proximal-gradient method or the quadratic regularization method.

Arguments

  • nlp::AbstractNLPModel: a smooth optimization problem
  • h: a regularizer such as those defined in ProximalOperators
  • χ: a norm used to define the trust region in the form of a regularizer
  • options::ROSolverOptions: a structure containing algorithmic parameters

The objective, gradient and Hessian of nlp will be accessed. The Hessian is accessed as an abstract operator and need not be the exact Hessian.

Keyword arguments

  • x0::AbstractVector: an initial guess (default: nlp.meta.x0)
  • subsolver_logger::AbstractLogger: a logger to pass to the subproblem solver (default: the null logger)
  • subsolver: the procedure used to compute a step (PG, R2 or TRDH)
  • subsolver_options::ROSolverOptions: default options to pass to the subsolver (default: all default options)
  • selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar).

Return values

  • xk: the final iterate
  • Fobj_hist: an array with the history of values of the smooth objective
  • Hobj_hist: an array with the history of values of the nonsmooth objective
  • Complex_hist: an array with the history of number of inner iterations.
source
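
A hypothetical sketch using an ADNLPModel, whose Hessian-vector products provide the abstract Hessian operator mentioned above; the Rosenbrock objective and the ℓ∞ trust-region norm are assumed choices. A quasi-Newton wrapper (e.g. from NLPModelsModifiers.jl) could be substituted for the model, but that is not prescribed here.

    # Sketch only, assuming ADNLPModels.jl and ProximalOperators.jl.
    using RegularizedOptimization, ADNLPModels, ProximalOperators

    nlp = ADNLPModel(x -> (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])
    h = NormL1(1.0)
    χ = NormLinf(1.0)           # trust-region norm, expressed as a regularizer
    options = ROSolverOptions()

    out = TR(nlp, h, χ, options)
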
RegularizedOptimization.TRDH — Method
TRDH(nlp, h, χ, options; kwargs...)
TRDH(f, ∇f!, h, options, x0)

A trust-region method with diagonal Hessian approximation for the problem

min f(x) + h(x)

where f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous and proper.

About each iterate xₖ, a step sₖ is computed as an approximate solution of

min  φ(s; xₖ) + ψ(s; xₖ)  subject to  ‖s‖ ≤ Δₖ

where φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Dₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm, Dₖ is a diagonal Hessian approximation and Δₖ > 0 is the trust-region radius.

Arguments

  • nlp::AbstractDiagonalQNModel: a smooth optimization problem
  • h: a regularizer such as those defined in ProximalOperators
  • χ: a norm used to define the trust region in the form of a regularizer
  • options::ROSolverOptions: a structure containing algorithmic parameters

The objective and gradient of nlp will be accessed.

In the second form, instead of nlp, the user may pass in

  • f a function such that f(x) returns the value of f at x
  • ∇f! a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) stores ∇f(x) in g
  • x0::AbstractVector: an initial guess.

Keyword arguments

  • x0::AbstractVector: an initial guess (default: nlp.meta.x0)
  • selected::AbstractVector{<:Integer}: (default 1:f.meta.nvar)

Return values

  • xk: the final iterate
  • Fobj_hist: an array with the history of values of the smooth objective
  • Hobj_hist: an array with the history of values of the nonsmooth objective
  • Complex_hist: an array with the history of number of inner iterations.
source
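
A hypothetical sketch of the second calling form documented above, which takes f, an in-place gradient and an initial guess instead of a model; the quadratic f, its gradient and λ = 1.0 are illustrative.

    # Sketch only, assuming ProximalOperators.jl for the regularizer.
    using RegularizedOptimization, ProximalOperators

    f(x) = (x[1] - 1)^2 + 4 * (x[2] + 2)^2
    function ∇f!(g, x)
      g[1] = 2 * (x[1] - 1)
      g[2] = 8 * (x[2] + 2)
      return g
    end
    h = NormL1(1.0)
    options = ROSolverOptions()

    out = TRDH(f, ∇f!, h, options, zeros(2))
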
RegularizedOptimization.prox_split_1w — Method

Computes a descent direction s for an objective with the structure

min_s qₖ(s) + ψ(x + s)  subject to  ‖s‖_q ≤ Δ

for a given Δ.

Arguments

  • proxp: prox method for the p-norm; takes in a vector z and a scaling a (λ‖⋅‖_p), where p is the norm used for ψ
  • s0::Vector{Float64}: initial guess for the descent direction
  • projq: generic routine that projects onto the ‖⋅‖_q ≤ Δ norm ball
  • options: mutable structure pparams

Returns

  • s::Vector{Float64}: final value of the Algorithm 6.1 descent direction
  • w::Vector{Float64}: relaxation variable of the Algorithm 6.1 descent direction

source
RegularizedOptimization.prox_split_2w — Method

Computes a descent direction s for an objective with the structure

min_s qₖ(s) + ψ(x + s)  subject to  ‖s‖_q ≤ Δ

for a given Δ.

Arguments

  • proxp: prox method for the p-norm; takes in a vector z and a scaling a (λ‖⋅‖_p), where p is the norm used for ψ
  • s0::Vector{Float64}: initial guess for the descent direction
  • projq: generic routine that projects onto the ‖⋅‖_q ≤ Δ norm ball
  • options: mutable structure pparams

Returns

  • s::Vector{Float64}: final value of the Algorithm 6.2 descent direction
  • w::Vector{Float64}: relaxation variable of the Algorithm 6.2 descent direction

source
diff --git a/dev/search/index.html b/dev/search/index.html
index ea50117f..e254f55e 100644
--- a/dev/search/index.html
+++ b/dev/search/index.html
@@ -1,2 +1,2 @@
-Search · RegularizedOptimization.jl

Loading search...

    +Search · RegularizedOptimization.jl

    Loading search...

      diff --git a/dev/search_index.js b/dev/search_index.js index a21e4086..3f9157c1 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"reference/#Reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"reference/#Contents","page":"Reference","title":"Contents","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"Pages = [\"reference.md\"]","category":"page"},{"location":"reference/#Index","page":"Reference","title":"Index","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"Pages = [\"reference.md\"]","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Modules = [RegularizedOptimization]","category":"page"},{"location":"reference/#RegularizedOptimization.FISTA-Tuple{NLPModels.AbstractNLPModel, Vararg{Any}}","page":"Reference","title":"RegularizedOptimization.FISTA","text":"FISTA for min_x ϕ(x) = f(x) + g(x), with f(x) cvx and β-smooth, g(x) closed cvx\n\nInput: f: function handle that returns f(x) and ∇f(x) h: function handle that returns g(x) s: initial point proxG: function handle that calculates prox_{νg} options: see descentopts.jl Output: s⁺: s update s : s^(k-1) his : function history feval : number of function evals (total objective)\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.LM-Union{Tuple{H}, Tuple{NLPModels.AbstractNLSModel, H, ROSolverOptions}} where H","page":"Reference","title":"RegularizedOptimization.LM","text":"LM(nls, h, options; kwargs...)\n\nA Levenberg-Marquardt method for the problem\n\nmin ½ ‖F(x)‖² + h(x)\n\nwhere F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.\n\nAt each iteration, a step s is computed as an approximate solution of\n\nmin ½ ‖J(x) s + F(x)‖² + ½ σ ‖s‖² + ψ(s; x)\n\nwhere F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), and σ > 0 is a regularization parameter.\n\nArguments\n\nnls::AbstractNLSModel: a smooth nonlinear least-squares problem\nh: a regularizer such as those defined in ProximalOperators\noptions::ROSolverOptions: a structure containing algorithmic parameters\n\nKeyword arguments\n\nx0::AbstractVector: an initial guess (default: nls.meta.x0)\nsubsolver_logger::AbstractLogger: a logger to pass to the subproblem solver\nsubsolver: the procedure used to compute a step (PG or R2)\nsubsolver_options::ROSolverOptions: default options to pass to the subsolver.\nselected::AbstractVector{<:Integer}: (default 1:f.meta.nvar).\n\nReturn values\n\nxk: the final iterate\nFobj_hist: an array with the history of values of the smooth objective\nHobj_hist: an array with the history of values of the nonsmooth objective\nComplex_hist: an array with the history of number of inner iterations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.LMTR-Union{Tuple{X}, Tuple{H}, Tuple{NLPModels.AbstractNLSModel, H, X, ROSolverOptions}} where {H, X}","page":"Reference","title":"RegularizedOptimization.LMTR","text":"LMTR(nls, h, χ, options; kwargs...)\n\nA trust-region Levenberg-Marquardt method for the problem\n\nmin ½ ‖F(x)‖² + h(x)\n\nwhere F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous and proper.\n\nAt each iteration, a step s is computed as an approximate solution of\n\nmin ½ ‖J(x) s + 
F(x)‖₂² + ψ(s; x) subject to ‖s‖ ≤ Δ\n\nwhere F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), ‖⋅‖ is a user-defined norm and Δ > 0 is a trust-region radius.\n\nArguments\n\nnls::AbstractNLSModel: a smooth nonlinear least-squares problem\nh: a regularizer such as those defined in ProximalOperators\nχ: a norm used to define the trust region in the form of a regularizer\noptions::ROSolverOptions: a structure containing algorithmic parameters\n\nKeyword arguments\n\nx0::AbstractVector: an initial guess (default: nls.meta.x0)\nsubsolver_logger::AbstractLogger: a logger to pass to the subproblem solver\nsubsolver: the procedure used to compute a step (PG or R2)\nsubsolver_options::ROSolverOptions: default options to pass to the subsolver.\nselected::AbstractVector{<:Integer}: (default 1:f.meta.nvar).\n\nReturn values\n\nxk: the final iterate\nFobj_hist: an array with the history of values of the smooth objective\nHobj_hist: an array with the history of values of the nonsmooth objective\nComplex_hist: an array with the history of number of inner iterations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.PG-Tuple{NLPModels.AbstractNLPModel, Vararg{Any}}","page":"Reference","title":"RegularizedOptimization.PG","text":"Proximal Gradient Descent for\n\nmin_x ϕ(x) = f(x) + g(x), with f(x) β-smooth, g(x) closed, lsc\n\nInput: f: function handle that returns f(x) and ∇f(x) h: function handle that returns g(x) s: initial point proxG: function handle that calculates prox_{νg} options: see descentopts.jl Output: s⁺: s update s : s^(k-1) his : function history feval : number of function evals (total objective )\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.R2-Tuple{NLPModels.AbstractNLPModel, Vararg{Any}}","page":"Reference","title":"RegularizedOptimization.R2","text":"R2(nlp, h, options)\nR2(f, ∇f!, h, options, x0)\n\nA first-order quadratic regularization method for the problem\n\nmin f(x) + h(x)\n\nwhere f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.\n\nAbout each iterate xₖ, a step sₖ is computed as a solution of\n\nmin φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)\n\nwhere φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs is the Taylor linear approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and σₖ > 0 is the regularization parameter.\n\nArguments\n\nnlp::AbstractNLPModel: a smooth optimization problem\nh: a regularizer such as those defined in ProximalOperators\noptions::ROSolverOptions: a structure containing algorithmic parameters\nx0::AbstractVector: an initial guess (in the second calling form)\n\nKeyword Arguments\n\nx0::AbstractVector: an initial guess (in the first calling form: default = nlp.meta.x0)\nselected::AbstractVector{<:Integer}: (default 1:length(x0)).\n\nThe objective and gradient of nlp will be accessed.\n\nIn the second form, instead of nlp, the user may pass in\n\nf a function such that f(x) returns the value of f at x\n∇f! 
a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) store ∇f(x) in g.\n\nReturn values\n\nxk: the final iterate\nFobj_hist: an array with the history of values of the smooth objective\nHobj_hist: an array with the history of values of the nonsmooth objective\nComplex_hist: an array with the history of number of inner iterations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.TR-Union{Tuple{R}, Tuple{X}, Tuple{H}, Tuple{NLPModels.AbstractNLPModel, H, X, ROSolverOptions{R}}} where {H, X, R}","page":"Reference","title":"RegularizedOptimization.TR","text":"TR(nlp, h, χ, options; kwargs...)\n\nA trust-region method for the problem\n\nmin f(x) + h(x)\n\nwhere f: ℝⁿ → ℝ has a Lipschitz-continuous Jacobian, and h: ℝⁿ → ℝ is lower semi-continuous and proper.\n\nAbout each iterate xₖ, a step sₖ is computed as an approximate solution of\n\nmin φ(s; xₖ) + ψ(s; xₖ) subject to ‖s‖ ≤ Δₖ\n\nwhere φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Bₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and Δₖ > 0 is the trust-region radius. The subproblem is solved inexactly by way of a first-order method such as the proximal-gradient method or the quadratic regularization method.\n\nArguments\n\nnlp::AbstractNLPModel: a smooth optimization problem\nh: a regularizer such as those defined in ProximalOperators\nχ: a norm used to define the trust region in the form of a regularizer\noptions::ROSolverOptions: a structure containing algorithmic parameters\n\nThe objective, gradient and Hessian of nlp will be accessed. The Hessian is accessed as an abstract operator and need not be the exact Hessian.\n\nKeyword arguments\n\nx0::AbstractVector: an initial guess (default: nlp.meta.x0)\nsubsolver_logger::AbstractLogger: a logger to pass to the subproblem solver (default: the null logger)\nsubsolver: the procedure used to compute a step (PG or R2)\nsubsolver_options::ROSolverOptions: default options to pass to the subsolver (default: all defaut options)\nselected::AbstractVector{<:Integer}: (default 1:f.meta.nvar).\n\nReturn values\n\nxk: the final iterate\nFobj_hist: an array with the history of values of the smooth objective\nHobj_hist: an array with the history of values of the nonsmooth objective\nComplex_hist: an array with the history of number of inner iterations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.TRDH-Union{Tuple{R}, Tuple{NLPModels.AbstractNLPModel{R}, Any, Any, ROSolverOptions{R}}} where R<:Real","page":"Reference","title":"RegularizedOptimization.TRDH","text":"TRDH(nlp, h, χ, options; kwargs...)\nTRDH(f, ∇f!, h, options, x0)\n\nA trust-region method with diagonal Hessian approximation for the problem\n\nmin f(x) + h(x)\n\nwhere f: ℝⁿ → ℝ has a Lipschitz-continuous Jacobian, and h: ℝⁿ → ℝ is lower semi-continuous and proper.\n\nAbout each iterate xₖ, a step sₖ is computed as an approximate solution of\n\nmin φ(s; xₖ) + ψ(s; xₖ) subject to ‖s‖ ≤ Δₖ\n\nwhere φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Dₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm, Dₖ is a diagonal Hessian approximation and Δₖ > 0 is the trust-region radius.\n\nArguments\n\nnlp::AbstractNLPModel: a smooth optimization problem\nh: a regularizer such as those defined in ProximalOperators\nχ: a norm used to define the trust region in the form of a regularizer\noptions::ROSolverOptions: a structure containing algorithmic parameters\n\nThe objective and gradient of nlp will be 
accessed.\n\nIn the second form, instead of nlp, the user may pass in\n\nf a function such that f(x) returns the value of f at x\n∇f! a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) store ∇f(x) in g\nx0::AbstractVector: an initial guess.\n\nKeyword arguments\n\nx0::AbstractVector: an initial guess (default: nlp.meta.x0)\nselected::AbstractVector{<:Integer}: (default 1:f.meta.nvar)\nBk: initial diagonal Hessian approximation (default: (one(R) / options.ν) * I).\n\nReturn values\n\nxk: the final iterate\nFobj_hist: an array with the history of values of the smooth objective\nHobj_hist: an array with the history of values of the nonsmooth objective\nComplex_hist: an array with the history of number of inner iterations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.prox_split_1w-NTuple{4, Any}","page":"Reference","title":"RegularizedOptimization.prox_split_1w","text":"Solves descent direction s for some objective function with the structure \tmins qk(s) + ψ(x+s) s.t. ||s||q⩽ Δ \tfor some Δ provided Arguments ––––– proxp : prox method for p-norm \ttakes in z (vector), a (λ||⋅||p), p is norm for ψ I think s0 : Vector{Float64,1} \tInitial guess for the descent direction projq : generic that projects onto ||⋅||q⩽Δ norm ball options : mutable structure pparams\n\nReturns\n\ns : Vector{Float64,1} \tFinal value of Algorithm 6.1 descent direction w : Vector{Float64,1} \trelaxation variable of Algorithm 6.1 descent direction\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.prox_split_2w-NTuple{4, Any}","page":"Reference","title":"RegularizedOptimization.prox_split_2w","text":"Solves descent direction s for some objective function with the structure \tmins qk(s) + ψ(x+s) s.t. 
||s||q⩽ Δ \tfor some Δ provided Arguments ––––– proxp : prox method for p-norm \ttakes in z (vector), a (λ||⋅||p), p is norm for ψ I think s0 : Vector{Float64,1} \tInitial guess for the descent direction projq : generic that projects onto ||⋅||q⩽Δ norm ball options : mutable structure pparams\n\nReturns\n\ns : Vector{Float64,1} \tFinal value of Algorithm 6.2 descent direction w : Vector{Float64,1} \trelaxation variable of Algorithm 6.2 descent direction\n\n\n\n\n\n","category":"method"},{"location":"#RegularizedOptimization.jl","page":"Home","title":"RegularizedOptimization.jl","text":"","category":"section"},{"location":"tutorial/#RegularizedOptimization-Tutorial","page":"Tutorial","title":"RegularizedOptimization Tutorial","text":"","category":"section"}] +[{"location":"reference/#Reference","page":"Reference","title":"Reference","text":"","category":"section"},{"location":"reference/#Contents","page":"Reference","title":"Contents","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"Pages = [\"reference.md\"]","category":"page"},{"location":"reference/#Index","page":"Reference","title":"Index","text":"","category":"section"},{"location":"reference/","page":"Reference","title":"Reference","text":"Pages = [\"reference.md\"]","category":"page"},{"location":"reference/","page":"Reference","title":"Reference","text":"Modules = [RegularizedOptimization]","category":"page"},{"location":"reference/#RegularizedOptimization.FISTA-Tuple{NLPModels.AbstractNLPModel, Vararg{Any}}","page":"Reference","title":"RegularizedOptimization.FISTA","text":"FISTA for min_x ϕ(x) = f(x) + g(x), with f(x) cvx and β-smooth, g(x) closed cvx\n\nInput: f: function handle that returns f(x) and ∇f(x) h: function handle that returns g(x) s: initial point proxG: function handle that calculates prox_{νg} options: see descentopts.jl Output: s⁺: s update s : s^(k-1) his : function history feval : number of function evals (total objective)\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.LM-Union{Tuple{H}, Tuple{NLPModels.AbstractNLSModel, H, ROSolverOptions}} where H","page":"Reference","title":"RegularizedOptimization.LM","text":"LM(nls, h, options; kwargs...)\n\nA Levenberg-Marquardt method for the problem\n\nmin ½ ‖F(x)‖² + h(x)\n\nwhere F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.\n\nAt each iteration, a step s is computed as an approximate solution of\n\nmin ½ ‖J(x) s + F(x)‖² + ½ σ ‖s‖² + ψ(s; x)\n\nwhere F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), and σ > 0 is a regularization parameter.\n\nArguments\n\nnls::AbstractNLSModel: a smooth nonlinear least-squares problem\nh: a regularizer such as those defined in ProximalOperators\noptions::ROSolverOptions: a structure containing algorithmic parameters\n\nKeyword arguments\n\nx0::AbstractVector: an initial guess (default: nls.meta.x0)\nsubsolver_logger::AbstractLogger: a logger to pass to the subproblem solver\nsubsolver: the procedure used to compute a step (PG, R2 or TRDH)\nsubsolver_options::ROSolverOptions: default options to pass to the subsolver.\nselected::AbstractVector{<:Integer}: (default 1:nls.meta.nvar).\n\nReturn values\n\nxk: the final iterate\nFobj_hist: an array with the history of values of the smooth objective\nHobj_hist: an array with the history of values of the nonsmooth objective\nComplex_hist: an array with the history of number of inner 
iterations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.LMTR-Union{Tuple{X}, Tuple{H}, Tuple{NLPModels.AbstractNLSModel, H, X, ROSolverOptions}} where {H, X}","page":"Reference","title":"RegularizedOptimization.LMTR","text":"LMTR(nls, h, χ, options; kwargs...)\n\nA trust-region Levenberg-Marquardt method for the problem\n\nmin ½ ‖F(x)‖² + h(x)\n\nwhere F: ℝⁿ → ℝᵐ and its Jacobian J are Lipschitz continuous and h: ℝⁿ → ℝ is lower semi-continuous and proper.\n\nAt each iteration, a step s is computed as an approximate solution of\n\nmin ½ ‖J(x) s + F(x)‖₂² + ψ(s; x) subject to ‖s‖ ≤ Δ\n\nwhere F(x) and J(x) are the residual and its Jacobian at x, respectively, ψ(s; x) = h(x + s), ‖⋅‖ is a user-defined norm and Δ > 0 is a trust-region radius.\n\nArguments\n\nnls::AbstractNLSModel: a smooth nonlinear least-squares problem\nh: a regularizer such as those defined in ProximalOperators\nχ: a norm used to define the trust region in the form of a regularizer\noptions::ROSolverOptions: a structure containing algorithmic parameters\n\nKeyword arguments\n\nx0::AbstractVector: an initial guess (default: nls.meta.x0)\nsubsolver_logger::AbstractLogger: a logger to pass to the subproblem solver\nsubsolver: the procedure used to compute a step (PG, R2 or TRDH)\nsubsolver_options::ROSolverOptions: default options to pass to the subsolver.\nselected::AbstractVector{<:Integer}: (default 1:nls.meta.nvar).\n\nReturn values\n\nxk: the final iterate\nFobj_hist: an array with the history of values of the smooth objective\nHobj_hist: an array with the history of values of the nonsmooth objective\nComplex_hist: an array with the history of number of inner iterations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.PG-Tuple{NLPModels.AbstractNLPModel, Vararg{Any}}","page":"Reference","title":"RegularizedOptimization.PG","text":"Proximal Gradient Descent for\n\nmin_x ϕ(x) = f(x) + g(x), with f(x) β-smooth, g(x) closed, lsc\n\nInput: f: function handle that returns f(x) and ∇f(x) h: function handle that returns g(x) s: initial point proxG: function handle that calculates prox_{νg} options: see descentopts.jl Output: s⁺: s update s : s^(k-1) his : function history feval : number of function evals (total objective )\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.R2-Tuple{NLPModels.AbstractNLPModel, Vararg{Any}}","page":"Reference","title":"RegularizedOptimization.R2","text":"R2(nlp, h, options)\nR2(f, ∇f!, h, options, x0)\n\nA first-order quadratic regularization method for the problem\n\nmin f(x) + h(x)\n\nwhere f: ℝⁿ → ℝ has a Lipschitz-continuous gradient, and h: ℝⁿ → ℝ is lower semi-continuous, proper and prox-bounded.\n\nAbout each iterate xₖ, a step sₖ is computed as a solution of\n\nmin φ(s; xₖ) + ½ σₖ ‖s‖² + ψ(s; xₖ)\n\nwhere φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs is the Taylor linear approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and σₖ > 0 is the regularization parameter.\n\nArguments\n\nnlp::AbstractNLPModel: a smooth optimization problem\nh: a regularizer such as those defined in ProximalOperators\noptions::ROSolverOptions: a structure containing algorithmic parameters\nx0::AbstractVector: an initial guess (in the second calling form)\n\nKeyword Arguments\n\nx0::AbstractVector: an initial guess (in the first calling form: default = nlp.meta.x0)\nselected::AbstractVector{<:Integer}: (default 1:length(x0)).\n\nThe objective and gradient of nlp will be accessed.\n\nIn the second form, 
instead of nlp, the user may pass in\n\nf a function such that f(x) returns the value of f at x\n∇f! a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) store ∇f(x) in g.\n\nReturn values\n\nxk: the final iterate\nFobj_hist: an array with the history of values of the smooth objective\nHobj_hist: an array with the history of values of the nonsmooth objective\nComplex_hist: an array with the history of number of inner iterations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.TR-Union{Tuple{R}, Tuple{X}, Tuple{H}, Tuple{NLPModels.AbstractNLPModel, H, X, ROSolverOptions{R}}} where {H, X, R}","page":"Reference","title":"RegularizedOptimization.TR","text":"TR(nlp, h, χ, options; kwargs...)\n\nA trust-region method for the problem\n\nmin f(x) + h(x)\n\nwhere f: ℝⁿ → ℝ has a Lipschitz-continuous Jacobian, and h: ℝⁿ → ℝ is lower semi-continuous and proper.\n\nAbout each iterate xₖ, a step sₖ is computed as an approximate solution of\n\nmin φ(s; xₖ) + ψ(s; xₖ) subject to ‖s‖ ≤ Δₖ\n\nwhere φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Bₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm and Δₖ > 0 is the trust-region radius. The subproblem is solved inexactly by way of a first-order method such as the proximal-gradient method or the quadratic regularization method.\n\nArguments\n\nnlp::AbstractNLPModel: a smooth optimization problem\nh: a regularizer such as those defined in ProximalOperators\nχ: a norm used to define the trust region in the form of a regularizer\noptions::ROSolverOptions: a structure containing algorithmic parameters\n\nThe objective, gradient and Hessian of nlp will be accessed. The Hessian is accessed as an abstract operator and need not be the exact Hessian.\n\nKeyword arguments\n\nx0::AbstractVector: an initial guess (default: nlp.meta.x0)\nsubsolver_logger::AbstractLogger: a logger to pass to the subproblem solver (default: the null logger)\nsubsolver: the procedure used to compute a step (PG, R2 or TRDH)\nsubsolver_options::ROSolverOptions: default options to pass to the subsolver (default: all defaut options)\nselected::AbstractVector{<:Integer}: (default 1:f.meta.nvar).\n\nReturn values\n\nxk: the final iterate\nFobj_hist: an array with the history of values of the smooth objective\nHobj_hist: an array with the history of values of the nonsmooth objective\nComplex_hist: an array with the history of number of inner iterations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.TRDH-Union{Tuple{S}, Tuple{R}, Tuple{NLPModelsModifiers.AbstractDiagonalQNModel{R, S}, Any, Any, ROSolverOptions{R}}} where {R<:Real, S}","page":"Reference","title":"RegularizedOptimization.TRDH","text":"TRDH(nlp, h, χ, options; kwargs...)\nTRDH(f, ∇f!, h, options, x0)\n\nA trust-region method with diagonal Hessian approximation for the problem\n\nmin f(x) + h(x)\n\nwhere f: ℝⁿ → ℝ has a Lipschitz-continuous Jacobian, and h: ℝⁿ → ℝ is lower semi-continuous and proper.\n\nAbout each iterate xₖ, a step sₖ is computed as an approximate solution of\n\nmin φ(s; xₖ) + ψ(s; xₖ) subject to ‖s‖ ≤ Δₖ\n\nwhere φ(s ; xₖ) = f(xₖ) + ∇f(xₖ)ᵀs + ½ sᵀ Dₖ s is a quadratic approximation of f about xₖ, ψ(s; xₖ) = h(xₖ + s), ‖⋅‖ is a user-defined norm, Dₖ is a diagonal Hessian approximation and Δₖ > 0 is the trust-region radius.\n\nArguments\n\nnlp::AbstractDiagonalQNModel: a smooth optimization problem\nh: a regularizer such as those defined in ProximalOperators\nχ: a norm used to define the trust 
region in the form of a regularizer\noptions::ROSolverOptions: a structure containing algorithmic parameters\n\nThe objective and gradient of nlp will be accessed.\n\nIn the second form, instead of nlp, the user may pass in\n\nf a function such that f(x) returns the value of f at x\n∇f! a function to evaluate the gradient in place, i.e., such that ∇f!(g, x) store ∇f(x) in g\nx0::AbstractVector: an initial guess.\n\nKeyword arguments\n\nx0::AbstractVector: an initial guess (default: nlp.meta.x0)\nselected::AbstractVector{<:Integer}: (default 1:f.meta.nvar)\n\nReturn values\n\nxk: the final iterate\nFobj_hist: an array with the history of values of the smooth objective\nHobj_hist: an array with the history of values of the nonsmooth objective\nComplex_hist: an array with the history of number of inner iterations.\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.prox_split_1w-NTuple{4, Any}","page":"Reference","title":"RegularizedOptimization.prox_split_1w","text":"Solves descent direction s for some objective function with the structure \tmins qk(s) + ψ(x+s) s.t. ||s||q⩽ Δ \tfor some Δ provided Arguments ––––– proxp : prox method for p-norm \ttakes in z (vector), a (λ||⋅||p), p is norm for ψ I think s0 : Vector{Float64,1} \tInitial guess for the descent direction projq : generic that projects onto ||⋅||q⩽Δ norm ball options : mutable structure pparams\n\nReturns\n\ns : Vector{Float64,1} \tFinal value of Algorithm 6.1 descent direction w : Vector{Float64,1} \trelaxation variable of Algorithm 6.1 descent direction\n\n\n\n\n\n","category":"method"},{"location":"reference/#RegularizedOptimization.prox_split_2w-NTuple{4, Any}","page":"Reference","title":"RegularizedOptimization.prox_split_2w","text":"Solves descent direction s for some objective function with the structure \tmins qk(s) + ψ(x+s) s.t. ||s||q⩽ Δ \tfor some Δ provided Arguments ––––– proxp : prox method for p-norm \ttakes in z (vector), a (λ||⋅||p), p is norm for ψ I think s0 : Vector{Float64,1} \tInitial guess for the descent direction projq : generic that projects onto ||⋅||q⩽Δ norm ball options : mutable structure pparams\n\nReturns\n\ns : Vector{Float64,1} \tFinal value of Algorithm 6.2 descent direction w : Vector{Float64,1} \trelaxation variable of Algorithm 6.2 descent direction\n\n\n\n\n\n","category":"method"},{"location":"#RegularizedOptimization.jl","page":"Home","title":"RegularizedOptimization.jl","text":"","category":"section"},{"location":"tutorial/#RegularizedOptimization-Tutorial","page":"Tutorial","title":"RegularizedOptimization Tutorial","text":"","category":"section"}] } diff --git a/dev/tutorial/index.html b/dev/tutorial/index.html index 51727531..85122e9c 100644 --- a/dev/tutorial/index.html +++ b/dev/tutorial/index.html @@ -1,2 +1,2 @@ -Tutorial · RegularizedOptimization.jl
      +Tutorial · RegularizedOptimization.jl