API Reference
This page lists the public API of MathOptAI.
Info: This page is an unstructured list of the MathOptAI API. For a more structured overview, read the Manual or Tutorial parts of this documentation.
Load all of the public API into the current scope with:
using MathOptAI
Alternatively, load only the module with:
import MathOptAI
and then prefix all calls with MathOptAI. to create MathOptAI.<NAME>.
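The typical pattern used throughout this page is to build a JuMP model and input variables, and then embed a predictor with add_predictor, which returns the output variables together with a formulation object describing what was added. The following is a minimal sketch; the coefficients are illustrative:
julia> using JuMP, MathOptAI
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> predictor = MathOptAI.Affine([2.0 3.0]);
julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);
julia> y
1-element Vector{VariableRef}:
 moai_Affine[1]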
AbstractPredictor
MathOptAI.AbstractPredictor — Type
abstract type AbstractPredictor end
An abstract type representing different types of prediction models.
Methods
All subtypes must implement: add_predictor.
add_predictor
MathOptAI.add_predictor — Function
add_predictor(
model::JuMP.AbstractModel,
predictor::AbstractPredictor,
x::Vector,
├ variables [1]
│ └ moai_Affine[1]
└ constraints [1]
 └ 2 x[1] + 3 x[2] - moai_Affine[1] = 0
build_predictor
MathOptAI.build_predictor — Method
build_predictor(extension; kwargs...)::AbstractPredictor
A uniform interface to convert various extension types to an AbstractPredictor.
See the various extension docstrings for details.
Affine
MathOptAI.Affine — Type
Affine(
A::Matrix{T},
b::Vector{T} = zeros(T, size(A, 1)),
) where {T} <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = A x + b\]
Example
julia> using JuMP, MathOptAI
julia> formulation
ReducedSpace(Affine(A, b) [input: 2, output: 1])
├ variables [0]
└ constraints [0]
BinaryDecisionTree
MathOptAI.BinaryDecisionTree — Type
BinaryDecisionTree{K,V}(
feat_id::Int,
feat_value::K,
lhs::Union{V,BinaryDecisionTree{K,V}},
├ moai_BinaryDecisionTree_z[2] --> {x[1] ≤ 1}
├ moai_BinaryDecisionTree_z[3] --> {x[1] ≥ 0}
├ moai_BinaryDecisionTree_z[3] --> {x[1] ≥ 1}
 └ moai_BinaryDecisionTree_z[1] - moai_BinaryDecisionTree_z[3] + moai_BinaryDecisionTree_value = 0
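A nested tree is expressed by passing another BinaryDecisionTree as the lhs or rhs argument. The following is a minimal sketch consistent with the splits printed above; the leaf values, the explicit type parameters, and the assumption that the left branch is taken when x[feat_id] ≤ feat_value are illustrative:
julia> using JuMP, MathOptAI
julia> predictor = MathOptAI.BinaryDecisionTree{Float64,Int}(
           1,    # feat_id: branch on x[1]
           0.0,  # feat_value: the split threshold
           -1,   # lhs: leaf value when x[1] <= 0
           MathOptAI.BinaryDecisionTree{Float64,Int}(1, 1.0, 0, 1),  # rhs: nested split at 1
       );
julia> model = Model();
julia> @variable(model, x[1:1]);
julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);
julia> y
1-element Vector{VariableRef}:
 moai_BinaryDecisionTree_value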
GrayBox
MathOptAI.GrayBox — Type
GrayBox(
output_size::Function,
callback::Function;
has_hessian::Bool = false,
julia> formulation
ReducedSpace(GrayBox)
├ variables [0]
└ constraints [0]
Pipeline
MathOptAI.Pipeline — Type
Pipeline(layers::Vector{AbstractPredictor}) <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = (l_1 \circ \ldots \circ l_N)(x)\]
where $l_i$ are a list of other AbstractPredictors.
Example
julia> using JuMP, MathOptAI
julia> model = Model();
├ moai_ReLU[1] ≥ 0
├ moai_z[1] ≥ 0
├ moai_Affine[1] - moai_ReLU[1] + moai_z[1] = 0
 └ moai_ReLU[1]*moai_z[1] = 0
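A minimal sketch of composing an affine layer with a ReLU reformulation, mirroring the constraints printed above; the weights are illustrative:
julia> using JuMP, MathOptAI
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> predictor = MathOptAI.Pipeline([MathOptAI.Affine([1.0 2.0]), MathOptAI.ReLUQuadratic()]);
julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);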
PytorchModel
MathOptAI.PytorchModel — Type
PytorchModel(filename::String)
A wrapper struct for loading a PyTorch model.
The only supported file extension is .pt, where the .pt file has been created using torch.save(model, filename).
Warning: To use PytorchModel, your code must load the PythonCall package:
import PythonCall
Example
julia> using MathOptAI
julia> using PythonCall # This line is important!
julia> predictor = PytorchModel("model.pt");
Quantile
MathOptAI.Quantile — Type
Quantile{D}(distribution::D, quantiles::Vector{Float64}) where {D}
An AbstractPredictor that represents the quantiles of distribution.
Example
julia> using JuMP, Distributions, MathOptAI
julia> model = Model();
│ └ moai_quantile[2]
└ constraints [2]
├ moai_quantile[1] - op_quantile_0.1(x) = 0
 └ moai_quantile[2] - op_quantile_0.9(x) = 0
ReducedSpace
MathOptAI.ReducedSpace — Type
ReducedSpace(predictor::AbstractPredictor)
A wrapper type for other predictors that implement a reduced-space formulation.
Example
julia> using JuMP, MathOptAI
julia> model = Model();
julia> y
2-element Vector{NonlinearExpr}:
max(0.0, x[1])
 max(0.0, x[2])
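A minimal sketch of wrapping a predictor in ReducedSpace, matching the expressions printed above:
julia> using JuMP, MathOptAI
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> predictor = MathOptAI.ReducedSpace(MathOptAI.ReLU());
julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);
julia> y
2-element Vector{NonlinearExpr}:
 max(0.0, x[1])
 max(0.0, x[2])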
ReLU
MathOptAI.ReLU — Type
ReLU() <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = \max\{0, x\}\]
as a non-smooth nonlinear constraint.
Example
julia> using JuMP, MathOptAI
julia> model = Model();
julia> formulation
ReducedSpace(ReLU())
├ variables [0]
└ constraints [0]
ReLUBigM
MathOptAI.ReLUBigM — Type
ReLUBigM(M::Float64) <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = \max\{0, x\}\]
via the big-M MIP reformulation:
\[\begin{aligned}
y \ge 0 \\
y \ge x \\
y \le M z \\
├ moai_z[2] binary
├ -x[2] + moai_ReLU[2] ≥ 0
├ moai_ReLU[2] - 2 moai_z[2] ≤ 0
 └ -x[2] + moai_ReLU[2] + 3 moai_z[2] ≤ 3
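All of the ReLU variants on this page (ReLU, ReLUBigM, and the ReLUQuadratic and ReLUSOS1 reformulations below) model the same relationship, so they can be swapped without changing the rest of the model; ReLUBigM produces a mixed-integer formulation, ReLUQuadratic a complementarity-style quadratic one, and ReLUSOS1 one based on SOS1 constraints. A minimal sketch, with illustrative bounds and M:
julia> using JuMP, MathOptAI
julia> model = Model();
julia> @variable(model, -3 <= x[1:2] <= 2);
julia> y_bigm, _ = MathOptAI.add_predictor(model, MathOptAI.ReLUBigM(100.0), x);
julia> y_quad, _ = MathOptAI.add_predictor(model, MathOptAI.ReLUQuadratic(), x);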
ReLUQuadratic
MathOptAI.ReLUQuadratic — Type
ReLUQuadratic() <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = \max\{0, x\}\]
by the reformulation:
\[\begin{aligned}
x = y - z \\
y \cdot z = 0 \\
y, z \ge 0
\end{aligned}\]
├ x[1] - moai_ReLU[1] + moai_z[1] = 0
├ x[2] - moai_ReLU[2] + moai_z[2] = 0
├ moai_ReLU[1]*moai_z[1] = 0
 └ moai_ReLU[2]*moai_z[2] = 0
ReLUSOS1
MathOptAI.ReLUSOS1 — Type
ReLUSOS1() <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = \max\{0, x\}\]
by the reformulation:
\[\begin{aligned}
x = y - z \\
[y, z] \in SOS1 \\
y, z \ge 0
\end{aligned}\]
├ x[1] - moai_ReLU[1] + moai_z[1] = 0
├ x[2] - moai_ReLU[2] + moai_z[2] = 0
├ [moai_ReLU[1], moai_z[1]] ∈ MathOptInterface.SOS1{Float64}([1.0, 2.0])
 └ [moai_ReLU[2], moai_z[2]] ∈ MathOptInterface.SOS1{Float64}([1.0, 2.0])
Scale
MathOptAI.Scale — Type
Scale(
scale::Vector{T},
bias::Vector{T},
) where {T} <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = Diag(scale)x + bias\]
Example
julia> using JuMP, MathOptAI
julia> formulation
ReducedSpace(Scale(scale, bias))
├ variables [0]
└ constraints [0]
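A minimal sketch of Scale, the elementwise analogue of Affine; the scale and bias values are illustrative:
julia> using JuMP, MathOptAI
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> predictor = MathOptAI.Scale([2.0, 3.0], [4.0, 5.0]);
julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);
julia> length(y)
2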
Sigmoid
MathOptAI.Sigmoid — Type
Sigmoid() <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = \frac{1}{1 + e^{-x}}\]
as a smooth nonlinear constraint.
Example
julia> using JuMP, MathOptAI
julia> model = Model();
julia> formulation
ReducedSpace(Sigmoid())
├ variables [0]
└ constraints [0]
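In the default full-space formulation, Sigmoid introduces new output variables constrained elementwise by the smooth relationship above; a minimal sketch:
julia> using JuMP, MathOptAI
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> y, formulation = MathOptAI.add_predictor(model, MathOptAI.Sigmoid(), x);
julia> y
2-element Vector{VariableRef}:
 moai_Sigmoid[1]
 moai_Sigmoid[2]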
SoftMax
MathOptAI.SoftMax — Type
SoftMax() <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = \frac{e^{x}}{||e^{x}||_1}\]
as a smooth nonlinear constraint.
Example
julia> using JuMP, MathOptAI
julia> model = Model();
│ └ moai_SoftMax_denom
└ constraints [2]
├ moai_SoftMax_denom ≥ 0
 └ moai_SoftMax_denom - (0.0 + exp(x[2]) + exp(x[1])) = 0
SoftPlus
MathOptAI.SoftPlus — Type
SoftPlus(; beta = 1.0) <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = \frac{1}{\beta} \log(1 + e^{\beta x})\]
as a smooth nonlinear constraint.
Example
julia> using JuMP, MathOptAI
julia> model = Model();
julia> formulation
ReducedSpace(SoftPlus(2.0))
├ variables [0]
└ constraints [0]
Tanh
MathOptAI.Tanh — Type
Tanh() <: AbstractPredictor
An AbstractPredictor that represents the relationship:
\[y = \tanh(x)\]
as a smooth nonlinear constraint.
Example
julia> using JuMP, MathOptAI
julia> model = Model();
julia> formulation
ReducedSpace(Tanh())
├ variables [0]
└ constraints [0]
AbstractFormulation
MathOptAI.AbstractFormulation — Type
abstract type AbstractFormulation end
An abstract type representing different formulations.
Formulation
MathOptAI.Formulation — Type
struct Formulation{P<:AbstractPredictor} <: AbstractFormulation
predictor::P
variables::Vector{Any}
constraints::Vector{Any}
end
Fields
predictor: the predictor object used to build the formulation
variables: a vector of new decision variables added to the model
constraints: a vector of new constraints added to the model
Check the docstring of the predictor for an explanation of the formulation and the order of the elements in .variables and .constraints.
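The formulation returned by add_predictor can be inspected programmatically, which is useful for checking what a predictor added to the model. A minimal sketch, reusing the Affine example from the top of this page:
julia> using JuMP, MathOptAI
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> y, formulation = MathOptAI.add_predictor(model, MathOptAI.Affine([2.0 3.0]), x);
julia> formulation.predictor
Affine(A, b) [input: 2, output: 1]
julia> length(formulation.variables), length(formulation.constraints)
(1, 1)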
PipelineFormulation
MathOptAI.PipelineFormulation — Type
struct PipelineFormulation{P<:AbstractPredictor} <: AbstractFormulation
predictor::P
layers::Vector{Any}
end
Fields
predictor: the predictor object used to build the formulation
layers: the formulation associated with each of the layers in the pipeline
Extensions
MathOptAI.add_predictor — Method
MathOptAI.add_predictor(
model::JuMP.AbstractModel,
predictor::MathOptAI.Quantile{<:AbstractGPs.PosteriorGP},
x::Vector,
moai_quantile[2]
julia> @objective(model, Max, y[2] - y[1])
moai_quantile[2] - moai_quantile[1]
MathOptAI.add_predictor — Method
MathOptAI.add_predictor(
model::JuMP.AbstractModel,
predictor::Union{DecisionTree.Root,DecisionTree.DecisionTreeClassifier},
x::Vector,
julia> y
1-element Vector{VariableRef}:
 moai_BinaryDecisionTree_value
MathOptAI.build_predictor — Method
MathOptAI.build_predictor(predictor::DecisionTree.Root)
Convert a binary decision tree from DecisionTree.jl to a BinaryDecisionTree.
Example
julia> using MathOptAI, DecisionTree
julia> truth(x::Vector) = x[1] <= 0.5 ? -2 : (x[2] <= 0.3 ? 3 : 4)
truth (generic function with 1 method)
Depth: 2
julia> predictor = MathOptAI.build_predictor(tree)
BinaryDecisionTree{Float64,Int64} [leaves=3, depth=2]
MathOptAI.add_predictor — Method
MathOptAI.add_predictor(
model::JuMP.AbstractModel,
predictor::Flux.Chain,
x::Vector;
julia> y
1-element Vector{VariableRef}:
 moai_Affine[1]
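A minimal sketch of embedding a Flux.Chain; the layer sizes are illustrative, and the config dictionary (keyed by the Flux activation function, as in the Lux docstring below) is an assumption:
julia> using Flux, JuMP, MathOptAI
julia> chain = Flux.Chain(Flux.Dense(1 => 16, Flux.relu), Flux.Dense(16 => 1));
julia> model = Model();
julia> @variable(model, x[1:1]);
julia> y, formulation = MathOptAI.add_predictor(
           model,
           chain,
           x;
           config = Dict(Flux.relu => MathOptAI.ReLUQuadratic()),
       );
julia> y
1-element Vector{VariableRef}:
 moai_Affine[1]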
MathOptAI.build_predictor — Method
MathOptAI.build_predictor(
predictor::Flux.Chain;
config::Dict = Dict{Any,Any}(),
gray_box::Bool = false,
Pipeline with layers:
* Affine(A, b) [input: 1, output: 16]
* ReLUQuadratic()
 * Affine(A, b) [input: 16, output: 1]
MathOptAI.add_predictor — Method
MathOptAI.add_predictor(
model::JuMP.AbstractModel,
predictor::GLM.GeneralizedLinearModel{
GLM.GlmResp{Vector{Float64},GLM.Bernoulli{Float64},GLM.LogitLink},
julia> y
1-element Vector{VariableRef}:
 moai_Sigmoid[1]
MathOptAI.add_predictor — Method
MathOptAI.add_predictor(
model::JuMP.AbstractModel,
predictor::GLM.LinearModel,
x::Vector;
julia> y
1-element Vector{VariableRef}:
 moai_Affine[1]
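A minimal sketch for a linear model from GLM.jl, using random data as in the examples on this page:
julia> using GLM, JuMP, MathOptAI
julia> X, Y = rand(10, 2), rand(10);
julia> glm_model = GLM.lm(X, Y);
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> y, formulation = MathOptAI.add_predictor(model, glm_model, x);
julia> y
1-element Vector{VariableRef}:
 moai_Affine[1]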
MathOptAI.build_predictor — Method
MathOptAI.build_predictor(
predictor::GLM.GeneralizedLinearModel{
GLM.GlmResp{Vector{Float64},GLM.Bernoulli{Float64},GLM.LogitLink},
};
julia> predictor = MathOptAI.build_predictor(model)
Pipeline with layers:
* Affine(A, b) [input: 2, output: 1]
 * Sigmoid()
MathOptAI.build_predictor — Method
MathOptAI.build_predictor(predictor::GLM.LinearModel)
Convert a trained linear model from GLM.jl to an Affine layer.
Example
julia> using GLM, MathOptAI
julia> X, Y = rand(10, 2), rand(10);
julia> model = GLM.lm(X, Y);
julia> predictor = MathOptAI.build_predictor(model)
Affine(A, b) [input: 2, output: 1]
MathOptAI.add_predictor — Method
MathOptAI.add_predictor(
model::JuMP.AbstractModel,
predictor::Tuple{<:Lux.Chain,<:NamedTuple,<:NamedTuple},
x::Vector;
julia> y
1-element Vector{VariableRef}:
 moai_Affine[1]
MathOptAI.build_predictor — Method
MathOptAI.build_predictor(
predictor::Tuple{<:Lux.Chain,<:NamedTuple,<:NamedTuple};
config::Dict = Dict{Any,Any}(),
)
Convert a trained neural network from Lux.jl to a Pipeline.
Supported layers
Lux.Dense
Lux.Scale
Supported activation functions
Lux.relu
Lux.sigmoid
Lux.softplus
Lux.softmax
Lux.tanh
Keyword arguments
config: a dictionary that maps supported Lux activation functions to AbstractPredictors that control how the activation functions are reformulated. For example, Lux.sigmoid => MathOptAI.Sigmoid() or Lux.relu => MathOptAI.ReLUQuadratic().
Example
julia> using Lux, MathOptAI, Random
Pipeline with layers:
* Affine(A, b) [input: 1, output: 16]
* ReLUQuadratic()
 * Affine(A, b) [input: 16, output: 1]
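A minimal sketch for Lux.jl, where the predictor is the (chain, parameters, state) tuple produced by Lux.setup; the layer sizes and the config choice are illustrative:
julia> using Lux, MathOptAI, Random
julia> rng = Random.MersenneTwister();
julia> chain = Lux.Chain(Lux.Dense(1 => 16, Lux.relu), Lux.Dense(16 => 1));
julia> parameters, state = Lux.setup(rng, chain);
julia> predictor = MathOptAI.build_predictor(
           (chain, parameters, state);
           config = Dict(Lux.relu => MathOptAI.ReLUQuadratic()),
       )
Pipeline with layers:
 * Affine(A, b) [input: 1, output: 16]
 * ReLUQuadratic()
 * Affine(A, b) [input: 16, output: 1]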
MathOptAI.add_predictor — Method
MathOptAI.add_predictor(
model::JuMP.AbstractModel,
predictor::MathOptAI.PytorchModel,
x::Vector;
gray_box::Bool = false,
gray_box_hessian::Bool = false,
gray_box_device::String = "cpu",
)
Add a trained neural network from PyTorch via PythonCall.jl to model.
Supported layers
nn.Linear
nn.ReLU
nn.Sequential
nn.Sigmoid
nn.Softplus
nn.Tanh
Keyword arguments
config: a dictionary that maps Symbols to AbstractPredictors that control how the activation functions are reformulated. For example, :Sigmoid => MathOptAI.Sigmoid() or :ReLU => MathOptAI.ReLUQuadratic(). The supported Symbols are :ReLU, :Sigmoid, :SoftPlus, and :Tanh.
gray_box: if true, the neural network is added as a user-defined nonlinear operator, with gradients provided by torch.func.jacrev.
gray_box_hessian: if true, the gray box additionally computes the Hessian of the output using torch.func.hessian.
gray_box_device: device used to construct PyTorch tensors, e.g. "cuda" to run on an Nvidia GPU.
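A minimal sketch of the config keyword; the file name is the placeholder from the PytorchModel docstring, the chosen reformulations are illustrative, and PythonCall must be imported first:
julia> using JuMP, MathOptAI
julia> import PythonCall
julia> model = Model();
julia> @variable(model, x[1:2]);
julia> y, formulation = MathOptAI.add_predictor(
           model,
           MathOptAI.PytorchModel("model.pt"),
           x;
           config = Dict(:ReLU => MathOptAI.ReLUBigM(100.0), :Sigmoid => MathOptAI.Sigmoid()),
       );
Passing gray_box = true instead adds the network as a single user-defined nonlinear operator (see the keyword list above).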
MathOptAI.build_predictor — Method
MathOptAI.build_predictor(
predictor::MathOptAI.PytorchModel;
config::Dict = Dict{Any,Any}(),
gray_box::Bool = false,
gray_box_hessian::Bool = false,
gray_box_device::String = "cpu",
)
Convert a trained neural network from PyTorch via PythonCall.jl to a Pipeline.
Supported layers
nn.Linear
nn.ReLU
nn.Sequential
nn.Sigmoid
nn.Softplus
nn.Tanh
Keyword arguments
config: a dictionary that maps Symbols to AbstractPredictors that control how the activation functions are reformulated. For example, :Sigmoid => MathOptAI.Sigmoid() or :ReLU => MathOptAI.ReLUQuadratic(). The supported Symbols are :ReLU, :Sigmoid, :SoftPlus, and :Tanh.
gray_box: if true, the neural network is added as a user-defined nonlinear operator, with gradients provided by torch.func.jacrev.
gray_box_hessian: if true, the gray box additionally computes the Hessian of the output using torch.func.hessian.
gray_box_device: device used to construct PyTorch tensors, e.g. "cuda" to run on an Nvidia GPU.
MathOptAI.add_predictor — Method
MathOptAI.add_predictor(
model::JuMP.AbstractModel,
predictor::StatsModels.TableRegressionModel,
x::DataFrames.DataFrame;
moai_Affine[1]
moai_Affine[1]
moai_Affine[1]
 moai_Affine[1]