
Support MOI.VectorNonlinearFunction #204

Closed
amontoison opened this issue Jan 23, 2025 · 7 comments · Fixed by #205

@amontoison (Member)

No description provided.

@odow (Contributor) commented Jan 23, 2025

Why do you want to support this?

@odow (Contributor) commented Jan 23, 2025

The only real reason is VectorNonlinearFunction-in-Complements.

Others, like VectorNonlinearFunction-in-Nonnegatives, can be bridged to ScalarNonlinearFunction-in-GreaterThan.
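
To make the distinction concrete, here is a minimal sketch (added for this write-up, not part of the original comment; the expressions are illustrative) of the two formulations in JuMP:

```julia
# Minimal sketch, assuming a recent JuMP release with MOI.VectorNonlinearFunction support.
using JuMP

model = Model()
@variable(model, x[1:2])

# One VectorNonlinearFunction-in-Nonnegatives constraint:
@constraint(model, [exp(x[1]) - 1, exp(x[2]) - 1] in MOI.Nonnegatives(2))

# The bridged form: two ScalarNonlinearFunction-in-GreaterThan constraints.
@constraint(model, exp.(x) .- 1 .>= 0)
```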

@amontoison (Member, Author) commented Jan 23, 2025

> Why do you want to support this?

A colleague wanted to verify the sparsity pattern of the Jacobian of the constraints / Hessian of the Lagrangian and got this error:

nlp = MathOptNLPModel(model):
ERROR: Function MathOptInterface.VectorNonlinearFunction is not supported.

I suppose that he used function tracing because he only defined the following functions:

    function obj_vecchia(w::AbstractVector, cache::VecchiaCache)
        t1 = -cache.M * sum(w[(cache.nnz_L+1):end])

        t2 = sum(
            sum(
                sum(
                    w[r] * cache.samples[k, cache.rows[r]]
                    for r in cache.colptr[j]:(cache.colptr[j+1] - 1)
                )^2
                for j in 1:cache.n
            )
            for k in 1:cache.M
        )
        return t1 + 0.5 * t2
    end

    function cons_vecchia(w::AbstractVector, cache::VecchiaCache)
        # return exp.(w[(1:cache.n).+cache.nnz_L]) .- w[cache.colptr[1:end-1]]
        return [exp(w[i]) - w[j] for (i, j) in zip((1:cache.n).+cache.nnz_L, cache.colptr[1:end-1])]
    end

    @constraint(model, cons_vecchia(w, cache) == 0)
    @objective(model, Min, obj_vecchia(w, cache))

I didn't plan to support VectorNonlinearFunction because I had never encountered a model with it before.
It seems quite uncommon, but I haven't tried the recent features of JuMP.
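
(Added note, not from the original comment.) A minimal sketch of what function tracing does here, with a toy stand-in for `cons_vecchia`: calling a plain Julia function on JuMP variables returns a vector of nonlinear expressions, and passing that whole vector to `==` builds a single vector-valued constraint.

```julia
# Hedged sketch; `f` is a toy stand-in, not the real cons_vecchia.
using JuMP

model = Model()
@variable(model, x[1:2])

f(v) = [exp(v[1]) - v[2], exp(v[2]) - v[1]]

g = f(x)      # tracing: evaluates f symbolically on the decision variables
typeof(g)     # Vector{NonlinearExpr}

@constraint(model, g == 0)   # one VectorNonlinearFunction-in-Zeros constraint
```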

@amontoison (Member, Author)

@CalebDerrickson
Can you share the complete JuMP model?
We would like to understand why a VectorNonlinearFunction appears in your model.

@odow (Contributor) commented Jan 23, 2025

Do `@constraint(model, cons_vecchia(w, cache) .== 0)`.

I guess the issue is that you're not using the bridges to declare what sets you support.
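
(Added sketch, not part of the original comment; `g` stands in for `cons_vecchia(w, cache)`.) The broadcast form is the one MathOptNLPModel already understands:

```julia
using JuMP

model = Model()
@variable(model, w[1:3])
g = [exp(w[i]) - w[i] for i in 1:3]   # stand-in for cons_vecchia(w, cache)

@constraint(model, g == 0)    # one VectorNonlinearFunction-in-Zeros constraint
@constraint(model, g .== 0)   # three ScalarNonlinearFunction-in-EqualTo constraints
```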

@CalebDerrickson commented Jan 23, 2025

Sure, this should be my full code block, @amontoison. I verified that changing
`@constraint(model, cons_vecchia(w, cache) == 0)`
to
`@constraint(model, cons_vecchia(w, cache) .== 0)`
fixed the issue. I kept the former in the code below.

```julia
# Packages assumed by this snippet; covariance2D, create_vecchia_cache, and
# VecchiaCache are user-defined helpers that are not shown here.
using JuMP, MadNLP, NLPModelsJuMP, NLPModels
using Distributions, SparseArrays, StaticArrays

function main()
    mesh_n = 3
    n = mesh_n^2

    Number_of_Samples = 100
    params = [5.0, 0.2, 2.25, 0.25]

    grid1d = range(0.0, 1.0, length=mesh_n)
    xyGrid = vec([SA[x[1], x[2]] for x in Iterators.product(grid1d, grid1d)])

    MatCov = covariance2D(xyGrid, params)
    mean_mu = zeros(n)
    foo = MvNormal(mean_mu, MatCov)
    # rand gives each sample as a column vector.
    samples = Matrix(transpose(rand(foo, Number_of_Samples)))

    model = Model(() -> MadNLP.Optimizer(print_level=MadNLP.ERROR, max_iter=100))

    cache = create_vecchia_cache(samples)

    @variable(model, w[1:(cache.nnz_L + cache.n)])

    # Initial L is the identity
    for i in cache.colptr
        set_start_value(w[i], 1.0)
    end

    @constraint(model, cons_vecchia(w, cache) == 0)
    @objective(model, Min, obj_vecchia(w, cache))

    nlp = MathOptNLPModel(model)
    Hx = sparse(hess(nlp, nlp.meta.x0))
    Jx = sparse(jac(nlp, nlp.meta.x0))

    println("Hessian")
    display(Hx)
    println("Jacobian")
    display(Jx)
end

function obj_vecchia(w::AbstractVector, cache::VecchiaCache)
    t1 = -cache.M * sum(w[(cache.nnz_L+1):end])

    # This looks stupid, but it's better than putting it on one line
    t2 = sum(
        sum(
            sum(
                w[r] * cache.samples[k, cache.rows[r]]
                for r in cache.colptr[j]:(cache.colptr[j+1] - 1)
            )^2
            for j in 1:cache.n
        )
        for k in 1:cache.M
    )
    return t1 + 0.5 * t2
end

function cons_vecchia(w::AbstractVector, cache::VecchiaCache)
    return exp.(w[(1:cache.n).+cache.nnz_L]) .- w[cache.colptr[1:end-1]]
end
```
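
As a side note (an added sketch, not part of the original comment), the sparsity patterns can also be inspected in coordinate form through the NLPModels API, without materializing dense matrices first; `nlp` is assumed to be the MathOptNLPModel built above.

```julia
# Hedged sketch using the NLPModels coordinate API.
using NLPModels, SparseArrays

x0 = nlp.meta.x0

jrows, jcols = jac_structure(nlp)      # sparsity pattern of the constraint Jacobian
jvals = jac_coord(nlp, x0)
Jx = sparse(jrows, jcols, jvals, nlp.meta.ncon, nlp.meta.nvar)

hrows, hcols = hess_structure(nlp)     # lower triangle of the Lagrangian Hessian pattern
hvals = hess_coord(nlp, x0)            # values at x0 (objective term only; pass multipliers for the full Lagrangian)
Hx = sparse(hrows, hcols, hvals, nlp.meta.nvar, nlp.meta.nvar)
```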

@amontoison (Member, Author)

Thanks @CalebDerrickson!
I propose to add support for VectorNonlinearFunction only if we need complementarity constraints one day.
The only potential user is @frapac.

Thanks for your help Oscar!
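
(Added for context, not part of the original comment.) A hedged sketch of the complementarity use case mentioned above, which JuMP lowers to VectorNonlinearFunction-in-Complements; the bounds and expressions are illustrative only:

```julia
using JuMP

model = Model()
@variable(model, 0 <= x[1:2] <= 10)

# Nonlinear mixed-complementarity constraint: exp.(x) .- 2 ⟂ x
@constraint(model, complements(exp.(x) .- 2, x))
```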
