Hi @GTorlai,

I am trying to use your code together with ITensors.jl to compute the ground state of a 2D Heisenberg model and obtain random Pauli measurements from it. The code works fine when starting Julia normally without any arguments, including my function in the REPL, and then executing it. However, I noticed that Julia was not using the full potential of my CPU (only about 8 cores are used), so I looked into the `getsamples` function, where threads are being used. That's why I thought I could speed up the sampling process by starting Julia with more threads, i.e. `julia -t 15`. With that change, I was still only observing one CPU being utilized during the DMRG sweeps (but this is maybe another issue?). The sampling part of the code (example below) was using all 15 cores successfully. However, for some Hamiltonians I obtained the following error:
I then computed the normalization constants before and after the sampling process and printed them. The norm was always 1.000000, both for the normal MPS and the dense one, so this error does not make sense to me. Could you please help me find the bug?
Minimal working example
This is a minimal example to reproduce the bug and here are the files needed for that: https://cloud.ml.jku.at/s/FS6fj9oTpzGSfHx.
The MPS state was obtained during a simulation whose code I will post below. This small script reproduces the error above when starting Julia in multithreaded mode (with `-t 15` I see 22 sub-processes), and works fine when starting Julia with `-t 1` (though weirdly I still see 8 processes in htop).
```julia
# this is the file test_failed_simu.jl
using ITensors
using ITensors.HDF5
using LinearAlgebra
using PastaQ
using Random
using Printf
using JLD
using Debugger

f = h5open("./data/failed_simulation_5x5_id9_gs.jld", "r")
psi = read(f, "psi", MPS)
close(f)
# display(psi)
norm1 = norm(psi)
norm1d = norm(dense(psi))
@printf("norm before normalize %f, dense %f\n", norm1, norm1d)
Random.seed!(0)
other_vars = load("./data/failed_simulation_5x5_id9.jld")
N = 5 * 5
nshots = 1000
samples = getsamples(dense(psi), randombases(N, nshots), 1)
```
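As an aside on the process counts: they may mix two independent thread pools, Julia's own threads (set by `-t`) and the BLAS threads that OpenBLAS spawns by default (often 8), which would roughly account for both the 8 processes seen with `-t 1` and the ~22 seen with `-t 15`. A minimal sketch to inspect and pin these, using only the standard library:

```julia
using LinearAlgebra

# Julia worker threads (set by `julia -t N`) and BLAS threads are
# independent pools; htop shows roughly their sum, plus a few runtime threads.
println("Julia threads: ", Threads.nthreads())
println("BLAS threads:  ", BLAS.get_num_threads())

# Pinning BLAS to a single thread is often recommended when Julia-level
# threading (as in getsamples) is meant to do the parallelization, so the
# two pools do not oversubscribe the CPU.
BLAS.set_num_threads(1)
println("BLAS threads after pinning: ", BLAS.get_num_threads())
```

This would not explain the error itself, but it may clarify why DMRG appears single-threaded: at these bond dimensions DMRG is dominated by BLAS contractions, not Julia threads.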
Original code
The following is some adapted code which was given to me by Hsing-Yuan Huang, and which was probably also written by you? It generates the ground states of different random Hamiltonians and samples from them; the results are then stored. The problem occurs even though I played with different values of `minsweeps`, the cutoff `Λ`, and the energy tolerance `ϵ`.
```julia
using ITensors
using ITensors.HDF5
using LinearAlgebra
using PastaQ
using Random
# using Plots
using Printf
# using Test
# using StatsBase
using JLD
using ProgressBars
using Debugger
using CodeTracking

# TODO move to GPU
# https://github.com/ITensor/ITensors.jl/blob/74db39bab0a1bcd6a02dd744a38c74714ad6ff83/ITensorGPU/examples/dmrg.jl
# https://github.com/ITensor/ITensors.jl/tree/main/ITensorGPU

function main()
    n = 1
    Random.seed!(n * 1234)
    basepath = "data/"
    Lx = 5
    Ly = 5
    N = Lx * Ly
    nshots = 1000
    Jmin = 0.0
    Jmax = 2.0
    npoints = 100
    @printf("Lx=%i, Ly=%i ", Lx, Ly)
    Λ = 1e-8
    noise = 1e-9
    χ = [10, 10, 20, 20, 50, 100, 200, 200, 500, 1000, 1500]
    χ₀ = 10
    ϵ = 1e-4
    nsweeps = 500
    minsweeps = 10
    groundstate = nothing
    for k in ProgressBar(1:npoints)
        @printf("iteration %i", k)
        sites = siteinds("Qubit", N; conserve_qns=true)
        lattice = square_lattice(Lx, Ly; yperiodic=false, xperiodic=false)
        J = Jmax * rand(length(lattice))
        # println("couplings:")
        # display(J)
        # Define the Heisenberg spin Hamiltonian on this lattice
        # for (b, bond) in enumerate(lattice)
        #     print(bond)
        # end
        ampo = AutoMPO()
        for (b, bond) in enumerate(lattice)
            ampo .+= J[b], "X", bond.s1, "X", bond.s2
            ampo .+= J[b], "Y", bond.s1, "Y", bond.s2
            ampo .+= J[b], "Z", bond.s1, "Z", bond.s2
        end
        H = MPO(ampo, sites)
        st = [isodd(n) ? "1" : "0" for n = 1:N]
        ψ₀ = randomMPS(sites, st)
        sweeps = Sweeps(nsweeps)
        maxdim!(sweeps, χ...)
        cutoff!(sweeps, Λ)
        noise!(sweeps, noise)
        @printf("Running dmrg\n")
        observer = DMRGObserver(["Z"], sites, energy_tol=ϵ, minsweeps=minsweeps)
        E, ψ = dmrg(H, ψ₀, sweeps; observer=observer, outputlevel=1)
        ψ = LinearAlgebra.normalize(ψ)
        norm1 = norm(ψ)
        norm1d = norm(dense(ψ))
        @printf("norm after normalize %f, dense %f\n", norm1, norm1d)
        @printf("Measurements\n")
        SvN = entanglemententropy(ψ)
        norm2 = norm(ψ)
        norm2d = norm(dense(ψ))
        @printf("norm after entanglemententropy %f, dense %f\n", norm2, norm2d)
        # X = measure(ψ, "X")
        # Z = measure(ψ, "Z")
        # XX = measure(ψ, ("X", "X"))
        # YY = measure(ψ, ("Y", "Y"))
        # ZZ = measure(ψ, ("Z", "Z"))
        # this is the same as above, but in new version
        # X = expect(ψ, "X")
        # Y = expect(ψ, "Y")
        # Z = expect(ψ, "Z")
        XX = correlation_matrix(ψ, "X", "X")
        norm3 = norm(ψ)
        norm3d = norm(dense(ψ))
        @printf("norm after correlation_matrix XX %f, dense %f\n", norm3, norm3d)
        YY = correlation_matrix(ψ, "Y", "Y")
        norm4 = norm(ψ)
        norm4d = norm(dense(ψ))
        @printf("norm after correlation_matrix YY %f, dense %f\n", norm4, norm4d)
        ZZ = correlation_matrix(ψ, "Z", "Z")
        norm5 = norm(ψ)
        norm5d = norm(dense(ψ))
        @printf("norm after correlation_matrix ZZ %f, dense %f\n", norm5, norm5d)
        # ψ = LinearAlgebra.normalize(ψ)
        try
            @printf("Sampling\n")
            # nshots different meas. bases, each basis with one shot
            samples = getsamples(dense(ψ), randombases(N, nshots), 1)
            @bp
            path = basepath * "simulation_$(Lx)x$(Ly)_id$(npoints * (n - 1) + k).jld"
            JLD.jldopen(path, "w") do fout
                JLD.write(fout, "J", J)
                JLD.write(fout, "ZZ", ZZ)
                JLD.write(fout, "YY", YY)
                JLD.write(fout, "XX", XX)
                JLD.write(fout, "E", E)
                JLD.write(fout, "SvN", SvN)
                JLD.write(fout, "samples", samples)
            end
            fout = h5open(basepath * "simulation_$(Lx)x$(Ly)_id$(npoints * (n - 1) + k)_gs.jld", "w")
            write(fout, "psi", ψ)
            close(fout)
        catch e
            showerror(stdout, e)
            @printf("Sampling failed for sample %i\n", npoints * (n - 1) + k)
            # saving error states
            path = basepath * "failed_simulation_$(Lx)x$(Ly)_id$(npoints * (n - 1) + k).jld"
            JLD.jldopen(path, "w") do fout
                JLD.write(fout, "J", J)
                JLD.write(fout, "ZZ", ZZ)
                JLD.write(fout, "YY", YY)
                JLD.write(fout, "XX", XX)
                JLD.write(fout, "E", E)
                JLD.write(fout, "SvN", SvN)
            end
            fout = h5open(basepath * "failed_simulation_$(Lx)x$(Ly)_id$(npoints * (n - 1) + k)_gs.jld", "w")
            write(fout, "psi", ψ)
            close(fout)
        end
        println("")
    end
end

print(@code_string main())

if "" != PROGRAM_FILE && realpath(@__FILE__) == realpath(PROGRAM_FILE)
    main()
end
```
Thank you in advance for your time and for providing this useful package!
Hm, I reckon that this is not the issue, since `nshots` is 1 in my setting: I am trying to measure 1 shot for each of many different bases. So the parallelization over `nshots` in line 161 would only create a single thread?

I also tried changing `sample_ = getsamples!(M; kwargs...)` into `sample_ = getsamples!(copy(M); kwargs...)`, however I am still experiencing the bug :|
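One thing that may be worth ruling out after the `copy(M)` attempt: in ITensors, `copy` of an MPS duplicates the container but the tensors inside still share storage, so a `deepcopy` per iteration would be a stricter isolation test. The following is only a pseudocode sketch of that pattern, not PastaQ's actual source; the `getsamples!` call and the name `sample_one_basis` are assumptions based on the snippet above:

```julia
using Base.Threads

# Sketch: give each loop iteration a fully independent copy of the state,
# so no in-place operation (e.g. re-gauging during sampling) can race.
function sample_one_basis(M, basis)
    Mlocal = deepcopy(M)            # per-iteration deep copy, nothing shared
    return getsamples!(Mlocal)      # mutates only the local copy
end

results = Vector{Any}(undef, length(bases))
@threads for i in eachindex(bases)
    results[i] = sample_one_basis(M, bases[i])
end
```

If the bug disappears with `deepcopy` but not `copy`, that would point at shared tensor storage being mutated concurrently rather than at the normalization itself.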
PS. Package info: the packages ITensors and PastaQ are unmodified; they are just installed in development mode to make code search with Visual Studio Code more reliable.