diff --git a/docs/.documenter-siteinfo.json b/docs/.documenter-siteinfo.json index 3aa7290..1567257 100644 --- a/docs/.documenter-siteinfo.json +++ b/docs/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.7.3","generation_timestamp":"2024-11-26T10:56:04","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.7.3","generation_timestamp":"2024-11-26T15:10:52","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/docs/convergence/index.html b/docs/convergence/index.html index fab8766..36e5059 100644 --- a/docs/convergence/index.html +++ b/docs/convergence/index.html @@ -1,4 +1,4 @@ Convergence checks · MPSDynamics.jl

Convergence checks

In this section, we analyze the numerical accuracy of our simulations by outlining the necessary approximations and providing practical guidelines to ensure convergence of results.

Approximations

Chain length $N$

In numerical simulations, a truncation on the number of chain modes (and therefore the chain length) is introduced to handle a finite chain instead of a semi-infinite one. This truncation, denoted as $N$, corresponds to a sampling of the modes in the original environment.

The chain length $N$ is directly connected to the simulation time as follows: excitations injected into the chain propagate as a wavefront traveling along the chain. Perturbations to the initial state outside this wavefront are exponentially suppressed. Therefore, truncating the chain basis beyond the expanding wavefront ensures that the resulting sampling error is also exponentially small [woods_simulating_2015][DeVega_howto_2015].

To optimize the chain length for a given simulation time and set of chain coefficients, the following procedure can be used:

  1. Set the total simulation time T and compute the chain coefficients for an intentionally oversized chain N_huge (longer than needed):

T = 100
N_huge = 1000
cpars = chaincoeffs_flat(N_huge, αchain; ωc = 1.0)

  2. Use the built-in function findchainlength to determine the optimal chain length N_opt based on the propagation speed of the wavefront (given by the hopping coefficients $t_n$):

N_opt = findchainlength(T, cpars; eps=10^-4, verbose=false)

  2bis. A rule-of-thumb estimate N_est can also be obtained with findchainlength using universal asymptotic properties of chain-mapped environments:

N_est = findchainlength(T, ωc, β)
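
Putting these steps together, a minimal end-to-end sketch could look as follows. The value of αchain and the final re-computation of the coefficients at the optimal length are illustrative assumptions rather than package prescriptions:

using MPSDynamics

αchain = 0.1                                   # illustrative coupling strength
T = 100.0                                      # total simulation time
N_huge = 1000                                  # deliberately oversized chain
cpars_huge = chaincoeffs_flat(N_huge, αchain; ωc = 1.0)

N_opt = findchainlength(T, cpars_huge; eps=10^-4, verbose=false)

# re-compute the coefficients at the optimal length before running the simulation
cpars = chaincoeffs_flat(N_opt, αchain; ωc = 1.0)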

Local dimension $d$

Another critical truncation is imposed on the local dimension $d$ of each tensor in the MPS that undergoes dynamic evolution. The local dimension $d$ corresponds to the number of Fock states retained in the Hilbert space of each chain mode. Since harmonic oscillators are, in principle, infinite-dimensional systems, truncating their Hilbert space to a finite $d$ is necessary for numerical computations. The choice of $d$ determines the maximum number of excitations per site in the MPS and must be carefully tuned to capture the relevant physics while ensuring numerical convergence. For a bath that initially contains only a finite number of particles, i.e. any bath in practice, the error originating from the local Hilbert space truncation vanishes exponentially as $d$ increases[woods_simulating_2015].

A practical guideline for choosing $d$ can be derived by considering the physical states being simulated. For example, for coherent states, the occupation number follows a Poisson distribution. In such cases, the average occupation number is given by the mean $\langle n \rangle$, and the probability of observing a state with occupation number $n$ decreases exponentially for $n \gg \langle n \rangle$. Thus, the truncation $d$ should be chosen such that the cumulative probability of truncation error is negligible.
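
As an illustration of this guideline, the following plain-Julia helper (purely illustrative, not part of MPSDynamics) returns the smallest local dimension for which the Poissonian weight of the discarded Fock states falls below a chosen tolerance:

# Smallest local dimension d such that a Poisson distribution of mean nbar
# has weight at most tol in the discarded states n ≥ d (illustrative helper).
function poisson_cutoff(nbar::Real; tol::Real = 1e-6)
    p = exp(-nbar)      # P(n = 0)
    cum = p
    n = 0
    while 1 - cum > tol
        n += 1
        p *= nbar / n   # P(n) from P(n - 1)
        cum += p
    end
    return n + 1        # keep the Fock states |0⟩, …, |n⟩
end

d = poisson_cutoff(2.0; tol = 1e-6)   # e.g. for a coherent state with ⟨n⟩ = 2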

Bond dimensions and time evolution

Different time-evolution algorithms are implemented in MPSDynamics, and can be selected as options in the function runsim. Detailed descriptions of these algorithms are provided in the Theoretical Background section of this documentation. The key parameter for ensuring convergence in these methods is controlled via the convparams option, which accepts arrays of multiple parameter values to test until convergence is achieved.

Below are the main time-evolution algorithms and their corresponding convergence parameters:

  • One-Site Time-Dependent Variational Principle (1TDVP):
    • Selected using: method = :TDVP1.
    • The convergence parameter is the bond dimension of the MPS, which defines the manifold on which the time evolution is constrained. To ensure convergence, multiple bond dimensions can be tested: convparams = [bond1, bond2, bond3].
  • Two-Site Time-Dependent Variational Principle (2TDVP):
    • Selected using: method = :TDVP2.
    • The convergence parameter is the SVD singular value truncation threshold, which determines the precision of the truncation during simulations. Multiple truncation thresholds can be tested: convparams = [trunc1, trunc2, trunc3].
    • The bond dimensions at each chain-site and at each time-step can be saved by specifying savebonddims = true. The maximum allowed value of bond dimension can be chosen by setting the Dlim option.
  • Adaptive Time-Dependent Variational Principle (DTDVP):
    • Selected using: method = :DTDVP.
    • The convergence parameter is the target precision for the adaptive algorithm. Multiple precision values can be provided to test convergence: convparams = [prec1, prec2, prec3].
    • The bond dimensions at each chain-site and at each time-step can be saved by specifying savebonddims = true. The maximum allowed value of bond dimension can be chosen with the Dlim option.
    • In some cases, it might be better to embed the initial MPS in a manifold of higher bond dimension, which can be done with the function mpsembed!, to prevent the dynamics from getting stuck.
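
For instance, a convergence scan over DTDVP precisions could be set up as sketched below. The call pattern follows the runsim docstring quoted in the Methods section, runsim(dt, tmax, A, H; kwargs...); the MPS A, the MPO H and all numerical values are placeholders to be built with the package's model constructors:

dt = 0.5
tmax = 100.0
# A and H are assumed to be the initial MPS and the Hamiltonian MPO of the model at hand;
# if needed, A can first be embedded in a larger bond-dimension manifold with mpsembed! (see above)
dat = runsim(dt, tmax, A, H;
             method = :DTDVP,
             convparams = [1e-2, 1e-3, 1e-4],   # target precisions to compare
             savebonddims = true,               # store bond dimensions at each site and time-step
             Dlim = 50)                         # maximum allowed bond dimension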

Time-step

The time step used in simulations must be small enough to accurately capture the dynamics of the system. To ensure convergence, gradually reduce the time step until key observables (e.g., energy, population dynamics, or correlation functions) stabilize. It is often practical to test a range of time steps and assess their impact on results to determine an optimal balance between accuracy and computational cost.
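
In practice this can be done by re-running the same simulation with a decreasing sequence of time steps and comparing the stored observables. A hedged sketch (re-using the placeholder A and H from the example above; the observable keys follow the dat dictionary layout used in the examples):

results = Dict{Float64,Any}()
for dt in (0.02, 0.01, 0.005)
    results[dt] = runsim(dt, tmax, A, H; method = :TDVP1, convparams = [16])
end
# compare, e.g., the system observables stored in results[0.02], results[0.01] and
# results[0.005] on their respective results[dt]["data/times"] axes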

Common pitfalls

Hard cut-off in the spectral density function

The cutoff frequency $\omega_c$ in the spectral density function has to be selected so as to incorporate all of the relevant frequencies. Note that, for spectral densities belonging to the Szegö class[chin_exact_2010], the cutoff frequency $\omega_c$ also determines the asymptotic values to which the chain coefficients converge [woods_mappings_2014]. Therefore, reaching translational invariance (converged values) of the chain coefficients requires specifying a finite cutoff. To avoid ringing effects, it is good practice to choose a spectral density function that approaches zero before the cutoff frequency.
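
The following plain-Julia comparison (purely illustrative, with arbitrary Ohmic-like parameters) contrasts a hard cutoff with a spectral density that has already decayed to (numerically) zero when the cutoff is reached:

α, ωc = 0.1, 1.0
J_hard(ω) = ω ≤ ωc ? 2α * ω : 0.0                   # drops abruptly at ωc: risk of ringing
J_soft(ω) = ω ≤ ωc ? 2α * ω * exp(-5ω / ωc) : 0.0   # already ≈ 0 as ω approaches ωc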

References

  • [woods_simulating_2015] Woods, M. P.; Cramer, M.; Plenio, M. B. Simulating Bosonic Baths with Error Bars. Phys. Rev. Lett. 2015, 115 (13), 130401. https://doi.org/10.1103/PhysRevLett.115.130401.
  • [DeVega_howto_2015] De Vega, I.; Schollwöck, U.; Wolf, F. A. How to Discretize a Quantum Bath for Real-Time Evolution. Phys. Rev. B 2015, 92 (15), 155126. https://doi.org/10.1103/PhysRevB.92.155126.
  • [woods_mappings_2014] Woods, M. P. et al. Mappings of Open Quantum Systems onto Chain Representations and Markovian Embeddings. J. Math. Phys. 2014, 55 (3), 032101. https://doi.org/10.1063/1.4866769.
  • [chin_exact_2010] Chin, A. W. et al. Exact Mapping between System-Reservoir Quantum Models and Semi-Infinite Discrete Chains Using Orthogonal Polynomials. J. Math. Phys. 2010, 51 (9), 092109. https://doi.org/10.1063/1.3490188. arXiv:1006.4507.
diff --git a/docs/dev/index.html b/docs/dev/index.html index 6e4e714..324b064 100644 --- a/docs/dev/index.html +++ b/docs/dev/index.html @@ -1,2 +1,2 @@ -Developers · MPSDynamics.jl

Developers

Simulation Workflow

Flowchart of the simulation workflow

How to Contribute

Contributions are welcome! Don't hesitate to contact us if you

  • found a bug;
  • have a suggestion on how to improve the code and/or documentation;
  • would like to get involved in writing code and/or documentation.

You can contact us by raising an issue on GitHub, or by writing to one of the developers.

diff --git a/docs/examples/anderson-model/index.html b/docs/examples/anderson-model/index.html index 8a78f57..8007cfd 100644 --- a/docs/examples/anderson-model/index.html +++ b/docs/examples/anderson-model/index.html @@ -266,4 +266,4 @@ end # Display the plots -plot(p2, p3, p4, p5, p1, layout = (3, 2), size = (1400, 1200))

Output of this program

________________

Bibliography

diff --git a/docs/examples/bath-observables/index.html b/docs/examples/bath-observables/index.html index e2e7c05..17460be 100644 --- a/docs/examples/bath-observables/index.html +++ b/docs/examples/bath-observables/index.html @@ -104,4 +104,4 @@ xlabel=L"\omega", ylabel=L"\langle n^b_\omega \rangle", title="Mode occupation in the physical bath") -plot(p1, p2, p3, p4, layout = (2, 2), size = (1400, 1200))

Output of this program

___________________

Bibliography

diff --git a/docs/examples/protontransfer/index.html b/docs/examples/protontransfer/index.html index d623e19..9faa808 100644 --- a/docs/examples/protontransfer/index.html +++ b/docs/examples/protontransfer/index.html @@ -232,4 +232,4 @@ print("\n k = ", k) (plot(heatmap(xlist_rho, xlist_rho, abs.(ρx[:,:,t]), c=:Blues ), title="Time = $k (arb. units)", xlabel=L"$x$ (arb. units)",ylabel=L"$x$ (arb. units)", tickfontsize=(12),colorbar_title = L"||\rho_{RC}(x,x)||", legend=:none, xlims=(-0.5, 0.5), ylims=(-0.5,0.5), clims=(0,0.18),aspect_ratio=:equal)) end -display(gif(anim, "gif_reducedrho.gif", fps = 2.5))

________________

Bibliography

diff --git a/docs/examples/puredephasing/index.html b/docs/examples/puredephasing/index.html index 1ab4897..62a9eda 100644 --- a/docs/examples/puredephasing/index.html +++ b/docs/examples/puredephasing/index.html @@ -91,4 +91,4 @@ plot(time_analytical, t->Decoherence_ohmic(t), label="Analytics", title=L"Pure Dephasing, Ohmic $s=%$s$, $\beta = %$β ~\mathrm{K}$", linecolor=:black, xlabel="Time (arb. units)", ylabel=L"Coherence $|\rho_{12}(t)|$", linewidth=4, titlefontsize=16, legend=:best, legendfontsize=16, xguidefontsize=16, yguidefontsize=16, tickfontsize=10) -plot!(dat["data/times"], ρ12, lw=4, ls=:dash, label="Numerics")

Bibliography

diff --git a/docs/examples/sbm/index.html b/docs/examples/sbm/index.html index 35f3d85..5e5f780 100644 --- a/docs/examples/sbm/index.html +++ b/docs/examples/sbm/index.html @@ -78,4 +78,4 @@ method == :DTDVP && heatmap(dat["data/times"], collect(0:N+1), dat["data/bonddims"], xlabel=L"t",ylabel="bond index") -heatmap(dat["data/times"], collect(1:N), abs.(dat["data/SXdisp"][1,:,:]), xlabel=L"t",ylabel="chain mode") +heatmap(dat["data/times"], collect(1:N), abs.(dat["data/SXdisp"][1,:,:]), xlabel=L"t",ylabel="chain mode") diff --git a/docs/examples/timedep/index.html b/docs/examples/timedep/index.html index 92ff518..9500394 100644 --- a/docs/examples/timedep/index.html +++ b/docs/examples/timedep/index.html @@ -119,4 +119,4 @@ omeg = MPSDynamics.eigenchain(cpars, nummodes=N).values plot(omeg, bath_occup[:, :, end], lw=4, xlabel=L"\omega", ylabel=L"\langle n^b_\omega \rangle", -title="Mode occupation in the extended bath at final time") +title="Mode occupation in the extended bath at final time") diff --git a/docs/index.html b/docs/index.html index 4875783..77ed7eb 100644 --- a/docs/index.html +++ b/docs/index.html @@ -24,4 +24,4 @@ year = {2024}, pages = {084116}, } - + diff --git a/docs/methods/index.html b/docs/methods/index.html index 06cab75..23f9e8e 100644 --- a/docs/methods/index.html +++ b/docs/methods/index.html @@ -45,4 +45,4 @@ unid=randstring(5), name=nothing, kwargs... - )

Propagate the MPS A with the MPO H up to time tmax in time steps of dt. The final MPS is returned in A, and the measurement data is returned in dat.

Arguments

Advanced

diff --git a/docs/nutshell/index.html b/docs/nutshell/index.html index 2e6f56b..4667cd0 100644 --- a/docs/nutshell/index.html +++ b/docs/nutshell/index.html @@ -1,2 +1,2 @@ -In a nutshell: MPSDynamics.jl for open quantum systems · MPSDynamics.jl

In a nutshell: MPSDynamics.jl for open quantum systems

MPSDynamics.jl was originally developed to perform tensor network simulations of open quantum systems by solving the Schrödinger equation for the closed {System + Environment}. The key idea behind the simulations is to reformulate the open quantum system Hamiltonian as a one-dimensional many-body problem with nearest-neighbor interactions. The dynamics are then efficiently simulated with tensor network methods.

Sketch of the different ingredients behind MPSDynamics

The starting point is a system linearly coupled to a bosonic environment, where the coupling between system and environment is specified by the spectral density function $J(\omega)$: the prototypical example being The Spin-Boson Model. At finite temperature, the initial state of the environment will be a thermal state (mixed), requiring density matrix formalism and thus making the problem more complex. To circumvent this issue, we consider instead an extended environment, with coupling to the system characterized by a thermalized spectral density function $J(\omega,\beta)$. The two environments induce the same reduced dynamics on the system, enabling us to deal with pure states only[1]. The vacuum state (pure) of the extended environment corresponds to the thermal state of the original environment.

To exploit the computational efficiency of the matrix product states (MPS) description of pure states, we apply the so-called Chain-Mapping of bosonic environments, a unitary transformation dependent on $J(\omega,\beta)$, to the Spin-Boson Hamiltonian, mapping it onto a chain Hamiltonian with nearest-neighbor interactions. This enables efficient representation of the full {System + Environment} state and its time evolution using the Time-Dependent Variational Principle.

To go further, you will find in this documentation:

If this package was useful in your work, do not forget Citation. And if you would like to get involved in its development, you can find out How to Contribute.

diff --git a/docs/theory/index.html b/docs/theory/index.html index d095646..c349c85 100644 --- a/docs/theory/index.html +++ b/docs/theory/index.html @@ -4,4 +4,4 @@ \end{aligned}\]

where $\hat a_\omega$ ($\hat a^\dagger_\omega$) is a bosonic annihilation (creation) operator for a normal mode of the environment of energy $\omega$, $\hat{A}_S$ is a system operator, and $J(\omega) = \sum_k |g_k|^2\delta(\omega - \omega_k)$ is the bath spectral density (SD), defined with the microscopic system-environment coupling strength $g_k$. The SD quantifies the coupling strengths of the different normal modes of the environment with the system. Any SD that is not flat corresponds to a non-Markovian environment.

Zero Temperature

Let us consider the Hamiltonian presented in Eq.(1). To investigate its rich phenomenology, using the powerful tools provided by tensor network techniques, it is convenient to map the system and its environment from a star-like to a chain-like configuration, with local interactions. To do so, we introduce a unitary transformation of the continuous normal modes $\hat{a}_\omega$ to an infinite discrete set of interacting modes $\hat{b}_n$[chin_exact_2010].

\[ \hat{a}_\omega = \sum_{n=0}^{+\infty} U_n(\omega)\hat{b}_n = \sum_{n=0}^{+\infty} \sqrt{J(\omega)}P_n(\omega)\hat{b}_n\ ,\]

where $P_n(\omega)$ are orthonormal polynomials such that

\[ \int_{0}^{+\infty}P_n(\omega)P_m(\omega)J(\omega)\mathrm{d}\omega = \delta_{n,m}\ ;\]

and the inverse transformation is

\[ \hat{b}_n = \int_0^{+\infty} U^*_n(\omega)\hat{a}_\omega\mathrm{d}\omega\ .\]

Note that the orthonormality of the polynomials ensures the unitarity of the transformation defined in Eq.(2). The mapping from a continuous set of modes to a (still infinite) discrete set might seem counter-intuitive; however, it is a direct consequence of the separability of the underlying Hilbert space.

Under this transformation, the Hamiltonian in Eq.(1) becomes

\[ \hat{H}= \hat{H}_S + \sum_{n=0}^{+\infty}\left[\varepsilon_n\hat{b}_n^\dagger\hat{b}_n + t_n(\hat{b}_{n+1}^\dagger\hat{b}_n + \mathrm{h.c.})\right] + c_0 \hat{A}_S(\hat{b}_0 + \hat{b}_0^\dagger)\ .\]

Hence, this mapping transforms the normal bath Hamiltonian into a tight-binding Hamiltonian with on-site energies $\varepsilon_n$ and hopping energies $t_n$. Another important consequence of this mapping is that now the system only interacts with the first mode $n = 0$ of the chain-mapped environment. The chain coefficients $\varepsilon_n$, $t_n$, and the coupling $c_0$ depend solely on the SD.

This makes chain mapping a tool of choice for describing systems coupled to environments with highly structured SDs (e.g. experimentally measured or calculated ab initio)[chin_role_2013][alvertis_nonequilibrium_2019][dunnett_influence_2021][caycedosoler_exact_2022]. In this new representation, the Hamiltonian in Eq.(5) naturally has a 1D chain topology, which makes it well suited to a Matrix Product Operator (MPO) representation, and the joint {System + Environment} wave-function well suited to a Matrix Product State (MPS) representation [orus_practical_2014][paeckel_timeevolution_2019].

The orthogonal polynomial-based chain mapping and the subsequent representation of the joint wave-function as an MPS (and of the operators as MPOs) are the building blocks of the Time-Evolving Density operator with Orthogonal Polynomials Algorithm (TEDOPA), one of the state-of-the-art numerically exact methods for simulating the dynamics of open quantum systems, especially in the non-Markovian, non-perturbative regimes, both at zero and finite temperature [prior_efficient_2010][woods_simulating_2015][tamascelli_efficient_2019][dunnett_simulating_2021][lacroix_unveiling_2021].

Finite Temperature with T-TEDOPA

Assuming a unitary evolution for both the system and environment, the system's dynamics can be isolated by tracing out the environmental degrees of freedom. The density operator for the system at time $t$ is described as

\[\hat{\rho}_S(t) = \mathrm{Tr}_E\left\{\hat{U}(t) \hat{\rho}_S(0) \otimes \hat{\rho}_E(0) \hat{U}^\dagger(t)\right\}.\]

Initially, the system state, $\hat{\rho}_S(0)$, can be pure or mixed, and the environment is in a thermal state defined by the inverse temperature $\beta = (k_B T)^{-1}$. This state is represented by a product of Gaussian states

\[\hat{\rho}_E(0) = \bigotimes_\omega \frac{e^{-\beta \omega \hat{a}_\omega^\dagger \hat{a}_\omega}}{Z_\omega(\beta)} .\]

The system's evolution is dictated by the environment's two-time correlation function $S(t)$, which in turn is determined by the spectral density function $J$ and the inverse temperature $\beta$ via the Bose-Einstein distribution $n_\omega(\beta)$

\[S(t) = \int_0^\infty d\omega J(\omega)\left[e^{-i\omega t}(1 + n_\omega(\beta)) + e^{i\omega t} n_\omega(\beta)\right] .\]

To simulate finite temperature effects using a zero-temperature model with the T-TEDOPA method [tamascelli_efficient_2019], we extend the spectral density function to cover both positive and negative frequencies, allowing us to use a pure-state description for the environment. This is achieved by defining a new spectral density function $J(\omega, \beta)$ that incorporates the thermal Boltzmann factors and has support on the entire real axis

\[J(\omega, \beta) = \frac{\text{sign}(\omega)J(\left|\omega\right|)}{2} \Big(1 + \coth\Big(\frac{\beta \omega}{2}\Big)\Big).\]
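
For concreteness, a direct plain-Julia transcription of this definition (with an illustrative Ohmic $J(\omega)$ as input; both functions are placeholders, not package code) reads:

J_ohmic(ω; α = 0.1, ωc = 1.0) = 2α * ω * exp(-ω / ωc)                        # illustrative zero-temperature SD
J_beta(ω, β; J = J_ohmic) = sign(ω) * J(abs(ω)) / 2 * (1 + coth(β * ω / 2))  # thermalized SD on the whole real axis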

This modified bath has the same correlation function $S(t)$ and thus allows us to maintain a pure state description of the environment, represented as a vacuum state, and avoid the computational complexities of density matrices

\[\left|\text{vac}\right\rangle = \bigotimes_\omega \left|0_\omega\right\rangle .\]

The Hamiltonian of the system interacting with this extended bath now includes temperature-dependent interactions:

\[\hat{H} = \hat{H}_S + \int_{-\infty}^{+\infty} \omega \hat{a}_\omega^\dagger \hat{a}_\omega \mathrm{d}\omega + \hat{A}_S \otimes \int_{-\infty}^{+\infty} \sqrt{J(\omega,\beta)}\left(\hat{a}_\omega^\dagger+\hat{a}_\omega\right) \mathrm{d}\omega,\]

This method simplifies the simulation of finite temperature effects by treating them within an effective zero-temperature framework, thereby keeping the computational advantages of using pure states. In conclusion: the dynamics of the system resulting from the interaction with the original bath, starting in a thermal state at finite temperature, is exactly the same as the one resulting from the interaction with the extended environment, starting in the vacuum state at zero temperature. Once the chain coefficients have been computed at a given inverse temperature $\beta$, the time evolution of the vacuum state interacting with the extended environment can be efficiently simulated using MPS time-evolution methods.

Finite temperature with the thermofield transformation

Finite temperature initial states of the environment are challenging: in the previous section we have seen that a very efficient solution is to reabsorb the effect of the temperature into the spectral density function that characterizes the environment. De Vega and Bañuls present[devega_thermo_2015] another approach based on the thermofield transformation. The idea is to double the environmental degrees of freedom and then apply a Bogoliubov transformation: the real environment in a thermal state is transformed into two virtual environments in the vacuum state, defined as the thermofield vacuum. For any operator of the real environment, expectation values in the thermal state are equivalent to those calculated in the thermofield vacuum. Behind the thermofield approach lies the concept of purification: the initial mixed state of the environment can be represented as the partial trace of a pure state (the thermofield vacuum) defined on the larger Hilbert space of the modes of the two virtual environments at zero temperature. The interaction of the system with each of the two environments is then mapped onto two separate chains, using the TEDOPA chain mapping: a unitary transformation applied to the bath modes defines new environmental modes and maps the environment from a star-like to a one-dimensional chain-like configuration, well suited to the application of tensor network techniques.

In the thermofield approach, the first step is to introduce the auxiliary environment $E^{\text{aux}}$ of non-interacting bosonic modes of negative frequencies:

\[ \hat H^\text{aux} = \hat H - \sum_k \omega_k \hat c_k^\dagger \hat c_k,\]

where the Hamiltonian $\hat H$ is the bosonic Hamiltonian defined in Eq.(1). The two environments, of positive and negative frequencies, are assumed to be in a thermal state at inverse temperature $\beta$; the second step is to apply a thermal Bogoliubov transformation to change the basis. The applied transformation produces two-mode squeezed states:

\[ \hat a_{1k}=e^{-iG} \hat b_k e^{iG}= \cosh(\theta_k) \hat b_k -\sinh(\theta_k) \hat c_k^\dagger \\ \hat a_{2k}=e^{-iG} \hat c_k e^{iG}= \cosh(\theta_k) \hat c_k -\sinh(\theta_k) \hat b_k^\dagger,\]

where the exponent of the squeeze operator is $G = i \sum_k \theta_k(\hat b_k^\dagger \hat c_k^\dagger-\hat c_k \hat b_k)$, and $\theta_k$ is dependent on the temperature as in the following relations, where the number of excitations in the $k$-th mode is $n_k = 1/(e^{\beta \omega_k}-1)$:

\[ \cosh(\theta_k) = \sqrt{1+n_k} = \sqrt{\frac{1}{1-e^{-\beta \omega_k}}} \\ \sinh(\theta_k) =\quad\sqrt{n_k}\quad= \sqrt{\frac{1}{e^{\beta \omega_k}-1}}.\]

The Bogoliubov transformation defines a new squeezed vacuum state, which we write in terms of the vacuum state $| \Omega_0 \rangle$ of the operators $\hat b_k$, $\hat c_k$:

\[ |\Omega\rangle = e^{iG} |\Omega_0\rangle, \quad \text{such that: }\quad \hat a_{1k} |\Omega\rangle = 0, \hat a_{2k} |\Omega\rangle = 0.\]

From the vacuum state, we can obtain the thermal state of the original environment:

\[ \hat \rho_E = \text{Tr}_{\text{aux}}\{ |\Omega\rangle\langle\Omega| \},\]

and it can now be used as a pure initial state for both environments. Moreover, the expectation value on $| \Omega \rangle$ of the number of physical modes $\hat n_k$ does not vanish:

\[ n_k= \langle\Omega| \hat b_k^\dagger \hat b_k |\Omega\rangle = \sinh^2(\theta_k).\]

Therefore, solving the dynamics given by the original Hamiltonian $\hat H$, starting from the initial condition $\hat \rho_S(0) \otimes \hat \rho_E(\beta)$, is equivalent to solving the dynamics given by the following Hamiltonian:

\[ \hat H = \hat H_S + \hat H_E + \hat H_I = \hat H_S + \overbrace{\sum_k \omega_k \big(\hat a_{1k}^\dagger \hat a_{1k} - \hat a_{2k}^\dagger \hat a_{2k} \big)}^{\hat H_E} + \overbrace{\hat L_S \otimes \sum_k g_{1k}(\hat a_{1k}^\dagger + \hat a_{1k})+\hat L_S \otimes \sum_k g_{2k}(\hat a_{2k}^\dagger + \hat a_{2k})}^{\hat H_I},\]

where $\hat L_S = \hat L_S^\dagger$, considering $\hat \rho_S(0) \otimes |\Omega \rangle \langle \Omega |$ as the initial state of system and environment. It is this Hamiltonian that is mapped on two chains with TEDOPA, to be able to perform the time evolution using tensor network techniques.

Computation of the chain coefficients

A useful property of the orthonormal polynomials is that they obey a recurrence relation

\[ P_n(\omega) = (C_{n-1}\omega - A_{n-1})P_{n-1}(\omega) + B_{n-1}P_{n-2}(\omega)\ ,\]

where $A_n$ is related to the first moment of $P_n$, and $B_n$ and $C_n$ to the norms of $P_n$ and $P_{n-1}$[appel_mathematics_2007]. This recurrence relation can be used to construct the polynomials with the conditions that $P_0(\omega) = ||p_0||^{-1} = \left(\int_{\mathbb{R}^{+}} J(\omega)\mathrm{d}\omega \right)^{-\frac{1}{2}}$ and $P_{-1}(\omega) = 0$, with $||\bullet||$ the norm of $\bullet$ with respect to the measure $J(\omega)$, and $P_n(\omega) = p_n(\omega)||p_n||^{-1}$, where the polynomials $\{p_n\}_{n\in\mathbb{N}}$ are the so-called monic polynomials, in which the coefficient $a_n$ in front of $\omega^{n}$ is equal to 1.

The energy of the chain mode $n$ is given by $\varepsilon_n = A_n C_n^{-1}$ and $t_n=C_n^{-1}$ is the coupling between mode $n$ and $n+1$[chin_exact_2010].

The system couples only to the first mode with the coupling strength $c_0 = ||p_0||$.

For some weight functions/SDs, the chain coefficients are known analytically; for others, they can be computed numerically with the built-in routines, inspired by Gautschi's algorithm, or with the PolyChaos.jl package.
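
As an illustration of how such coefficients can be obtained numerically, the following plain-Julia sketch implements a naive discretized Stieltjes procedure built on the (equivalent) monic three-term recurrence. It is purely illustrative, uses a simple Riemann-sum quadrature, and can become numerically unstable for long chains, which is why the dedicated routines are preferable in practice:

# Naive discretized Stieltjes procedure (illustrative only): chain coefficients
# εₙ, tₙ and c₀ for a weight J(ω) sampled on a uniform grid ωs.
function stieltjes_chaincoeffs(J, ωs::AbstractRange, N::Int)
    w = J.(ωs) .* step(ωs)            # quadrature weights (Riemann sum)
    pprev = zeros(length(ωs))         # monic p₋₁(ω) = 0
    pcur  = ones(length(ωs))          # monic p₀(ω) = 1
    nprev = 1.0                       # ⟨p₋₁, p₋₁⟩ (placeholder, unused at n = 0)
    ncur  = sum(w)                    # ⟨p₀, p₀⟩ = ∫ J(ω) dω
    ε, t, c0 = zeros(N), zeros(N - 1), sqrt(sum(w))
    for n in 0:N-1
        a = sum(w .* ωs .* pcur.^2) / ncur        # εₙ
        ε[n+1] = a
        b = n == 0 ? 0.0 : ncur / nprev
        pnext = (ωs .- a) .* pcur .- b .* pprev   # monic recurrence
        pprev, pcur = pcur, pnext
        nprev, ncur = ncur, sum(w .* pcur.^2)
        n < N - 1 && (t[n+1] = sqrt(ncur / nprev))  # tₙ = √(bₙ₊₁)
    end
    return ε, t, c0
end

# Example: flat spectral density on [0, 1] (arbitrary normalisation)
ε, t, c0 = stieltjes_chaincoeffs(ω -> 0.1, range(0.0, 1.0; length = 2000), 20)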

Tensor Networks

A multipartite quantum state $|\psi\rangle$, e.g. an $N$-site system where each site can be in a state $|\phi_i\rangle$ belonging to a $d$-dimensional Hilbert space, can be written as follows

\[ |\psi\rangle = \sum_{\{i_k\}}c_{i_1\ldots i_N}|\phi_{i_1}\rangle\otimes\ldots\otimes|\phi_{i_N}\rangle\ ,\]

where the complex numbers $c_{i_1\ldots i_N}$ are the amplitudes of each state $|\phi_{i_1}\rangle\otimes\ldots\otimes|\phi_{i_N}\rangle$ whose superpositions form in full generality the state $|\psi\rangle$. Thus the state $|\psi\rangle$ can be completely represented by a rank-$N$ tensor $c$ that is the collection of all possible amplitudes $c_{i_1\ldots i_N}$. Here by the rank of a tensor, we simply mean the number of indices it has.

MPS

The tensor $c$ of a quantum state $|\psi\rangle$ corresponding to a one-dimensional system can be decomposed into a product of $N$ smaller rank-3 tensors $T_{k}$ (except for the first and last sites, where the tensors have rank 2)

\[ c_{i_1\ldots i_N} = \sum_{\{\alpha\}} T^{\alpha_1}_{i_1}T^{\alpha_1\alpha_2\ }_{i_2}T^{\alpha_2\alpha_3\ }_{i_3}\ldots T^{\alpha_{N-1}}_{i_N} \ .\]

In this form, the local tensor $T_k$ contains the information on the quantum state on site $k$ and its relation (especially the entanglement) with the neighbouring sites.

The decomposition of the tensor of the amplitudes of a quantum state into a product of smaller rank tensors is called a Matrix Product State decomposition.

The contracted indices $\alpha_k$ between the tensors are called virtual indices and carry information about the correlations between bi-partitions of the state at bond $k$. The number of different values a virtual index can take is called the bond dimension and is denoted $D$. The free indices $i_k$ associated with local quantum states are called physical indices. Thus, they can take $d$ values (with $d$ the dimension of the local Hilbert space).
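
A minimal sketch of such a decomposition in plain Julia is given below. It is illustrative only: the index ordering (left, physical, right) is a convention of this sketch and not necessarily that of MPSDynamics, and no truncation is performed.

using LinearAlgebra

# Exact (untruncated) MPS decomposition of a state vector c for N sites of local
# dimension d, by successive reshapes and SVDs. Index order: (left, physical, right).
function to_mps(c::AbstractVector, d::Int, N::Int)
    @assert length(c) == d^N
    tensors = Vector{Array{ComplexF64,3}}(undef, N)
    M = reshape(ComplexF64.(c), d, d^(N - 1))
    Dl = 1
    for k in 1:N-1
        F = svd(M)
        Dr = length(F.S)
        tensors[k] = reshape(F.U, Dl, d, Dr)
        M = Diagonal(F.S) * F.Vt                 # carries the rest of the state
        Dl = Dr
        k < N - 1 && (M = reshape(M, Dl * d, :))
    end
    tensors[N] = reshape(M, Dl, d, 1)
    return tensors
end

mps = to_mps(randn(2^6), 2, 6)    # a random 6-qubit state as an example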

Any state in the Hilbert space of a one-dimensional many-body system can in principle be represented by an MPS by choosing a sufficiently large value for the bond dimension $D$ [Orus]. On top of this intellectually satisfying property of MPSs being a dense set of states for a 1d-system, they can also be used as practical Ansätze for many-body quantum states by setting a maximal allowed value $\chi$ for the bond dimension $D$. In doing so, we restrict ourselves to a corner of the total Hilbert space. The rationale behind this Ansatz is the following: if the initial quantum state of a many-body system has a low bond dimension (typically if the initial state is a product state with $D = 1$), then in a finite time it will only be able to explore a region of the Hilbert space that is not too far away from its starting point. Thus, the bond dimension will not have the time to diverge exponentially [poulin_quantum_2011]. However, depending on the physical system at hand, this sub-manifold of the Hilbert space could still be "too large".

There is an additional reason why MPSs are good Ansätze for 1d physical systems. Most many-body Hamiltonians we (physicists) are interested in are local, meaning that the interactions they describe involve objects that are "neighbours". For such Hamiltonians, the ground states (outside of potential critical phases) follow the so-called area law for the entanglement entropy [srednicki_entropy_1993][vidal_entanglement_2003][wolf_area_2008]. This law states that the entanglement entropy $S_{vN}$ of a bi-partition of the system is proportional, not to the volume of the partition as one might expect, but to the hyper-surface of the partition's boundary; hence the name "area law". For a 3d system this corresponds to an actual surface area $A$, $S_{vN} \sim A$; for a 2d system it corresponds to the length $L$ of the partition's boundary, $S_{vN} \sim L$; and in 1d the boundary reduces to a point, so the entropy is independent of the size of the system, $S_{vN} \sim \text{constant}$. MPSs are states that satisfy this area law.

An application of the Singular Value Decomposition (SVD) is to create efficient approximations of quantum states to perform computations. The main idea is to reduce the content of the MPS to keep only the parts that contain the physics of interest. One method to realise this approximation is to perform an SVD on each of the tensors of the MPS after each time step of the state's time evolution and to trim the smallest singular values so as to decrease the bond dimension of the MPS down to a chosen maximal value $\chi$. The corresponding columns and rows of the unitary matrices $U$ and $V^\dagger$ are also removed. Then, the trimmed matrices $\tilde{U}$, $\tilde{S}$ and $\tilde{V}^\dagger$ are contracted back to give an approximated tensor $T$ with a smaller bond dimension. Another way to apply the restricted-rank approximation is to restrict oneself to working in a manifold of fixed bond dimension $D$ and to use methods that can enforce this constraint.
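
A minimal plain-Julia sketch of this bond truncation (illustrative; θ stands for a bond matrix, e.g. a two-site tensor already reshaped into matrix form) is:

using LinearAlgebra

# Keep at most χ singular values of a bond matrix θ and return the trimmed factors;
# contracting them back gives the truncated approximation of θ.
function truncate_bond(θ::AbstractMatrix, χ::Int)
    F = svd(θ)
    k = min(χ, length(F.S))
    return F.U[:, 1:k], Diagonal(F.S[1:k]), F.Vt[1:k, :]
end

U1, S1, V1t = truncate_bond(randn(8, 8), 4)
θ_approx = U1 * S1 * V1t        # rank-4 approximation of the original bond matrix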

MPO

In order to compute expectation values of observables or apply unitary transformations to a quantum state, we need a TN representation of operators. In the same fashion as a one-dimensional quantum state can be represented as an MPS, operators acting on those states can be represented as Matrix Product Operators (MPOs). For an operator $\hat{O}$, its MPO can be defined as follows

\[ \hat{O} = \sum_{\{i_k\}\{i_k^{'}\} \{w\}} W^{i_1\ i^{'}_1}_{1\ w_0w_1}\ldots W^{i_N\ i^{'}_N}_{N\ w_{N-1}w_N} |\phi_{i_1^{'}}\ldots \phi_{i_N^{'}}\rangle\langle\phi_{i_1}\ldots \phi_{i_N}| \]

The contracted indices between the tensors are called virtual indices. The free indices are called physical indices and correspond to the different input and output local quantum states. They can take $d$ values (with $d$ the dimension of the local Hilbert space).

TTN

A natural extension of the MPS is the (loop-free) tree tensor network (TTN). A TTN is a generalisation of the MPS wherein each site, instead of being connected to only one other site to its right, may be connected to an arbitrary number of child sites. Provided the tree does not contain any loops, everything that one can do to an MPS/MPO can be extended straightforwardly to TTN states and TTN operators. The generalisation to trees introduces no new conceptual complexity (only implementational complexity). The sites of a TTN are usually referred to as nodes. For our purposes, every node of a TTN state and operator has one parent leg and any number (including zero) of child legs. The first node is known as the head-node and has a dummy parent leg of dimension 1.

Time-Dependent Variational Principle

The original idea behind TDVP goes back to Dirac [dirac_note_1930] and Frenkel [frenkel_wave_1934]. The main point, in the modern tensor networks formulation, is that instead of solving the Schrödinger equation and then truncating the MPS representation of the quantum state, one can solve the equations of motion projected into a space of restricted bond dimension [haegeman_timedependent_2011][haegeman_unifying_2016].

The general formulation of the Dirac-Frenkel Variational Principle [raab_diracfrenkelmclachlan_2000] is that one looks for a solution $|\varphi\rangle \in \mathcal{M}$ of the Schrödinger equation, where $\mathcal{M} \subset \mathcal{H}$ is a manifold of the total Hilbert space $\mathcal{H}$ in which we expect the relevant physical states to "live".

We define $T_{|\varphi\rangle}\mathcal{M}$ as the tangent space of $\mathcal{M}$ around the state $|\varphi\rangle$. The criterion for finding $|\varphi\rangle$ is that for every state $|\chi\rangle \in T_{|\varphi\rangle}\mathcal{M}$

\[ \langle\chi|\left(\frac{\mathrm{d}}{\mathrm{d}t} - \frac{1}{\mathrm{i}\hbar}\hat{H}\right)|\varphi\rangle =0\ ,\]

which can be interpreted as saying that the time evolution procedure should keep $|\varphi\rangle$ inside of the manifold $\mathcal{M}$.

The term variational in the name of the method comes from the fact that in practice one aims at minimising the left-hand side of the above equation to find $|\varphi\rangle$.

Introducing $\hat{P}_{T_{|\varphi\rangle}\mathcal{M}}$, the projector onto the tangent space $T_{|\varphi\rangle}\mathcal{M}$, we can write the state $|\chi\rangle = \hat{P}_{T_{|\varphi\rangle}\mathcal{M}}|\phi\rangle$ with $|\phi\rangle$ a state in $\mathcal{H}$, leading to

\[ \forall |\phi\rangle \in \mathcal{H}, \ \langle\phi|\hat{P}_{T_{|\varphi\rangle}\mathcal{M}}\left(\frac{\mathrm{d}}{\mathrm{d}t} - \frac{1}{\mathrm{i}\hbar}\hat{H}\right)|\varphi\rangle =0\ .\]

Because the time derivative and the projector commute, we have

\[ \forall |\phi\rangle \in \mathcal{H}, \ \langle\phi|\left(\frac{\mathrm{d}}{\mathrm{d}t} - \frac{1}{\mathrm{i}\hbar}\hat{P}_{T_{|\varphi\rangle}\mathcal{M}}\hat{H}\right)|\varphi\rangle =0\ .\]

Since this equation must hold for any $|\phi\rangle \in \mathcal{H}$, it can be written as

\[ \left(\frac{\mathrm{d}}{\mathrm{d}t} - \frac{1}{\mathrm{i}\hbar}\hat{P}_{T_{|\varphi\rangle}\mathcal{M}}\hat{H}\right)|\varphi\rangle =0\ .\]

In the context of MPS, the manifold $\mathcal{M}$ corresponds to the space of full-rank MPSs of a given bond dimension $D$, and the tangent space is the space spanned by variations of single MPS tensors. The projector defines an effective Hamiltonian under which the dynamics are constrained to $\mathcal{M}$. Constraining the dynamics to a manifold introduces a projection error: the time evolution obeys an effective Hamiltonian different from the original one. After the introduction of TDVP as a time-evolution method for MPS, Haegeman et al. pointed out [haegeman_unifying_2016] that there exists an analytical decomposition of the projection operator $\hat{\mathcal{P}}$ that simplifies the resolution of the equation, turning the problem into one where each tensor $A_i$ can be updated with an effective on-site Hamiltonian $\hat H_\text{eff}$ via a Schrödinger-like equation. The effective Hamiltonian $\hat H_\text{eff}$ is a contraction of the Hamiltonian MPO and the current state of the other tensors composing the MPS. This allows for a sequential update of the tensors.

There exist different versions of the TDVP algorithm. In MPSDynamics.jl, three methods have been implemented so far: 1TDVP, 2TDVP and DTDVP.

The main advantage of the one-site 1TDVP algorithm is that it preserves the unitarity of the MPS during the time evolution. Its main drawback, conversely, is that the time evolution is constrained to a manifold of tensors of fixed bond dimension, a quantity closely related to the amount of entanglement in the MPS, so that the bond dimension has to be fixed before the time evolution begins. This strategy is necessarily sub-optimal: the growth of the bond dimensions required to describe the quantum state should ideally mirror the entanglement growth induced by the time evolution. 2TDVP allows for such a dynamical growth of the bond dimensions, and therefore better describes the entanglement in the MPS. It suffers, however, from other drawbacks: first of all, a truncation error is introduced (by means of an SVD), which entails a loss of unitarity of the time-evolved MPS. Moreover, 2TDVP has bad scaling properties with the size of the local dimensions of the MPS: this is a major issue when dealing with bosons. The DTDVP algorithm combines the best features of 1TDVP and 2TDVP: it preserves unitarity, it has the same scaling properties as 1TDVP, and it adapts the bond dimensions to the entanglement evolution at each site and at each time-step. DTDVP does not suffer from a truncation error, but introduces only a projection error.

________________

Bibliography

diff --git a/docs/user-guide/index.html b/docs/user-guide/index.html index 4e43d63..a204cde 100644 --- a/docs/user-guide/index.html +++ b/docs/user-guide/index.html @@ -42,4 +42,4 @@ "nchain" => [0.0 0.23466 … 1.84319 1.76098; 0.0 0.00231507 … 0.83105 0.9033… "sy" => [0.0, -0.0133489, -0.0588887, -0.0858181, -0.0759996, -0.048539… "times" => [0.0, 0.0005, 0.001, 0.0015, 0.002, 0.0025, 0.003, 0.0035, 0.00… -