Dynamics with fitApplyMPO on a DMRG ground state

+1 vote
edited Jan 25

Hi,
I am trying to study some dynamics using ITensor, and I was wondering which combination of functions is the most stable.

To fix ideas, I am considering the open spin-1/2 Ising chain in the gapped phase (transverse field h > 1) with N = 100 sites. The ground state is found very efficiently with ITensor's dmrg function in a small number of sweeps; for the case I am considering (h = 1.5), I obtain high accuracy with a bond dimension of at most 5.
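(For readers who want a self-contained check of this setup, here is a NumPy sketch that builds the same model by exact diagonalization on a small chain, N = 8 instead of 100, and confirms it is gapped at h = 1.5. The sign convention H = -Σ σᶻσᶻ - h Σ σˣ is an assumption; adjust it to match your MPO.)

```python
import numpy as np

def tfim_hamiltonian(N, h):
    """Dense open-chain transverse-field Ising Hamiltonian,
    H = -sum_i sz_i sz_{i+1} - h sum_i sx_i, with Pauli matrices."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    iden = np.eye(2)

    def site_op(op, i):
        # Kronecker product placing `op` on site i, identity elsewhere
        mats = [iden] * N
        mats[i] = op
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):
        H -= site_op(sz, i) @ site_op(sz, i + 1)
    for i in range(N):
        H -= h * site_op(sx, i)
    return H

H = tfim_hamiltonian(8, 1.5)
energies = np.linalg.eigvalsh(H)
print("E0 =", energies[0], " gap =", energies[1] - energies[0])
```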

Now, I am trying to perform some dynamics on this state. In order to do so, I am using the built-in function fitApplyMPO with imaginary time. What I noticed is that even though I evolve with the same MPO Hamiltonian I used to find the ground state, the bond dimension starts to grow. In principle, the state should not evolve at all, since it is the ground state of the Hamiltonian.

This effect is suppressed if I take a very small time step (t = 0.001) for the time evolution.
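(This is consistent with a finite-time-step error: any approximation to e^{-τH} — a Trotter splitting, or an approximate MPO exponential — is not exactly a function of H alone, so it slightly rotates the ground state into excited states, and the rotation shrinks as τ does. A toy NumPy demonstration with a first-order splitting e^{-τH_x} e^{-τH_zz} on a small chain; this is not ITensor's actual algorithm, just an illustration of the mechanism:)

```python
import numpy as np
from scipy.linalg import expm

# Small open transverse-field Ising chain, split into its two
# non-commuting pieces: H = H_zz + H_x.
N, h = 6, 1.5
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, i):
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

H_zz = -sum(site_op(sz, i) @ site_op(sz, i + 1) for i in range(N - 1))
H_x = -h * sum(site_op(sx, i) for i in range(N))

# Exact ground state of the full H
_, vecs = np.linalg.eigh(H_zz + H_x)
gs = vecs[:, 0]

def infidelity(tau):
    # One first-order Trotter step applied to the exact ground state.
    # With the exact e^{-tau H} the state would not change at all.
    phi = expm(-tau * H_x) @ (expm(-tau * H_zz) @ gs)
    phi /= np.linalg.norm(phi)
    return 1.0 - abs(gs @ phi) ** 2

print(infidelity(0.1), infidelity(0.001))
```

The leakage out of the ground state vanishes rapidly with τ, which matches the observation that t = 0.001 suppresses the bond-dimension growth.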

I was wondering: is this the expected behaviour, which is essentially unavoidable, or is it a bad idea to combine the dmrg function with fitApplyMPO?

I was thinking that using imaginary time evolution e^{-tau H} to find the ground state, still with fitApplyMPO, could avoid this issue, even though it would provide a worse description of the ground state.

What do you think?

+1 vote
answered Jan 28 by (20,240 points)

Hi, good question. Yes, the fitApplyMPO method has some drawbacks, and I can imagine it doing confusing things like what you describe. However, its benefit is that it is very efficient when used in the right situation.

But as a default choice, I would say to start with the exactApplyMPO method, which I recently updated to be much more efficient than its previous incarnation. Its scaling is m^3 k^2 + k^3 m^2, where m is the MPS bond dimension and k is the MPO bond dimension (versus fitApplyMPO, which scales as m^3 k + m^2 k^2).
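(A side note on those two cost formulas: they factor as m²k²(m + k) and m²k(m + k) respectively, so their ratio is exactly k — that is, exactApplyMPO costs roughly a factor of the MPO bond dimension more per application. A quick check, with m and k named as in the formulas above:)

```python
def cost_exact(m, k):
    # exactApplyMPO scaling quoted above: m^3 k^2 + k^3 m^2
    return m**3 * k**2 + k**3 * m**2

def cost_fit(m, k):
    # fitApplyMPO scaling quoted above: m^3 k + m^2 k^2
    return m**3 * k + m**2 * k**2

# The ratio is exactly k for any m, k
for m, k in [(5, 5), (100, 5), (200, 30)]:
    print(m, k, cost_exact(m, k) // cost_fit(m, k))
```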

So please try exactApplyMPO and let me know if it still doesn't give the results you want.

There are multiple confusing aspects to the menu of apply-MPO methods we currently have: the names are somewhat misleading (for example, exactApplyMPO doesn't mean the result is exact, just that it is a fully controlled algorithm), and some algorithms, like fitApplyMPO, are reliable only in certain contexts.

So to clear up any confusion, I'm planning to create a single function "applyMPO" which by default chooses the most reliable algorithm, but which accepts an option that lets you request other algorithms.

Best regards,
Miles

commented Feb 3 by (170 points)
Thank you Miles. I am trying exactApplyMPO, but it seems that after a few time steps both the orthogonality and the norm start to drift. Is that normal, so that I should reorthogonalize and normalize the state by hand? Or is it some form of instability?
commented Feb 3 by (20,240 points)
Hi, good question. Yes, growth of the norm is a usual consequence of the finite-time-step error in most time-evolution methods. The toExpH feature in ITensor currently uses the approach of Zaletel, Pollmann, et al. to construct the exponential of a Hamiltonian, which has a significant time-step error (see their paper for more details; there is a link on this page: http://itensor.org/docs.cgi?page=formulas/tevol_mps_mpo). There is a nice trick in their paper of using two complex time steps instead of a single time step to make the error terms higher-order.
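(The essence of that trick can be seen with plain matrix algebra at first order: a single step I - τH matches e^{-τH} only through O(τ), while the pair of complex-conjugate steps with τ(1 ∓ i)/2 multiplies out to I - τH + τ²H²/2, correct through O(τ²). A NumPy check on a random Hermitian stand-in matrix, independent of the MPO construction itself:)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
H = (A + A.T) / 2          # stand-in Hermitian "Hamiltonian"
tau = 0.1
eye = np.eye(8)

exact = expm(-tau * H)
single = eye - tau * H     # one first-order step
# two complex-conjugate half steps, as in the Zaletel et al. trick
pair = (eye - (1 - 1j) * tau / 2 * H) @ (eye - (1 + 1j) * tau / 2 * H)

err_single = np.linalg.norm(exact - single)
err_pair = np.linalg.norm(exact - pair)
print(err_single, err_pair)
```

The product of the two complex steps reproduces the second-order Taylor expansion of e^{-τH} exactly, so the pair is noticeably more accurate than the single step at the same total τ.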

So after each step you need to re-normalize the state.
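(As a toy illustration of why: under imaginary time evolution the norm of the state is multiplied by roughly e^{-τE₀} each step, so it drifts exponentially unless reset. The sketch below uses a small stand-in matrix and a crude first-order step, not ITensor calls; with the normalize in place the iteration converges cleanly to the ground state:)

```python
import numpy as np

# Stand-in "Hamiltonian": a discrete Laplacian with a known spectrum
n = 8
H = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

tau = 0.4
step = np.eye(n) - tau * H     # crude first-order imaginary-time step

rng = np.random.default_rng(0)
psi = rng.normal(size=n)
psi /= np.linalg.norm(psi)

for _ in range(400):
    psi = step @ psi
    psi /= np.linalg.norm(psi)  # re-normalize after every step

energy = psi @ H @ psi
print("converged energy:", energy, " exact E0:", np.linalg.eigvalsh(H)[0])
```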

I'm not sure, however, what you mean by loss of orthogonality: orthogonality of which state to which other state? Assuming there is some other wavefunction to which you expect orthogonality, I could imagine the time-step error affecting that as well. In that case, it's either something you have to live with, or something you can try to reduce by taking smaller time steps. Or, depending on the details of your algorithm and what you're trying to do, you could re-orthogonalize the state against whatever other states are involved, using a Gram-Schmidt procedure, say.
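(A minimal sketch of that Gram-Schmidt idea, written with plain NumPy vectors rather than MPS — with actual MPS you would use overlaps and MPS subtraction instead; the function name project_out is made up for illustration:)

```python
import numpy as np

def project_out(psi, others):
    """Gram-Schmidt: remove the components of psi along each state in
    `others` (assumed orthonormal), then re-normalize the remainder."""
    for phi in others:
        psi = psi - np.vdot(phi, psi) * phi
    return psi / np.linalg.norm(psi)

# Example: make a vector orthogonal to the first two basis states
e0 = np.array([1.0, 0.0, 0.0, 0.0])
e1 = np.array([0.0, 1.0, 0.0, 0.0])
v = np.array([0.5, 0.5, 0.5, 0.5])
w = project_out(v, [e0, e1])
print(w)
```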
commented Feb 3 by (20,240 points)
Based on your comment, I fixed the MPO time evolution code formula to include a call to "normalize" after every step. http://itensor.org/docs.cgi?page=formulas/tevol_mps_mpo

Thanks for mentioning this issue!
commented Feb 3 by (170 points)