+1 vote
asked by (170 points)

Hi Miles and Matthew,

I have a question about DMRG calculations for 2D systems in ITensor. We know that the MPO of a 2D system is very sparse: the number of non-zero elements scales only linearly with the dimension of the MPO matrices. I wonder whether this sparsity is exploited in the DMRG calculation in ITensor.

Thanks,

Mingpu

1 Answer

0 votes
answered by (70.1k points)

Hi Mingpu,
Thanks for the question. We haven't taken much advantage of MPO sparsity yet, though it would be an interesting thing to try. One reason is simply that we have many things we want to try and only a small number of developers so far. The other is that exploiting the sparsity might not give a significant benefit except when the MPO is very large (as it is for 2D systems, which you mention). So one would just have to try it and see.

I should mention one thing we do sometimes employ: splitting the Hamiltonian up into separate MPOs. We then have a special mode of DMRG that performs a summation loop over all of these MPOs at each step of the sweep. For cases like quantum chemistry Hamiltonians this can give a nice speedup, since the MPO has a block-diagonal (block-sparse) structure.
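To sketch why this helps (this is just the standard direct-sum picture, nothing ITensor-specific): if a Hamiltonian H = H1 + H2 is encoded as a single MPO, its bulk tensors take a block-diagonal form in the bond indices,

$$
W = \begin{pmatrix} W^{(1)} & 0 \\ 0 & W^{(2)} \end{pmatrix}
$$

so the combined bond dimension is the sum of the two individual bond dimensions, while the off-diagonal blocks are identically zero. Keeping the pieces as separate MPOs lets DMRG work with the small blocks directly instead of multiplying through the zeros.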

Best regards,
Miles

commented by (170 points)
Hi Miles,

Thank you very much for your answer. Recently I have been working on problems similar to quantum chemistry systems, whose Hamiltonians have a huge bond dimension when written in MPO form. As you mentioned, it would help a lot if we could split the MPO into a sum of MPOs with smaller bond dimensions. Do you have a reference in mind on how to achieve this?

Thanks,

Mingpu
commented by (70.1k points)
Hi Mingpu,
Sorry, but no, I don't have a reference on this because I'm not sure one exists. As for how to do it with ITensor, the steps are not too hard:
1. use the AutoMPO feature to create separate MPOs, one for each piece of the Hamiltonian you want to live in its own MPO
2. call the "generalized DMRG interface" described on this page: http://itensor.org/docs.cgi?vers=cppv3&page=classes/dmrg, the overload which takes std::vector<MPO> as its first argument (see the sketch below)
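For concreteness, here is a minimal sketch of those two steps in C++ (v3). The division of terms between the two AutoMPO objects below is arbitrary, purely to illustrate the mechanics; check the linked page for the exact dmrg signature:

    #include "itensor/all.h"
    using namespace itensor;

    int main()
        {
        int N = 50;
        auto sites = SpinHalf(N,{"ConserveQNs=",true});

        // Build two separate MPOs, one per piece of the Hamiltonian.
        // Here: nearest-neighbor Heisenberg terms go in ampo1 and
        // next-nearest-neighbor Sz-Sz terms in ampo2 (illustrative split)
        auto ampo1 = AutoMPO(sites);
        auto ampo2 = AutoMPO(sites);
        for(int j = 1; j < N; ++j)
            {
            ampo1 += 0.5,"S+",j,"S-",j+1;
            ampo1 += 0.5,"S-",j,"S+",j+1;
            ampo1 += "Sz",j,"Sz",j+1;
            }
        for(int j = 1; j < N-1; ++j)
            {
            ampo2 += 0.25,"Sz",j,"Sz",j+2;
            }

        auto Hset = std::vector<MPO>{toMPO(ampo1),toMPO(ampo2)};

        auto sweeps = Sweeps(8);
        sweeps.maxdim() = 10,20,100,200;
        sweeps.cutoff() = 1E-10;

        // Neel-pattern initial state (total Sz = 0 sector)
        auto state = InitState(sites);
        for(int j = 1; j <= N; ++j) state.set(j, j%2==1 ? "Up" : "Dn");
        auto psi0 = randomMPS(state);

        // Generalized interface: DMRG loops over all MPOs in Hset
        // at each step of the sweep
        auto [energy,psi] = dmrg(Hset,psi0,sweeps,{"Quiet=",true});

        printfln("Ground state energy = %.12f",energy);

        return 0;
        }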

Then one just does timing to see whether it is faster to split the terms into separate MPOs or to keep them all in one MPO. We did this ourselves and found that it was faster to split them up.

If one works out the scaling of the iterative eigensolver step of DMRG, one finds that it scales as m^3 k^2 (I believe), where m is the MPS bond dimension and k is the MPO bond dimension. So if one can change this to D*m^3*p^2, where D is the number of different MPOs you split into and p is the typical bond dimension of these new MPOs, then D*p^2 ought to be significantly smaller than k^2.
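As a purely illustrative example (numbers invented just for the arithmetic): if k = 200 and the Hamiltonian splits into D = 10 MPOs of typical bond dimension p = 30, then

$$
D p^2 = 10 \times 30^2 = 9000 \quad \text{versus} \quad k^2 = 200^2 = 40000,
$$

a more than fourfold reduction in the eigensolver cost, even though the total bond dimension D*p = 300 is larger than k.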

Best regards,
Miles