0 votes
asked by (1.2k points)

Hi Miles,

Could you help with two questions?

  1. It seems that the "gateTEvol" function does an SVD compression right after each single gate is applied to a bond, right? But tMPS can instead evolve all bonds first and then do an iterative compression. How big would the improvement in accuracy be from doing that? Or is it not worth doing because the error from the Trotter decomposition is much bigger?

  2. I installed ITensor with the OpenBLAS library. I'm not sure whether DMRG can be multi-threaded by setting OPENBLAS_NUM_THREADS to a value greater than 1, because when I set OPENBLAS_NUM_THREADS=1, 2, 4, the calculation time increases. Should I always set OPENBLAS_NUM_THREADS=1?



1 Answer

0 votes
answered by (70.1k points)

Hi Jin,
I think the answer to #1 is complicated. I'm not sure what the difference in error between the two methods would be without doing a careful theoretical error analysis. Barring that, if you're really curious about both methods, or suspect one might not be accurate enough for what you need, you could just code both of them and plot the results to see how the errors behave.

Regarding BLAS parallelism, that's also somewhat complicated. Sometimes we have seen speedups from letting the BLAS use multiple cores, but other times we've seen what you report, namely that it does a bad job and can even hurt performance. Basically this is a feature of your BLAS, which is outside of ITensor and which we do not control. Its effectiveness also depends strongly on the details of the algorithm you are running, whether DMRG or something else, and on the sizes of the tensors, their block structure, etc. So again I think your best bet is just to adjust the settings as you have done and see what works best for your needs. You might also get different results with a different BLAS implementation, such as MKL, which is a very high-quality BLAS library.

Best regards,

P.S. about the BLAS: you might want to write a simple test code which just multiplies two very large matrices, and run it with different OPENBLAS_NUM_THREADS settings to see what results you get. If it doesn't give a speedup even for rather large matrices, you might conclude that multi-threading is just not a very useful feature of your BLAS. On the other hand, if you do see a speedup, check what matrix sizes are needed to see it and compare those to the typical tensor sizes in your DMRG calculation. Perhaps it could help to turn it on for a very large DMRG calculation in the last few sweeps.

Welcome to ITensor Support Q&A, where you can ask questions and receive answers from other members of the community.
