asked by (260 points)

Can we run parallel calculations for tDMRG in ITensor?

1 Answer

answered by (46.8k points)
Best answer

Thanks for the question. We don't have a parallel time evolution code available. Just out of curiosity though, what part of the algorithm were you hoping to parallelize, and did you mean across multiple cores or using MPI across different nodes?

Best regards,

commented by (260 points)
Hi Miles,

Thank you for the quick response! Since I'm doing time evolution of an MPS by applying an MPO, what I want to parallelize is applyMPO across multiple cores.
commented by (46.8k points)
I see, thanks. I wanted to ask just to gauge the demand from our users for certain algorithms we might implement in the future. For that specific one, I should mention that it's not obvious to me how it could be parallelized. Many of the operations depend on the results of previous operations, so it's very far from the so-called "embarrassingly parallel" ideal case. But at a minimum, one could always add more multi-core parallelism to the tensor contractions themselves, and that is something we are working on; it will help many different algorithms. Best regards, Miles
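A minimal sketch of that multi-core route, assuming the ITensor application is compiled against a multithreaded BLAS/LAPACK such as Intel MKL or OpenBLAS: the dense matrix multiplications inside tensor contractions can then use several cores already today, controlled by the usual thread-count environment variables. Which variable takes effect depends on the BLAS actually linked, and `my_tdmrg_app` below is a hypothetical binary name, not part of ITensor.

```shell
# Request 4 BLAS threads; the relevant variable depends on your BLAS build.
export MKL_NUM_THREADS=4        # Intel MKL builds
export OPENBLAS_NUM_THREADS=4   # OpenBLAS builds
export OMP_NUM_THREADS=4        # generic OpenMP-based BLAS

echo "BLAS threads requested: $OMP_NUM_THREADS"
# ./my_tdmrg_app input_file     # hypothetical: launch your tDMRG program here
```

Note this parallelizes the matrix multiplications inside each contraction, not the sweep itself, so the speedup is largest when bond dimensions are big enough to keep the cores busy.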
commented by (260 points)
I see. Indeed, it would be very helpful if ITensor could implement multi-core parallelism for SVD and tensor contraction. I'm looking forward to it! Thank you very much!