+3 votes
asked by (310 points)

Hi all,

I am currently running DMRG on a large 2D lattice on Intel CPUs and it takes a really long time. I have heard that running on a GPU can significantly reduce the time for TensorFlow, but for ITensor I was worried that Intel MKL is not compatible with GPUs.

I was wondering: is it possible to use a GPU to speed up the Davidson algorithm within ITensor, which is probably the most time-consuming step?

Best regards,
Yixuan

1 Answer

+1 vote
answered by (27.6k points)

Dear Yixuan,
It might indeed be possible, but it is not something we have experimented with very much yet. I believe Steve White tried it at one point a few years ago, but found that with that generation of GPU and the method he was trying, the time spent moving memory to and from the GPU was a significant fraction of the computation time, so there wasn't much gain. But it's something we are planning to investigate in the future. In the end, it might be very algorithm dependent.
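One way to see why the transfers can dominate is a back-of-envelope estimate for the dense matrix-vector product at the heart of a Davidson iteration. This is only a sketch: the bandwidth and throughput figures below are illustrative assumptions, not measurements of any particular hardware, and a real ITensor contraction is structured rather than a single dense matvec.

```python
# Rough estimate: compare the time to ship a dense N x N double-precision
# matrix over PCIe with the time a GPU needs to do one matvec with it.
# All hardware numbers are assumptions chosen only for illustration.

N = 10_000                      # linear dimension of the effective Hamiltonian
flops = 2 * N * N               # multiply-adds in one matrix-vector product
bytes_moved = 8 * N * N         # double-precision matrix copied to the GPU

pcie_bw = 16e9                  # assumed host-to-device bandwidth, bytes/s
gpu_flops = 10e12               # assumed GPU double-precision rate, FLOP/s

t_transfer = bytes_moved / pcie_bw
t_compute = flops / gpu_flops

print(f"transfer: {t_transfer * 1e3:.1f} ms, compute: {t_compute * 1e3:.3f} ms")
print(f"transfer is roughly {t_transfer / t_compute:.0f}x the compute time")
```

Under these assumptions the copy takes orders of magnitude longer than the arithmetic, so unless the tensors stay resident on the GPU across many Davidson iterations, the transfers swamp any speedup, which matches the experience described above.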

Of course, you are welcome to open up the code yourself and try to add GPU support to various parts. If you try that and have questions about the internals of the code, I'm happy to answer them.

Miles

commented by (310 points)
Thank you very much Miles, I will give it a try.

Best,
Yixuan