+1 vote
asked by (130 points)


I'm a system administrator on UConn's HPC cluster. We have a few users who would like to make full use of our compute capabilities and run ITensor under MPI or SLURM.

After scanning through the documentation and threads here, it seems like the only parallelism offered by ITensor is what's baked into BLAS and LAPACK, and there is not yet any functionality in the core library to support distributed/parallel environments. Is this still the case?

If so, any resources or examples on accelerating ITensor with MPI would be incredibly helpful and greatly appreciated.

Thanks for your time,

1 Answer

0 votes
answered by (70.1k points)

Hi Chris,
Thanks for the question. Unfortunately, no, we don't have any higher-level parallelism publicly available in the library right now. To some degree that's because tensor network algorithms aren't as readily parallelizable as, say, Monte Carlo type algorithms.

But here are some past and planned parallel aspects of ITensor we could discuss further:

(1) I wrote a paper with Steve White about how to parallelize DMRG in real space: https://arxiv.org/abs/1301.3494. We implemented this idea in ITensor, and I still have the code, although it needs some cleaning up before releasing it publicly. But if you and your colleagues have a use case in mind, I can send you the code I have and consult with you about adapting it for your project(s). I'm hoping to make a public release of the parallel DMRG code sometime this year.

(2) One thing on my short list of things to parallelize within ITensor is the contraction of two IQTensors, which are block-sparse tensors. That algorithm is inherently parallelizable, since each pair of matching blocks can be contracted independently, but it requires some care to implement properly, e.g. controlling the number of threads used by different BLAS implementations on different machines.

(3) I recently parallelized a machine learning code based on ITensor for a paper I wrote. The training involves a gradient descent step that loops over thousands of images, so I multithreaded that loop on top of ITensor. But one could use MPI very effectively here too, storing batches of images on different machines and synchronizing the MPS tensors (which are individually quite small) across them. I'm happy to share the multithreaded version of this code with you.

Hope some of that piques your interest.

