+1 vote
asked by (310 points)
edited by

Hi Miles,

Thank you for your persistent efforts in addressing our problems.

I can write an MPS or the Hamiltonian to disk in the serial dmrg program. However, I ran into problems when trying to do the same in the parallel dmrg program.

  1. In dmrg, I insert writeToFile() in DMRG_Worker in order to output the wavefunction after each sweep. However, this approach no longer works in parallel dmrg. If I do the same there, each core only writes the block of the wavefunction sent to it, and the last core to finish the sweep overwrites what was stored before, so I cannot save a complete wavefunction for further measurement. My question is: is there a way to output the whole wavefunction, collected from all cores, at once? Ideally it would be possible to output the complete wavefunction after each sweep. (A minimal sketch of the serial pattern I have in mind is shown after this list.)

  2. Besides, I also need to write the Hamiltonian to disk to save memory. I tried setting WriteM in the arguments, as I do in the serial dmrg program, but the program stopped during the second sweep. I found that the names of the Hamiltonian files written during one sweep do not match the ones to be read in the next sweep. For example, in the output I found:

    Tried to read file ./PH0pBYHg/PH002
    Missing file

    Similar messages appear for the other PH's.
    I checked the directory PH0pBYHg/, and the PH file inside is named "PH003". It seems that the number assigned to each PH when it is written is 1 greater than the number in the name of the PH that the next sweep tries to read.

    Is this a bug, or is there another way to write the Hamiltonian to disk?
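
As mentioned above, the serial write/read pattern I have in mind is roughly the following (just a minimal sketch; the site set, the file names, and the use of MPS rather than IQMPS are placeholders for whatever the actual calculation uses):

    #include "itensor/all.h"
    using namespace itensor;

    // Sketch: save the current wavefunction after a given sweep,
    // then load it back later for measurements
    void
    savePsi(MPS const& psi, int sweep)
        {
        writeToFile(format("psi_sweep_%d",sweep), psi);
        }

    MPS
    loadPsi(SiteSet const& sites, int sweep)
        {
        return readFromFile<MPS>(format("psi_sweep_%d",sweep), sites);
        }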

Thank you in advance.

Zhiyu

1 Answer

0 votes
answered by (70.1k points)

Hi Zhiyu,
This is a good question, and sorry for such a slow reply. The ITensor codes, such as the parallel DMRG code, are provided "as is", and I don't support them to quite the same extent as the main ITensor library.

But I am happy to suggest things you can try to make them do what you want.

For the first question: the fact that not all of the most up-to-date MPS tensors are available on each node is part of the design of the parallel code. Keeping all of the tensors up to date would incur far too much communication overhead. But at the moment when you want to write all of the tensors to disk together, you can use the MPI tools in the file util/parallel.h to send each of the MPS tensors to a particular node (node 0, say), collect them there into a single MPS, and write that MPS to disk.
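
To make that concrete, here is a rough sketch of the kind of helper I have in mind. It assumes the Environment and MailBox helpers declared in util/parallel.h (rank(), nnodes(), send, receive), a "ranges" table recording which contiguous block of sites each node currently owns, and it is written for MPS/ITensor (substitute IQMPS/IQTensor if you are conserving quantum numbers). Please treat it as a starting point rather than tested code:

    #include <string>
    #include <utility>
    #include <vector>
    #include "itensor/all.h"
    #include "util/parallel.h" // Environment, MailBox (assumed interface)
    using namespace itensor;

    // Sketch: gather every MPS tensor onto node 0 and write the complete,
    // up-to-date wavefunction to a single file
    void
    writeFullPsi(Environment const& env,
                 MPS psi,
                 std::vector<std::pair<int,int>> const& ranges, // site range [first,second] owned by each node
                 std::string const& fname)
        {
        if(env.rank() != 0)
            {
            // Ship this node's freshly updated tensors to node 0
            MailBox mbox(env,0);
            auto r = ranges.at(env.rank());
            for(int j = r.first; j <= r.second; ++j)
                {
                mbox.send(psi.A(j));
                }
            }
        else
            {
            // Node 0 overwrites its stale copies with the tensors
            // received from every other node, then writes the result
            for(int node = 1; node < env.nnodes(); ++node)
                {
                MailBox mbox(env,node);
                auto r = ranges.at(node);
                for(int j = r.first; j <= r.second; ++j)
                    {
                    ITensor A;
                    mbox.receive(A);
                    psi.setA(j,A);
                    }
                }
            writeToFile(fname,psi);
            }
        }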

For the second question, I would have expected write-to-disk to work with parallel DMRG, although I am not completely surprised that it doesn't, because it is such a complicated algorithm. One thing about the write-to-disk feature, as you observed, is that it uses randomized folder names, which should prevent bugs due to name collisions. But apparently the bug is caused by something else. Could you please file a bug report on the GitHub repo for parallel DMRG? I will likely be using this code for a project myself soon and may need the write-to-disk feature. For now, I invite you to look at how write-to-disk is handled in the LocalMPO class (itensor/mps/localmpo.h) and see whether some aspect of it is not functioning properly within the parallel DMRG algorithm.
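
Also, just to make sure we are talking about the same feature: in the serial code, write-to-disk of the projected Hamiltonian is enabled by passing the "WriteM" argument to dmrg, roughly as in the sketch below (the Heisenberg model, the sweep parameters, and the threshold of 500 are arbitrary placeholders). Once the bond dimension exceeds WriteM, the LocalMPO tensors are cached in a randomly named scratch folder instead of being held in memory:

    #include "itensor/all.h"
    using namespace itensor;

    int main()
        {
        int N = 100;
        auto sites = SpinHalf(N);

        // Heisenberg chain, just as a placeholder Hamiltonian
        auto ampo = AutoMPO(sites);
        for(int j = 1; j < N; ++j)
            {
            ampo += 0.5,"S+",j,"S-",j+1;
            ampo += 0.5,"S-",j,"S+",j+1;
            ampo +=     "Sz",j,"Sz",j+1;
            }
        auto H = MPO(ampo);
        auto psi = MPS(sites);

        auto sweeps = Sweeps(5);
        sweeps.maxm() = 50,100,200,400,800;
        sweeps.cutoff() = 1E-10;

        // Projected Hamiltonian tensors go to disk once the
        // bond dimension exceeds the WriteM threshold
        auto energy = dmrg(psi,H,sweeps,{"WriteM",500,"Quiet",true});

        return 0;
        }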

Best regards,
Miles

commented by (310 points)

Hi Miles,

Sorry for my late reply, and thank you so much for your suggestion! I am currently busy with other things, but I will come back soon to report the bug and my solution.

Best,
Zhiyu
