asked by (160 points)

Hi and thank you for the nice work on this library!

I would be very interested in using ITensor as a lower level symmetric tensor manipulation library, and I'm looking into wrapping parts of it in a higher level language to use together with existing code (mainly iPEPS).

The part I would be most interested in is the translation from tensor to block-sparse matrix form and back. From what I can see, it is possible to get the blocks out with something like:

    auto blocks = doTask(GetBlocks<T>{A.inds(),L,R},A.store());

However, it is unclear to me how to 'put the blocks back' after operating on them (either into a new tensor or into the original one). Looking at svd.cc as an example (from line 312), it seems the relevant part is:

    auto uind = stdx::make_array(B.i1,n);
    auto pU = getBlock(Ustore,Uis,uind);
    auto Uref = makeMatRef(pU,uI[B.i1].m(),L[n].m());
    reduceCols(UU,L[n].m()); // just as an example
    Uref &= UU;

And then use Ustore to construct a new tensor.

It would be very helpful if you could provide a little example of how to do such a thing, if possible.
Hopefully my question is clear enough, thank you in advance!

Best regards,
Boris

1 Answer

answered by (70.1k points)

Best answer

Hi Boris,
Nice that you are looking into the ITensor internals. I think it should be very possible to make a wrapper around some of the ITensor functionality like you are thinking, but indeed these internal parts are not yet documented and not so obvious!

You correctly identified the part of the code that makes the blocks of a new IQTensor. To explain some of what it's doing: Ustore is a QDense storage object, constructed from an IQIndexSet and a QN object. Here the QN object is the "zero" QN, QN(), which says that Ustore will be set up to contain only those blocks which can be made from the IQIndexSet with a net QN flux of zero. In other words, these are the blocks for which the quantum numbers of the various IQIndex sectors add up to zero (when weighted with the IQIndex arrows, which supply the appropriate plus and minus signs).
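Put together, the allocation and the final construction of the new tensor look roughly like this (a condensed sketch of what svd.cc does; see that file for the exact calls):

    // allocate block-sparse storage containing only the blocks of Uis
    // whose quantum numbers add up to a net flux of zero
    auto Uis = IQIndexSet(uI,L);
    auto Ustore = QDense<Real>(Uis,QN());
    // ... fill the individual blocks via getBlock (discussed below) ...
    // finally, wrap the index set and storage in a new IQTensor
    auto U = IQTensor(Uis,std::move(Ustore));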

Then, once the QDense storage object is allocated, the getBlock helper function (itensor/itdata/qutil.h) returns a "DataRange" object (itensor/tensor/types.h), which is conceptually just a pointer that knows the size of the memory it points to (in fact it is just a struct holding a pointer and a size_t, designed to aid with memory safety).
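Conceptually, a DataRange is little more than the following (a simplified sketch only; the real definition is in itensor/tensor/types.h):

    // simplified picture: a pointer bundled with the size of the memory
    // it points to, so that accesses can be checked
    template<typename T>
    struct DataRange
        {
        T* pdata = nullptr;
        size_t size_ = 0;
        T* data() const { return pdata; }
        size_t size() const { return size_; }
        T& operator[](size_t n) { return pdata[n]; }
        };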

To explain getBlock: the arguments to it are the block indices of a particular block to be fetched, in this case B.i1 and n, which are passed as a std::array called uind. By block indices I mean that if uind = (m,n), then getBlock returns the DataRange pointing to the block corresponding to the m'th sector of the first IQIndex in Uis (= uI) and the n'th sector of the second IQIndex in Uis (= L). These blocks are just the dense tensor blocks making up a block-sparse tensor.

If the returned DataRange object holds a nullptr (pU.data() == nullptr), it means one has requested an invalid block (a block which is zero by the QN symmetry), so there is an assert checking for that erroneous case.
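For example, fetching a generic block (m,n) and guarding against a symmetry-forbidden one could look like this (m and n are placeholder block indices):

    auto bind = stdx::make_array(m,n);
    auto pB = getBlock(Ustore,Uis,bind);
    if(pB.data() == nullptr)
        {
        // block (m,n) is zero by the QN symmetry and has no storage
        }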

Finally, the DataRange object pU is used to construct a MatrixRef object. This is just a matrix "reference" or "view" (a non-owning array with reference semantics) of the data pointed to by pU, with the appropriate row and column dimensions, namely the sizes of the IQIndexes uI and L making up Uis. Some manipulations are then done on the resulting MatrixRef, namely setting it equal to the first L[n].m() columns of the matrix UU.

So the last two lines of that block of code (reduceCols and Uref &= UU) are the least applicable to your case, since they are specific matrix manipulations, just one way of setting the values of the block that happens to be appropriate here. More generally, one could loop over the elements of the data range pU and modify them like this:

    for(size_t i = 0; i < pU.size(); ++i)
        {
        pU[i] = ...; // assign whatever value element i of this block should have
        }

But if you are curious to learn more about the operation Uref &= UU, I can discuss it a bit more. The idea is that the left-hand side is a MatrixRef, and the normal = assignment for MatrixRefs just defines a new MatrixRef, similar to assigning to a pointer, which just makes it a different pointer. So to modify the elements actually referenced or pointed to by a MatrixRef, one uses the operator &=, which can be read as "dereference and assign". The right-hand side is a regular matrix with value semantics, so in this case the elements pointed to by Uref get overwritten with the entries of UU.
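Schematically, restating the above in code (with Uref and UU as before):

    Uref &= UU;  // copies the entries of UU into the block memory viewed by Uref
    // whereas plain assignment, e.g. Uref = someOtherRef;, would merely
    // rebind Uref to view different memory, leaving the block data untouched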

Finally, you had asked for an example. I think the code you cited could already be seen as an example if you understand what each part is doing and if you replace the last two lines (reduceCols and Uref &= UU) by your own code that assigns the elements of the block (pointed to by pU) in whatever way is appropriate for your case.

To understand what operations are available on a DataRange (the type of the object pU), have a look at the definition of DataRange in itensor/tensor/types.h.

Best regards,
Miles

commented by (160 points)
Hi Miles,

Thank you very much for your extensive reply, it was very helpful! I have it working now, including some wrapper code in Python. I have a small follow-up question, but please tell me if I should open a new thread for it.

So the optimal way to perform an operation on the blocks of an arbitrary tensor would be to create combiners (one for the combined left leg and one for the combined right leg), contract them with the original tensor to get a matrix-shaped tensor, and then call `doTask(GetBlocks)` on it, as you do in `svd.cc`?
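Concretely, I mean something along these lines (just a sketch; the index names l1, l2, r1, r2 are placeholders):

    auto Cl = combiner(l1,l2);      // combiner for the two left legs
    auto Cr = combiner(r1,r2);      // combiner for the two right legs
    auto M  = Cl * A * Cr;          // matrix-shaped (two-index) tensor
    auto cl = commonIndex(M,Cl);    // the combined left index
    auto cr = commonIndex(M,Cr);    // the combined right index
    auto blocks = doTask(GetBlocks<Real>{M.inds(),cl,cr},M.store());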

The reason I ask is that in my tests the contraction of two tensors (not combiners) seems to be significantly faster than the 'reshaping'->getting-blocks procedure. From looking at the code, it's not very clear to me how the contraction routine works in detail, but in general I expected that a contraction of two tensors would need the blocks as well. Is this the case, or do you take an alternative approach for contractions that would not be applicable to other operations?

Best regards,
Boris
commented by (70.1k points)
Hi Boris,
I wouldn't necessarily say that using combiners as we do in svd.cc is the optimal way to do things. For the particular case of the SVD, the combiner feature was a convenient way for us to make the code more modular: we only had to write the inner part of that function assuming a two-index tensor, since the outer part uses combiners to reduce the number of indices to two.

If code involving combiners is much slower than contraction, as you are seeing in your tests, it could be that our combiner code is slow and needs to be sped up (always a possibility), or, more likely, that the contraction you are testing doesn't require any data permutation and can be sent directly to BLAS dgemm, which is highly optimized and extremely fast. So comparing those two (combiners versus contraction) might be comparing apples and oranges.

Hope that helps -
Miles