0 votes
asked by (180 points)

Hello,

I am currently using ITensor's tensor contraction engine for a few projects. The tensors that I am creating are mostly very sparse, and the structure is pretty well understood. I was trying to understand if it would be possible to use either IQTensors or some other tools to make the representation more compact.

For instance suppose I have a rank-4 tensor with the following structure:
$$
A\_{j,k,l,m} = a\_{j,k} \, \delta\_{j,m} \, \delta\_{k,l}
$$
where @@a\_{j,k}@@ is a dense rank-2 tensor and @@\delta\_{j,m}@@ is the Kronecker delta tensor.

Or something like:
$$
B\_{j,k,l} = b\_{j,k} \, \delta\_{k,l}.
$$

Right now, I am just using dense-storage ITensors for a proof of concept, and everything works fine. However, being able to take advantage of the sparsity would make the algorithm far more efficient. I don't understand IQTensors well enough to apply them to this kind of tricky case.
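For concreteness, here is a minimal sketch of roughly what I am doing now with dense storage (the index dimensions and names here are just placeholders for my actual setup):

```julia
using ITensors

# Dense proof-of-concept sketch: build A_{j,k,l,m} = a_{j,k} δ_{j,m} δ_{k,l}
# by looping over the nonzero pattern explicitly.
j = Index(3, "j")
k = Index(3, "k")
l = Index(3, "l")
m = Index(3, "m")

a = randomITensor(j, k)

A = ITensor(j, k, l, m)
for jj in 1:dim(j), kk in 1:dim(k)
  # only elements with m == j and l == k are nonzero
  A[j => jj, k => kk, l => kk, m => jj] = a[j => jj, k => kk]
end
```

Every element outside this diagonal pattern is stored as an explicit zero, which is the waste I would like to avoid.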

Could anyone please explain to me how to work this out?

Thanks and regards,
Amartya

1 Answer

0 votes
answered by (14.1k points)

Would you mind editing your question to fix the formatting so it is easier to read? You can look at the preview to check if it looks correct before submitting the question.

Also, what do you mean by:

B_{j,k,l} = b_{j,k} * delta_{k,l}

? I ask because in "standard" Einstein summation notation, like what is used in ITensor, repeated indices on the right-hand side of the equation would be contracted, so the result would be:

b_{j,k} * delta_{k,l} -> b_{j,l}

I assume you mean there is an implicit "copy tensor" that sets the values of j and k to be equal, but I want to double-check before answering. In standard ITensor notation, maybe you mean something like:

B_{j,k,l} = b_{j,k'} * delta_{k',k,l}

Also, the short answer is that it should be possible to represent any sparse tensor in ITensor, since below the level of ITensor there is a "general" block-sparse data structure, which in principle can be made completely sparse by setting all of the block sizes to 1. The tricky part is making that feature accessible at a higher level (at the level of ITensor operations).

An example of how to do this would be:

julia> space = [QN(0) => 1, QN(0) => 1];

julia> j, k, l = Index.((space, space, space), ("j", "k", "l"));

julia> b = randomITensor(j, k');

julia> @show b;
b = ITensor ord=2
Dim 1: (dim=2|id=614|"j") <Out>
 1: QN(0) => 1
 2: QN(0) => 1
Dim 2: (dim=2|id=51|"k")' <Out>
 1: QN(0) => 1
 2: QN(0) => 1
ITensors.NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(1, 1)
 [1:1, 1:1]
 -0.1287929424199622

Block(2, 1)
 [2:2, 1:1]
 1.5858418816585433

Block(1, 2)
 [1:1, 2:2]
 -1.5869389100745541

Block(2, 2)
 [2:2, 2:2]
 2.5038110677538454

julia> d = δ(dag(k'), k, l);

julia> @show d;
d = ITensor ord=3
Dim 1: (dim=2|id=51|"k")' <In>
 1: QN(0) => 1
 2: QN(0) => 1
Dim 2: (dim=2|id=51|"k") <Out>
 1: QN(0) => 1
 2: QN(0) => 1
Dim 3: (dim=2|id=668|"l") <Out>
 1: QN(0) => 1
 2: QN(0) => 1
ITensors.NDTensors.DiagBlockSparse{Float64, Float64, 3}
 2×2×2
Block(1, 1, 1)
 [1:1, 1:1, 1:1]
[:, :, 1] =
 1.0

Block(2, 2, 2)
 [2:2, 2:2, 2:2]
[:, :, 1] =
 1.0

julia> B = b * d;

julia> @show B;
B = ITensor ord=3
Dim 1: (dim=2|id=614|"j") <Out>
 1: QN(0) => 1
 2: QN(0) => 1
Dim 2: (dim=2|id=51|"k") <Out>
 1: QN(0) => 1
 2: QN(0) => 1
Dim 3: (dim=2|id=668|"l") <Out>
 1: QN(0) => 1
 2: QN(0) => 1
ITensors.NDTensors.BlockSparse{Float64, Vector{Float64}, 3}
 2×2×2
Block(1, 1, 1)
 [1:1, 1:1, 1:1]
[:, :, 1] =
 -0.1287929424199622

Block(2, 1, 1)
 [2:2, 1:1, 1:1]
[:, :, 1] =
 1.5858418816585433

Block(1, 2, 2)
 [1:1, 2:2, 2:2]
[:, :, 1] =
 -1.5869389100745541

Block(2, 2, 2)
 [2:2, 2:2, 2:2]
[:, :, 1] =
 2.5038110677538454

This is kind of a "trick" where we split up each index into trivial blocks of size 1. By using QN(0) for every block, ITensor doesn't put any restrictions on which elements can be set. You can see that ITensor then only tracks the nonzero blocks through operations like contraction.
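The same trick should extend to the rank-4 example from your question. Here is a sketch (untested beyond the basics; the index names follow your question, and I use two diagonal tensors, one per constrained index pair):

```julia
using ITensors

# Block-sparse construction of A_{j,k,l,m} = a_{j,k} δ_{j,m} δ_{k,l}
# using trivial QN(0) blocks of size 1, as in the example above.
space = [QN(0) => 1, QN(0) => 1]
j, k, l, m = Index.((space, space, space, space), ("j", "k", "l", "m"))

# a carries primed indices so each δ can both "copy" and rename.
a = randomITensor(j', k')

dj = δ(dag(j'), j, m)  # sets j and m equal
dk = δ(dag(k'), k, l)  # sets k and l equal

A = a * dj * dk  # only the nonzero blocks are stored and contracted
```

Contracting with the two delta tensors leaves A with indices (j, k, l, m), and the block-sparse storage only keeps the blocks where m == j and l == k.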
