Would you mind editing your question to fix the formatting so it is easier to read? You can look at the preview to check if it looks correct before submitting the question.

Also, what do you mean by:

```
B{j,k,l} = b{j,k} * delta{k,l}
```

? I ask because in "standard" Einstein summation notation, like what is used in ITensor, repeated indices on the right-hand side of the equation would be contracted, so the result would be:

```
b{j,k} * delta{k,l} -> b{j,l}
```

I assume you mean there is an implicit "copy tensor" that sets the values of `k` and `l` to be equal, but I want to double check before answering. In the standard ITensor notation, maybe you mean something like:

```
B{j,k,l} = b{j,k'} * delta{k',k,l}
```
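
To make the distinction concrete, here is a plain-array sketch (no ITensor involved; the names `n`, `b`, `delta2`, and `delta3` are just made up for illustration) contrasting standard contraction over the repeated index with the 3-index copy tensor:

```julia
n = 2
b = rand(n, n)
# Ordinary 2-index Kronecker delta and a 3-index "copy tensor",
# which is 1 only when all three of its indices are equal.
delta2 = [k1 == k2 ? 1.0 : 0.0 for k1 in 1:n, k2 in 1:n]
delta3 = [k1 == k2 == l ? 1.0 : 0.0 for k1 in 1:n, k2 in 1:n, l in 1:n]

# Standard Einstein summation: b{j,k} * delta{k,l} contracts k, giving b{j,l}.
B2 = [sum(b[j, k] * delta2[k, l] for k in 1:n) for j in 1:n, l in 1:n]
@assert B2 == b

# With the copy tensor: B{j,k,l} = sum_{k'} b{j,k'} * delta3{k',k,l},
# which keeps both k and l and is nonzero only on the k == l "diagonal".
B3 = [sum(b[j, kp] * delta3[kp, k, l] for kp in 1:n) for j in 1:n, k in 1:n, l in 1:n]
@assert all(B3[j, k, l] == (k == l ? b[j, k] : 0.0) for j in 1:n, k in 1:n, l in 1:n)
```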

Also, the short answer is that it should be possible to represent any sparse tensor in ITensor, since below the level of ITensor there is a "general" block sparse data structure which in principle can be completely sparse if you set all of the block sizes to 1. The tricky part is making that feature accessible at a higher level (at the level of ITensor operations).

An example of how to do this would be:

```
julia> space = [QN(0) => 1, QN(0) => 1];
julia> j, k, l = Index.((space, space, space), ("j", "k", "l"));
julia> b = randomITensor(j, k');
julia> @show b;
b = ITensor ord=2
Dim 1: (dim=2|id=614|"j") <Out>
1: QN(0) => 1
2: QN(0) => 1
Dim 2: (dim=2|id=51|"k")' <Out>
1: QN(0) => 1
2: QN(0) => 1
ITensors.NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
2×2
Block(1, 1)
[1:1, 1:1]
-0.1287929424199622
Block(2, 1)
[2:2, 1:1]
1.5858418816585433
Block(1, 2)
[1:1, 2:2]
-1.5869389100745541
Block(2, 2)
[2:2, 2:2]
2.5038110677538454
julia> d = δ(dag(k'), k, l);
julia> @show d;
d = ITensor ord=3
Dim 1: (dim=2|id=51|"k")' <In>
1: QN(0) => 1
2: QN(0) => 1
Dim 2: (dim=2|id=51|"k") <Out>
1: QN(0) => 1
2: QN(0) => 1
Dim 3: (dim=2|id=668|"l") <Out>
1: QN(0) => 1
2: QN(0) => 1
ITensors.NDTensors.DiagBlockSparse{Float64, Float64, 3}
2×2×2
Block(1, 1, 1)
[1:1, 1:1, 1:1]
[:, :, 1] =
1.0
Block(2, 2, 2)
[2:2, 2:2, 2:2]
[:, :, 1] =
1.0
julia> B = b * d;
julia> @show B;
B = ITensor ord=3
Dim 1: (dim=2|id=614|"j") <Out>
1: QN(0) => 1
2: QN(0) => 1
Dim 2: (dim=2|id=51|"k") <Out>
1: QN(0) => 1
2: QN(0) => 1
Dim 3: (dim=2|id=668|"l") <Out>
1: QN(0) => 1
2: QN(0) => 1
ITensors.NDTensors.BlockSparse{Float64, Vector{Float64}, 3}
2×2×2
Block(1, 1, 1)
[1:1, 1:1, 1:1]
[:, :, 1] =
-0.1287929424199622
Block(2, 1, 1)
[2:2, 1:1, 1:1]
[:, :, 1] =
1.5858418816585433
Block(1, 2, 2)
[1:1, 2:2, 2:2]
[:, :, 1] =
-1.5869389100745541
Block(2, 2, 2)
[2:2, 2:2, 2:2]
[:, :, 1] =
2.5038110677538454
```

This is kind of a "trick" where we split up each index into trivial blocks of size 1. By using `QN(0)` for every block, ITensor doesn't put any restrictions on which elements can be set. You can see that ITensor then only tracks the nonzero blocks through operations like contraction.