This is not really a question, but I still consider this a good place for a discussion; feel free to remark otherwise. This is going to be a very long post, for which I apologize.
Our group currently plans to use ITensor as the basis for a more mathematically focused toolbox. Most of our work will be to transfer existing code (C++ and Matlab) into a combined framework, as we already have algorithms for HT tensors, TT/MPS tensors, and secondarily CP, that deal with multigrid methods, completion problems, and general linear equations. We will primarily write noninvasive wrappers.
Now, in the course of my PhD thesis, I have among other things worked on a formalization of a similar kind of combination of Einstein notation and unique labeling, albeit more general in certain parts. I had not been aware of your toolbox, as we only recently found out about it. Please note that there is unfortunately a considerable communication gap between physics and mathematics.
I implemented a specific tensor node arithmetic in a Matlab toolbox, and while reading your documentation, I found it entertaining to discover both similar and different solutions to naturally arising problems. ITensor appears to be very successful and well written (in proper C++ code rather than the comfort and ease of Matlab). However, in view of the problems we and many other mathematicians are concerned with, some other functionality comes to mind. So I would guess that it is interesting for both sides to compare ITensor to the methods within the node toolbox.
While the node toolbox is also missing much compared to yours (for example, we did not implement any quantum-mechanics-specific options), this is certainly less interesting for you, so I will not focus on that part. There are two main differences (the second one being the major one) between our toolboxes, apart from the fact that the node arithmetic is also meant to serve as a mathematical and formal framework and a comprehensive notation, and therefore has a slightly different overall goal. There are more minor differences, but this post is already long enough. If you are interested in the public git repository, I can forward it to you, but so far I want to avoid letting this post appear as advertisement.
1:
Let me start with some examples. I cannot explain the output of the following code based on the online documentation, so I actually do not know why it does what it does:
#include "itensor/all.h"
using namespace itensor;

int main()
{
    auto i = Index(2,"i");
    auto j = Index(2,"j");
    auto A = randomITensor(i,i);
    auto B = randomITensor(i,j);
    PrintData(A*B);
    A *= delta(i,j);
    PrintData(A*B);
    return 0;
}
Both calls print the same result in every case (my best guess is that the toolbox searches for i from right to left?). However, initializing
auto A = randomITensor(i,i,i);
auto B = randomITensor(i,j);
causes the program to crash with
terminate called after throwing an instance of 'std::length_error'
what(): vector::_M_default_append
Aborted (core dumped)
On the other hand, the code
#include "itensor/all.h"
using namespace itensor;

int main()
{
    auto i = Index(2,"i");
    auto j = Index(2,"j");
    auto A = randomITensor(i,j);
    auto B = randomITensor(i,j);
    PrintData((A*delta(j,i))*B);
    PrintData(A*B);
    return 0;
}
gives two different outputs. I strongly suspect that the assignment
auto A = randomITensor(i,i);
is not actually considered valid code, although it does not throw an error up front.
We instead allow tensors to have multiple copies of the same index (in your terms: with identical ID). Assuming for a moment that ITensor used our rules, written in your syntax, the code
auto i = Index(2,"i");
auto j = Index(2,"j");
auto A = randomITensor(i,i);
auto B = randomITensor(i,i,j);
PrintData(A*B);
would report that A*B is a tensor with indices (i,i,j) and return the same result (assuming the same random seed) as the currently valid code
auto i = Index(2,"i");
auto j = Index(2,"j");
auto A = randomITensor(i,prime(i));
auto B = randomITensor(prime(i),prime(i,2),j);
PrintData(noPrime(A*B));
(By the way, on http://www.itensor.org/docs.cgi?page=classes/itensor_functions the function noPrime is accidentally written as noprime.)
Note that we constructed a tensor with indices (i,i,j) that may again be subject to ambiguous(?) behavior.
At any time, in our interpretation, only those duplicate indices facing each other are contracted. While one has to get used to this (and it comes with some minor cost), I saw good reason to formulate and implement it.
For example, when exchanging matrices between neighboring cores within an MPS format, the link indices remain the same. You can of course achieve this in your syntax manually as well, but it requires some back-and-forth renaming, which is likely to make the code less comprehensible and compact. The linking of indices (i,j) with a delta(i,j) tensor is achieved in the node toolbox by certain index-based partial trace or diagonal operations (which are, however, not yet compatible with the network contraction algorithms explained in part 2 below).
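To illustrate the renaming overhead: in your syntax, inserting an invertible matrix and its inverse on a shared link index l1 between two cores A(l0,s1,l1) and B(l1,s2,l2) might look roughly as follows (a pseudo-code sketch; the names G, Ginv, and l1 are hypothetical, and Ginv is assumed to be chosen so that the pair acts as the identity on the link):

```
// G and Ginv both carry the index pair (l1, prime(l1))
auto Anew = noPrime(A * G);     // contracts over l1, then renames prime(l1) back to l1
auto Bnew = noPrime(B * Ginv);  // same unpriming on the neighboring core
// Anew * Bnew again contracts over the unchanged link index l1
```

Under our duplicate-index rules, both cores would keep l1 throughout and the priming/unpriming steps would not be needed.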
2:
The major difference is an automatic network contraction algorithm in the node toolbox. Instead of always multiplying two (i)tensors from left to right, the node arithmetic has extended rules for multiplications of arbitrary, possibly nested expressions.
For example, again formulated as a kind of pseudo-code in your syntax, the code
auto i = Index(2,"i");
auto j = Index(2,"j");
auto k = Index(2,"k");
auto v = randomITensor(i);
auto w = randomITensor(i);
auto u = randomITensor(i);
PrintData( *({v,w,u}) );
would return the same as the currently valid code line
PrintData( v*(w*delta(i,j))*(u*delta(i,k))*delta(i,j,k) );
In plain Einstein notation, we want to calculate the term @@v_i w_i u_i := \sum_i v_i w_i u_i@@. In this and any more complex situation, the toolbox calculates either a heuristic or (if the network is not too complex) an optimal contraction order, remembers it under a specific hash code, and recalls it for analogous problems.
The * function in the node toolbox is written such that for any nested expression (which may involve more than two levels)
*( { {A11,A12,...}, {A21,A22,...}, ... } )
the toolbox outputs exactly the same tensor as
*( { *({A11,A12,...}), *({A21,A22,...}), ... })
but uses another contraction order based on the overall resulting network. This involves an automated renaming procedure for indices (essentially like priming) and an efficient implementation of partial Hadamard products. You can use this, for example, to calculate the scalar product between two copies of the same MPS object (or any other format) in a single command (avoiding exponential complexity):
*( {MPS(1),MPS(2), ...}, {MPS(1),MPS(2), ...} )
or, in that sense, just
*( MPS, MPS )
if MPS is a cell object (as such containers are called in Matlab).
I am not surprised that several parts of the above arithmetic have not been implemented, since you most likely had no interest in them within the framework of computational quantum physics. So do not take this as criticism or a demand. For other problems, it simply proved very convenient, at least to me.
So I essentially wanted to give you notice of some functionality we are possibly (likely) going to implement (noninvasively), as you may find it interesting. We, on our side, would be interested in feedback on how interesting this functionality might be within your community.