I need to calculate the off-diagonal elements of a density matrix for an MPS. I was wondering if there is any way I can do it with ITensor.

Hi, does the answer I gave in the following post and the subsequent follow up comments to it answer your question? http://itensor.org/support/904/how-to-get-a-reduced-density-matrix-on-more-than-2-qubits

If not I can post another answer here.

One question I have is: which density matrix are you trying to calculate? By full density matrix do you mean just the outer product of the entire wavefunction with itself, not partial-tracing any sites?

Thanks, Miles. I am not trying to calculate the reduced density matrix. I want the outer product of the entire wavefunction with itself, and then to get the off-diagonal terms of that density matrix.

So I was doing TEBD in real time for a Heisenberg Chain and wanted to look at how the off-diagonal part of the density matrix changes with time.

Hi, so the answer to your question is that if you want matrix elements of the entire (pure state) density matrix, these are products of the amplitudes of the wavefunction itself. Amplitudes of MPS wavefunctions are efficient to compute: you just have to "clamp" all of the physical indices and then multiply the resulting matrices. We don't have an automatic facility for this in ITensor, but it's easy to do. Here is some example code.

Say we have an MPS "psi" on four S=1/2 sites (sites = SpinHalf(4)).

Then if we want the psi_{1212} amplitude (the up, down, up, down amplitude) for that wavefunction we can do:

```
auto amp = psi.A(1)*setElt(sites(1)(1)); // clamp site 1 to value 1 ("up")
amp *= psi.A(2)*setElt(sites(2)(2));     // clamp site 2 to value 2 ("down")
amp *= psi.A(3)*setElt(sites(3)(1));     // clamp site 3 to value 1 ("up")
amp *= psi.A(4)*setElt(sites(4)(2));     // clamp site 4 to value 2 ("down")
Print(amp.real());
```

Here recall that "sites" is a site set object (a SpinHalf object in this example) that retrieves for us the site indices that the MPS uses. The setElt function is given an "IndexVal" (an Index set to a specific value) and makes an ITensor with a "1" entry for that index value and "0" entries otherwise.

I think if you write out the tensor diagram for the above code it should be apparent what it does, and you can generalize it to an MPS of size "N" by doing steps 2,3,..., N inside a for loop.
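Not ITensor code, but the same contraction can be sketched in NumPy, which may make the tensor diagram concrete: store each MPS tensor as a (left bond, physical, right bond) array, clamp the physical index of each tensor, and multiply the resulting matrices in a loop. The function name and conventions below are my own (0-based physical index values, unlike ITensor's 1-based ones); this is an illustration, not library code.

```python
import numpy as np

def mps_amplitude(tensors, config):
    # Clamp site 1's physical index; what remains is a 1 x m row "matrix".
    amp = tensors[0][:, config[0], :]
    # Multiply in the clamped matrix from each later site, left to right.
    for A, s in zip(tensors[1:], config[1:]):
        amp = amp @ A[:, s, :]
    # The final result is a 1 x 1 matrix holding the amplitude.
    return amp[0, 0]

# Example: a bond-dimension-1 MPS for the product state |up,down,up,down>.
up = np.zeros((1, 2, 1)); up[0, 0, 0] = 1.0
dn = np.zeros((1, 2, 1)); dn[0, 1, 0] = 1.0
psi = [up, dn, up, dn]
```

For this product state, the amplitude of the (up, down, up, down) configuration is 1 and every other configuration gives 0, matching the clamped-matrix picture above.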

Let me know if you have more questions about it.

(Oh and you might want to do amp.cplx() if you expect the amplitude to be complex.)

Miles

Hi,

I understand your point. Now for any N spins there are @@4^{N}@@ elements in the density matrix (spin half, so the matrix is @@2^N \times 2^N@@). For example, if I want to calculate some quantity like @@\sum_{i \neq j} \rho(i,j)@@, the sum of all off-diagonal terms of the density matrix, then I have to calculate the amplitudes for all of those states.

So my question now is: is it really that hard, or am I thinking about it in some wrong way?

Hi, so it depends on what you mean by "hard". Each amplitude evaluation involves multiplying N matrices; say each matrix is m by m. So the cost of each amplitude evaluation is N*m^3, which is usually reasonably fast.

However, the problem you will run into is just the sheer size of the density matrix, which is 4^N by 4^N as you point out. So the number of amplitudes you'll have to sum over will grow exponentially.

The advantage of tensor networks is that they make it efficient to get reduced density matrices or to find a reduced basis for otherwise large parts of the Hilbert space.

I'm not sure what you're trying to calculate (like why you want to sum the whole upper triangle of the full density matrix) but maybe there is another way to do it that takes advantage of the MPS structure of the wavefunction? You'd have to tell me more about your ultimate goal.

The quantity that I want to calculate is the l1-norm of coherence, which is the sum of the absolute values of all off-diagonal elements of the full density matrix:

$$
C_{\ell_1} = \sum_{i \neq j} |\rho(i,j)|
$$

So basically I want to see how the system decoheres if I do a real-time evolution with iTEBD.
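For small systems this quantity can at least be cross-checked by brute force from the full state vector. A NumPy sketch (the helper name is my own; the cost is exponential in N, so this is only a small-system sanity check, not a substitute for an MPS method):

```python
import numpy as np

def l1_coherence(psi):
    # rho = |psi><psi| built by brute force: 2^N x 2^N, so small N only.
    rho = np.outer(psi, psi.conj())
    # Sum |rho_ij| over all entries, then subtract the diagonal part.
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

# Example: (|00> + |11>)/sqrt(2), whose only off-diagonal elements
# are rho_{03} = rho_{30} = 1/2, so C_l1 = 1.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
```

A computational basis state has no off-diagonal elements at all, so it gives C_l1 = 0, while the superposition above gives 1.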

Sorry but I think that's just going to be fundamentally hard to compute for a many body system using just about any technique I'm aware of (other than working with small system sizes). The reason mainly has to do with the absolute value sign.

I can think of a clever way to compute the 2-norm version of your quantity though, where each term in the sum is squared instead of absolute-valued. The trick would be to devise an MPO that is defined to be the sum of all products of one or more single-site "not" gates. I think this would have a small MPO bond dimension. Then one would just take the expectation value of this MPO, which is efficient to do.

Hi Miles,

Can you elaborate a bit more on the MPO technique that you have mentioned for 2-norm version?

Hi, so it would get a bit complicated to explain in full, but what I mean is an MPO which makes the following sum of operators:

$$
\sum_i N_i + \sum_{i < j} N_i N_j + \sum_{i<j<k} N_i N_j N_k + \ldots
$$

where @@N_i@@ is a single-site operator which enforces that its incoming index is not equal to its outgoing index (this would be the @@\sigma_x@@ operator in the case of spin 1/2).

To make the MPO, you could follow the design discussed in this paper: https://arxiv.org/abs/1407.1832
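To see concretely what operator this sum of products builds: for spin 1/2 with @@N_i = \sigma_x@@, the sum over all products of one or more @@N_i@@ telescopes to @@\prod_i (1 + N_i) - 1@@ (the subtracted identity is the empty product), a matrix with a 1 in every off-diagonal entry and 0 on the diagonal. A dense NumPy check for small N, as an illustration only; an actual MPO of small bond dimension would be built following the paper linked above:

```python
import numpy as np
from functools import reduce
from itertools import combinations

X = np.array([[0.0, 1.0], [1.0, 0.0]])  # sigma_x: the single-site "not" gate
I2 = np.eye(2)

def kron_chain(ops):
    # Tensor product of a list of single-site operators.
    return reduce(np.kron, ops)

def operator_sum(n):
    # Sum over all products of one or more N_i operators, term by term.
    M = np.zeros((2**n, 2**n))
    for k in range(1, n + 1):
        for sites in combinations(range(n), k):
            M += kron_chain([X if i in sites else I2 for i in range(n)])
    return M

n = 3
M = operator_sum(n)
# Telescoped form: prod_i (1 + N_i) minus the identity (the empty product).
P = kron_chain([I2 + X] * n) - np.eye(2**n)
```

The check confirms both that the term-by-term sum equals the telescoped product form and that the resulting matrix is all ones off the diagonal and zero on it.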

...