# convergence problem for small lattice

+1 vote
asked

I was using the following code to simulate the 1D Hubbard model in the presence of a magnetic field.

```cpp
#include "itensor/all.h"
#include <fstream>

using namespace itensor;

int main(int argc, char* argv[])
    {
    //
    // Parse the input file
    //
    if(argc != 2) { printfln("Usage: %s inputfile_newcode",argv[0]); return 0; }
    auto input = InputGroup(argv[1],"input");
    auto N = input.getInt("N");
    auto Npart = input.getInt("Npart",N); //number of particles, default is N (half filling)
    auto nsweeps = input.getInt("nsweeps");

    auto t1 = 1;
    auto U1 = 1;
    auto mu = 2;

    auto quiet = input.getYesNo("quiet",false);
    auto table = InputGroup(input,"sweeps");
    auto sweeps = Sweeps(nsweeps,table);

    //
    // Initialize the site degrees of freedom.
    //
    auto sites = Electron(N,{"ConserveSz=",false});
    auto ampo = AutoMPO(sites);

    for(int i = 1; i <= N; ++i)
        {
        ampo += U1,"Nupdn",i;
        }
    for(int i = 1; i <= N; ++i)
        {
        ampo += -mu,"Sz",i;
        }
    for(int b = 1; b < N; ++b)
        {
        ampo += -t1,"Cdagup",b,"Cup",b+1;
        ampo += -t1,"Cdagup",b+1,"Cup",b;
        ampo += -t1,"Cdagdn",b,"Cdn",b+1;
        ampo += -t1,"Cdagdn",b+1,"Cdn",b;
        }
    auto H = toMPO(ampo);

    //
    // Set the initial wavefunction matrix product state
    // to be a Neel state.
    //
    auto state = InitState(sites);
    int p = Npart;
    for(int i = N; i >= 1; --i)
        {
        if(p > 0)
            {
            println("Singly occupying site ",i);
            state.set(i,(i%2==1 ? "Up" : "Dn"));
            p -= 1;
            }
        }
    auto psi0 = MPS(state);

    //An alternate initial state
    /*
    for(int i = N; i >= 1; --i)
        {
        if(p > i)
            {
            println("Doubly occupying site ",i);
            state.set(i,"UpDn");
            p -= 2;
            }
        else if(p > 0)
            {
            println("Singly occupying site ",i);
            state.set(i,(i%2==1 ? "Up" : "Dn"));
            p -= 1;
            }
        else
            {
            state.set(i,"Emp");
            }
        }
    auto psi0 = MPS(state);
    */

    //
    // Begin the DMRG calculation
    //
    auto [energy,psi] = dmrg(H,psi0,sweeps,{"Quiet",quiet});

    std::ofstream Energy("energy.dat"); //output stream (was undeclared; file name arbitrary)
    Energy << mu << "\t" << energy << std::endl;

    return 0;
    }
```


I was comparing the DMRG results against exact diagonalization of the Hamiltonian, using the half-filled 2-site Hubbard model. Exact diagonalization shows the magnetically saturated phase (all spins up or down) setting in around mu = 1.6, but the DMRG data showed no change of the eigenvalue anywhere in the range mu = 0 to mu = 2. For example, at mu = 1.6 with 10 sweeps, the output is:

```
vN Entropy at center bond b=1 = 1.356587261479
Eigs at center bond b=1: 0.3106 0.3106 0.1894 0.1894
Largest link dim during sweep 1/10 was 4
Largest truncation error: 0
Energy after sweep 1/10 is -1.561552812809
Sweep 1/10 CPU time = 0.00107s (Wall time = 0.00115s)

vN Entropy at center bond b=1 = 1.356587241339
Eigs at center bond b=1: 0.3106 0.3106 0.1894 0.1894
Largest link dim during sweep 2/10 was 4
Largest truncation error: 0
Energy after sweep 2/10 is -1.561552812809
Sweep 2/10 CPU time = 0.000803s (Wall time = 0.000813s)

vN Entropy at center bond b=1 = 1.356587239124
Eigs at center bond b=1: 0.3106 0.3106 0.1894 0.1894
Largest link dim during sweep 3/10 was 4
Largest truncation error: 0
Energy after sweep 3/10 is -1.561552812809
Sweep 3/10 CPU time = 0.000795s (Wall time = 0.000806s)

vN Entropy at center bond b=1 = 1.356587239124
Eigs at center bond b=1: 0.3106 0.3106 0.1894 0.1894
Largest link dim during sweep 4/10 was 4
Largest truncation error: 0
Energy after sweep 4/10 is -1.561552812809
Sweep 4/10 CPU time = 0.000936s (Wall time = 0.000971s)

vN Entropy at center bond b=1 = 1.356587239124
Eigs at center bond b=1: 0.3106 0.3106 0.1894 0.1894
Largest link dim during sweep 5/10 was 4
Largest truncation error: 0
Energy after sweep 5/10 is -1.561552812809
Sweep 5/10 CPU time = 0.000794s (Wall time = 0.000807s)

vN Entropy at center bond b=1 = 1.356587239124
Eigs at center bond b=1: 0.3106 0.3106 0.1894 0.1894
Largest link dim during sweep 6/10 was 4
Largest truncation error: 0
Energy after sweep 6/10 is -1.561552812809
Sweep 6/10 CPU time = 0.000785s (Wall time = 0.000796s)

vN Entropy at center bond b=1 = 1.356587239124
Eigs at center bond b=1: 0.3106 0.3106 0.1894 0.1894
Largest link dim during sweep 7/10 was 4
Largest truncation error: 0
Energy after sweep 7/10 is -1.561552812809
Sweep 7/10 CPU time = 0.000785s (Wall time = 0.000798s)

vN Entropy at center bond b=1 = 1.356587239124
Eigs at center bond b=1: 0.3106 0.3106 0.1894 0.1894
Largest link dim during sweep 8/10 was 4
Largest truncation error: 0
Energy after sweep 8/10 is -1.561552812809
Sweep 8/10 CPU time = 0.000793s (Wall time = 0.000803s)

vN Entropy at center bond b=1 = 1.356587239124
Eigs at center bond b=1: 0.3106 0.3106 0.1894 0.1894
Largest link dim during sweep 9/10 was 4
Largest truncation error: 0
Energy after sweep 9/10 is -1.561552812809
Sweep 9/10 CPU time = 0.000761s (Wall time = 0.000770s)

vN Entropy at center bond b=1 = 1.356587239124
Eigs at center bond b=1: 0.3106 0.3106 0.1894 0.1894
Largest link dim during sweep 10/10 was 4
Largest truncation error: 0
Energy after sweep 10/10 is -1.561552812809
Sweep 10/10 CPU time = 0.000761s (Wall time = 0.000771s)
```


It seems that the energy has converged, but this is not the correct magnetically saturated state. I then changed the input file to use a larger bond dimension and a huge number of sweeps:

```
N = 2
Npart = 2

nsweeps = 4000
sweeps
{
maxdim  mindim  cutoff  niter  noise
50      10      1E-12   2      1E-7
100     20      1E-12   2      1E-8
200     20      1E-12   2      1E-10
400     20      1E-12   2      1E-10
800     20      1E-12   2      1E-10
1600    20      1E-12   2      1E-10
3200    20      1E-12   2      1E-2
6400    20      1E-12   2      1E-2
12800   20      1E-12   2      1E-1
}
```


Now it gives the correct energy:

```
vN Entropy at center bond b=1 = 0.518448864409
Eigs at center bond b=1: 0.8534 0.0733 0.0733
Largest link dim during sweep 3999/4000 was 4
Largest truncation error: 0
Energy after sweep 3999/4000 is -1.600000000000
Sweep 3999/4000 CPU time = 0.000662s (Wall time = 0.000671s)

vN Entropy at center bond b=1 = 0.518448864409
Eigs at center bond b=1: 0.8534 0.0733 0.0733
Largest link dim during sweep 4000/4000 was 4
Largest truncation error: 0
Energy after sweep 4000/4000 is -1.600000000000
Sweep 4000/4000 CPU time = 0.000766s (Wall time = 0.000776s)
```


I have also tried the alternate initial state (commented out above), but it does not improve the convergence. My questions are:

1. Is this problem caused by the very small lattice size?
2. Is there any way to fix it by tweaking the input-file parameters?
3. I will actually be working with a larger number of sites, of the order of 10. Is there any chance that the correct transition value of the parameter "mu" may fail to show up there as well?
commented by (52.6k points)
Thanks for the question. Convergence questions can be hard to answer because convergence depends on many things and is model and parameter dependent. But I would say no, you shouldn't encounter a problem that gets worse with more sites, and it could even get better (i.e. this may be an unusual problem specific to two sites).

My main guess is that you did not use the 'noise' DMRG parameter for your first test, or did you?

Overall, ITensor has been used quite a lot to study the Hubbard model with excellent results, so I would not worry as long as you do some tests of your setup i.e. compare the U=0 case with the exact result for the number of sweeps and accuracy parameters you choose etc. But it can be a tough model to converge sometimes, again depending on details.

Miles
commented by (52.6k points)
Thinking about this some more, I do think that N=2 sites is a funny limit for DMRG, and might have some pathologies. I think N=4 sites could be more representative of how larger calculations might go.
commented by (220 points)
Thank you very much Miles for your reply.

I checked now by keeping the maximum bond dimension at 400 and increasing the noise. It does converge, but only after about 3000 sweeps. If I take the 4-site half-filled Hubbard model, the situation improves a bit, i.e. convergence is reached in a smaller number of sweeps (still of the order of thousands).

My worry is that with a small number of sweeps the energy appears converged, since it does not change even after 100 sweeps, and this can be deceptive. However, this behaviour is dominant in the region where "mu" is very close to the transition point; well above the transition point, the magnetically saturated energy is obtained more quickly.

I will do more checks. Hope this problem will not exist for sufficiently large lattice.
commented by (52.6k points)
I see. But again one question and some thoughts:

1. did you set the "noise" parameter for the sweeps you did?

2. the Hubbard model has been studied quite a lot with ITensor DMRG, so there's no fundamental reason to think it would work any worse than any other DMRG code for this system. So if there is a real problem here, it would be a more fundamental DMRG problem more than anything.

3. the usual 'culprits' could be responsible for what you are seeing, so apart from using the noise parameter also the choice of initial state or just difficult points in your parameter space (e.g. say if one happens to tune very close to a first-order transition where two very different ground states have almost the same energy).

Other ideas you could try if some of the above don't work would be using imaginary time evolution to prepare an initial state to give to DMRG, or using ground states computed for other values of mu to initialize your DMRG.
commented by (220 points)
To answer your thoughts,

1. I had set the noise in the sweeps to a somewhat higher value in the input file.

2. Yes, I completely understand that the above discrepancy may be due to some fundamental limitation of DMRG rather than of ITensor, especially since N=2 is the limiting case of a 'lattice'. I have used ITensor for quite some time and have been satisfied with it, but because I do not know very much about DMRG or matrix product states, when something is wrong in my parameters or code I am usually unable to identify it myself, and hence post here.

3. I have tried different initial states and that does not improve the convergence for N=2. But for higher number of sites, the convergence is good and there is no anomaly.
commented by (52.6k points)
Great. So I hadn't been totally clear on that point, i.e. whether it was working better on larger lattices. I think there could be specific reasons why N=2 is a particularly difficult case for the DMRG algorithm to work well. In some sense, in the N=2 limit DMRG turns into an exact diagonalization (ED) code, but one written in an odd way that you would never choose for an ED code.

So I think the best test of DMRG is on a system of, say, 6 sites or more to figure out whether it will work well on a larger number of sites.
commented by (220 points)
Thanks for your reply. I did not specifically check N=6, but up to N=4 there were anomalies, and above N=10 there are none. Probably I should abandon using DMRG for small lattices (<10 sites) and try exact diagonalization instead. I have already started writing a Lanczos algorithm.
commented by (52.6k points)
Yes for very small systems you might be able to do ED without anything very sophisticated. For example, you could just form H as a dense matrix and do a full Hermitian diagonalization. But if you want to do a somewhat larger calculation, may I recommend KrylovKit.jl. It’s very easy to use: all you have to do is to define the multiplication of your vector with your matrix and maybe one or two other small routines, and you can write that part of the code any way you want.
commented by (220 points)
Thank you very much. I will try it.

## 1 Answer

+1 vote
answered by (52.6k points)

(Please see discussion above.)