Not that I am aware of, though I don't often look at the sparsity directly, so I'm not sure. However, the sparsity depends strongly on the model you are looking at (in particular, what kinds of symmetries it has) and on the type of algorithm you are using.

For example, if you are using DMRG to find the ground state of the Hubbard model, you can choose to conserve only the fermionic parity, in which case only 50% of the blocks can be nonzero, or you can conserve particle number and spin, which forces a larger fraction of the blocks to zero (and therefore gives a lower runtime). Additionally, if you use DMRG to study the 2D Hubbard model on a cylinder, you can also conserve momentum around the cylinder, which makes the tensors sparser still.
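To see where the 50% figure comes from: an operator that conserves a quantum number can only have nonzero matrix elements between states carrying the same quantum number, so the fill fraction is the sum of squared sector sizes over the squared total dimension. Here is a small, self-contained sketch (plain Python, not ITensor code) that counts this for parity versus particle-number conservation on a chain of spinless fermion sites; the function and variable names are my own invention for illustration:

```python
from collections import Counter
from itertools import product

def fill_fraction(n_sites, qn):
    """Fraction of matrix elements allowed to be nonzero for an
    operator that conserves the quantum number `qn` (a function of
    a basis state). Elements between states with different qn vanish."""
    states = list(product([0, 1], repeat=n_sites))  # occupation bitstrings
    sector_sizes = Counter(qn(s) for s in states)   # dim of each symmetry sector
    dim = len(states)
    # Only the diagonal symmetry blocks can be nonzero:
    return sum(d * d for d in sector_sizes.values()) / dim**2

parity = lambda s: sum(s) % 2   # fermion parity: two equal-size sectors
number = lambda s: sum(s)       # total particle number: many smaller sectors

print(fill_fraction(6, parity))  # 0.5 -> half the entries can be nonzero
print(fill_fraction(6, number))  # ~0.23 -> conserving more forces more zeros
```

With parity there are two sectors of equal size, giving 2 * (D/2)^2 / D^2 = 1/2; finer quantum numbers split the space into more, smaller sectors, so the allowed fraction shrinks.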

Using non-abelian symmetries like SU(2), which is not currently available in ITensor, would make the tensors even sparser. Additionally, higher-dimensional tensor network algorithms (like PEPS) may have sparser tensors than MPS algorithms like DMRG (but I have not compared them, so that is just speculation).

Another factor in the sparsity measure is that ITensor currently only exploits block sparsity. Hamiltonians are quite sparse beyond just block sparsity (i.e. the blocks themselves are likely sparse), so it would be nice to take advantage of that as well, but we have not tried that yet.
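The distinction between those two levels of sparsity can be made concrete with a toy block-sparse container (again plain Python, not ITensor's actual storage format; the names are hypothetical). Block sparsity counts how many blocks are stored at all, while element sparsity also counts the zeros sitting inside the stored blocks:

```python
def sparsities(blocks, block_dim, n_blocks):
    """Return (block_sparsity, element_sparsity) for a toy block-sparse
    matrix: `blocks` maps (row_block, col_block) -> dense list-of-lists,
    each block being block_dim x block_dim, on an n_blocks x n_blocks grid."""
    block_sparsity = 1 - len(blocks) / (n_blocks * n_blocks)
    nonzeros = sum(1 for b in blocks.values() for row in b for x in row if x != 0)
    total_elements = (n_blocks * block_dim) ** 2
    element_sparsity = 1 - nonzeros / total_elements
    return block_sparsity, element_sparsity

# Two diagonal blocks are stored, but each is mostly zero inside:
blocks = {
    (0, 0): [[1.0, 0.0], [0.0, 1.0]],
    (1, 1): [[0.0, 2.0], [0.0, 0.0]],
}
block_sp, elem_sp = sparsities(blocks, block_dim=2, n_blocks=2)
print(block_sp, elem_sp)  # 0.5 0.8125
```

A purely block-sparse format pays for all 8 elements of the stored blocks even though only 3 are nonzero; exploiting the within-block zeros as well is exactly the extra saving described above.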