This is a good question. For the Julia version of ITensor, we are planning to implement
removeqns at the ITensor level soon, but haven't just yet.
But if what you want for now is to map block sparse ITensors to dense ITensors, you can call the function
dense on an ITensor and it will do what you want.
The plan for
removeqns is that it will keep other sparsity (such as diagonal sparsity) while removing QN information. This is in contrast to the
dense function, which removes all sparsity, whether or not it is associated with QNs.
So in short, please try
dense on your ITensors. I'm currently implementing a version that you can call directly on an MPS, but for now you'll need to call it on each tensor yourself.
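
For example, calling dense on each tensor of an MPS by hand might look like the following. This is just a sketch: the site set, the initial product state, and the variable names are made up for illustration, and it assumes you can rebuild an MPS from a vector of ITensors using the MPS constructor.

```julia
using ITensors

# Hypothetical setup: an MPS with QN-conserving site indices
N = 10
sites = siteinds("S=1/2", N; conserve_qns=true)
psi = randomMPS(sites, n -> isodd(n) ? "Up" : "Dn")

# Map each block sparse tensor to a dense tensor, then
# rebuild an MPS from the resulting vector of ITensors
psi_dense = MPS([dense(psi[j]) for j in 1:length(psi)])
```

The resulting psi_dense should have the same elements as psi, just stored as dense tensors without QN block structure.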