Hi, yes I have also found that MPO-based evolution is less well behaved than other approaches (e.g. Trotter gate evolution).
Could you say, though, what exactly isn't working well? Is it exactApplyMPO itself, or the approximation of exp(-iHt) as an MPO? Does "not working well" mean that your results are inaccurate, or that the calculation is too slow? And how much have you explored the effect of choosing smaller or larger time steps? I'm asking in part so we can keep tabs on which parts of ITensor could be improved.
Within the approach of approximating the time-evolution operator exp(-iHt) as an MPO, I'm not sure exactly which scheme you are using, but there may be some additional tricks you haven't tried, such as splitting each time step into two complex sub-steps instead of one purely real or imaginary step, which gives better scaling of the error with the time-step size. (See https://arxiv.org/abs/1407.1832 for details.)
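To make that concrete, here is a rough sketch of how the two-complex-time-step trick could look in ITensor, using AutoMPO, toExpH, and exactApplyMPO. Please treat it as a sketch rather than tested code: the Heisenberg Hamiltonian, the Neel initial state, and the particular step size and truncation parameters are just placeholders for your setup, and the exact call signatures (especially the argument order of exactApplyMPO, or whether you'd prefer fitApplyMPO) can depend on which ITensor version you have.

```cpp
#include <complex>
#include "itensor/all.h"
using namespace itensor;

int main()
    {
    int N = 20;
    auto sites = SpinHalf(N);

    // Example Hamiltonian: Heisenberg chain (stand-in for your model,
    // including any long-range terms)
    auto ampo = AutoMPO(sites);
    for(int j = 1; j < N; ++j)
        {
        ampo += 0.5,"S+",j,"S-",j+1;
        ampo += 0.5,"S-",j,"S+",j+1;
        ampo +=     "Sz",j,"Sz",j+1;
        }

    std::complex<double> const ii(0.,1.);
    double tstep = 0.02;     // real-time step dt (placeholder value)
    auto tau = ii*tstep;     // full complex step, so exp(-tau*H) = exp(-i*H*dt)

    // Split tau into two complex sub-steps (arXiv:1407.1832):
    // taua + taub = tau and taua*taub = tau^2/2, so the product of the two
    // first-order steps agrees with exp(-tau*H) through second order in tau
    auto taua = tau*(1.+ii)/2.;
    auto taub = tau*(1.-ii)/2.;

    auto expHa = toExpH<ITensor>(ampo,taua);
    auto expHb = toExpH<ITensor>(ampo,taub);

    // Placeholder initial state: Neel state
    auto state = InitState(sites);
    for(int j = 1; j <= N; ++j) state.set(j, j%2==1 ? "Up" : "Dn");
    auto psi = MPS(state);

    auto args = Args("Cutoff",1E-9,"Maxm",200);

    int nsteps = 50;
    for(int n = 1; n <= nsteps; ++n)
        {
        // Argument order of exactApplyMPO may differ between ITensor versions;
        // fitApplyMPO is a faster, variational alternative
        psi = exactApplyMPO(expHa,psi,args);
        psi = exactApplyMPO(expHb,psi,args);
        // (optionally re-normalize psi and measure observables here)
        }

    return 0;
    }
```

The key point is that the two sub-steps tau*(1+i)/2 and tau*(1-i)/2 sum to tau while their product is tau^2/2, which cancels the second-order error that a single first-order MPO step would leave behind.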
As far as other methods go, yes: the TDVP method (which is similar to the older "time-step targeting" methods) and, I believe, the Krylov method can both handle long-range interactions, since they can be formulated directly in terms of an MPO for the Hamiltonian (rather than an MPO approximation of the time-evolution operator).
Unfortunately ITensor doesn't currently have a TDVP, time-step targeting, or Krylov implementation, but it does have all of the necessary pieces for you to write your own. I'm hoping someone will contribute such a code; otherwise we will eventually post our own, but we aren't actively working with these methods at the moment.
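In case it helps, here is a very rough sketch of what I mean by the necessary pieces, in the spirit of a global Krylov step: you keep the Hamiltonian as an MPO (so long-range terms are fine), repeatedly apply it to the state, and compute the small projected matrices. Everything here (the Heisenberg example Hamiltonian, the Neel starting state, the truncation parameters, and the choice of exactApplyMPO/fitApplyMPO, overlap, and sum) is a placeholder sketch, not a tested Krylov implementation; please check the calls against your ITensor version. The orthogonalization of the Krylov vectors and the small dense-matrix exponential are only indicated in comments.

```cpp
#include <complex>
#include <vector>
#include "itensor/all.h"
using namespace itensor;

int main()
    {
    int N = 10;
    auto sites = SpinHalf(N);

    // Hamiltonian kept directly as an MPO -- long-range terms can go here too
    auto ampo = AutoMPO(sites);
    for(int j = 1; j < N; ++j)
        {
        ampo += 0.5,"S+",j,"S-",j+1;
        ampo += 0.5,"S-",j,"S+",j+1;
        ampo +=     "Sz",j,"Sz",j+1;
        }
    auto H = MPO(ampo);

    // Placeholder initial state: Neel state
    auto state = InitState(sites);
    for(int j = 1; j <= N; ++j) state.set(j, j%2==1 ? "Up" : "Dn");
    auto psi = MPS(state);

    auto args = Args("Cutoff",1E-10,"Maxm",200);

    // Build a small Krylov space {|psi>, H|psi>, H^2|psi>, ...}
    // (a real implementation would orthonormalize these as it goes)
    int nkrylov = 4;
    auto V = std::vector<MPS>(1,psi);
    for(int k = 1; k < nkrylov; ++k)
        {
        V.push_back(exactApplyMPO(H,V.back(),args)); // or fitApplyMPO
        }

    // Projected Hamiltonian and overlap matrices in the Krylov basis
    for(int i = 0; i < nkrylov; ++i)
    for(int j = 0; j < nkrylov; ++j)
        {
        auto Hij = overlap(V[i],H,V[j]); // use overlapC once the state is complex
        auto Sij = overlap(V[i],V[j]);
        printfln("i=%d j=%d Hij=%.8f Sij=%.8f",i,j,Hij,Sij);
        }

    // Remaining steps (not shown): exponentiate the small projected matrix to
    // get coefficients c[k] approximating exp(-i*H*dt)|psi>, then recombine
    // the Krylov vectors, e.g. with the MPS sum(...) function:
    //   auto phi = V[0]; phi *= c[0];
    //   for(int k = 1; k < nkrylov; ++k) { auto term = V[k]; term *= c[k];
    //                                      phi = sum(phi,term,args); }
    return 0;
    }
```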
Best regards,
Miles