Thanks for the question. Unfortunately, no, we don't have any higher-level parallelism publicly available in the library right now. To some degree this is because tensor network algorithms aren't as readily parallelizable as, say, Monte Carlo type algorithms.
But here are some past and planned parallel aspects of ITensor we could discuss further:
(1) I wrote a paper with Steve White about how to parallelize DMRG in real space: https://arxiv.org/abs/1301.3494. We implemented this idea in ITensor, and I still have the code, although it needs some cleaning up before it can be released publicly. But if you and your colleagues have a use case in mind, I can send you the code I have and consult with you about adapting it for your project(s). I'm hoping to make a public release of the parallel DMRG code sometime this year.
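To give a rough feel for the real-space idea (this is only a toy sketch, not the actual parallel DMRG code; all names are illustrative): the chain is split into contiguous blocks of sites, one worker per block, and each worker sweeps its own block independently before exchanging boundary data with its neighbors. Here the "local sweep" is replaced by a trivial sum over each block's site data:

```cpp
#include <thread>
#include <vector>
#include <numeric>

// Toy sketch of the real-space partitioning idea: split an N-site chain
// into contiguous blocks, one worker per block. In real parallel DMRG each
// worker would optimize the MPS tensors in its block and then exchange
// boundary tensors with neighboring workers; here each worker just sums
// its block's site data to show the threading structure.
std::vector<double> parallel_block_sweep(const std::vector<double>& sites,
                                         int nworkers)
{
    int n = static_cast<int>(sites.size());
    std::vector<double> block_results(nworkers, 0.0);
    std::vector<std::thread> workers;
    for(int w = 0; w < nworkers; ++w)
    {
        workers.emplace_back([&, w]{
            int begin = w * n / nworkers;       // this worker's block of sites
            int end   = (w + 1) * n / nworkers;
            block_results[w] = std::accumulate(sites.begin() + begin,
                                               sites.begin() + end, 0.0);
        });
    }
    for(auto& t : workers) t.join();
    return block_results;
}
```

The point is that the workers only need to communicate at block boundaries, which is what makes the real-space approach attractive for distributed-memory machines.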
(2) One thing on my short list of things to parallelize within ITensor is the contraction of two IQTensors, which are block-sparse tensors. That algorithm is already parallelizable in principle, but it requires some research into how to implement it properly, e.g. how to set the number of threads used by different BLAS implementations and on different computers.
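Here is a minimal sketch of why block-sparse contraction parallelizes naturally (this is not ITensor's actual implementation; the `Block` representation and sector labeling are simplifications I'm assuming for illustration). Blocks belonging to matching sectors contract independently, so each block pair can go to its own thread. In practice you would also pin the BLAS library to one thread per worker (e.g. `OMP_NUM_THREADS=1` or `openblas_set_num_threads(1)`) to avoid oversubscribing cores:

```cpp
#include <map>
#include <thread>
#include <vector>

// A block-sparse matrix stored as a map from a quantum-number sector label
// to a dense block (row-major storage).
struct Block { int rows = 0, cols = 0; std::vector<double> data; };

// Plain dense matrix multiply of one block pair (a real code would call BLAS).
Block matmul(const Block& A, const Block& B)
{
    Block C{A.rows, B.cols, std::vector<double>(A.rows * B.cols, 0.0)};
    for(int i = 0; i < A.rows; ++i)
        for(int k = 0; k < A.cols; ++k)
            for(int j = 0; j < B.cols; ++j)
                C.data[i * C.cols + j] += A.data[i * A.cols + k] * B.data[k * B.cols + j];
    return C;
}

// Contract two block-sparse matrices: only matching sectors produce nonzero
// blocks, and each such block contraction runs on its own thread.
std::map<int, Block> contract_blocks(const std::map<int, Block>& A,
                                     const std::map<int, Block>& B)
{
    std::map<int, Block> C;
    std::vector<std::thread> workers;
    for(const auto& [sector, a] : A)
    {
        if(B.find(sector) == B.end()) continue; // no matching sector: zero block
        Block* out = &C[sector];  // reserve the slot before threading; map
                                  // element addresses are stable
        workers.emplace_back([&A, &B, sector, out]{
            *out = matmul(A.at(sector), B.at(sector));
        });
    }
    for(auto& t : workers) t.join();
    return C;
}
```

A production version would use a thread pool rather than one thread per block, since real IQTensors can have many small blocks.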
(3) I recently parallelized a machine learning code based on ITensor for a paper I wrote. Basically there is a gradient descent step that involves looping over thousands of images, so I multithreaded this on top of ITensor. But one could envision using MPI very effectively here too, storing batches of images on different computers and synchronizing the MPS tensors (which are pretty small individually) across the different machines. I'm happy to share the multithreaded version of this code with you.
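The multithreading pattern described above can be sketched as follows (a toy version, not the paper's actual code: here each image's "gradient contribution" is just the image itself, where a real code would evaluate the model). Because the gradient is a sum of per-image terms, each thread accumulates a partial gradient over its slice of the images, and the partials are summed at the end:

```cpp
#include <thread>
#include <vector>

// Data-parallel gradient accumulation: thread t handles images t, t+nthreads,
// t+2*nthreads, ... and builds its own partial gradient, so no locking is
// needed until the final reduction. With MPI one would instead give each rank
// a batch of images and MPI_Allreduce the gradients before updating the
// (individually small) MPS tensors on every rank.
std::vector<double> parallel_gradient(const std::vector<std::vector<double>>& images,
                                      int nthreads)
{
    size_t dim = images.empty() ? 0 : images.front().size();
    std::vector<std::vector<double>> partials(nthreads, std::vector<double>(dim, 0.0));
    std::vector<std::thread> pool;
    for(int t = 0; t < nthreads; ++t)
    {
        pool.emplace_back([&, t]{
            for(size_t i = t; i < images.size(); i += nthreads)
                for(size_t d = 0; d < dim; ++d)
                    partials[t][d] += images[i][d]; // stand-in for a real
                                                    // per-image gradient term
        });
    }
    for(auto& th : pool) th.join();
    // Final reduction of the per-thread partial gradients.
    std::vector<double> grad(dim, 0.0);
    for(const auto& p : partials)
        for(size_t d = 0; d < dim; ++d) grad[d] += p[d];
    return grad;
}
```

The same reduction structure is what makes the MPI version attractive: only the summed gradient (or the updated MPS tensors) needs to cross the network, not the image data.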
Hope some of that piques your interest.