ITensor News
New GPU Backends 🏎️
Oct 23, 2023
Graphics processing units (GPUs) can dramatically increase the performance of linear algebra and tensor calculations. Recently, Karl Pierce and Matt Fishman have revamped ITensor's GPU support, allowing many key operations to run on GPUs from multiple vendors (NVIDIA CUDA, Apple Metal, etc.). See Matt's forum post with more details here.
The new backends take advantage of Julia's "package extensions" feature, so all you have to do to use them is load support for, say, CUDA by adding using CUDA at the top of your code. The ITensor CUDA extension will then be loaded automatically in the background. After that, a few lines of code transfer your initial tensors or tensor networks to the GPU, and the rest happens automatically.
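As a minimal sketch of the pattern (assuming a CUDA-capable GPU with CUDA.jl installed; the index sizes and tensor names below are arbitrary illustrations, not part of any specific ITensor example):

```julia
using ITensors
using CUDA  # loading CUDA.jl activates ITensor's CUDA extension

# Build some ordinary CPU tensors (dimensions chosen arbitrarily)
i, j, k = Index(10), Index(10), Index(10)
A = randomITensor(i, j)
B = randomITensor(j, k)

# Transfer the tensors to the GPU with CUDA.jl's cu function
Agpu = cu(A)
Bgpu = cu(B)

# From here on, operations like contraction run on the GPU automatically
Cgpu = Agpu * Bgpu
```

Only the transfer step mentions the GPU explicitly; subsequent operations dispatch to the GPU backend based on where the tensor data lives.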
Currently, only dense tensor operations are fully supported, and only the CUDA backend provides all of the matrix factorizations needed to run full algorithms such as DMRG. However, operations like tensor contraction already work on the other backends and for block-sparse (quantum number) tensors. Support for more tensor types and backends will continue to improve, and we will keep you posted.
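For example, a dense contraction on the Metal backend follows the same pattern as the CUDA case, using Metal.jl's mtl function in place of cu (a sketch, assuming an Apple silicon machine with Metal.jl installed; note that Metal GPUs only support single precision, so tensor data may be converted to Float32 on transfer):

```julia
using ITensors
using Metal  # loading Metal.jl activates ITensor's Metal extension

i, j, k = Index(8), Index(8), Index(8)
A = randomITensor(i, j)
B = randomITensor(j, k)

# mtl transfers the tensor data to the Apple GPU
Agpu = mtl(A)
Bgpu = mtl(B)

# Dense contraction runs on the GPU
Cgpu = Agpu * Bgpu
```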