Hi Jun,
Thanks for the interesting question. I just tried the C++ version of ITensor on a MacBook Air with the M1 processor, and it was a smooth experience. I cloned ITensor from GitHub and used the provided options.mk.sample file as the template for options.mk. The only change I needed to make was to use clang as the C++ compiler rather than g++; with g++ I got an error about the -fconcepts flag not being recognized. I did not try to install MKL and build ITensor against it; that would be interesting to try, particularly to compare its speed against Apple's default "vecLib" framework (the BLAS/LAPACK that ships with macOS).
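In case it helps, the relevant part of my options.mk ended up looking roughly like the sketch below. This is from memory, so treat the exact flags as approximate; the variable names are the ones from the options.mk.sample template:

    # Compiler: use Apple clang instead of g++ (g++ errored on the -fconcepts flag)
    CCCOM=clang++ -std=c++17 -fPIC

    # Use the BLAS/LAPACK that ships with macOS (the Accelerate / vecLib framework)
    PLATFORM=macos
    BLAS_LAPACK_LIBFLAGS=-framework Accelerate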
The performance was very good. The speed I got running the exthubbard sample code was comparable to a workstation at our office that has a 24-core, 3.4 GHz Xeon chip, whereas from what I have read the M1 has eight cores running at up to 3.2 GHz. It was more than two times faster than my Intel MacBook Pro, which has a 2.9 GHz chip!
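For reference, the steps I used to build the library and run that sample were roughly the following. Again this is a sketch from memory, so the sample's input file name and build target may differ slightly from what I show here:

    git clone https://github.com/ITensor/ITensor itensor
    cd itensor
    cp options.mk.sample options.mk    # then edit the compiler line as described above
    make -j
    cd sample
    make exthubbard
    ./exthubbard inputfile_exthubbard  # reads the DMRG parameters from the input file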
I also installed Julia and the Julia version of ITensor, ITensors.jl, without any issue. That surprised me because I had read that Julia does not yet run natively on the M1, so perhaps it is running through the Rosetta compatibility layer? I was also able to install the MKL Julia package, which we have found significantly speeds up ITensors.jl on Intel chips. It would be interesting to compare Julia with and without MKL on the M1 chip.
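If you want to try the Julia side yourself, a minimal sketch of what I did is below. The sanity check at the end is just my own quick test, and exactly how MKL.jl hooks into the BLAS backend depends on your Julia version, so treat that part as an assumption rather than a recipe:

    using Pkg
    Pkg.add("ITensors")   # the Julia version of ITensor
    Pkg.add("MKL")        # the MKL package, which swaps in Intel's BLAS on Intel machines

    using ITensors
    using LinearAlgebra

    # Quick sanity check that ITensors.jl loads and runs
    i = Index(2, "i")
    j = Index(3, "j")
    A = randomITensor(i, j)
    @show norm(A)

    # Check which BLAS is actually in use before comparing timings
    # (BLAS.vendor() is the older API; newer Julia versions use BLAS.get_config())
    @show BLAS.vendor()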
Best regards,
Miles