+1 vote
asked by (140 points)

Hello,

Does the ITensor C++ version run without any issues on Apple's new M1 processor? Or is there any possible decrease in performance?

Thank you!
Jun

1 Answer

+1 vote
answered by (51.6k points)

Hi Jun,
Thanks for the interesting question. I just tried the C++ version of ITensor on a MacBook Air with the M1 processor, and it was a smooth experience. I cloned ITensor from GitHub and used the provided options.mk.sample file as the template for options.mk. The only change I needed to make was to use clang as the C++ compiler rather than g++; otherwise I got an error about the -fconcepts flag not being recognized. I did not try to install MKL and build ITensor against it; that would be an interesting thing to try, and it would be important to compare its speed against Apple's default "vecLib" framework (the default BLAS provided by macOS).
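For reference, the compiler change described above amounts to editing one line of options.mk. A minimal sketch, assuming the variable names used in the options.mk.sample that ships with ITensor (check your copy, as the exact names and flags may differ between releases):

```make
## options.mk (copied from options.mk.sample)

## Comment out the g++ compiler line; its -fconcepts flag is not
## recognized by Apple's clang-based toolchain on the M1:
#CCCOM=g++ -m64 -std=c++17 -fconcepts -fPIC

## Use clang as the C++ compiler instead:
CCCOM=clang++ -std=c++17 -fPIC

## On macOS the default BLAS/LAPACK backend is Apple's
## Accelerate ("vecLib") framework:
PLATFORM=macos
BLAS_LAPACK_LIBFLAGS=-framework Accelerate
```

After saving options.mk, running `make` in the ITensor source directory should build the library as usual.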

The performance was very good. The speed I got running the exthubbard sample code was comparable to a workstation at our office with a 24-core, 3.4 GHz Xeon chip, whereas from what I have read the M1 has only eight 3.2 GHz cores. It was more than twice as fast as my Intel MacBook Pro, which has a 2.9 GHz chip!

I also installed Julia and the Julia version of ITensor, ITensors.jl, without any issue. That surprised me because I had read that Julia did not yet work on the M1, so perhaps it is running through the Rosetta compatibility layer? I was also able to install the MKL Julia package, which we have found significantly speeds up ITensors.jl on Intel chips. It would be interesting to compare Julia with and without MKL on the M1 chip.
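For anyone wanting to reproduce the Julia setup above, here is a sketch using the registered ITensors and MKL packages. Note the caveat that MKL is an Intel library, so on the M1 it would presumably only take effect when Julia itself runs as an x86-64 binary under Rosetta; whether it helps there is exactly the open question discussed above:

```julia
using Pkg
Pkg.add("ITensors")
Pkg.add("MKL")

# Loading MKL early swaps Julia's BLAS backend from the default
# OpenBLAS over to Intel MKL (where MKL is supported):
using MKL
using ITensors
```

Comparing a DMRG run with and without the `using MKL` line is a simple way to measure the relative BLAS advantage on a given chip.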

Best regards,
Miles

commented by (51.6k points)
I should note that for a 2D DMRG calculation of the Heisenberg model (in Julia) with conserved quantum numbers, the 24-core workstation was faster than the M1. My guess is that this particular calculation makes better use of the many cores, but we would need to do more extensive testing to be sure of the reason.

It could also very well be that the MKL.jl library doesn't provide the same relative advantage over Julia's default OpenBLAS on the M1 as it does on Intel chips.
commented by (140 points)
Hi Miles,

Thanks a lot for the very thorough investigation! It is good to know that it works very well and still has some room to get even better.
I use the C++ version personally, so it looks like the M1 chip will not be a problem for me :)
Thanks again, and hope to see you in the near future!

Best,
Jun