# Questions about checks for ITensors.jl pull request

Slightly less than a week ago, I submitted a pull request adding generalized n-site DMRG (as opposed to the default 2-site DMRG). You can find the pull request here: https://github.com/ITensor/ITensors.jl/pull/434

As an example of the inconsistencies I'm seeing: in the "Tests / Julia 1.4 - macOS-latest - x64 (pull_request)" test set, I don't see any failed tests for broadcast.jl. However, when I run the tests locally, broadcast.jl has 9 failed tests, even though I didn't change any of the methods those tests use (all of which are extremely basic functions, I should add).

In addition, I see that there are 2 broken tests for broadcast.jl (which apparently are expected, according to comments in the code) and 2 broken tests for empty.jl. The broken tests are never given any explanation, even when you run the test files locally, so what do they mean?

Another thing I saw was an extremely basic failure. In the mps.jl test file, I got an error from the test on line 16, `@test str[1] == "MPS"`. I don't see why an error like this should happen, because again, this is a very basic thing and I haven't changed anything related to it. The error message is shown below.

```
MPS Basics: Test Failed at c:\Users\user\ITensors.jl\test\mps.jl:16
  Expression: str[1] == "MPS"
   Evaluated: "ITensors.MPS" == "MPS"
```


Overall, my main questions are: (1) What do the "broken" tests mean? (2) Why do some of the tests seem to fail for functions I never modified (including some extremely basic ones, like object types and basic operations on tensors)?

I did find some mistakes in my code that I am currently working to fix (I should have these done by Monday or Tuesday), but they are related to the intricacies of DMRG, and none of them are related to the test failures mentioned above. (In particular, I found one big typo in my code, and even after fixing it a few hours ago, my code still breaks on the DMRG method that projects out certain states in order to find excited states.) Any help on this would be greatly appreciated.

commented by (7.3k points)
If you are seeing some tests pass on GitHub but fail for you locally, it is possible you don't have the latest version of an ITensors dependency, most likely NDTensors. Have you tried upgrading all of your packages with `julia> ] up`? We would have to know more about your local setup to understand why some tests might not be passing for you.
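For reference, a minimal sketch of that upgrade workflow using the Pkg API (the function form of the REPL's `] up`):

```julia
using Pkg

# `Pkg.update()` is the function equivalent of `] up` in the REPL; it
# upgrades every package in the active environment. (Uncomment to run;
# it touches the network and rewrites your Manifest.toml.)
# Pkg.update()

# List the versions currently installed in the active environment,
# including NDTensors, to confirm what you are actually testing against:
Pkg.status()
```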

As Miles said, broken tests are just ones that we would like to pass, but aren't right now because of a bug we plan to fix. You can just ignore those.

It looks like the error you are seeing in mps.jl is from a test of printing an MPS. In general, tests like that are very fickle, so we should remove them anyway. Just to be clear, are those failing in the GitHub tests, or when you run tests on your own computer?
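As a generic illustration (not from the ITensors test suite) of why tests that parse printed output are fragile, and what a more robust test looks like:

```julia
using Test

v = [1, 2, 3]

# Fragile: parse the display output and compare strings. The exact text
# depends on the Julia version and on how the type name is qualified in
# the current scope (compare "MPS" vs. "ITensors.MPS" above).
str = sprint(show, MIME"text/plain"(), v)
first_line = split(str, '\n')[1]
@test occursin("Int64", first_line)

# Robust: test the property directly instead of its printed form.
@test v isa Vector{Int}
```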
commented by (650 points)
Hi Matt, there's a mix. Some are failing in both cases, some are only failing on GitHub, and some are only failing when I run tests on my own computer. For example, the `@test str[1] == "MPS"` test I mentioned in my original post only fails when I run tests on my own computer. I will look into this more; thank you for your help! And yes, for future questions about my specific pull request, I'll post them on the pull request's GitHub page itself.

+1 vote

Hi Sujay, thanks for the question.

(1) A broken test is one that is known to fail and is marked as broken, meaning "don't mark the whole test run as failing just because this test does, and we should fix this test later". This could be because the test is out of date or because it was written incorrectly. For more information, you can read about `@test_broken` here: https://docs.julialang.org/en/v1/stdlib/Test/#Broken-Tests-1
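As a minimal standalone sketch of how this looks in practice (not taken from the ITensors test suite):

```julia
using Test

ts = @testset "broken-test example" begin
    @test 1 + 1 == 2        # an ordinary passing test
    # Known failure: recorded as "Broken" rather than "Fail", so the
    # test run as a whole still succeeds.
    @test_broken 1 + 1 == 3
    # (If a @test_broken expression ever starts passing, it is reported
    # as an unexpected pass, reminding you to remove the marker.)
end
```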

(2) I'm not sure why that particular test was failing. As you know, the tests get run automatically with each pull request on 3 different operating systems, and have recently been working there. So it's hard to comment on this failure without detailed steps to reproduce. If you think it's a bug in the tests or the library, could you please file an issue with steps to reproduce? We'd definitely like to fix this if it's a bug.