I’m running the current master branch on ARCHER2.
Test generation for tests test116 and test117 depends on the script ${QUANTICS_DIR}/bin/python/oct/optcntrl.py, which is explicitly Python 2, and Python 2 is not available on ARCHER2. So this is a non-starter.
(Hi Kevin, just FYI: as this Discourse instance is primarily meant for public discussions, I’ve turned your message into a public topic in the Quantics development category, which seems like a better place for it.) Thanks for raising the topic!
These tests are for calculations using the optimal control code, which is quite old, and I wouldn’t know where to start in upgrading it. It not only uses Python 2 but also a FIFO pipe scheme to handle multiple wavefunctions (it is effectively a forward-backward optimisation scheme using multiple propagations), which may also not work on clusters. Probably best to park this for the time being.
With a couple of minor corrections (converted to issues on GitLab) I can run and complete all the tests bar 116 and 117. At least at -O0 in serial…
I suppose my next question would be: how can one tell if the results are “correct” or at least acceptable?
If you have the results from a previous test run that are known to be correct, you can run the “elk_test” script to compare the new and old results. When developing, the idea is that you run elk_test_gen once at the start and then, after making changes, run elk_test to compare with the original and check everything still works. At the moment, though, there is no “official” reference set to start from, so developers each have their own. I haven’t worked out the best way to provide the set: it is quite big (500MB) and of course changes as new tests are added. It will also be platform-dependent, as most files are binary.
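To make the generate-once, compare-often idea concrete, here is a minimal sketch of a result-comparison step. This does not reproduce the actual logic of elk_test (which I haven’t inspected); it just illustrates the concept of walking a reference result directory and flagging files that are missing or differ in a new run:

```python
import filecmp
from pathlib import Path

def compare_runs(ref_dir, new_dir):
    """Report files that differ between a reference run and a new run.

    Illustrative sketch only: the real elk_test script may compare
    selected files, use tolerances, etc. Here every regular file in
    ref_dir is checked byte-for-byte against its counterpart in new_dir.
    """
    ref = Path(ref_dir)
    new = Path(new_dir)
    problems = []
    for ref_file in sorted(ref.rglob("*")):
        if not ref_file.is_file():
            continue
        rel = ref_file.relative_to(ref)
        new_file = new / rel
        if not new_file.is_file():
            problems.append((str(rel), "missing in new run"))
        elif not filecmp.cmp(ref_file, new_file, shallow=False):
            problems.append((str(rel), "contents differ"))
    return problems
```

A byte-for-byte comparison like this is exactly why the reference set ends up platform-dependent: binary outputs rarely match across compilers or machines, which motivates comparing only ASCII logs, with tolerances, instead.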
I think a reasonable middle ground would be to record a “blessed copy” of the ASCII log files in the repository. I would not expect to have all the binary outputs. This would allow a new user/installer to have confidence that things are at least broadly correct.
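Since blessed ASCII logs would still differ in the last digits across platforms, a plain diff would produce false failures. A sketch of a tolerance-aware comparator (the tolerances and the numeric-token regex are illustrative choices, not values from the Quantics test suite):

```python
import math
import re

# Matches integers and floats, including Fortran-style D exponents (1.0D-03).
_NUM = re.compile(r"[-+]?\d+\.?\d*(?:[eEdD][-+]?\d+)?")

def _tokens(line):
    """Split a line into ("text", str) and ("num", float) tokens."""
    pos, out = 0, []
    for m in _NUM.finditer(line):
        if m.start() > pos:
            out.append(("text", line[pos:m.start()].strip()))
        value = float(m.group().replace("D", "E").replace("d", "e"))
        out.append(("num", value))
        pos = m.end()
    if pos < len(line):
        out.append(("text", line[pos:].strip()))
    return out

def logs_agree(blessed, new, rel_tol=1e-6, abs_tol=1e-10):
    """Compare two ASCII logs: numbers within tolerance, text exactly.

    Both arguments are the full file contents as strings. Tolerances
    here are arbitrary defaults; in practice they would need tuning
    per quantity (energies vs. populations, say).
    """
    b_lines, n_lines = blessed.splitlines(), new.splitlines()
    if len(b_lines) != len(n_lines):
        return False
    for b, n in zip(b_lines, n_lines):
        tb, tn = _tokens(b), _tokens(n)
        if len(tb) != len(tn):
            return False
        for (kind_b, val_b), (kind_n, val_n) in zip(tb, tn):
            if kind_b != kind_n:
                return False
            if kind_b == "num":
                if not math.isclose(val_b, val_n,
                                    rel_tol=rel_tol, abs_tol=abs_tol):
                    return False
            elif val_b != val_n:
                return False
    return True
```

Something along these lines would let the repository carry one blessed set of logs that still passes on a fresh ARCHER2 build despite harmless floating-point jitter.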