README.rst

1. Check tests perform comparisons of simulation results between different versions of yade, as discussed in [email protected]/msg05784.html and the whole thread. They differ from regression tests in that they simulate more complex situations and combinations of different engines, and usually lack a mathematical proof of the expected result (though nothing forbids one).

2. They compare the values obtained in version N with values obtained in a previous version, or with any other "expected" results. The reference values must be hardcoded in the script itself or in data files provided with the script.

3. Check tests are based on regular yade scripts, so that users can easily commit their own scripts to trunk in order to get some automated testing after commits.

4. Since the check tests history will be mostly based on the standard output generated by "yade --checks", a meaningful check test should include some "print" commands reporting whether something went wrong. If the script itself fails for some reason and can't generate output, the log will contain "scriptName failure".

5. If the script detects differences between obtained and expected data, it should print some useful information about the problem and increase the value of the global variable resultStatus. After this occurs, the automatic test will stop the execution with an error message.
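The comparison pattern of items 4 and 5 can be sketched as below. This is a self-contained illustration, not an actual check test: the reference value, tolerance, and the obtained value are made up here, whereas in a real script the obtained value would be measured from the running simulation.

```python
# Sketch of the check-test pattern from items 4 and 5 (illustrative values).
resultStatus = 0         # global status; non-zero means the check failed

referenceValue = 1.250   # hardcoded expected result (see item 2)
tolerance = 1e-3

obtainedValue = 1.256    # in a real check test: a value measured in the simulation

if abs(obtainedValue - referenceValue) > tolerance:
    # Tell the log exactly what went wrong (item 4)...
    print("checkExample: value %g differs from reference %g" % (obtainedValue, referenceValue))
    # ...and flag the failure for the automatic runner (item 5).
    resultStatus += 1
```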

6. An example check test can be found in the checks folder (see item 7). It shows results comparison, output, and how to define the path to data files using "checksPath".
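Reference data files are located relative to the check scripts via "checksPath". A minimal sketch of that pattern follows; note that the file name and its plain-text format are hypothetical, and that in a real check test checksPath is provided by the check runner, whereas here it is emulated with a temporary directory so the snippet runs standalone:

```python
import os
import tempfile

# In a real check test, checksPath points to the folder holding the check
# scripts and their data files; here we emulate it with a temporary directory.
checksPath = tempfile.mkdtemp()

# Create a hypothetical reference-data file (one expected value per line).
with open(os.path.join(checksPath, "checkExample.ref"), "w") as f:
    f.write("1.25\n2.50\n")

# The check script then reads the expected values back via checksPath.
with open(os.path.join(checksPath, "checkExample.ref")) as f:
    referenceValues = [float(line) for line in f]
```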

7. Users are encouraged to add their own scripts to the scripts/test/checks/ folder. Discussion of the design of specific check tests in user questions is welcome.

8. A check test should never need more than a few seconds to run. If your typical script needs more, try to reduce the number of elements or the number of steps.

9. Failures are reported via an exception, using the python command: raise YadeCheckError(stringMessage)