Clarifying comments on methodology

As part of the JOSS review of this package, I went through both the Jupyter notebooks and the class/function documentation included in the package. The documentation is well-written and extensive, the notebooks provide useful guidance on how to use the package, and the JOSS paper itself gives a helpful overview. While not required for the review, I would suggest adding some further detail to the example notebooks rank_conditions.ipynb and stats.ipynb.

While the former notebook does contain two BibTeX entries as references, and the latter some exposition, it is not immediately clear from running the provided code (there are neither code comments nor markdown cells) what the advantages and disadvantages of the methods on display are.

For example, in stats.ipynb's first code cell, the resulting figure shows "smooth" performance along the x-axis for conditional-inference and "jagged" performance from scipy's implementation. We are told that the method implemented in conditional-inference is state-of-the-art, but how that translates into the visualized difference is left unexplained (see the first sketch below). Similarly, in rank_conditions.ipynb, three methods (conventional, conditional, hybrid) are evaluated and visualized side by side, but some exposition clarifying how the methods differ, and why that produces the differing performance, would be helpful (see the second sketch below). Finally, in the code example in the JOSS paper comparing the truncated normal in scipy against that implemented in conditional-inference, the difference between the estimates is clear, but the comparison would be clearer still if the "right answer" were stated outright, or if the issue(s) with scipy's estimates were spelled out.
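
To make the suggestion concrete for stats.ipynb: a short markdown cell plus something along the following lines could explain what "jagged" behavior means numerically. The truncation bounds here are purely illustrative (not taken from the notebook), and whether scipy actually degrades depends on the scipy version; the point is only to show the regime being probed.

```python
import numpy as np
from scipy import stats

# Illustrative only: a standard normal truncated far out in its tail,
# the regime where a naive implementation can lose numerical precision.
# scipy.stats.truncnorm takes its bounds in standard-deviation units.
a, b = 8.0, 10.0
q = np.linspace(0.001, 0.999, 999)

# A numerically stable quantile function yields a smooth, strictly
# increasing curve here; breakdown shows up as flat or jagged runs.
ppf = stats.truncnorm.ppf(q, a, b)
print("strictly increasing:", bool(np.all(np.diff(ppf) > 0)))
print("range:", float(ppf.min()), float(ppf.max()))
```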

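Likewise for rank_conditions.ipynb: if I understand the setting correctly (inference on the best-performing of several estimates), even a small simulation in a markdown-annotated cell would show why the conventional method under-covers and so motivate the conditional and hybrid corrections. Below is a minimal sketch using only numpy/scipy, with hypothetical simulation parameters; the conditional and hybrid intervals themselves would come from the package.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_arms, n_sims, level = 10, 2_000, 0.95
z = stats.norm.ppf(0.5 + level / 2)  # two-sided critical value

# All arms share the same true mean (0), so a conventional interval
# around the best-looking arm ignores the selection step entirely.
covered = 0
for _ in range(n_sims):
    estimates = rng.normal(loc=0.0, scale=1.0, size=n_arms)
    winner = estimates.argmax()
    lo, hi = estimates[winner] - z, estimates[winner] + z
    covered += int(lo <= 0.0 <= hi)

# Coverage lands well below the nominal 95% (the winner's curse);
# the conditional and hybrid methods are designed to restore it.
print(f"conventional coverage: {covered / n_sims:.2f}")
```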