
Draft: Direct orthogonal projection of the mixing matrix

Nemo Fournier requested to merge householder into dev

What does the code in the MR do?

This MR exposes an option to bypass the Householder orthonormalization currently used in the models' Attributes to compute an orthonormal basis. Instead, it allows directly learning a mixing matrix that is then projected onto the orthogonal complement of Span(v0). See #66 for more theoretical and technical details.
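To make the projection idea concrete, here is a minimal NumPy sketch (not Leaspy's actual internals; `project_orthogonal` is a hypothetical helper name). It removes the component of each column of a freely sampled mixing matrix along v0, so the result lies in the orthogonal complement of Span(v0):

```python
import numpy as np

def project_orthogonal(A, v0):
    """Project each column of the candidate mixing matrix A onto
    the orthogonal complement of Span(v0)."""
    u = v0 / np.linalg.norm(v0)       # unit vector spanning the direction to remove
    return A - np.outer(u, u @ A)     # A - u u^T A

rng = np.random.default_rng(0)
v0 = rng.standard_normal(4)
A = rng.standard_normal((4, 2))       # mixing matrix sampled in the whole space
A_perp = project_orthogonal(A, v0)
print(np.allclose(v0 @ A_perp, 0.0))  # True: every column is orthogonal to v0
```

This is the standard rank-one orthogonal projector I - u u^T applied columnwise; the actual MR presumably does the equivalent in PyTorch on the model's tensors.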

Simply put, this MR allows to instantiate a Leaspy model as follows:

leaspy_logistic = Leaspy('logistic', source_dimension=2, use_householder=False)

Everything else is transparent: when .fit is called, the mixing matrix is either sampled directly in the whole space and then projected (if use_householder is set to False), or learned as a linear combination of an orthonormal basis weighted by the betas coefficients (if set to True).
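For contrast, a minimal NumPy sketch of the existing Householder path (again not Leaspy's actual implementation; `householder_basis` and the `betas` values are illustrative). The Householder reflection H mapping e1 onto v0/||v0|| is orthogonal, so its remaining n-1 columns form an orthonormal basis of the hyperplane orthogonal to v0, and any linear combination of them via betas yields a mixing matrix orthogonal to v0 by construction:

```python
import numpy as np

def householder_basis(v0):
    """Orthonormal basis of the hyperplane orthogonal to v0, read off
    the Householder reflection H that maps e1 onto v0 / ||v0||."""
    n = v0.shape[0]
    u = v0 / np.linalg.norm(v0)
    e1 = np.zeros(n)
    e1[0] = 1.0
    w = u - e1
    if np.linalg.norm(w) < 1e-12:            # v0 already aligned with e1
        return np.eye(n)[:, 1:]
    w /= np.linalg.norm(w)
    H = np.eye(n) - 2.0 * np.outer(w, w)     # orthogonal, with H e1 = u
    return H[:, 1:]                          # remaining n-1 columns span v0-perp

v0 = np.array([1.0, 2.0, 3.0])
B = householder_basis(v0)                    # shape (3, 2), orthonormal columns
betas = np.array([[0.5, -1.0],
                  [2.0, 0.3]])               # (n-1, source_dimension), illustrative
mixing = B @ betas                           # columns orthogonal to v0 by construction
```

The trade-off discussed in #66 is between this constrained parametrization (betas in a smaller space, orthogonality guaranteed) and the direct approach (unconstrained sampling followed by an explicit projection).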

Where should the reviewer start?

As of now, this MR is not in a merge-ready state. The implementation has only been done, and partially tested, for multivariate logistic models. Work remains to be done on documentation, tests, and broader integration across the code-base.

I would still like an initial "conceptual review" (i.e. not at a line-by-line level of detail), because I actually have two propositions to tackle the issue, implemented in two subsequent commits (for the moment the MR should thus be reviewed commit by commit). I detail my questions in a follow-up comment.

(I hope a Draft: MR is a good place to discuss this kind of thing; I thought it would be, since it provides a convenient interface for looking at code diffs.)

What issues are linked to the MR?

See #66 for more insights and experiments.

Edited by Nemo Fournier
