Refactoring of cp_opt
Partly addresses Issue #43.
Refactoring cp_opt to use the optimization methods that have been wrapped into the semi-common interface, like tt_opt_lbfgsb. Adding the LBFGSB software as a library of Tensor Toolbox so that the user doesn't have to go get it.
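As a rough illustration of the semi-common interface, a call might look like the sketch below. This is a hypothetical usage example: the exact option names ('method', 'lbfgsb', 'fminunc') and output arguments are assumptions about the refactored cp_opt, not verified against the released code.

```matlab
% Build a small random data tensor from explicit factor matrices.
rng(0);
U = {rand(10,3), rand(20,3), rand(30,3)};
X = full(ktensor(U));

% Fit a rank-3 CP model; 'method' selects one of the wrapped
% tt_opt_ optimizers (option name assumed for illustration).
M = cp_opt(X, 3, 'method', 'lbfgsb');

% Other wrapped optimizers would follow the same interface, e.g.:
% M = cp_opt(X, 3, 'method', 'fminunc');
```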
Major modifications:
- lbfgsb - Added as a subtree!!
- cp_opt.m - Completely overhauled to use the new objective function and the tt_opt_ optimization methods.
- cp_opt_doc.m - Updated accordingly.
New files:
- @ktensor/fg.m - Optimization function for CP: ||X-M||^2/||X||^2, with options to avoid computing ||X||^2 and to change the scaling. Essentially replaces tt_cp_fg.m and tt_cp_fun.m.
- cp_opt_legacy.m - Old version of cp_opt.m.
- cp_opt_legacy_doc.m & cp_opt_legacy_poblano_doc.m - Documentation for cp_opt_legacy.m. Changed the name of the function throughout and pointed to the new version.
- tt_opt_fmincon.m - Added interface to fmincon. Still needs documentation, but essentially the same as fminunc.
- tt_opt_fminunc.m - Changed the way printing works. If printitn > 1, just prints the final iteration. Unfortunately, the printing of the iterations seems to be all or nothing. Also added some info about installing the Optimization Toolbox if the appropriate method is not in the path.
- tt_opt_lbfgs.m - Changed how the gradient tolerance is translated. Divided by n, which is something that lbfgs does internally too, so it's consistent. Also added a note about how to install it that prints if it's not in the path.
- tt_opt_lbfgsb.m - Added code that automatically uses the version now included with Tensor Toolbox if it's not already in the path. Prints a little warning when that happens.
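The objective in @ktensor/fg.m can be sketched as follows. This is a minimal illustration of f = ||X-M||^2/||X||^2 and its factor-matrix gradients, assuming unit ktensor weights; the actual signature, option handling, and scaling options of @ktensor/fg.m are assumptions here, not the shipped implementation.

```matlab
function [f, G] = fg_sketch(M, X, normXsqr)
%FG_SKETCH Illustrative CP objective: f = ||X - M||^2 / ||X||^2.
%   M is a ktensor model (unit weights assumed), X the data tensor,
%   normXsqr = ||X||^2, precomputed so it can be reused or skipped,
%   as the real fg.m allows via its options.

% Expand ||X - M||^2 = ||X||^2 - 2<X,M> + ||M||^2 so the expensive
% work reduces to an inner product and MTTKRPs with the data tensor.
U = M.u;                      % cell array of factor matrices
d = ndims(X);
normMsqr = norm(M)^2;         % ||M||^2 from the ktensor structure
ip = innerprod(X, M);         % <X, M>
f = (normXsqr - 2*ip + normMsqr) / normXsqr;

% Gradient with respect to each factor matrix.
G = cell(d, 1);
for k = 1:d
    % Gamma = Hadamard product of U{j}'*U{j} over all j ~= k.
    Gamma = ones(size(U{k}, 2));
    for j = [1:k-1, k+1:d]
        Gamma = Gamma .* (U{j}' * U{j});
    end
    G{k} = 2 * (U{k} * Gamma - mttkrp(X, U, k)) / normXsqr;
end
end
```

Dividing by ||X||^2 makes the objective scale-invariant in the data, which is presumably why the scaling is exposed as an option rather than hard-coded.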
Minor modifications:
-
@ktensor/tovec.m
- Fixed some comments. -
tt_gcp_fg.m
- Fixed some comments.
Edited by Tammy Kolda