# CG: Documentation proposes degeneration to Gradient Descent

## Submitted by wae..@..il.com

Assigned to **Nobody**

**Link to original bugzilla bug (#733)**

**Version**: 3.2

## Description

The documentation of ConjugateGradient gives an example of how CG can be run step by step:

"Here is a step by step execution example starting with a random guess and printing the evolution of the estimated error:

```cpp
x = VectorXd::Random(n);
cg.setMaxIterations(1);
int i = 0;
do {
  x = cg.solveWithGuess(b,x);
  std::cout << i << " : " << cg.error() << std::endl;
  ++i;
} while (cg.info()!=Success && i<100);
```

Note that such a step by step execution is slightly slower."

Following this example, my optimization problem took ~1400 iterations; later I found out that it takes only ~180 iterations when the solve is not done step by step.

As I see it, every call to solveWithGuess restarts CG. CG's first iteration is identical to a gradient descent step, so with the maximum set to a single iteration the whole optimization effectively degenerates to gradient descent. The note that "such a step by step execution is slightly slower" does not really capture this. Also, the estimated errors printed to std::cout are not the true CG errors, because real CG would converge more quickly.

Maybe the documentation can be changed so that it

- suggests using many more than 1 iteration per solveWithGuess call, and
- mentions that too small a maxIterations value makes CG degenerate to gradient descent.