# Finding Fibonacci numbers using Linear Algebra
Recently, while revisiting Linear Algebra through Gilbert Strang's [textbook](http://math.mit.edu/~gs/linearalgebra/), I came across a very interesting application of eigenvalues: finding Fibonacci numbers.
The problem is rather common: given an index $`k`$, find the corresponding term of the Fibonacci sequence.
The formula is well known:
```math
\begin{aligned}
&F_{k+2} = F_{k+1} + F_{k}, \quad k \ge 0\\
&F_0 = 0\\
&F_1 = 1
\end{aligned}
```
## Conventional approach
The most common solution to the problem is by using memoization and recursion according to the following snippet:
```python
# Cache of already-computed Fibonacci numbers, seeded with F(0) and F(1).
fibonacci = [0, 1]

def fib_mem(k):
    try:
        return fibonacci[k]
    except IndexError:
        # Not cached yet: compute and cache the preceding numbers first.
        fibonacci.append(fib_mem(k - 2) + fib_mem(k - 1))
        return fibonacci[k]
```
First, we query the `fibonacci` container for the number of interest, and if it has not been evaluated yet, we recurse to the previous indices according to the formula.
This solution has an $`O(k)`$ space and time complexity for the first query, while for subsequent queries for any index $`n`$:
* If $`n \le k`$: $`O(1)`$ time, $`O(k)`$ space.
* If $`n > k`$: $`O(n-k)`$ time, $`O(n)`$ space.
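To make the caching behaviour concrete, here is a minimal self-contained session (the snippet above is repeated so it runs standalone):

```python
# Cache seeded with F(0) and F(1), as in the snippet above.
fibonacci = [0, 1]

def fib_mem(k):
    try:
        return fibonacci[k]
    except IndexError:
        fibonacci.append(fib_mem(k - 2) + fib_mem(k - 1))
        return fibonacci[k]

# The first query fills the cache up to index 10 in O(k) time...
print(fib_mem(10))     # → 55
# ...after which any query with a smaller index is a plain O(1) list lookup.
print(fib_mem(7))      # → 13
print(len(fibonacci))  # → 11, the cache holds indices 0 through 10
```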
Not bad performance, especially considering subsequent queries, wouldn't you say?
## Using linear algebra
To use linear algebra we express the formula as a set of linear equations:
```math
\begin{aligned}
&F_{k+2} = F_{k+1} + F_{k}, \\
&F_{k+1} = F_{k+1}
\end{aligned}
```
which in matrix form is expressed as:
```math
\mathbf{u}_{k+1} =
\begin{bmatrix}
1 & 1\\
1 & 0
\end{bmatrix}
\mathbf{u}_{k} = A \mathbf{u}_{k}
```
where
```math
\mathbf{u}_{k} \coloneqq
\begin{Bmatrix}
F_{k+1}\\
F_{k}
\end{Bmatrix}
```
With this notation, applying the recursion repeatedly, the problem reduces to:
```math
\mathbf{u}_{k} = A^k \mathbf{u}_{0}
```
where $`\mathbf{u}_{0} = \{1, 0\}`$.
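The matrix formulation can already be turned into code, even before any eigenvalue analysis. Here is a minimal sketch (`fib_mat` and `mat_mul` are hypothetical names, not from the text) that uses exponentiation by squaring, so $`A^k`$ costs only $`O(\log k)`$ matrix multiplications, all in exact integer arithmetic:

```python
def mat_mul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [
        [a[0][0] * b[0][0] + a[0][1] * b[1][0],
         a[0][0] * b[0][1] + a[0][1] * b[1][1]],
        [a[1][0] * b[0][0] + a[1][1] * b[1][0],
         a[1][0] * b[0][1] + a[1][1] * b[1][1]],
    ]

def fib_mat(k):
    """Return F(k) by computing A**k via exponentiation by squaring."""
    result = [[1, 0], [0, 1]]  # 2x2 identity
    power = [[1, 1], [1, 0]]   # the matrix A from the text
    while k:
        if k & 1:
            result = mat_mul(result, power)
        power = mat_mul(power, power)
        k >>= 1
    # A**k u_0 with u_0 = (1, 0)^T puts F(k) in the second component,
    # i.e. at result[1][0].
    return result[1][0]
```

Because everything stays in integer arithmetic, `fib_mat(100)` returns the exact value 354224848179261915075 at the cost of a handful of 2×2 multiplications.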
### Eigenvalues and eigenvectors
At first glance the matrix power operation seems rather intimidating, but
fortunately we can simplify the solution further, until it requires only
scalar operations. To this end, we use the following steps in our derivation:
1. Find the eigenvalues $`\lambda`$, such that: $`A\mathbf{x} = \lambda\mathbf{x}`$.
2. Find a set of respective eigenvectors $`\mathbf{x}`$.
3. Express $`\mathbf{u}_{k}`$ as a linear function of the eigenvectors.
4. Use the property $`A^k\mathbf{x} = \lambda^k\mathbf{x}`$ to simplify the problem.
It can be shown that the eigenvalues of matrix $`A`$ are
$`\lambda_{1, 2} = \dfrac{1+\sqrt{5}}{2}, \dfrac{1-\sqrt{5}}{2}`$
and a set of respective eigenvectors is the following:
```math
\begin{aligned}
&\mathbf{x}_1 = \begin{Bmatrix}\lambda_1\\1\end{Bmatrix}\\
&\mathbf{x}_2 = \begin{Bmatrix}\lambda_2\\1\end{Bmatrix}\\
\end{aligned}
```
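These claims are easy to check numerically; a small sanity check in plain floats (so equalities hold only up to rounding error):

```python
from math import sqrt

lam1 = (1 + sqrt(5)) / 2
lam2 = (1 - sqrt(5)) / 2

# Both eigenvalues are roots of the characteristic polynomial λ² − λ − 1 = 0,
# obtained from det(A − λI) = (1 − λ)(−λ) − 1.
for lam in (lam1, lam2):
    assert abs(lam**2 - lam - 1) < 1e-12

# Verify A x = λ x for A = [[1, 1], [1, 0]] and x = (λ, 1)^T:
# A x = (λ + 1, λ)^T and λ x = (λ², λ)^T, which agree because λ² = λ + 1.
for lam in (lam1, lam2):
    Ax = (1 * lam + 1 * 1, 1 * lam + 0 * 1)
    assert abs(Ax[0] - lam * lam) < 1e-12
    assert abs(Ax[1] - lam) < 1e-12
```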
These results enable us to express $`\mathbf{u}_0`$ as
```math
\mathbf{u}_0 = \frac{1}{\lambda_1 - \lambda_2}(\mathbf{x}_1 - \mathbf{x}_2)
```
and since $`A^k\mathbf{x}_{1,2} = \lambda_{1,2}^k\mathbf{x}_{1,2}`$, we end up
with the following closed-form solution:
```math
\mathbf{u}_k = \dfrac{
\lambda_1^k\mathbf{x}_1 - \lambda_2^k\mathbf{x}_2
}{\lambda_1 - \lambda_2}
```
> Note that $`F_{k} = \mathbf{u}_k[1]`$, that is, the Fibonacci number we seek
> is the second element of the resulting vector.
The algorithm now reduces to:
```python
from math import sqrt

# Eigenvalues of A: the golden ratio and its conjugate.
lambda1 = (1 + sqrt(5)) / 2
lambda2 = (1 - sqrt(5)) / 2
factor = 1 / (lambda1 - lambda2)

def fib_eig(k):
    fk = (lambda1**k - lambda2**k) * factor
    return int(fk)
```
One of the perks of this scalar closed-form solution is that it runs in $`O(1)`$
time and space!
## Comparison
Let us see how well the different solutions perform in practice:
* First query
```
IPython 7.6.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: %time fib_mem(100)
CPU times: user 393 µs, sys: 35 µs, total: 428 µs
Wall time: 443 µs
Out[1]: 354224848179261915075
In [2]: %time fib_eig(100)
CPU times: user 37 µs, sys: 3 µs, total: 40 µs
Wall time: 59.6 µs
Out[2]: 354224848179263111168
```
As expected, the closed-form solution outperforms the memoized-recursive
solution.
> There is a downside, nevertheless: Note that the `fib_eig` result is not exact.
> The evaluated eigenvalues are irrational numbers that are only approximately
> represented by the binary floating-point numbers actually stored in the machine
> (see more [here](https://docs.python.org/3/tutorial/floatingpoint.html)). Hence
> the lack of precision when dealing with computations of that order.
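The drift can be quantified by comparing `fib_eig` against an exact integer implementation (`fib_exact` is a throwaway helper introduced here, not part of the original snippets):

```python
from math import sqrt

lambda1 = (1 + sqrt(5)) / 2
lambda2 = (1 - sqrt(5)) / 2
factor = 1 / (lambda1 - lambda2)

def fib_eig(k):
    # Closed-form solution from the text, limited by double precision.
    return int((lambda1**k - lambda2**k) * factor)

def fib_exact(k):
    # Iterative computation with Python's arbitrary-precision integers.
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

exact, approx = fib_exact(100), fib_eig(100)
print(exact)   # → 354224848179261915075
print(approx)  # differs: F(100) needs 69 bits, a double only carries 53
# The absolute error looks large, but the relative error stays tiny.
print(abs(approx - exact) / exact)
```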
* Subsequent queries
```
In [3]: %timeit fib_mem(100)
343 ns ± 2.91 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [4]: %timeit fib_eig(100)
1.57 µs ± 2.97 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
So, in the long run, the memoized-recursive solution performs ~5x faster than
the closed-form solution. Is this expected?
Well, considering the number of operations, I would say it is. Subsequent
queries with the `fib_mem` function essentially fetch an item at a specific
index of a list, whereas the `fib_eig` function requires two power operations,
a subtraction, and a multiplication.
## Conclusion
Despite the theoretical elegance of using eigenvalues and eigenvectors to derive
a closed-form solution to the problem, it seems that the conventional memoized-recursive
solution is more precise and faster in the long run.
Nevertheless, I reckon it was worth exploring.