diff --git a/README.md b/README.md
index d99ec8ae178dfe36a8b8ee7619ef72a0679fa110..adf5d1551d7e1a40679246e0829ee13c801098c7 100644
--- a/README.md
+++ b/README.md
@@ -88,6 +88,9 @@ There are currently sections on the following topics:
- [A First Example with Pyplot](matplotlib/README.md#a-first-example-with-pyplot)
- [A More Complex Example](matplotlib/README.md#a-more-complex-example)
- [Further Reading](matplotlib/README.md#further-reading)
+- [Numerical Methods](numerical_methods/README.md)
+ - [Root Finding](numerical_methods/01_Root_Finding)
+ - [Systems of Equations](numerical_methods/02_Systems_of_Equations)
In addition to the material collected here, there are also separate
repositories with material for:
diff --git a/numerical_methods/01_Root_Finding/Finding Roots.ipynb b/numerical_methods/01_Root_Finding/Finding Roots.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..befe2a81c313c2c36668342f5f61525dbe4b16d5
--- /dev/null
+++ b/numerical_methods/01_Root_Finding/Finding Roots.ipynb
@@ -0,0 +1,612 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "# I'll be importing these at the start of all the course notebooks.\n",
+ "# If you copy a function definition for one of your own codes, don't\n",
+ "# forget it likely relies on one or more of these.\n",
+ "import numpy as np\n",
+ "import matplotlib.pyplot as plt\n",
+ "\n",
+ "# We'll be using several functions from scipy.optimize here.\n",
+ "import scipy.optimize"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Finding Roots\n",
+ "\n",
+ "Often we will come across some complex expression and want to find where it attains a particular value. While the methods and examples I'll be showing are for finding where a function returns zero, you can find where a function $f(x)$ returns an arbitrary value $t$ by finding where $g(x) = f(x) - t = 0$."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## The Bisection Method\n",
+ "\n",
+ "This is one of the simplest and most robust methods to find the root of a function, and can be useful when dealing with functions which may not have continuous first derivatives. Convergence typically requires many more iterations than more advanced methods.\n",
+ "\n",
+ "Say we have a continuous function and want to find $f(a) = 0$, and we already have $x_0$ and $x_1$ such that $f(x_1)$ and $f(x_0)$ have opposite sign. Then we know there exists $f(a) = 0$ for $x_0 < a < x_1$.\n",
+ "\n",
+ "From here, we pick $x_2 = \\frac{x_0 + x_1}{2}$ and test the sign of $f(x_2)$. If $f(x_2)$ is of opposite sign to $f(x_0)$, then we set our new interval which contains $a$ to $x_0 < a < x_2$, while if $f(x_2)$ is of opposite sign to $f(x_1)$ our new interval is $x_2 < a < x_1$. Either way, we have halved the interval where we know $a$ exists.\n",
+ "\n",
+ "We can repeat this process until we know $a$ to within some arbitrary precision.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### An Example\n",
+ "\n",
+ "Let's say we have an interatomic potential given by\n",
+ "$$\\Phi_{12}(r) = \\frac{A}{r^{12}} - \\frac{B}{r^6},$$\n",
+ "with $A = 1.209\\times10^5\\ \\textrm{eV Angstrom}^{12}$ and $B = 70.66\\ \\textrm{eV Angstrom}^6$ and we want to find the value of $r$ where $\\Phi_{12}(r) = 0$.\n",
+ "\n",
+ "We first need to construct our function:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "def Phi12(r):\n",
+ " '''Lennard-Jones Potential for Argon (eV with r in Angstrom)'''\n",
+ " return 1.209E5*r**(-12) - 70.66*r**(-6)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "First it's always useful to plot the the function to get an idea of how it looks."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "xvals = np.linspace(1, 5, 100) # 100 points between 1 and 5\n",
+ "plt.plot(xvals, Phi12(xvals))\n",
+ "plt.xlabel('r')\n",
+ "plt.ylabel('Phi12(r)')\n",
+ "plt.grid(True)\n",
+ "plt.show()\n",
+ "\n",
+ "# It will likely be hard to see features at this scale.\n",
+ "# We can use plt.axis([x1, x2, y1, y2]) before the\n",
+ "# plt.show() command to set the axes limits."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "By zooming in a little bit we can see there's a root between $r = 3.0$ and $r = 5.0$.\n",
+ "\n",
+ "Let's confirm the sign of our function is different at these two values:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "print(Phi12(3.0), Phi12(5.0))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now lets make a function that will use the bisection method to find the value of $r$ to within some tolerance."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "# We'll use a scipy style docstring. This is a good habit to get into\n",
+ "# as it'll let you use \"help(function_name)\" to get detailed information\n",
+ "# on what the function does, what input it expects, and output it\n",
+ "# generates.\n",
+ "\n",
+ "def bisection(func, x0, x1, maxiter, tol):\n",
+ " '''\n",
+ " Find the root of a function with the bisection method.\n",
+ "\n",
+ " Parameters\n",
+ " ----------\n",
+ " func : function\n",
+ " A Python function we want to find a root of.\n",
+ " x0, x1 : float\n",
+ " The bounding values of the region to find the root.\n",
+ " maxiter : integer\n",
+ " The maximum number of iterations.\n",
+ " tol : float\n",
+ " Exit when the root is known within this tolerance\n",
+ " \n",
+ " Returns\n",
+ " -------\n",
+ " x : float\n",
+ " Root of the function between x0 and x1.\n",
+ " '''\n",
+ " f0 = func(x0)\n",
+ " f1 = func(x1)\n",
+ " sign_f0 = f0 / abs(f0)\n",
+ " sign_f1 = f1 / abs(f1)\n",
+ " # Lets be sure we have valid x0 and x1 before proceeding.\n",
+ " # We can use assert to raise an error if a condition isn't met.\n",
+ " assert sign_f0 != sign_f1, \"Error: func(x0) and func(x1) do not have opposite sign.\"\n",
+ " \n",
+ " # It's usually a good idea to enforce a limit on how many iterations a loop can\n",
+ " # do. Otherwise if something goes wrong, it may take some time to understand that\n",
+ " # your code is not converging.\n",
+ " for i in range(maxiter):\n",
+ " x = 0.5 * (x0 + x1)\n",
+ " # Exit when we know x within tol\n",
+ " if abs(x-x0) < tol:\n",
+ " # You can print out some info if you like. This is just for information\n",
+ " # in this example. You may not want output from functions cluttering a\n",
+ " # bigger code, and this will interfere with the use of %timeit\n",
+ " print(\"bisection converged in\", i, \"iterations.\")\n",
+ " return x\n",
+ "\n",
+ " fx = func(x)\n",
+ " sign_fx = fx / abs(fx)\n",
+ " # Half the range as appropriate before the next iteration\n",
+ " if sign_fx == sign_f0:\n",
+ " x0 = x\n",
+ " else:\n",
+ " x1 = x\n",
+ "\n",
+ " # If we get to here without returning a value, we haven't converged.\n",
+ " # \"raise\" is another good way to get your code to exit with an informative error if\n",
+ " # an input is not correct. There's a list of exceptions that can be raised at\n",
+ " # https://docs.python.org/3.6/library/exceptions.html\n",
+ " raise ValueError(\"Error: bisection failed to converge after max iterations.\") "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "help(bisection)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "bisection(func=Phi12, x0=3.0, x1=5.0, maxiter=100, tol=1e-10)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "# We can use '_' to use the output from the last evaluated cell\n",
+ "Phi12(_)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Doing this with SciPy\n",
+ "\n",
+ "Most numerical methods you might want to use have been implemented by someone else as a library or python module already. Python modules for potentially intensive calculations tend to be implemented in something that will run more quickly than native python, such as Fortran or C, so it is good to make use of them. Additionally libraries and modules are likely to be better optimized than code you are likely to write yourself.\n",
+ "\n",
+ "SciPy has an implementation of the bisection method in [`scipy.optimize.bisect`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.bisect.html)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "# Uncomment the following for the internal help information on the function\n",
+ "# help(scipy.optimize.bisect)\n",
+ "\n",
+ "scipy.optimize.bisect(f=Phi12, a=3.0, b=5.0, xtol=1e-10)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "Phi12(_)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "- Try using the `%timeit` magic to compare the scipy bisection function with the one implemented above. \n",
+ " - Be sure to comment out the print statement in the function or `%timeit` will produce a lot of output."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Newton's Method\n",
+ "\n",
+ "This is typically quite a fast converging method, but you need to the know the function analytically. It should have a continuous first derivative, and you need a fairly good initial guess if you want to converge to the desired root.\n",
+ "\n",
+ "This works by using a linear expansion of the function using it's derivative at an initial guess to improve upon that guess as shown schematically in the image below:\n",
+ "\n",
+ "![Schematic of Newtons Method](images/newton.png)\n",
+ "\n",
+ "We can express this mathematically as\n",
+ "$$ x_{i+1} = x_i - \\frac{f(x_i)}{f'(x_i)} $$\n",
+ "which we can use to make a function that will iteratively find a root.\n",
+ "\n",
+ "Let's make a function that employs Newton's method to we can use to find the root of $\\Phi_{12}$ as above."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "def newton(func, dfunc, x0, maxiter, tol):\n",
+ " '''\n",
+ " Find the root of a function using Newton's method.\n",
+ "\n",
+ " Parameters\n",
+ " ----------\n",
+ " func : function\n",
+ " A Python function we want to find a root of.\n",
+ " dfunc : function\n",
+ " A Python function returning the derivative of func.\n",
+ " x0 : float\n",
+ " Initial guess at a root.\n",
+ " maxiter : integer\n",
+ " The maximum number of iterations.\n",
+ " tol : float\n",
+ " Exit when the root is known within this tolerance\n",
+ "\n",
+ " Returns\n",
+ " -------\n",
+ " x : float\n",
+ " Root of the function.\n",
+ " '''\n",
+ "\n",
+ " for i in range(maxiter):\n",
+ " x = x0 - func(x0)/dfunc(x0)\n",
+ " # Exit when we know x within tol\n",
+ " if abs(x-x0) < tol:\n",
+ " # You can print out some info if you like. This is optional, and you may\n",
+ " # not want output from functions cluttering a bigger code.\n",
+ " print(\"newton converged in\", i, \"iterations.\")\n",
+ " return x\n",
+ " x0 = x\n",
+ " \n",
+ " # If we get to here without returning a value, we haven't converged.\n",
+ " raise ValueError(\"Error: newton failed to converge after max iterations.\") "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We'll also need to define a function for the derivative:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "def DPhi12(r):\n",
+ " '''Derivative of Lennard-Jones Potential for Argon (eV with r in Angstrom)'''\n",
+ " return -12*1.209E5*r**(-13) + 6*70.66*r**(-7)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now let's find our root, starting from 2.0"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true,
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "newton(func=Phi12, dfunc=DPhi12, x0=2.0, maxiter=100, tol=1e-10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As you can see, this took significantly fewer iterations to obtain the same accuracy.\n",
+ "\n",
+ "But this method is a bit more sensitive - for example if we take a somewhat higher initial guess we quickly run into trouble:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true,
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "newton(func=Phi12, dfunc=DPhi12, x0=4.0, maxiter=100, tol=1e-10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Doing this with SciPy\n",
+ "\n",
+ "Again, SciPy has an implementation of this we can use as [`scipy.optimize.newton`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton.html)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "# Uncomment the following for the internal help information on the function\n",
+ "# help(scipy.optimize.newton)\n",
+ "\n",
+ "scipy.optimize.newton(func=Phi12, x0=3.0, fprime=DPhi12, tol=1e-10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## The Secant Method\n",
+ "\n",
+ "It can often be tricky to generate a new function that accurately returns the derivative of another function. To get around this we can use the secant method.\n",
+ "\n",
+ "This works like Newton's method, but we don't need to have an analytic function for the derivative. Instead we approximate it from two previous guesses as\n",
+ "\n",
+ "$$ f'(x_i) \\approx \\frac{f(x_i) - f(x_{i-1})}{x_i - x_{i-1}} $$\n",
+ "\n",
+ "which we can combine with our expression for Newton's method to get\n",
+ "\n",
+ "$$ x_{i+1} = x_i - \\frac{f(x_i)(x_i - x_{i-1})}{f(x_i) - f(x_{i-1})} = \\frac{x_{i-1}f(x_i) - x_{i}f(x_{i-1})}{f(x_i) - f(x_{i-1})}$$"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "def secant(func, x0, x1, maxiter, tol):\n",
+ " '''\n",
+ " Use the secant method to find a root of a function\n",
+ " \n",
+ " Parameters\n",
+ " ----------\n",
+ " func : function\n",
+ " A Python function we want to find a root of.\n",
+ " x0 : float\n",
+ " Initial guess at a root.\n",
+ " x1 : float\n",
+ " A second guess at a root.\n",
+ " maxiter : integer\n",
+ " The maximum number of iterations.\n",
+ " tol : float\n",
+ " Exit when the root is known within this tolerance\n",
+ "\n",
+ " Returns\n",
+ " -------\n",
+ " x : float\n",
+ " Root of the function.\n",
+ " '''\n",
+ "\n",
+ " f0 = func(x0)\n",
+ " for i in range(maxiter):\n",
+ " f1 = func(x1)\n",
+ " x = (x0*f1 - x1*f0)/(f1 - f0)\n",
+ " # Exit when we know x within tol\n",
+ " if abs(x-x1) < tol:\n",
+ " # You can print out some info if you like. This is optional, and you may\n",
+ " # not want output from functions cluttering a bigger code.\n",
+ " print(\"secant converged in\", i, \"iterations.\")\n",
+ " return x\n",
+ " x0 = x1\n",
+ " x1 = x\n",
+ " f0 = f1\n",
+ "\n",
+ " # If we get to here without returning a value, we haven't converged.\n",
+ " raise ValueError(\"Error: secant failed to converge after max iterations.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "secant(func=Phi12, x0=3.0, x1=3.1, maxiter=100, tol=1e-10)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "# It doesn't matter which of x0 or x1 is a better guess\n",
+ "secant(func=Phi12, x0=3.1, x1=3.0, maxiter=100, tol=1e-10)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "# It is still sensitive to the choice of initial guess though\n",
+ "secant(func=Phi12, x0=3.0, x1=3.9, maxiter=100, tol=1e-10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Doing this with SciPy\n",
+ "\n",
+ "We can use the same [`scipy.optimize.newton`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton.html) function as above to use the secant method. This method is used automatically when a derivative function is not provided. Note that the scipy version does not require a second initial guess, it automatically generates the second value by adding a small amount to the provided initial guess."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "scipy.optimize.newton(func=Phi12, x0=3.0, tol=1e-10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Brent's Methods\n",
+ "\n",
+ "Brent's method uses a combination of ideas from the approaches we discussed above. It decides on each iteration whether it's best to proceed with\n",
+ "- a step of bisection\n",
+ "- a step of secant\n",
+ "- or a step of inverse quadratic interpolation\n",
+ " - This is like the secant method, but uses results from three values to approximate a quadratic rather than a line.\n",
+ "\n",
+ "This gives you the advantages of each method and avoids their drawbacks to provide a robust and fast converging method to obtain a root in a bracketed interval.\n",
+ "\n",
+ "It's available in SciPy as [`scipy.optimize.brentq`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brentq.html). **This is likely the best choice of approach available when trying to find a root with Python**."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "scipy.optimize.brentq(f=Phi12, a=3.0, b=5.0, xtol=1e-10, full_output=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "%timeit scipy.optimize.brentq(f=Phi12, a=2.0, b=5.0, xtol=1e-10)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "help(scipy.optimize.brentq)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/numerical_methods/01_Root_Finding/images/newton.odg b/numerical_methods/01_Root_Finding/images/newton.odg
new file mode 100644
index 0000000000000000000000000000000000000000..c45b3d950bb989dcc6ec5348fae344731fc87630
Binary files /dev/null and b/numerical_methods/01_Root_Finding/images/newton.odg differ
diff --git a/numerical_methods/01_Root_Finding/images/newton.png b/numerical_methods/01_Root_Finding/images/newton.png
new file mode 100644
index 0000000000000000000000000000000000000000..6a21f2a86942e303b30e3ed5842df50cf8e72285
Binary files /dev/null and b/numerical_methods/01_Root_Finding/images/newton.png differ
diff --git a/numerical_methods/02_Systems_of_Equations/Solving Systems of Equations.ipynb b/numerical_methods/02_Systems_of_Equations/Solving Systems of Equations.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..fea7911ceb7706fc2fb22bec0a8dc7e905c24cbf
--- /dev/null
+++ b/numerical_methods/02_Systems_of_Equations/Solving Systems of Equations.ipynb
@@ -0,0 +1,718 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "import matplotlib.pyplot as plt\n",
+ "\n",
+ "# We'll be using several functions from scipy.linalg here.\n",
+ "import scipy.linalg"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Solving Systems of Equations\n",
+ "\n",
+ "Let's start with a simple example, where we want to solve the following equations simultaneously:\n",
+ "\n",
+ "$$\n",
+ "3x_1 + 5x_2 + 2x_3 = 11\\\\\n",
+ "2x_1 - 3x_2 - 3x_3 = -1\\\\\n",
+ "x_1 + x_2 + x_3 = 2\n",
+ "$$\n",
+ "\n",
+ "This would be equivalent to solving the matrix equation\n",
+ "\n",
+ "$$\n",
+ "\\mathbf{Mx}=\\mathbf{b}\n",
+ "$$\n",
+ "where\n",
+ "$$\n",
+ "\\mathbf{M} = \\begin{pmatrix} 3 & 5 & 2 \\\\ 2 & -3 & -3 \\\\ 1 & 1 & 1 \\end{pmatrix},\\quad \n",
+ "\\mathbf{x} = \\begin{pmatrix} x_1 \\\\ x_2 \\\\ x_3 \\end{pmatrix}, \\quad\n",
+ "\\mathbf{b} = \\begin{pmatrix} 11 \\\\ -1 \\\\ 2 \\end{pmatrix}.\n",
+ "$$\n",
+ "\n",
+ "In this case of a system with just three equations, this is not difficult to do by hand, but often we'll need to be able to solve systems with an arbitrarily large number of equations and variables.\n",
+ "\n",
+ "More generally a system of $n$ equations of $n$ unknowns, such as\n",
+ "\n",
+ "$$\n",
+ "m_{11} x_1 + m_{12} x_2 + \\cdots + m_{1n}x_n = b_1 \\\\\n",
+ "m_{21} x_1 + m_{22} x_2 + \\cdots + m_{2n}x_n = b_2 \\\\\n",
+ "\\qquad \\vdots \\qquad \\qquad \\qquad \\qquad \\vdots \\\\\n",
+ "m_{n1} x_1 + m_{n2} x_2 + \\cdots + m_{nn}x_n = b_n\n",
+ "$$\n",
+ "would be equivalent to solving the matrix equation\n",
+ "\n",
+ "$$\n",
+ "\\mathbf{Mx}=\\mathbf{b}\n",
+ "$$\n",
+ "where\n",
+ "$$\n",
+ "\\mathbf{M} = \\begin{pmatrix}m_{11} & m_{12} & \\cdots & m_{1n} \\\\\n",
+ " m_{21} & m_{22} & \\cdots & m_{2n} \\\\\n",
+ " \\vdots & \\vdots & \\ddots & \\vdots \\\\\n",
+ " m_{n1} & m_{n2} & \\cdots & m_{nn}\n",
+ " \\end{pmatrix},\\quad \n",
+ "\\mathbf{x} = \\begin{pmatrix} x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_n \\end{pmatrix}, \\quad\n",
+ "\\mathbf{b} = \\begin{pmatrix} b_1 \\\\ b_2 \\\\ \\vdots \\\\ b_n \\end{pmatrix}.\n",
+ "$$\n",
+ "\n",
+ "So we need a systematic way to do this.\n",
+ "\n",
+ "When might it not be possible to do this?"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Gaussian Elimination\n",
+ "\n",
+ "The usual approach to solving this kind of linear system is to use an algorithm called Gaussian Elimination. This can be done with the following procedure\n",
+ "\n",
+ "- Combine the solution vector with matrix of parameters to form the augmented matrix $\\mathbf{\\tilde{M}}$ as\n",
+ "\n",
+ "$$ \\mathbf{\\tilde{M}} = \\begin{pmatrix}m_{11} & m_{12} & \\cdots & m_{1n} & b_1\\\\\n",
+ " m_{21} & m_{22} & \\cdots & m_{2n} & b_2\\\\\n",
+ " \\vdots & \\vdots & \\ddots & \\vdots & \\vdots\\\\\n",
+ " m_{n1} & m_{n2} & \\cdots & m_{nn} & b_n \\end{pmatrix} $$\n",
+ " \n",
+ "- Perform row reduction on this matrix such that $\\mathbf{M}$ is transformed to an upper triangular form:\n",
+ "\n",
+ "$$ \\mathbf{\\tilde{M}'} = \\begin{pmatrix}m_{11}' & m_{12}' & \\cdots & m_{1n}' & b_1'\\\\\n",
+ " 0 & m_{22}' & \\cdots & m_{2n}' & b_2'\\\\\n",
+ " \\vdots & \\vdots & \\ddots & \\vdots & \\vdots\\\\\n",
+ " 0 & 0 & \\cdots & m_{nn}' & b_n' \\end{pmatrix} $$\n",
+ "\n",
+ "- Row reduction consists of the following operations, which are repeated as necessary.\n",
+ " 1. Multiply any row by a constant.\n",
+ " 2. Add any row to another row.\n",
+ " 3. Swap the order of any two rows.\n",
+ "\n",
+ "Let's do this now for the simple example we gave above:\n",
+ "\n",
+ "We start with\n",
+ "$$\n",
+ "\\mathbf{\\tilde{M}}_0 = \\begin{pmatrix} 3 & 5 & 2 & 11\\\\ 2 & -3 & -3 & -1\\\\ 1 & 1 & 1 & 2\\end{pmatrix}\n",
+ "$$\n",
+ "\n",
+ "We want to have zeros below some diagonal elements, so let's use the following procedure:\n",
+ "- Given a diagonal element $m_{ii}$, to set element $m_{jk}$ to zero where $j>i$ (row) and $k*
+ "Members".
+ - In the box headed "Select members to invite" enter "eamonn.murray" and
+ select my account.
+ - Under "Choose a role permission" select "Reporter".
+ - This will allow me to clone your repository for grading.
+- Each assignment will be contained within its own folder: `Assignment1`,
+ `Assignment2`, `Assignment3` and `Assignment4`.
+- As you create the files required for the assignment you should add them to
+ your git repo and push the changes to the gitlab repo.
+- I will automatically download a copy of your gitlab repository at the
+ assignment deadline for grading.
+
+Assignments
+-----------
+
+- Assignments will be posted on the course blackboard page each week,
+ typically as a pdf file or jupyter/ipython notebook.
+- Where Python code is asked for, these should be created as executable Python
+ files suffixed as `.py` and conforming to the Python 3 standard, and
+ beginning with the line `#!/usr/bin/env python3`. You should **not** submit
+ jupyter notebooks unless specifically requested.
+- Python code should follow [the PEP8 style
+ guide](https://www.python.org/dev/peps/pep-0008/) as much as possible. Your
+ code should be properly commented. Functions should have docstrings that
+ outline what the function does, what the arguments it expects are, and what
+ it returns.
+- Any Python code you write should only import modules from the standard
+ Python library, NumPy, SciPy or MatPlotLib.
+- You will not necessarily have covered everything I will ask you to do in the
+ courses you have taken so far. Python and the various libraries you are
+ using in this course are very well documented online. It is important to
+ understand how to search this material to find how best to do something.
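As a concrete sketch of the style asked for above (an executable file beginning with `#!/usr/bin/env python3`, PEP8 layout, and a docstring describing arguments and return value), something like the following would do; the function itself is just a hypothetical example, not part of any assignment:

```python
#!/usr/bin/env python3
"""Example module illustrating the expected code style."""


def kinetic_energy(mass, velocity):
    """Return the classical kinetic energy of a particle.

    Parameters
    ----------
    mass : float
        Particle mass in kg.
    velocity : float
        Particle speed in m/s.

    Returns
    -------
    float
        Kinetic energy in joules.
    """
    return 0.5 * mass * velocity**2


if __name__ == "__main__":
    # A quick sanity check: 0.5 * 2.0 * 3.0**2 = 9.0 J
    print(kinetic_energy(2.0, 3.0))
```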
diff --git a/numerical_methods/README.md b/numerical_methods/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..041edda9b21cc65b373dc7202392bc997d82392a
--- /dev/null
+++ b/numerical_methods/README.md
@@ -0,0 +1,169 @@
+Numerical Methods
+=================
+
+In this part of the course, we'll go through details of the main methods
+used for a particular type of problem, including some mathematical background
+where appropriate. I'll also show you how some of these methods might be
+implemented in Python, though I'll aim to make these examples understandable
+rather than highly optimized. In each section we'll also include some
+examples of how to use Python packages that implement these methods.
+
+Sometimes you will need to implement solutions yourself. Python offers many
+ways to do the same thing. Even something as simple as summing over the
+elements of a list can be done in a number of ways, some of which will be much
+slower than others. If you can think of several ways to do something it's
+often best to just try them and see how they compare (keeping in mind it's
+only worth spending a lot of time on this if it's in a part of your code
+that's taking a lot of time). The
+[`%timeit`](http://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-timeit)
+ipython/jupyter magic is a useful quick way to do this.
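For instance, here is a small script-style equivalent of that comparison using the standard `timeit` module (the absolute timings will vary from machine to machine; the point is the relative difference between the approaches):

```python
import timeit

import numpy as np

data = list(range(10000))
arr = np.array(data)


def loop_sum(values):
    """Sum the elements of a sequence with an explicit Python loop."""
    total = 0
    for v in values:
        total += v
    return total


# All three approaches compute the same result...
assert loop_sum(data) == sum(data) == int(arr.sum())

# ...but can take quite different amounts of time.
for label, stmt in [("python loop", lambda: loop_sum(data)),
                    ("built-in sum", lambda: sum(data)),
                    ("numpy sum", lambda: arr.sum())]:
    t = timeit.timeit(stmt, number=100)
    print("{:>12s}: {:.4f} s for 100 calls".format(label, t))
```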
+
+Prerequisites
+-------------
+
+You should be comfortable programming in [Python 3](https://www.python.org/),
+have some familiarity with [NumPy](https://www.numpy.org/),
+[MatPlotLib](https://matplotlib.org/) and [SciPy](https://scipy.org/), and
+have access to a computer with all of these installed along with
+[jupyter](http://jupyter.org/). The provided course material is primarily in
+the form of ipython/jupyter notebooks.
+
+To download the course material, you can clone the git repository with
+`git clone https://gitlab.com/eamonn.murray/IntroToScientificComputing.git`.
+You could also create a gitlab account, which you will need to do for the
+[course assignments](Assignments.md), and [fork the
+repository](https://gitlab.com/eamonn.murray/IntroToScientificComputing/forks/new),
+which will create a copy in your gitlab account.
+
+I suggest you also `git pull` in your local copy of the course folder at the
+beginning of each class so you get the latest version. This part of the course
+is in the `numerical_methods` folder. [Other folders](../README.md) contain
+material you may also find useful.
+
+A fairly strong mathematical background will also be beneficial, particularly
+in linear algebra and calculus. Most methods have some discussion or examples
+of their use in the simulation of materials, but the material is otherwise
+quite general.
+
+Contents
+--------
+
+This will be added to as we progress through the course. The main topics are
+divided into directories, each containing an ipython/jupyter notebook with
+details of the methods and python examples.
+
+- [Initial Considerations](#initial-considerations)
+- [Root Finding](01_Root_Finding)
+- [Systems of Equations](02_Systems_of_Equations)
+
+Initial Considerations
+----------------------
+
+There are a number of things you should keep in mind when choosing a method
+for a problem, or when implementing a method.
+
+### Language
+
+In this course, we'll be using Python (version 3). However, choice of language
+can have a significant effect on how fast the code you write actually runs.
+
+Rather than thinking of some languages as inherently slower than others, it's
+more the case that different languages impose different sets of constraints
+that may or may not allow various optimizations when the code is translated
+into machine instructions. In practice this often means there is a trade-off
+between user-friendliness and speed when deciding what language to use, and
+some languages will generally produce faster executables for certain types of
+problems. For example, a Fortran or C implementation of a method will likely
+run several orders of magnitude faster than a native Python implementation.
+
+#### Compiled, interpreted and everything in between
+
+Often people distinguish between compiled and interpreted languages. But,
+strictly speaking, this is a property of an implementation of a language
+rather than some essential property of the language itself.
+- Compiled languages usually have a step where you invoke a compiler to
+ generate a separate executable file from your source code, for example
+ C/C++/Fortran. Once this step is done, you generally no longer need to worry
+ about the compiler, and can just run your executable.
+- Interpreted languages are executed directly more like a script. For example
+ Bash or Perl. These usually run somewhat more slowly than a compiled
+ executable, although are usually more flexible and easier to alter.
+- Many modern language implementations do something in between. The standard
+  Python implementation, for example, compiles your source to an intermediate
+  bytecode representation, regenerating it whenever a change to the source
+  file is detected. This bytecode is not machine code and can't be executed
+  directly by the hardware, but it is in a form that can be executed
+  efficiently by the interpreter.
+- Another approach that's popular in implementations of many languages is
+  what's known as Just-In-Time (JIT) compilation, where the code is compiled
+  at run time rather than in a separate step before execution. This can
+  combine the advantages (and disadvantages) of both compiled and interpreted
+  implementations. There is a JIT-compiled implementation of Python called
+  [PyPy](https://pypy.org/).
+
+#### Static and dynamic typing
+
+- In a statically typed language, the type of each variable is set and its
+ compatibility with each expression in which it's used is checked at compile
+ time.
+ - This means the type of each variable must be declared (typically
+ explicitly, but also possibly implicitly) in the source code.
+ - A variable that's declared to store e.g. an integer, can only be used to
+ store an integer.
+- In a dynamically typed language, the types are only checked at runtime. The
+ types are associated with objects or values that variables can point at
+ rather than the variables themselves.
+ - Python is a dynamically typed language.
+ - You can assign e.g. an integer to a variable, and later assign a string
+ to the same variable without issue.
+ - Types are often set and converted automatically, which can be quite
+ convenient, but may also not always be what you expect.
+- Statically typed languages can usually use better optimizations than
+  dynamically typed ones, and so produce faster executables.
+- The [Cython](http://cython.org/) compiler allows you to declare statically
+  typed variables in Python code to produce much faster executables.
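A quick illustration of the dynamic typing behaviour described above:

```python
# The type belongs to the object a name points at, not to the name itself.
x = 42
print(type(x))      # <class 'int'>

x = "forty-two"     # rebinding the same name to a string is perfectly fine
print(type(x))      # <class 'str'>

# Automatic conversion is convenient but can surprise you: in Python 3,
# dividing two integers always gives a float, even when the result is exact.
print(4 / 2)        # 2.0
print(type(4 / 2))  # <class 'float'>
```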
+
+### Libraries and Python Modules
+
+Of course, you don't need to restrict yourself to a single language when
+approaching a problem. As you've seen in your other classes, you can use
+Python with Fortran. And it's also common to use Python with C or C++. Indeed,
+many numerically intensive modules you might use in Python are in fact
+interfaces to Fortran or C libraries rather than native implementations in
+Python.
+
+While it's important to understand the methods you use, how they scale, and
+their likely strengths and drawbacks, you will rarely need to implement the
+numerically intensive part of the solution yourself, as optimized
+implementations are almost always available as libraries (or as modules which
+allow you to use those libraries in Python).
+
+Libraries are files containing some chunk of code that you can use from your
+own codes rather than needing to reproduce yourself. They'll have some well
+defined interface, allowing you to call them e.g. as a normal subroutine in a
+Fortran code. They can then be linked either statically (built-in to your
+executable when your code is compiled), or dynamically (the separate library
+executable is called from your compiled code and needs to be locatable on the
+machine on which you execute your code). When you load a Python module, you
+make some functions available that you can call in your code, such that you
+don't need to worry about how they're implemented. As mentioned, many are
+actually interfaces to libraries written in some other language. One of the
+reasons for Python's popularity is the ease with which you can create modules
+that add useful functionality like this.
+
+Scaling and Big O Notation
+--------------------------
+
+One of the most important aspects of a method is how it scales. A method that
+scales poorly may complete in a tolerable amount of time for a small problem,
+while a slightly larger one could take years to complete.
+
+The usual way the scaling of a method is indicated is with what's known as
+big O notation. This describes the limiting behaviour of the method as the
+size of the problem increases, and can refer either to the time required by
+the method or to the space (memory or disk) it requires. For example, a
+method where the number of operations is given by `6N + 4N^2 + 2N^4`, where
+N is the size of the problem (e.g. the number of atoms simulated), would be
+O(N^4). This is because as N gets large, the number of operations is always
+less than some constant times N^4. Similarly, O(N) implies the method scales
+linearly with problem size, while O(1) implies constant scaling (the number
+of operations is independent of N).
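To make this concrete, here is a small made-up sketch that counts the operations performed by an O(N) single loop and an O(N^2) nested loop; doubling N doubles the work in the first case and quadruples it in the second:

```python
def count_linear_ops(n):
    """Count the operations of a single loop over n elements: O(N)."""
    ops = 0
    for i in range(n):
        ops += 1
    return ops


def count_pairwise_ops(n):
    """Count the operations of a loop over all pairs of n elements: O(N^2)."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops


for n in (10, 20, 40):
    print(n, count_linear_ops(n), count_pairwise_ops(n))
```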