Commit 3cd09eaf authored by Klaus Strohmenger

re-added non-exercise notebook bayes-theorem, repaired broken links in markdown files

parent 626e0e35
@@ -9,30 +9,30 @@ For the course content, see [Bayesian Learning](./bayesian_learning.pdf).
* [exercise-expected-value](../notebooks/math/probability-theory/exercise-expected-value.ipynb)
* [exercise-variance-dependence-on-sample-size](../notebooks/monte-carlo-simulation/exercise-variance-dependence-on-sample-size.ipynb)
* [exercise-biased-monte-carlo-estimator](../notebooks/monte-carlo-simulation/exercise-baised-monte-carlo-estimator)
* [exercise-biased-monte-carlo-estimator](../notebooks/monte-carlo-simulation/exercise-biased-monte-carlo-estimator.ipynb)
* [exercise-entropy](../notebooks/information-theory/exercise-entropy.ipynb)
* [exercise-kullback-leibler-divergence](../notebooks/information-theory/exercise-kullback-leibler-divergence.ipynb)
* [exercise-multivariate-gaussian](../notebooks/math/probability-theory/distributions//exercise-multivariate-gaussian.ipynb)
* [exercise-multivariate-gaussian](../notebooks/math/probability-theory/distributions/exercise-multivariate-gaussian.ipynb)
* [exercise-bayes-rule](../notebooks/math/probability-theory/exercise-bayes-rule.ipynb)
#### (Maximum) Likelihood and Maximum a Posteriori
* [exercise-univariate-gaussian-likelihood](../notebooks/courses/machine-learning-fundamentals/exercise-univariate-gaussian-likelihood.ipynb)
* [exercise-univariate-gaussian-likelihood](../notebooks/machine-learning-fundamentals/exercise-univariate-gaussian-likelihood.ipynb)
* [exercise-linear-regression-MAP](../notebooks/machine-learning-fundamentals/exercise-linear-regression-MAP.ipynb)
#### Bayesian Networks
* [exercise-bayesian-networks-by-example](../notebooks/graphical-models/directed/exercise-bayesian-networks-by-example.ipynb)
* [exercise-d-separation](../notebooks/graphical-models/directed/exercise-d-separation.ipynb)
* [exercise-forward-reasoning-probability-tables](../notebooks/graphical-models/directed/exercise-forward-reasoning-probability-tables.ipynb)
* [exercise-sensorfusion-and-kalman-filter-1d](../notebooks/courses/sequence-learning/exercise-sensorfusion-and-kalman-filter-1d.ipynb)
* [exercise-sensorfusion-and-kalman-filter-1d](../notebooks/sequence-learning/exercise-sensorfusion-and-kalman-filter-1d.ipynb)
#### EM-Algorithm
* [exercise-EM-simple-example](../notebooks/graphical-models/directed/exercise-EM-simple-example.ipynb)
* [exercise-1d-gmm-em](../notebooks/graphical-models/directed//exercise-1d-gmm-em.ipynb)
* [exercise-1d-gmm-em](../notebooks/graphical-models/directed/exercise-1d-gmm-em.ipynb)
#### Monte-Carlo / MCMC / Sampling
* [exercise-inverse-transform-sampling](../notebooks/monte-carlo-simulation/sampling/exercise-inverse-transform-sampling.ipynb)
* [exercise-importance-sampling](../notebooks/monte-carlo-simulation/sampling/exercise-importance-sampling.ipynb)
* [exercise-rejection-sampling](../notebooks/monte-carlo-simulation/sampling/exercise-rejection-sampling.ipynb)
* [exercise-inverse-transform-sampling](../notebooks/monte-carlo-simulation/sampling/exercise-sampling-inverse-transform.ipynb)
* [exercise-importance-sampling](../notebooks/monte-carlo-simulation/sampling/exercise-sampling-importance.ipynb)
* [exercise-rejection-sampling](../notebooks/monte-carlo-simulation/sampling/exercise-sampling-rejection.ipynb)
* [exercise-sampling-direct-and-simple-MCMC](../notebooks/monte-carlo-simulation/sampling/exercise-sampling-direct-and-simple-MCMC.ipynb)
#### Variance Reduction Techniques
@@ -49,12 +49,12 @@ For the course content, see [Bayesian Learning](./bayesian_learning.pdf).
#### Probabilistic Programming
* [exercise-pyro-simple-gaussian](../notebooks/probabilistic-programming/pyro/exercise-pyro-simple-gaussian.ipynb)
* [exercise-pymc3-examples](../notebooks/probabilistic-programming/pymc/exercise-pymc3-examples.ipynb)
* [exercise-pymc3-bundesliga-predictor](../notebooks/probabilistic-programming/exercise-pymc3-bundesliga-predictor.ipynb)
* [exercise-pymc3-ranking](../notebooks/probabilistic-programming/exercise-pymc3-ranking.ipynb)
* [exercise-pymc3-bundesliga-predictor](../notebooks/probabilistic-programming/pymc/exercise-pymc3-bundesliga-predictor.ipynb)
* [exercise-pymc3-ranking](../notebooks/probabilistic-programming/pymc/exercise-pymc3-ranking.ipynb)
#### Bayesian Deep Learning Examples
* For the exercises you need [dp.py](https://gitlab.com/deep.TEACHING/educational-materials/tree/dev/notebooks/probabilistic-programming/dp.py).
* For the exercises you need [dp.py](https://gitlab.com/deep.TEACHING/educational-materials/tree/dev/notebooks/differentiable-programming/dp.py).
* [exercise-variational-autoencoder](../notebooks/variational/exercise-variational-autoencoder.ipynb)
* [exercise-bayesian-by-backprop](../notebooks/variational/exercise-bayesian-by-backprop.ipynb)
@@ -32,7 +32,7 @@ Before starting with convolution in neural networks, do this exercise to get an
* [exercise-convolution](../notebooks/feed-forward-networks/exercise-convolution.ipynb)
The following pen & paper exercise provides the theoretical background on the vectorization of the convolutional layer; a minimal im2col sketch follows the links below.
[exercise-conv-net-pen-and-paper](../notebooks/feed-forward-networks/exercise-conv-net-pen-and-paper.pdf)
* [exercise-conv-net-pen-and-paper](../notebooks/feed-forward-networks/exercise-conv-net-pen-and-paper.pdf)
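Before working through the pen & paper exercise, it may help to see the core trick in code. The following is a minimal sketch of the im2col idea behind vectorizing a convolution, assuming NumPy; the function names and the tiny example input are illustrative and not part of the exercise framework.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold every (kh x kw) patch of a 2D input into one row of a matrix."""
    h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
    return cols, (out_h, out_w)

def conv2d_vectorized(x, kernel):
    """Valid 2D cross-correlation (the usual deep-learning 'convolution') as one matrix product."""
    kh, kw = kernel.shape
    cols, out_shape = im2col(x, kh, kw)
    return (cols @ kernel.ravel()).reshape(out_shape)

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[1.0, 0.0], [0.0, -1.0]])
print(conv2d_vectorized(x, k))  # same result as sliding the kernel by hand
```

The point is that once every patch is a row of a matrix, the whole convolution collapses into a single matrix product, which is what keeps the layer fast enough for training.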
In this exercise, you will continue to implement the neural network framework that you started in exercise e06_nn_framework. At the end of this exercise, the framework should be extended by a convolutional layer and a pooling layer so that you can create simple ConvNets. You want your operations, especially the convolution, to be efficient so that they do not slow down the training process to an unacceptable degree. Therefore, your goal is to implement vectorized versions of the layers in this exercise; a vectorized pooling sketch follows the link below.
* [exercise-cnn-framework](../notebooks/feed-forward-networks/exercise-cnn-framework.ipynb)
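As a hint for the pooling part, here is one possible vectorized 2x2 max-pooling forward pass. It assumes NumPy, an input of shape (N, C, H, W), and that H and W are divisible by the pool size; this is only a sketch, not the framework's actual layer API.

```python
import numpy as np

def max_pool_2x2(x):
    """Vectorized 2x2 max pooling for an input of shape (N, C, H, W).

    The reshape groups each 2x2 window into its own pair of axes,
    over which we then take the maximum.
    """
    n, c, h, w = x.shape
    x = x.reshape(n, c, h // 2, 2, w // 2, 2)
    return x.max(axis=(3, 5))

x = np.random.randn(8, 3, 28, 28)
print(max_pool_2x2(x).shape)  # (8, 3, 14, 14)
```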
@@ -15,16 +15,16 @@ In this course you will extend the logistic regression model to a fully connecte
## Notebooks
In [Introduction to Machine Learning](introduction-to-ml.md) (highly recommended), you calculated the gradient by hand and just used the final formula. In this exercise you will learn how to derive the individual functions separately and chain them programmatically. This allows you to build computational graphs programmatically and differentiate them w.r.t. certain variables, knowing only the derivatives of the most basic functions; a minimal sketch of this idea follows the link below.
* [exercise-backprop](../notebooks/courses/htw-berlin/angewandte-informatik/course_1163150_cnns/exercise-backprop.ipynb)
* [exercise-backprop](../notebooks/feed-forward-networks/exercise-backprop.ipynb)
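To make the idea concrete, here is a minimal, self-contained sketch (not the notebook's API) of a scalar computational graph: each operation stores its local derivatives, and backpropagation simply chains them.

```python
import math

class Node:
    """A tiny scalar computational-graph node: a value plus a recipe for its gradient."""
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value
        self.parents = parents          # nodes this node was computed from
        self.local_grads = local_grads  # d(self)/d(parent) for each parent
        self.grad = 0.0

    def backward(self, upstream=1.0):
        # chain rule: pass upstream * local gradient down to each parent
        self.grad += upstream
        for parent, local in zip(self.parents, self.local_grads):
            parent.backward(upstream * local)

def mul(a, b):
    return Node(a.value * b.value, (a, b), (b.value, a.value))

def sigmoid(a):
    s = 1.0 / (1.0 + math.exp(-a.value))
    return Node(s, (a,), (s * (1.0 - s),))

# f(w, x) = sigmoid(w * x); derive w.r.t. w and x by backpropagation
w, x = Node(2.0), Node(-1.0)
out = sigmoid(mul(w, x))
out.backward()
print(out.value, w.grad, x.grad)
```

Only the derivatives of the elementary operations (here `mul` and `sigmoid`) are written down by hand; the gradient of the composed function falls out of the graph traversal.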
Here you will learn to visualize a neural network given matrices of weights and to compute the forward pass using matrix and vector operations; see the short forward-pass sketch after the link below.
* [exercise-nn-pen-and-paper](../notebooks/courses/htw-berlin/angewandte-informatik/course_1163150_cnns/exercise-nn-pen-and-paper.ipynb)
* [exercise-nn-pen-and-paper](../notebooks/feed-forward-networks/exercise-nn-pen-and-paper.ipynb)
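As a warm-up for the pen & paper notebook, a short sketch of such a forward pass with NumPy; the layer sizes and weight values are made up purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# illustrative weights: 2 inputs -> 3 hidden units -> 1 output
W1 = np.array([[0.1, -0.2], [0.4, 0.3], [-0.5, 0.6]])  # shape (3, 2)
b1 = np.zeros(3)
W2 = np.array([[0.7, -0.1, 0.2]])                       # shape (1, 3)
b2 = np.zeros(1)

x = np.array([1.0, 2.0])
h = sigmoid(W1 @ x + b1)  # hidden-layer activations
y = sigmoid(W2 @ h + b2)  # network output
print(h, y)
```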
Knowing how to compute the forward pass and the backward pass with backpropagation, you are ready for a simple neural network. First you will refresh your knowledge about logistic regression, but this time implement it using a computational graph. Then you will add a hidden layer. Furthermore, you will understand what happens to the data in the hidden layer by plotting it; a sketch of this kind of plot follows the link below.
* [exercise-simple-neural-network](../notebooks/machine-learning-fundamentals/exercise-simple-neural-network.ipynb)
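The following sketch illustrates the plotting part, assuming NumPy and Matplotlib, toy data, and randomly initialized (untrained) weights; with a 2-unit hidden layer the hidden representation can be scattered directly next to the input space. After training, this is where you would see the classes becoming separable.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# toy 2D data that is not linearly separable (XOR-like quadrants)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# one hidden layer with 2 units so its activations can be plotted directly
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
H = np.tanh(X @ W1.T + b1)  # hidden-layer representation of the data

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].scatter(X[:, 0], X[:, 1], c=y)
axes[0].set_title("input space")
axes[1].scatter(H[:, 0], H[:, 1], c=y)
axes[1].set_title("hidden space")
plt.show()
```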
For a better understanding of neural networks, you will start to implement a framework on your own. The given notebook explains some core functions and concepts of the framework, so that all of you have the same starting point. Our previous exercises were self-contained and not very modular. You are going to change that. Let us begin with a fully connected network on the now well-known MNIST dataset. The pipeline will be:
* [exercise-nn-framework](../notebooks/courses/htw-berlin/angewandte-informatik/course_1163150_cnns/exercise-nn-framework.ipynb)
* [exercise-nn-framework](../notebooks/feed-forward-networks/exercise-nn-framework.ipynb)
In the previous exercises we used to initialize our weights with a normal distribution centered around 0. Although it was not the worst way we could have done this, there are better ways. One is the *Xavier* initialization \[GLO10\], which is still used in practice in state-of-the-art neural network architectures; a minimal sketch follows the TODO item below.
* **TODO:** exercise-weight-initialization
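Until the exercise exists, here is a minimal sketch of the idea, assuming NumPy: Xavier/Glorot initialization scales the standard deviation of the weights with the layer's fan-in and fan-out (variance 2 / (fan_in + fan_out) for the normal variant) instead of using a fixed standard deviation of 1.

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng):
    """Glorot/Xavier normal initialization: variance 2 / (fan_in + fan_out)."""
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_out, fan_in))

rng = np.random.default_rng(0)
W_naive = rng.normal(0.0, 1.0, size=(256, 784))  # std 1: far too large for wide layers
W_xavier = xavier_init(784, 256, rng)
print(W_naive.std(), W_xavier.std())  # ~1.0 vs ~0.04: activations stay in a sane range
```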
@@ -81,16 +81,17 @@ Learning about Named Entity Recognition and applicable algorithms will provide
* Probability Theory
* [Bayes' Theorem](../notebooks/machine-learning-fundamentals/probability-theory/bayes-theorem.ipynb)
* [Exercise: Cookie Problem](../notebooks/machine-learning-fundamentals/probability-theory/exercise-cookie-problem.ipynb)
* [Bayesian Networks by Example](../notebooks/machine-learning-fundamentals/probability-theory/bayesian-networks-by-example.ipynb)
* [Exercise: D-Separation](../notebooks/machine-learning-fundamentals/probability-theory/exercise-d-separation.ipynb)
* [Bayes' Rule](../notebooks/math/bayes-theorem.ipynb)
* [Exercise: Bayes' Rule](../notebooks/math/exercise-bayes-rule.ipynb)
* [Exercise: Cookie Problem](../notebooks/math/exercise-cookie-problem.ipynb)
* [Bayesian Networks by Example](../notebooks/graphical-models/directed/bayesian-networks-by-example.ipynb)
* [Exercise: D-Separation](../notebooks/graphical-models/directed/exercise-d-separation.ipynb)
* Graphical Models
* Markov Models - [Exercise: Bi-Gram Language Model](../notebooks/text-information-extraction/sequences/exercise-bi-gram-language-model.ipynb)
* Hidden Markov Models (HMM) - [Exercise: Hidden Markov Models](../notebooks/natural-language-processing/text-information-extraction/sequences/exercise-hidden-markov-models.ipynb)
* Maximum Entropy Markov Models (MEMM) - [Exercise: MEMM](../notebooks/natural-language-processing/text-information-extraction/sequences/exercise-memm.ipynb)
* Markov Models - [Exercise: Bi-Gram Language Model](../notebooks/sequence-learning/exercise-bi-gram-language-model.ipynb)
* Hidden Markov Models (HMM) - [Exercise: Hidden Markov Models](../notebooks/sequence-learning/exercise-hidden-markov-models.ipynb)
* Maximum Entropy Markov Models (MEMM) - [Exercise: MEMM](../notebooks/sequence-learning/exercise-memm.ipynb)
* Linear-Chain Conditional Random Fields (CRF) - [Exercise: Linear-Chain CRF](../notebooks/natural-language-processing/text-information-extraction/exercise-linear-chain-crf.ipynb)