This post contains links to the course videos and my notes for them.

## General info

- The main URL of the specialization.
- There are 5 courses in the specialization in total:
- Course 1 – Neural Networks and Deep Learning → my notes.
- Course 2 – Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization → my notes.
- Course 3 – Structuring Machine Learning Projects → my notes.
- Course 4 – Convolutional Neural Networks → my notes.
- Course 5 – Sequence Models → my notes.

- You can use `Ctrl`+`F` to quickly find terms and their corresponding notes.

## Syllabus

### Course 1

**Week 1** – **Introduction to deep learning** → my notes

**Week 2** – **Neural Networks Basics** → my notes

- Binary Classification – video
- Logistic Regression – video
- Logistic Regression Cost Function – video
- Gradient Descent – video
- Derivatives – video
- More Derivative Examples – video
- Computation graph – video
- Derivatives with a Computation Graph – video
- Logistic Regression Gradient Descent – video
- Gradient Descent on m Examples – video
- Vectorization – video
- More Vectorization Examples – video
- Vectorizing Logistic Regression – video
- Vectorizing Logistic Regression's Gradient Output – video
- Broadcasting in Python – video
- A note on python/numpy vectors – video
- Quick tour of Jupyter/iPython Notebooks – video
- Explanation of logistic regression cost function (optional) – video
- Pieter Abbeel interview – video
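The vectorization videos above boil down to computing one forward/backward pass over all m examples with no explicit loop. A minimal numpy sketch, using the course's shape convention (X is `(n_x, m)`, one column per example); the function name `propagate` is my own:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    """Vectorized logistic regression pass.
    w: (n_x, 1), b: scalar, X: (n_x, m), Y: (1, m) with 0/1 labels."""
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)          # (1, m); scalar b broadcasts over columns
    cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    dw = (X @ (A - Y).T) / m          # (n_x, 1), same shape as w
    db = np.sum(A - Y) / m
    return dw, db, cost
```

Broadcasting (also a video above) is what lets the scalar `b` be added to the whole `(1, m)` row without an explicit loop.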

**Week 3** – **Shallow neural networks** → my notes

- Neural Networks Overview – video
- Neural Network Representation – video
- Computing a Neural Network's Output – video
- Vectorizing across multiple examples – video
- Explanation for Vectorized Implementation – video
- Activation functions – video
- Why do you need non-linear activation functions? – video
- Derivatives of activation functions – video
- Gradient descent for Neural Networks – video
- Backpropagation intuition (optional) – video
- Random Initialization – video
- Ian Goodfellow interview – video
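As a sketch of the week's core pieces, here is a one-hidden-layer forward pass (tanh hidden units, sigmoid output) plus the small random initialization from the "Random Initialization" video; the helper names are my own:

```python
import numpy as np

def init_params(n_x, n_h, seed=1):
    rng = np.random.default_rng(seed)
    # Small random values, not zeros: zeros would make every hidden
    # unit compute the same thing (the symmetry problem from the video).
    W1 = rng.standard_normal((n_h, n_x)) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = rng.standard_normal((1, n_h)) * 0.01
    b2 = np.zeros((1, 1))
    return W1, b1, W2, b2

def forward(X, W1, b1, W2, b2):
    Z1 = W1 @ X + b1                 # (n_h, m)
    A1 = np.tanh(Z1)                 # non-linear hidden activation
    Z2 = W2 @ A1 + b2                # (1, m)
    A2 = 1.0 / (1.0 + np.exp(-Z2))   # sigmoid output for binary labels
    return A2
```

Without the non-linear `tanh`, the two layers would collapse into a single linear map, which is the point of the "Why do you need non-linear activation functions?" video.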

**Week 4** – **Deep Neural Networks** → my notes

- Deep L-layer neural network – video
- Forward Propagation in a Deep Network – video
- Getting your matrix dimensions right – video
- Why deep representations? – video
- Building blocks of deep neural networks – video
- Forward and Backward Propagation – video
- Parameters vs Hyperparameters – video
- What does this have to do with the brain? – video
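The "Getting your matrix dimensions right" rule is that `W[l]` is `(n[l], n[l-1])` and `b[l]` is `(n[l], 1)`. A minimal sketch of an L-layer forward pass built on that rule (ReLU hidden layers, sigmoid output; function names are my own):

```python
import numpy as np

def init_params(layer_dims, seed=0):
    """layer_dims e.g. [n_x, n_h1, ..., n_y]; W[l] is (n[l], n[l-1])."""
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        params[f"W{l}"] = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) * 0.01
        params[f"b{l}"] = np.zeros((layer_dims[l], 1))
    return params

def forward(X, params):
    L = len(params) // 2       # two entries (W, b) per layer
    A = X
    for l in range(1, L + 1):
        Z = params[f"W{l}"] @ A + params[f"b{l}"]
        # ReLU in hidden layers, sigmoid at the output layer.
        A = np.maximum(0, Z) if l < L else 1.0 / (1.0 + np.exp(-Z))
    return A
```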

### Course 2

**Week 1** – **Practical aspects of Deep Learning** → my notes

- Train / Dev / Test sets – video
- Bias / Variance – video
- Basic Recipe for Machine Learning – video
- Regularization – video
- Why regularization reduces overfitting? – video
- Dropout Regularization – video
- Understanding Dropout – video
- Other regularization methods – video
- Normalizing inputs – video
- Vanishing / Exploding gradients – video
- Weight Initialization for Deep Networks – video
- Numerical approximation of gradients – video
- Gradient checking – video
- Gradient Checking Implementation Notes – video
- Yoshua Bengio interview – video
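The gradient-checking videos compare an analytic gradient against a two-sided finite-difference estimate and report a relative difference. A small generic sketch of that check (the function names are mine, not the course's notation):

```python
import numpy as np

def grad_check(f, grad_f, x, eps=1e-7):
    """Relative difference between grad_f(x) and a two-sided numerical
    gradient of f at x; small values (e.g. < 1e-7) suggest the analytic
    gradient is correct."""
    num = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = eps
        # Two-sided difference: O(eps^2) error vs O(eps) for one-sided.
        num.flat[i] = (f(x + e) - f(x - e)) / (2 * eps)
    ana = grad_f(x)
    return np.linalg.norm(ana - num) / (np.linalg.norm(ana) + np.linalg.norm(num))
```

As the "Implementation Notes" video stresses, this is a debugging tool, far too slow to run during training.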

**Week 2** – **Optimization algorithms** → my notes

- Mini-batch gradient descent – video
- Understanding mini-batch gradient descent – video
- Exponentially weighted averages – video
- Understanding exponentially weighted averages – video
- Bias correction in exponentially weighted averages – video
- Gradient descent with momentum – video
- RMSprop – video
- Adam optimization algorithm – video
- Learning rate decay – video
- The problem of local optima – video
- Yuanqing Lin interview – video
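Exponentially weighted averages and their bias correction (two of the videos above) fit in a few lines: v_t = β·v_{t-1} + (1-β)·x_t, corrected by dividing by 1 - β^t so early values aren't biased toward zero. A minimal sketch:

```python
def ewa(xs, beta=0.9, bias_correction=True):
    """Exponentially weighted average of a sequence.
    v starts at 0, so early uncorrected values are biased low;
    dividing by (1 - beta**t) undoes that."""
    v, out = 0.0, []
    for t, x in enumerate(xs, start=1):
        v = beta * v + (1 - beta) * x
        out.append(v / (1 - beta ** t) if bias_correction else v)
    return out
```

The same moving average is the building block of momentum, RMSprop, and Adam, which combine averages of the gradients and of their squares.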

**Week 3** – **Hyperparameter tuning, Batch Normalization and Programming Frameworks** → my notes

- Tuning process – video
- Using an appropriate scale to pick hyperparameters – video
- Hyperparameters tuning in practice: Panda vs. Caviar – video
- Normalizing activations in a network – video
- Fitting Batch Norm into a neural network – video
- Why does Batch Norm work? – video
- Batch Norm at test time – video
- Softmax Regression – video
- Training a softmax classifier – video
- Deep learning frameworks – video
- TensorFlow – video
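For the softmax videos above, the standard trick is to subtract each column's max before exponentiating; this changes nothing mathematically but avoids overflow. A minimal sketch using the course's column-per-example convention:

```python
import numpy as np

def softmax(Z):
    """Column-wise softmax of logits Z with shape (n_classes, m).
    Shifting by the per-column max is for numerical stability only:
    softmax(Z + c) == softmax(Z) for any per-column constant c."""
    Z_shift = Z - Z.max(axis=0, keepdims=True)
    e = np.exp(Z_shift)
    return e / e.sum(axis=0, keepdims=True)
```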

### Course 3

**Week 1** – **ML Strategy (1)** → my notes

- Why ML Strategy – video
- Orthogonalization – video
- Single number evaluation metric – video
- Satisficing and Optimizing metric – video
- Train/dev/test distributions – video
- Size of the dev and test sets – video
- When to change dev/test sets and metrics – video
- Why human-level performance? – video
- Avoidable bias – video
- Understanding human-level performance – video
- Surpassing human-level performance – video
- Improving your model performance – video
- Andrej Karpathy interview – video

**Week 2** – **ML Strategy (2)** → my notes

- Carrying out error analysis – video
- Cleaning up incorrectly labeled data – video
- Build your first system quickly, then iterate – video
- Training and testing on different distributions – video
- Bias and Variance with mismatched data distributions – video
- Addressing data mismatch – video
- Transfer learning – video
- Multi-task learning – video
- What is end-to-end deep learning? – video
- Whether to use end-to-end deep learning – video
- Ruslan Salakhutdinov interview – video

^• Notes marked with this symbol aren't good enough yet; they are being updated. If you can see this, you are so smart. ;)