This post contains links to the course videos and my notes for each week.

General info

  • The main URL of the specialization.
  • There are 5 courses in the specialization in total.
  • You can use Ctrl + F to quickly find terms and the corresponding notes.

Syllabus

Course 1

  • Week 1 – Introduction to deep learning 👉 my notes
    • Welcome – video
    • What is a neural network? – video
    • Supervised Learning with Neural Networks – video
    • Why is Deep Learning taking off? – video
    • About this Course – video
    • Course Resources – video
    • Geoffrey Hinton interview – video
  • Week 2 – Neural Networks Basics 👉 my notes
    • Binary Classification – video
    • Logistic Regression – video
    • Logistic Regression Cost Function – video
    • Gradient Descent – video
    • Derivatives – video
    • More Derivative Examples – video
    • Computation graph – video
    • Derivatives with a Computation Graph – video
    • Logistic Regression Gradient Descent – video
    • Gradient Descent on m Examples – video
    • Vectorization – video
    • More Vectorization Examples – video
    • Vectorizing Logistic Regression – video
    • Vectorizing Logistic Regression's Gradient Output – video
    • Broadcasting in Python – video
    • A note on python/numpy vectors – video
    • Quick tour of Jupyter/iPython Notebooks – video
    • Explanation of logistic regression cost function (optional) – video
    • Pieter Abbeel interview – video
  • Week 3 – Shallow neural networks 👉 my notes
    • Neural Networks Overview – video
    • Neural Network Representation – video
    • Computing a Neural Network's Output – video
    • Vectorizing across multiple examples – video
    • Explanation for Vectorized Implementation – video
    • Activation functions – video
    • Why do you need non-linear activation functions? – video
    • Derivatives of activation functions – video
    • Gradient descent for Neural Networks – video
    • Backpropagation intuition (optional) – video
    • Random Initialization – video
    • Ian Goodfellow interview – video
  • Week 4 – Deep Neural Networks 👉 my notes
    • Deep L-layer neural network – video
    • Forward Propagation in a Deep Network – video
    • Getting your matrix dimensions right – video
    • Why deep representations? – video
    • Building blocks of deep neural networks – video
    • Forward and Backward Propagation – video
    • Parameters vs Hyperparameters – video
    • What does this have to do with the brain? – video
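The vectorization and broadcasting topics from Week 2 boil down to a few lines of NumPy. Here is a minimal sketch (my own illustration, not code from the course) of the vectorized logistic regression forward pass, using the course's shape convention of one example per column:

```python
import numpy as np

def sigmoid(z):
    """Elementwise logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

n_x, m = 3, 5                      # 3 features, 5 examples (illustrative sizes)
rng = np.random.default_rng(0)
X = rng.standard_normal((n_x, m))  # input matrix, shape (n_x, m)
w = rng.standard_normal((n_x, 1))  # weights, shape (n_x, 1)
b = 0.5                            # bias (a scalar)

# One matrix product replaces an explicit loop over the m examples;
# the scalar b is broadcast across every column of w.T @ X.
Z = w.T @ X + b                    # shape (1, m)
A = sigmoid(Z)                     # predictions in (0, 1), shape (1, m)
print(A.shape)
```

The same broadcasting rule (a scalar or a `(n, 1)` column stretched across columns) is what the "Broadcasting in Python" video covers in detail.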

Course 2

  • Week 1 – Practical aspects of Deep Learning 👉 my notes
    • Train / Dev / Test sets – video
    • Bias / Variance – video
    • Basic Recipe for Machine Learning – video
    • Regularization – video
    • Why regularization reduces overfitting? – video
    • Dropout Regularization – video
    • Understanding Dropout – video
    • Other regularization methods – video
    • Normalizing inputs – video
    • Vanishing / Exploding gradients – video
    • Weight Initialization for Deep Networks – video
    • Numerical approximation of gradients – video
    • Gradient checking – video
    • Gradient Checking Implementation Notes – video
    • Yoshua Bengio interview – video
  • Week 2 – Optimization algorithms 👉 my notes
    • Mini-batch gradient descent – video
    • Understanding mini-batch gradient descent – video
    • Exponentially weighted averages – video
    • Understanding exponentially weighted averages – video
    • Bias correction in exponentially weighted averages – video
    • Gradient descent with momentum – video
    • RMSprop – video
    • Adam optimization algorithm – video
    • Learning rate decay – video
    • The problem of local optima – video
    • Yuanqing Lin interview – video
  • Week 3 – Hyperparameter tuning, Batch Normalization and Programming Frameworks 👉 my notes
    • Tuning process – video
    • Using an appropriate scale to pick hyperparameters – video
    • Hyperparameters tuning in practice: Panda vs. Caviar – video
    • Normalizing activations in a network – video
    • Fitting Batch Norm into a neural network – video
    • Why does Batch Norm work? – video
    • Batch Norm at test time – video
    • Softmax Regression – video
    • Training a softmax classifier – video
    • Deep learning frameworks – video
    • TensorFlow – video

Course 3

  • Week 1 – ML Strategy (1) 👉 my notes
    • Why ML Strategy – video
    • Orthogonalization – video
    • Single number evaluation metric – video
    • Satisficing and Optimizing metric – video
    • Train/dev/test distributions – video
    • Size of the dev and test sets – video
    • When to change dev/test sets and metrics – video
    • Why human-level performance? – video
    • Avoidable bias – video
    • Understanding human-level performance – video
    • Surpassing human-level performance – video
    • Improving your model performance – video
    • Andrej Karpathy interview – video
  • Week 2 – ML Strategy (2) 👉 my notes
    • Carrying out error analysis – video
    • Cleaning up incorrectly labeled data – video
    • Build your first system quickly, then iterate – video
    • Training and testing on different distributions – video
    • Bias and Variance with mismatched data distributions – video
    • Addressing data mismatch – video
    • Transfer learning – video
    • Multi-task learning – video
    • What is end-to-end deep learning? – video
    • Whether to use end-to-end deep learning – video
    • Ruslan Salakhutdinov interview – video

Course 4

Course 5

• Notes marked with this notation aren't good enough yet; they are being updated. If you can see this, you are so smart. ;)