This note serves as a reminder of the book's content, including additional research on the mentioned topics. It is not a substitute for the book. Most images are sourced from the book or referenced.

I've noticed that taking notes on this site while reading significantly extends the time it takes to finish the book. I've stopped noting everything as I did in previous chapters, and instead continue reading while highlighting and hand-writing notes.

*I plan to return to the detailed style when I have more time.*

This book contains **1007 pages** of readable content. If you read at a pace of **10 pages per day**, it will take you approximately **3.3 months** (without missing a day) to finish it. If you aim to complete it in **2 months**, you'll need to read **at least 17 pages per day**.

## Information

## List of notes for this book

**The Perceptron**: one of the simplest ANN architectures (ANN = Artificial Neural Networks)

- The most common step function is the *Heaviside step function*; sometimes the sign function is used instead.
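As a quick sketch (not from the book), the two step functions mentioned above can be written in a couple of lines with numpy:

```python
import numpy as np

def heaviside(z):
    # Heaviside step function: 1 if z >= 0, else 0
    return np.where(z >= 0, 1, 0)

def sign(z):
    # Sign function variant: -1, 0, or +1
    return np.sign(z)
```

Both map the weighted sum of a perceptron's inputs to a discrete output; only the output range differs ({0, 1} vs. {-1, 0, 1}).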

- How is a perceptron trained? → follows Hebb’s rule. “Cells that fire together, wire together” (the connection weight between two neurons tends to increase when they fire simultaneously.)
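A minimal sketch of the perceptron learning rule (a Hebbian-style, error-driven update); the AND-gate data below is a made-up toy example, not from the book:

```python
import numpy as np

def train_perceptron(X, y, eta=1.0, epochs=10):
    """Perceptron learning rule: when a prediction is wrong,
    nudge the weights so the neuron and the correct output
    tend to 'fire together'."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            y_hat = 1 if xi @ w + b >= 0 else 0  # Heaviside step
            update = eta * (yi - y_hat)          # 0 when correct
            w += update * xi
            b += update
    return w, b

# Toy example: learn the (linearly separable) AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
```

Because the update is zero whenever the prediction is correct, training stops changing the weights once every example is classified correctly.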

- Perceptrons have limits (e.g., they cannot solve the XOR problem) → use a
**multilayer perceptron (MLP)**
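To see why stacking layers helps, here is a hand-wired two-layer network (my own illustrative weights, not from the book) that computes XOR, which no single perceptron can:

```python
def step(z):
    # Heaviside step function
    return 1 if z >= 0 else 0

def xor_mlp(x1, x2):
    # Hidden layer: one neuron computes OR, the other AND
    h_or = step(x1 + x2 - 0.5)    # fires if at least one input is 1
    h_and = step(x1 + x2 - 1.5)   # fires only if both inputs are 1
    # Output layer: OR but not AND == XOR
    return step(h_or - h_and - 0.5)
```

XOR is not linearly separable, so a single perceptron (one linear threshold) fails; the hidden layer remaps the inputs into a space where one threshold suffices.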

- Perceptrons do not output a class probability → use logistic regression instead.
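The key difference is the activation: replacing the hard step with the sigmoid (logistic) function yields a smooth output in (0, 1) that can be read as a class probability. A one-liner sketch:

```python
import math

def sigmoid(z):
    # Logistic function: maps any real z to a probability in (0, 1)
    return 1 / (1 + math.exp(-z))
```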

- When an ANN contains a deep stack of hidden layers → **deep neural network (DNN)**

- In the early days, computers were not powerful → training MLPs was a problem, even when using gradient descent.

- → **Backpropagation**: an algorithm to minimize the cost function of MLPs.
    - **Forward propagation**: from X, compute the cost J.
    - **Backward propagation**: compute the derivatives with respect to the params → update the params.
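The forward/backward steps above can be sketched on the smallest possible case: one sigmoid neuron with a squared-error cost. All the numbers here are made up for illustration; a real MLP repeats the same chain rule layer by layer:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

x, y = 1.5, 0.0   # one training example (hypothetical values)
w, b = 0.8, 0.1   # initial parameters (hypothetical values)
lr = 0.5          # learning rate

# Forward propagation: from x to the cost J
z = w * x + b
a = sigmoid(z)
J = 0.5 * (a - y) ** 2

# Backward propagation: chain rule from J back to the params
dJ_da = a - y
da_dz = a * (1 - a)        # derivative of the sigmoid
dJ_dz = dJ_da * da_dz
dJ_dw = dJ_dz * x
dJ_db = dJ_dz

# Gradient-descent update
w -= lr * dJ_dw
b -= lr * dJ_db
```

After one update, recomputing the forward pass gives a smaller cost J, which is exactly what "minimize the cost function" means step by step.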

→ Read this note.

- Watch more: Neural networks | 3Blue1Brown - YouTube