This note serves as a reminder of the book's content, including additional research on the mentioned topics. It is not a substitute for the book. Most images are sourced from the book or referenced.
👉 List of all notes for this book. IMPORTANT UPDATE Nov 18, 2024: I've stopped taking detailed notes from the book and now only highlight and annotate directly in the PDF files/book. With so many books to read, I don't have time to type everything. In the future, if I make notes while reading a book, they'll contain only the most notable points (for me).
- These notes are for the 3rd edition of the book.
- Author: Aurélien Géron.
This book is organized in 2 parts: the fundamentals of machine learning (Part I) and neural networks & deep learning (Part II). Other useful resources:
- Andrew Ng's ML course on Coursera (my notes for the old version of this course).
- Scikit-Learn's User Guide.
- Blogs listed on Quora.
Chapter 1 introduces a lot of fundamental concepts (and jargon) that every data scientist should know by heart. If you are already familiar with machine learning basics, you may want to skip directly to Chapter 2.
A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E. — Tom Mitchell, 1997.
Example: an email spam filter → give it examples of spam/non-spam emails so that it can learn to flag spam.
- Training set: the examples the system uses to learn. Each training example is called a training instance (or sample).
- Model: the part of the ML system that learns and makes predictions. Examples: Neural Networks, Random Forests,…
- T = the task of flagging spam for new emails. E = the training data. The performance measure P needs to be defined, e.g., the ratio of correctly classified emails → this measure is called accuracy.
For example, spam emails often contain words like "4U" in the subject line. Two approaches: (1) hard-code a rule for each pattern you notice, or (2) let the system learn the patterns from examples.
- Using (1), we flag all of these words (all the patterns we can think of) → the spammer switches to "For U" → we must update (1) again → … → bad!
- (2) detects the frequent word patterns in the spam examples and picks up new ones automatically.
Data Mining = digging into large amounts of data to discover hidden patterns.
Machine learning is great for:
- Problems for which existing solutions require a lot of fine-tuning or long lists of rules (a machine learning model can often simplify code and perform better than the traditional approach)
- Complex problems for which using a traditional approach yields no good solution (the best machine learning techniques can perhaps find a solution)
- Fluctuating environments (a machine learning system can easily be retrained on new data, always keeping it up to date)
- Getting insights about complex problems and large amounts of data
- Analyzing images of products on a production line to automatically classify them → Image Classification → using CNNs, Transformers.
- Detecting tumors in brain scans → Image Segmentation → also using CNNs and Transformers.
- Automatically classifying news articles → NLP (Natural Language Processing) → RNNs (Recurrent Neural Networks), Transformers.
- Automatically flagging offensive comments on discussion forums → Text Classification.
- Summarizing long documents automatically → Text Summarization.
- Creating a chatbot or a personal assistant → NLU (Natural Language Understanding), Question-Answering modules.
- Forecasting your company's revenue next year, based on many performance metrics → Linear Regression, Polynomial Regression, SVMs (Support Vector Machines), Random Forests, Neural Networks.
- Making your app react to voice commands → Speech Recognition → RNNs, CNNs, Transformers.
- Detecting credit card fraud → Anomaly Detection → Isolation Forests, Gaussian Mixture Models, Autoencoders.
- Segmenting clients based on their purchases so that you can design a different marketing strategy for each segment → Clustering → K-Means, DBSCAN,…
- Representing a complex, high-dimensional dataset in a clear and insightful diagram → Data Visualization, Dimensionality Reduction.
- Recommending a product that a client may be interested in, based on past purchases → Recommender Systems.
- Building an intelligent bot for a game → Reinforcement Learning.
Classify types of ML systems based on:
- Are they supervised during training? → supervised, unsupervised, semi-supervised, self-supervised,…
- Can they learn incrementally on the fly? → online learning vs. batch learning.
- Do they work by comparing new data points to known data points, or by detecting patterns in the training data and building a predictive model? → instance-based learning vs. model-based learning.
These criteria can be combined.
- The training set fed to the algorithm includes the desired solutions → labels.
- Classification: train on examples with their classes → the model classifies new instances.
- Regression: predicts a target numeric value (e.g., the price of a car) given a set of features/predictors/attributes (e.g., mileage, age, brand,…). Some regression models can also be used for classification → Logistic Regression outputs the probability of belonging to a class (a quick sketch follows this list).
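A quick sketch of that last point (my own toy example with synthetic data, not code from the book): LogisticRegression is a regression-style model that outputs class probabilities, so it works as a classifier.

```python
# Minimal sketch: LogisticRegression as a classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=42)
clf = LogisticRegression()
clf.fit(X, y)                    # supervised: features + labels
print(clf.predict(X[:3]))        # predicted classes
print(clf.predict_proba(X[:3]))  # class probabilities behind each decision
```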
- The training data is unlabeled. → Clustering can be used to detect groups of similar data (see the sketch after this list). If you use a hierarchical clustering algorithm, it may subdivide each group into smaller groups.
- Visualization is an example of unsupervised learning. → These algorithms try to preserve as much structure as they can.
- Dimensionality reduction: simplify the data without losing too much information → e.g., merge correlated features into one.
- Anomaly detection: e.g., detecting unusual credit card transactions, catching manufacturing defects, or automatically removing outliers from a dataset before feeding it to another learning algorithm. → The system learns the normal; when it meets a new instance, it can tell whether it is an anomaly or not.
- Novelty detection: like anomaly detection, but it looks for new instances that look different from all instances in the training set.
- Association rule learning: dig into large amounts of data → find patterns and relations between features, e.g., relations between products bought in a supermarket.
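The clustering sketch mentioned above (my own example on synthetic blobs, not from the book): K-Means finds groups without any labels.

```python
# Minimal clustering sketch: K-Means groups unlabeled, synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)  # labels unused
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)   # cluster index assigned to each instance
print(kmeans.cluster_centers_)   # the learned group centers
```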
These are algorithms dealing with data that is partially labeled. E.g., Google Photos recognizes your face in new photos after you label it in a few, and it can label all faces in a photo.
Most semi-supervised algorithms = unsupervised + supervised. E.g., use clustering to label the unlabeled data, then run a supervised algorithm on the now fully labeled data (a sketch follows).
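A sketch of that clustering-then-supervised idea (my own, assuming the scikit-learn digits dataset; the book covers a similar approach in a later chapter): label only one representative instance per cluster, propagate that label to the whole cluster, then train a supervised model.

```python
# Semi-supervised sketch: cluster, label one instance per cluster, propagate.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
k = 30
kmeans = KMeans(n_clusters=k, n_init=10, random_state=42)
dist = kmeans.fit_transform(X)        # distance of each instance to each centroid
rep_idx = np.argmin(dist, axis=0)     # the instance closest to each centroid
y_rep = y[rep_idx]                    # pretend a human labeled only these 30
y_propagated = y_rep[kmeans.labels_]  # every instance gets its cluster's label
clf = LogisticRegression(max_iter=1000).fit(X, y_propagated)
print(clf.score(X, y))                # accuracy against the true labels
```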
Generate a fully labeled dataset from a fully unlabeled one.
E.g., take a large set of unlabeled images, mask part of each image, and train a model to reconstruct the missing parts. Such a model can also learn to distinguish species such as cats and dogs, even though it doesn't know their names yet; later we can map this knowledge to the labels humans use.
Transfer learning = transferring knowledge from one task to another task. → One of the most important techniques in ML.
Agent = the learning system. → It can observe the environment, select and perform actions, and get rewards (or penalties) in return. → It must learn by itself the best strategy (policy).
Example: DeepMind's AlphaGo beat Ke Jie (the world number one Go player) by learning from millions of games and then playing against itself.
The system is trained on all the available data, offline → Offline Learning.
- The model tends to decay because the world keeps changing → model rot or data drift.
- If you want a batch-learning system to know about new data → retrain it on the full dataset (new + old).
- It's not efficient (time/resource consumption).
- It feeds the system data sequentially (individually or in mini-batches) → quick and cheap; new data can be learned on the fly (see the sketch after this list).
- Can be used if the data changes fast or you have limited computing resources.
- Can also be used to train on huge datasets that cannot fit in memory at once (out-of-core learning).
- Learning rate = how fast the system should adapt to changing data. Too high → it adapts quickly but also quickly forgets old data; too low → it learns slowly but is less sensitive to noise.
- Weakness: the system is vulnerable to bad data being fed while it is live. To reduce the risk, set up a mechanism to turn learning off (and possibly revert to an earlier state) if a drop in performance is detected.
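A sketch of online learning (my own toy example): scikit-learn's SGDRegressor supports partial_fit, and its eta0 parameter plays the role of the learning rate discussed above.

```python
# Online-learning sketch: learn from a stream of mini-batches.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(42)
model = SGDRegressor(learning_rate="constant", eta0=0.01)
true_w = np.array([1.0, -2.0, 0.5])
for _ in range(200):                     # data arriving chunk by chunk
    X_batch = rng.normal(size=(32, 3))
    y_batch = X_batch @ true_w + rng.normal(scale=0.1, size=32)
    model.partial_fit(X_batch, y_batch)  # incremental update; old data discarded
print(model.coef_)                       # should approach [1, -2, 0.5]
```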
- One way to categorize ML systems is by how they generalize.
- Goal: perform well not only on the training data but also on new instances (good generalization).
Learn the examples by heart, then use a measure of similarity to detect "look-alike" instances, e.g., spam emails that closely resemble known spam.
Generalize from the dataset → build a model → use this model to make predictions.
You want to know if money makes people happy?
- From the dataset, you plot life satisfaction against GDP per capita → data studying.
- Based on the plot, a linear model looks like a good fit (satisfaction goes up/down linearly with GDP) → model selection.
- Plot the model.
- How do we know which model parameters are best? → Measure how good the model is with a utility function (or fitness function), or how bad it is with a cost function. → For linear regression, we usually use a cost function that measures the distance between the linear model's predictions and the training examples → objective: minimize the cost function!
- Predict for new data → inference (the full loop is sketched after this list).
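This is roughly the book's Chapter 1 code example (the data URL and column names are written from memory, so treat them as assumptions): select a linear model, train it, and run inference on a new GDP value.

```python
# Model-based learning: life satisfaction vs. GDP per capita.
import pandas as pd
from sklearn.linear_model import LinearRegression

data_root = "https://github.com/ageron/data/raw/main/"   # assumed location
lifesat = pd.read_csv(data_root + "lifesat/lifesat.csv")
X = lifesat[["GDP per capita (USD)"]].values             # feature
y = lifesat[["Life satisfaction"]].values                # target

model = LinearRegression()          # model selection: a linear model
model.fit(X, y)                     # training: minimize the cost function
print(model.predict([[37_655.2]]))  # inference for a new country's GDP
```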
2 things can go wrong when training models → "bad model" & "bad data".
- Insufficient Quantity of Training Data → for a child it's easy to learn to recognize "an apple" from a few examples; not so for ML models, which need a lot of data!
In this paper, Microsoft researchers showed that, with enough data, very different models perform almost identically!
→ The idea that data matters more than algorithms for complex problems.
However, extra data is not always easy or cheap to get, so don't abandon algorithms just yet!
- Nonrepresentative Training Data → to generalize well, the training data must be representative of the new cases you want to generalize to!
- If the sample is too small → sampling noise (nonrepresentative data by chance). Even a large sample can be nonrepresentative if the sampling method is flawed → sampling bias!
- Poor-Quality Data: it's worth spending time cleaning up the training data → most data scientists spend a significant part of their time doing just that!
- Irrelevant Features: garbage in, garbage out. A critical part of the success of a machine learning project is coming up with a good set of features to train on → feature engineering (see the sketch after this list). 2 steps:
- Feature selection: select the most useful features to train on.
- Feature extraction: combine existing features to produce a more useful one.
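The feature-engineering sketch mentioned above (my own example on the iris dataset): SelectKBest illustrates feature selection, PCA illustrates feature extraction.

```python
# Feature engineering sketch: selection vs. extraction.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
X_sel = SelectKBest(f_classif, k=2).fit_transform(X, y)  # keep the 2 most useful features
X_ext = PCA(n_components=2).fit_transform(X)             # merge correlated features
print(X_sel.shape, X_ext.shape)
```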
- Overfitting the training data: the model performs well on the training data, but it does not generalize well. It happens when the model is too complex relative to the amount (and noisiness) of the data. Possible solutions:
- Simplify the model: fewer parameters, reduce the number of attributes, constrain the model,…
- Gather more training data.
- Reduce the noise in the data (fix errors, remove outliers).
Regularization = constraining a model to make it simpler and reduce the risk of overfitting. The amount of regularization to apply during learning is controlled by hyperparameters.
You want to find the right balance between fitting the training data perfectly and keeping the model simple enough to ensure that it will generalize well.
→ Tuning hyperparameters is an important part of building a machine learning system! (a sketch follows)
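A regularization sketch (my own toy example): Ridge is a constrained linear model, and its alpha hyperparameter controls the amount of regularization.

```python
# Regularization sketch: larger alpha -> smaller coefficients -> simpler model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 1))
y = 3 * X[:, 0] + rng.normal(scale=0.5, size=50)

for alpha in (0.01, 1.0, 100.0):        # hyperparameter, fixed before training
    model = Ridge(alpha=alpha).fit(X, y)
    print(alpha, model.coef_)           # coefficients shrink as alpha grows
```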
- Underfitting the training data: your model is too simple to learn the underlying structure of the data. Possible solutions:
- Select a more powerful model (with more parameters).
- Feed better features to the algorithm (feature engineering).
- Reduce the constraints on the model (e.g., reduce the regularization hyperparameter).
- Split the data into 2 sets: a training set (train the model on it) and a test set (check how well the model works on it). Commonly 80% training and 20% test, but it depends on the size of the dataset (a sketch follows this list).
- Evaluate your model on the test set → the error rate on new cases is the generalization error (out-of-sample error).
- If the training error is low but the generalization error is high → overfitting.
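A sketch of the split (my own example): hold out 20% as the test set and compare the training score against the test score.

```python
# Train/test split sketch: the test score estimates the generalization error.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)      # 80% train / 20% test
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train:", model.score(X_train, y_train))
print("test :", model.score(X_test, y_test))   # much lower -> overfitting
```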
Problem: you have a model → how do you choose the value of a regularization hyperparameter? → Train 100 different models using 100 different values → evaluate on the test set → pick the best value → but when you apply the model to real data, it performs badly → why? Because the hyperparameter was tuned for that particular test set!
→ The common solution is holdout validation.
Holdout validation: split the training set into a "new" training set + a validation set (a.k.a. development set, or dev set).
Process: train multiple models (with various hyperparameters) on the "new" training set → select the model that performs best on the validation set → retrain that best model on the whole training set (new + validation) → final model → evaluate it on the test set.
Problems:
- If the validation set is too small → model evaluations are imprecise → you may select a suboptimal model.
- If the validation set is too large → the remaining training set is much smaller than the full training set → bad, because it's like "selecting the fastest sprinter to run a marathon" → solution: perform repeated cross-validation (use many small validation sets and average the evaluations, as sketched below) → drawback: the training time is multiplied by the number of validation sets!
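The cross-validation sketch mentioned above (my own example): average the score over several folds instead of trusting a single small validation set.

```python
# Repeated validation sketch: 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())  # cost: training happens 5 times
```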
It is easy to obtain a large amount of data, but such data may not be representative of the data that will be used in production.
For example, when building a mobile app to detect flowers, training the model on pictures downloaded from the web may not yield accurate results on photos taken with the app → if the model then performs poorly, we can't tell whether it's because of overfitting or because of the data mismatch!
Remember: validation set & test set must be representative of the data you expect to use in production!
Solution: hold out some training pictures (from the web) in a train-dev set → Idea: train on the "train" set + evaluate on the "train-dev" set → if performance is poor, it's overfitting. Otherwise → no overfitting → evaluate on the "dev" set → if performance is poor, it's a data mismatch! → when it's good → evaluate on the test set → when it's good → production.
No free lunch theorem
If you make absolutely no assumption about the data, then there is no reason to prefer one model over any other. In practice you make some reasonable assumptions about the data and evaluate only a few reasonable models.
Read the book.