- Kaggle Dogs vs. Cats: a very famous image-classification dataset from a Kaggle competition.
- Cropping an image can make the prediction better!
- Create a larger dataset by rotating, scaling, cropping, etc. (image augmentation).
👉 Notebook: Using more sophisticated images with CNN.
👉 Notebook: Cat vs Dog simple DNN.
Shuffle the images, then split/copy them into training/testing folders for each class (cat and dog); a sketch follows.
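A minimal sketch of that split step (the folder paths, the helper name `split_data`, and the 90/10 ratio are illustrative assumptions, not from the notes):

```python
import os
import random
import shutil

def split_data(source_dir, training_dir, testing_dir, split_size=0.9):
    """Shuffle the images in source_dir, then copy split_size of them
    to training_dir and the rest to testing_dir (dirs must exist)."""
    # Keep only non-empty files (the Kaggle set contains a few zero-byte images).
    files = [f for f in os.listdir(source_dir)
             if os.path.getsize(os.path.join(source_dir, f)) > 0]
    random.shuffle(files)

    split_point = int(len(files) * split_size)
    for f in files[:split_point]:
        shutil.copyfile(os.path.join(source_dir, f), os.path.join(training_dir, f))
    for f in files[split_point:]:
        shutil.copyfile(os.path.join(source_dir, f), os.path.join(testing_dir, f))

# Run once per class (hypothetical paths):
# split_data('PetImages/Cat/', 'cats-v-dogs/training/cats/', 'cats-v-dogs/testing/cats/')
# split_data('PetImages/Dog/', 'cats-v-dogs/training/dogs/', 'cats-v-dogs/testing/dogs/')
```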
👉 Notebook: Cats v Dogs using augmentation & the final exercise (more data).
👉 Notebook: Human vs Horse using augmentation.
- Create many modified variants of the original images on the fly, quickly and without saving them to disk (see the sketch after this list).
- Image augmentation helps you avoid overfitting.
- For the meaning of the augmentation parameters, check this video.
- You need a broad set of images in BOTH the training and testing sets; augmenting only the training images won't help if the test images lack the same variety.
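A sketch of on-the-fly augmentation with Keras' `ImageDataGenerator`; the parameter values and folder path are illustrative assumptions:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation happens in memory at batch time; no augmented
# files are ever written to disk.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalize pixel values to [0, 1]
    rotation_range=40,       # random rotations up to 40 degrees
    width_shift_range=0.2,   # random horizontal shifts (fraction of width)
    height_shift_range=0.2,  # random vertical shifts (fraction of height)
    shear_range=0.2,         # random shearing
    zoom_range=0.2,          # random zoom in/out
    horizontal_flip=True,    # random horizontal flips
    fill_mode='nearest')     # fill newly exposed pixels

# The test/validation generator only rescales; never augment test data.
test_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = train_datagen.flow_from_directory(
    'cats-v-dogs/training/',   # hypothetical path
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
```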
👉 Notebook: Coding transfer learning from the Inception model. ▶ This video explains the notebook.
👉 Notebook: Horses v Humans using callbacks, augmentation, and transfer learning (final exercise).
- Transfer learning = taking an existing model that was trained on far more data and reusing the features that model has already learned (sketch below).
- (TensorFlow tutorial) Transfer learning and fine-tuning
- Inception/GoogLeNet network: Inception Modules are used in CNNs to allow for more efficient computation and deeper networks through dimensionality reduction with stacked 1x1 convolutions. The modules were designed to solve the problems of computational expense and overfitting, among other issues. The solution, in short, is to take multiple kernel filter sizes within the CNN and, rather than stacking them sequentially, order them to operate on the same level. (ref) Check more in References 1, 2, 3.
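A minimal transfer-learning sketch in Keras, assuming ImageNet weights and a cut at the `'mixed7'` layer of InceptionV3; the input size, head layers, and learning rate are illustrative assumptions:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.optimizers import RMSprop

# Load InceptionV3 without its fully-connected top, with ImageNet weights.
pre_trained_model = InceptionV3(
    input_shape=(150, 150, 3), include_top=False, weights='imagenet')

# Freeze the pre-trained layers so only the new head is trained.
for layer in pre_trained_model.layers:
    layer.trainable = False

# Take the output of an intermediate mixed layer as the feature extractor.
last_output = pre_trained_model.get_layer('mixed7').output

# Attach a small trainable classification head.
x = layers.Flatten()(last_output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)                    # regularize the new head
x = layers.Dense(1, activation='sigmoid')(x)  # binary: horse vs human

model = Model(pre_trained_model.input, x)
model.compile(optimizer=RMSprop(learning_rate=1e-4),
              loss='binary_crossentropy', metrics=['accuracy'])
```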
- Dropout: randomly remove a fraction of the neurons in your NN during training (snippet below). It works well because:
- neighboring neurons often end up with similar weights, which can lead to overfitting.
- a neuron can over-weight the input from a particular neuron in the previous layer.
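In Keras, dropout is just a layer; a minimal sketch (the 20% rate and layer sizes are illustrative assumptions):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Flatten(input_shape=(150, 150, 3)),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.2),   # randomly zero 20% of the units each training step
    layers.Dense(1, activation='sigmoid'),
])
```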
👉 Rock-Paper-Scissors dataset (generated using CGI techniques)
The code is almost the same as in the binary-classification case; the differences (shown in the snippet below) are:
- `class_mode='categorical'` (instead of `'binary'`) in `flow_from_directory`.
- The output layer has one neuron per class with a `softmax` activation (instead of a single `sigmoid` neuron).
- The loss is `categorical_crossentropy` (instead of `binary_crossentropy`).
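A minimal multiclass sketch showing those differences (the path and layer sizes are assumptions):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generator: class_mode switches from 'binary' to 'categorical'.
train_generator = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    'rps/',                  # hypothetical path to rock/paper/scissors folders
    target_size=(150, 150),
    class_mode='categorical')

model = models.Sequential([
    layers.Conv2D(64, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    layers.MaxPooling2D(2, 2),
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    # Output: one neuron per class with softmax instead of a single sigmoid.
    layers.Dense(3, activation='softmax'),
])

# Loss: categorical_crossentropy instead of binary_crossentropy.
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy', metrics=['accuracy'])
```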
- Will applying convolutions on top of our deep neural network make training faster? → It depends on many factors. It might make your training faster or slower, and a poorly designed convolutional layer may even be less efficient than a plain DNN!