Multi-Class Deep Learning Model for CIFAR-10 Object Recognition Using Keras Take 8

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The purpose of this project is to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The CIFAR-10 dataset presents a multi-class classification problem in which we try to predict one of ten possible classes.

INTRODUCTION: CIFAR-10 is a labeled subset of the 80 Million Tiny Images dataset, collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset consists of 60,000 32×32 color images in 10 classes, with 6,000 images per class: 50,000 training images and 10,000 test images.

The dataset is divided into five training batches and one test batch, each with 10,000 images. The test batch contains exactly 1,000 randomly selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5,000 images from each class.
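The preparation steps implied by this dataset description (pixel scaling and one-hot label encoding) can be sketched as follows. The real data would come from keras.datasets.cifar10.load_data(), which returns arrays of exactly these shapes; a small random stand-in is used here so the sketch runs offline:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

# CIFAR-10 images are 32x32 RGB; labels are integers 0-9.
# keras.datasets.cifar10.load_data() returns arrays of these shapes;
# a random stand-in keeps this sketch self-contained.
x_train = np.random.randint(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
y_train = np.random.randint(0, 10, size=(100, 1))

# Scale pixel values to [0, 1] and one-hot encode the class labels.
x_train = x_train.astype("float32") / 255.0
y_train = to_categorical(y_train, num_classes=10)

print(x_train.shape, y_train.shape)  # (100, 32, 32, 3) (100, 10)
```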

In iteration Take1, we constructed a simple VGG convolutional model with one VGG block to classify the images. This model serves as the baseline for future iterations of modeling.

In iteration Take2, we constructed two more VGG convolutional models, with two and three VGG blocks respectively, to classify the images. The additional models enabled us to choose a final baseline model before applying other performance-enhancing techniques.
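The VGG-block models described above can be sketched as a small builder function, where each block stacks two 3×3 convolutions followed by max pooling. The filter counts (32, 64, 128) and the 128-unit dense head are illustrative assumptions, not necessarily the exact configuration used in this report:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

def build_vgg(num_blocks):
    """Stack `num_blocks` VGG blocks (two 3x3 convs + max pooling),
    doubling the filter count per block, then add a classifier head."""
    model = Sequential([Input(shape=(32, 32, 3))])
    filters = 32
    for _ in range(num_blocks):
        model.add(Conv2D(filters, (3, 3), activation="relu", padding="same"))
        model.add(Conv2D(filters, (3, 3), activation="relu", padding="same"))
        model.add(MaxPooling2D((2, 2)))
        filters *= 2
    model.add(Flatten())
    model.add(Dense(128, activation="relu"))
    model.add(Dense(10, activation="softmax"))
    return model

# A 3-block model: 32x32 -> 16x16 -> 8x8 -> 4x4 feature maps.
model = build_vgg(3)
model.summary()
```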

In iteration Take3, we tuned the VGG-3 model with various hyperparameters and selected the best model.

In iteration Take4, we added some dropout layers as a regularization technique to reduce over-fitting.
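A sketch of where the dropout layers might sit in the VGG-3 architecture, assuming the common scheme of one Dropout after each pooling layer with rates increasing toward the classifier head; the specific rates (0.2 to 0.5) are illustrative assumptions:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Dropout,
                                     Flatten, Dense)

model = Sequential([
    Input(shape=(32, 32, 3)),
    Conv2D(32, (3, 3), activation="relu", padding="same"),
    Conv2D(32, (3, 3), activation="relu", padding="same"),
    MaxPooling2D((2, 2)),
    Dropout(0.2),   # drop 20% of activations after the first block
    Conv2D(64, (3, 3), activation="relu", padding="same"),
    Conv2D(64, (3, 3), activation="relu", padding="same"),
    MaxPooling2D((2, 2)),
    Dropout(0.3),
    Conv2D(128, (3, 3), activation="relu", padding="same"),
    Conv2D(128, (3, 3), activation="relu", padding="same"),
    MaxPooling2D((2, 2)),
    Dropout(0.4),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),   # heavier dropout just before the output layer
    Dense(10, activation="softmax"),
])
```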

In iteration Take5, we applied data augmentation to the dataset as a regularization technique to reduce over-fitting.
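The augmentation step can be sketched with Keras preprocessing layers (the template this report adapts often uses ImageDataGenerator instead; both apply random transformations at training time). The flip and shift factors below are assumptions, not the tuned values:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Random horizontal flips plus ~10% height/width shifts, a common
# augmentation recipe for CIFAR-10.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomTranslation(height_factor=0.1, width_factor=0.1),
])

batch = np.random.rand(8, 32, 32, 3).astype("float32")
# Augmentation layers only transform inputs when training=True.
augmented = augment(batch, training=True)
print(augmented.shape)  # (8, 32, 32, 3)
```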

In iteration Take6, we combined dropout layers and data augmentation to reduce over-fitting.

In iteration Take7, we combined dropout layers, data augmentation, and batch normalization to reduce over-fitting.
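A sketch of one VGG-style block with batch normalization, assuming the common placement between each convolution and its ReLU activation; the full Take7 model would repeat this pattern across all blocks:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                     Activation, MaxPooling2D, Flatten, Dense)

# One VGG-style block: conv -> batch norm -> ReLU, twice, then pooling.
model = Sequential([
    Input(shape=(32, 32, 3)),
    Conv2D(32, (3, 3), padding="same"),
    BatchNormalization(),   # normalize conv outputs before the nonlinearity
    Activation("relu"),
    Conv2D(32, (3, 3), padding="same"),
    BatchNormalization(),
    Activation("relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(10, activation="softmax"),
])
```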

In this iteration, we will tune the Take7 model further by experimenting with different optimizers and learning rates.
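The optimizer and learning-rate experiments can be sketched as a loop over candidate settings; the specific optimizers and values below are illustrative assumptions, not the ones tuned in this report, and a tiny stand-in model keeps the sketch self-contained:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Flatten, Dense
from tensorflow.keras.optimizers import SGD, Adam, RMSprop

# Candidate optimizer/learning-rate settings to compare.
candidates = {
    "sgd": SGD(learning_rate=0.01, momentum=0.9),
    "adam": Adam(learning_rate=0.001),
    "rmsprop": RMSprop(learning_rate=0.001),
}

for name, optimizer in candidates.items():
    # A minimal placeholder model; the real experiment would rebuild
    # the full Take7 architecture for each candidate.
    model = Sequential([Input(shape=(32, 32, 3)), Flatten(),
                        Dense(10, activation="softmax")])
    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(...) would then be run per candidate and the
    # validation accuracies compared.
    print(name, "compiled")
```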

ANALYSIS: In iteration Take1, the Take1 model with the default parameters achieved an accuracy score of 66.39% on the validation dataset after training for 50 epochs. After tuning the hyperparameters, the Take1 model with the best hyperparameters processed the training dataset with an accuracy of 100.00%. The same model, however, processed the test dataset with an accuracy of only 67.01%. We will need to explore other modeling approaches to build a better model that reduces over-fitting.

In iteration Take2, the VGG-1 model with the default parameters achieved an accuracy score of 66.69% on the validation dataset after training for 50 epochs. Under the same settings, the VGG-2 model achieved 71.35% and the VGG-3 model achieved 73.81%. The additional VGG blocks helped the model, but we still need to explore other modeling approaches to build a better model that reduces over-fitting.

In iteration Take3, the VGG-3 Take3 model with the default parameters achieved a maximum accuracy score of 73.43% on the validation dataset after training for 50 epochs. After tuning the hyperparameters, the Take3 model with the best hyperparameters processed the training dataset with an accuracy of 98.09%. The same model, however, processed the test dataset with an accuracy of only 73.44%. Even with VGG-3 and hyperparameter tuning, we still have an over-fitting problem with the model.

In iteration Take4, the Take4 model with the default parameters achieved a maximum accuracy score of 76.96% on the validation dataset after training for 50 epochs. We can see from the graph that the accuracy and loss curves for the training and validation sets moved in the same direction and converged well. After increasing the number of epochs, the Take4 model processed the training dataset with an accuracy of 82.22% after 100 epochs. The same model processed the test dataset with an accuracy of 82.35%. This iteration indicated to us that adding dropout layers can be a good tactic for improving the model’s predictive performance.

In iteration Take5, the Take5 model with the default parameters achieved a maximum accuracy score of 81.41% on the validation dataset after training for 50 epochs. We can see from the graph that the accuracy and loss curves for the training and validation sets moved in the same direction and converged well. After increasing the number of epochs, the Take5 model processed the training dataset with an accuracy of 91.60% after 100 epochs. The same model processed the test dataset with an accuracy of 84.43%. This iteration indicated to us that applying data augmentation can be a good tactic for improving the model’s predictive performance.

In iteration Take6, the Take6 model with the default parameters achieved a maximum accuracy score of 84.12% on the validation dataset after training for 100 epochs. We can see from the graph that the accuracy and loss curves for the training and validation sets moved in the same direction and converged well. After increasing the number of epochs, the Take6 model processed the training dataset with an accuracy of 87.21% after 200 epochs. The same model processed the test dataset with an accuracy of 85.79%. This iteration indicated to us that combining dropout layers and data augmentation can be a good tactic for improving the model’s predictive performance.

In iteration Take7, the Take7 model with the default parameters achieved a maximum accuracy score of 87.15% on the validation dataset after training for 200 epochs. We can see from the graph that the accuracy and loss curves for the training and validation sets moved in the same direction and converged well. After increasing the number of epochs, the Take7 model processed the training dataset with an accuracy of 90.15% after 400 epochs. The same model processed the test dataset with an accuracy of 89.02%. This iteration indicated to us that combining dropout layers, data augmentation, and batch normalization can be a good tactic for improving the model’s predictive performance.

In this iteration, the Take8 model with the default parameters achieved a maximum accuracy score of 88.35% on the validation dataset after training for 400 epochs. After trying out different optimizers and settings, the best Take8 model processed the training dataset with an accuracy of 92.92%. The same model processed the test dataset with an accuracy of 90.36%. This iteration indicated to us that the RMSprop optimizer can be a good option for improving the model’s predictive performance on this dataset.

CONCLUSION: For this dataset, the model built using Keras and TensorFlow achieved a satisfactory result and should be considered for future modeling activities.

Dataset Used: The CIFAR-10 Dataset

Dataset ML Model: Multi-class classification with numerical attributes

Dataset Reference: https://www.cs.toronto.edu/~kriz/cifar.html

One potential source of performance benchmarks: https://machinelearningmastery.com/how-to-develop-a-cnn-from-scratch-for-cifar-10-photo-classification/

The HTML formatted report can be found here on GitHub.