Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.
SUMMARY: The purpose of this project is to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The CIFAR-10 dataset presents a multi-class classification problem in which we try to predict one of several (more than two) possible outcomes.
INTRODUCTION: CIFAR-10 is a labeled subset of the 80 Million Tiny Images dataset, collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. It consists of 60,000 32×32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.
The dataset is divided into five training batches and one test batch, each with 10,000 images. The test batch contains exactly 1,000 randomly selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5,000 images from each class.
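For readers who want to reproduce the data preparation, the following is a minimal sketch of loading CIFAR-10 through the Keras datasets API (assuming TensorFlow 2.x). Rescaling the pixels to [0, 1] and one-hot encoding the labels are common preparation steps assumed here for illustration, not necessarily the exact ones used in the report.

    # Minimal sketch: load CIFAR-10 with the Keras datasets API (TensorFlow 2.x assumed).
    from tensorflow.keras.datasets import cifar10
    from tensorflow.keras.utils import to_categorical

    # 50,000 training and 10,000 test images, 32x32 pixels, 3 color channels.
    (X_train, y_train), (X_test, y_test) = cifar10.load_data()
    print(X_train.shape, y_train.shape)  # (50000, 32, 32, 3) (50000, 1)
    print(X_test.shape, y_test.shape)    # (10000, 32, 32, 3) (10000, 1)

    # Assumed preparation: scale pixels to [0, 1] and one-hot encode the 10 labels.
    X_train = X_train.astype('float32') / 255.0
    X_test = X_test.astype('float32') / 255.0
    y_train = to_categorical(y_train, num_classes=10)
    y_test = to_categorical(y_test, num_classes=10)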
In iteration Take1, we constructed a simple VGG convolutional model with 1 VGG block to classify the images. That model serves as the baseline for future modeling iterations.
For this iteration, we will construct two additional VGG convolutional models, with 2 and 3 VGG blocks respectively, to classify the images (a sketch of the block structure follows below). Comparing these models will let us choose a final baseline model before applying other performance-enhancing techniques.
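To make the block structure concrete, below is a hedged sketch of a VGG-style Keras model with a configurable number of blocks. The filter counts (32, doubling with each block) and the 128-unit dense head are illustrative assumptions, not necessarily the exact architecture behind the VGG-1, VGG-2, and VGG-3 results reported in the analysis.

    # Sketch of a VGG-style model with a configurable number of blocks.
    # Filter counts and the 128-unit dense head are illustrative assumptions.
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    def build_vgg_model(num_blocks=1):
        model = Sequential()
        filters = 32
        for block in range(num_blocks):
            # One "VGG block": two 3x3 convolutions followed by 2x2 max pooling.
            if block == 0:
                model.add(Conv2D(filters, (3, 3), activation='relu', padding='same',
                                 input_shape=(32, 32, 3)))
            else:
                model.add(Conv2D(filters, (3, 3), activation='relu', padding='same'))
            model.add(Conv2D(filters, (3, 3), activation='relu', padding='same'))
            model.add(MaxPooling2D((2, 2)))
            filters *= 2  # double the filters in each successive block
        # Classifier head: flatten, one hidden dense layer, 10-way softmax output.
        model.add(Flatten())
        model.add(Dense(128, activation='relu'))
        model.add(Dense(10, activation='softmax'))
        return model

    # The three candidate baselines compared in this iteration.
    vgg1 = build_vgg_model(1)
    vgg2 = build_vgg_model(2)
    vgg3 = build_vgg_model(3)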
ANALYSIS: In iteration Take1, the model with the default parameters achieved an accuracy score of 66.39% on the validation dataset after training for 50 epochs. After hyperparameter tuning, the Take1 model with the best hyperparameters reached 100.00% accuracy on the training dataset but only 67.01% on the test dataset, a clear sign of over-fitting. We need to explore other modeling approaches to build a model that generalizes better.
In this iteration, the VGG-1 model with the default parameters achieved an accuracy score of 66.69% on the validation dataset after training for 50 epochs. Under the same training regime, the VGG-2 model reached 71.35% and the VGG-3 model reached 73.81% on the validation dataset. The additional VGG blocks improved the results, but we still need to explore other modeling approaches to further reduce over-fitting.
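Continuing the two sketches above, the training and scoring loop behind these numbers would look roughly like the following. The optimizer settings (SGD with learning rate 0.001 and momentum 0.9), the batch size of 64, and the use of the held-out test split as the validation data are assumptions for illustration only.

    # Sketch: train each candidate for 50 epochs and report validation accuracy.
    # Optimizer settings, batch size, and validation split are assumptions.
    from tensorflow.keras.optimizers import SGD

    for name, model in [('VGG-1', vgg1), ('VGG-2', vgg2), ('VGG-3', vgg3)]:
        model.compile(optimizer=SGD(learning_rate=0.001, momentum=0.9),
                      loss='categorical_crossentropy',
                      metrics=['accuracy'])
        model.fit(X_train, y_train, epochs=50, batch_size=64,
                  validation_data=(X_test, y_test), verbose=0)
        _, val_acc = model.evaluate(X_test, y_test, verbose=0)
        print(f'{name}: validation accuracy = {val_acc:.2%}')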
CONCLUSION: For this dataset, the three-block VGG model built using Keras and TensorFlow achieved the most satisfactory result (73.81% validation accuracy) and should be considered the baseline for future modeling activities.
Dataset Used: The CIFAR-10 Dataset
Dataset ML Model: Multi-class classification with numerical attributes
Dataset Reference: https://www.cs.toronto.edu/~kriz/cifar.html
One potential source of performance benchmarks: https://machinelearningmastery.com/how-to-develop-a-cnn-from-scratch-for-cifar-10-photo-classification/
The HTML formatted report can be found here on GitHub.