Multi-Class Image Classification Deep Learning Model for ASL Alphabet Images Using TensorFlow Take 2

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The ASL Alphabet Images dataset is a multi-class classification situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: The dataset is a collection of images of letters from the American Sign Language alphabet, separated into 29 folders that represent the various classes. The training dataset contains 87,000 images, each 200×200 pixels. There are 29 classes: 26 for the letters A-Z and three for SPACE, DELETE, and NOTHING. The test dataset contains only 28 images to encourage the use of real-world test images.
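As a rough illustration of how this class-per-folder layout can be read into TensorFlow, the sketch below uses `tf.keras.utils.image_dataset_from_directory`. The directory name, batch size, and 80/20 validation split are assumptions for illustration, not details taken from the original report.

```python
import tensorflow as tf

IMG_SIZE = (200, 200)   # images in the dataset are 200x200 pixels
BATCH_SIZE = 32         # assumed batch size for illustration

# Load the 29-class training images from the one-folder-per-class layout,
# holding out a portion for validation (the split ratio is an assumption).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_alphabet_train",   # hypothetical local path to the unzipped training folder
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)

val_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_alphabet_train",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)
```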

In this Take 2 iteration, we will construct a CNN model based on the VGG19 architecture to predict the ASL alphabet letters from the available images.
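The report does not reproduce the model code here, but a minimal transfer-learning sketch of a VGG19-based classifier in TensorFlow/Keras might look like the following. The frozen ImageNet base, the 256-unit dense head, and the dropout rate are illustrative assumptions; the ten training epochs match the figure quoted in the analysis.

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

NUM_CLASSES = 29  # 26 letters plus SPACE, DELETE, and NOTHING

# VGG19 convolutional base pre-trained on ImageNet, classifier head removed.
base_model = VGG19(weights="imagenet", include_top=False, input_shape=(200, 200, 3))
base_model.trainable = False  # freezing the base is an assumption; the report may have fine-tuned

# Small classification head stacked on top of the VGG19 features.
inputs = tf.keras.Input(shape=(200, 200, 3))
x = preprocess_input(inputs)          # VGG-style mean subtraction / channel ordering
x = base_model(x, training=False)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)   # head size is an assumption
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # integer labels from image_dataset_from_directory
    metrics=["accuracy"],
)

# Train for the ten epochs mentioned in the analysis:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```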

ANALYSIS: In this Take 2 iteration, the VGG19 model achieved an accuracy score of 100% after ten epochs on the training dataset. The same model processed the validation dataset with an accuracy measurement of 95.33%. The final model processed the test dataset with an accuracy score of 100%.

CONCLUSION: In this iteration, the VGG19-based CNN model appeared to be suitable for modeling this dataset. We should consider experimenting with TensorFlow for further modeling.

Dataset Used: Kaggle ASL Alphabet Images

Dataset ML Model: Multi-class image classification with numerical attributes

Dataset Reference: https://www.kaggle.com/grassknoted/asl-alphabet

One potential source of performance benchmarks: https://www.kaggle.com/grassknoted/asl-alphabet/code

The HTML-formatted report can be found here on GitHub.