Multi-Class Model for Crop Mapping with Fused Optical and Radar Data Using TensorFlow

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Crop Mapping with Fused Optical Radar Data dataset is a multi-class modeling situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: This dataset combines optical and PolSAR remote sensing images for cropland classification. The organization collected the images using RapidEye satellites (optical) and the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) system (radar) over an agricultural region near Winnipeg, Manitoba, Canada, in 2012. There are two sets of 49 radar features and two sets of 38 optical features, one set of each for 5 July 2012 and one for 14 July 2012. The dataset contains seven crop type classes: 1-Corn; 2-Peas; 3-Canola; 4-Soybeans; 5-Oats; 6-Wheat; and 7-Broadleaf.
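The feature layout and class labels described above can be sketched in Python. The constant names below are illustrative assumptions, not the dataset's actual column headers:

```python
# Sketch of the fused feature layout: two acquisition dates, each with
# 49 radar (UAVSAR PolSAR) and 38 optical (RapidEye) features.
RADAR_PER_DATE = 49
OPTICAL_PER_DATE = 38
DATES = ("2012-07-05", "2012-07-14")

# Total fused feature count per labeled pixel.
TOTAL_FEATURES = len(DATES) * (RADAR_PER_DATE + OPTICAL_PER_DATE)  # 174

# The seven crop type classes, keyed by their numeric label.
CLASS_NAMES = {
    1: "Corn", 2: "Peas", 3: "Canola", 4: "Soybeans",
    5: "Oats", 6: "Wheat", 7: "Broadleaf",
}
```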

ANALYSIS: The preliminary TensorFlow models achieved an average accuracy benchmark of 0.9942 after training for 20 epochs. When we applied the final model to the test dataset, it achieved an accuracy score of 0.9951.
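A minimal sketch of a simple TensorFlow model for this task is shown below. The layer sizes, optimizer, and synthetic stand-in data are illustrative assumptions; they are not the project's actual architecture or settings, which are documented in the full report:

```python
import numpy as np
import tensorflow as tf

# 174 fused optical/radar features per pixel, 7 crop type classes.
NUM_FEATURES = 174
NUM_CLASSES = 7

# A simple feed-forward multi-class classifier (assumed architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; the real dataset supplies labeled pixels.
rng = np.random.default_rng(42)
X = rng.random((256, NUM_FEATURES), dtype=np.float32)
y = rng.integers(0, NUM_CLASSES, size=256)

model.fit(X, y, epochs=1, batch_size=32, verbose=0)
probs = model.predict(X[:5], verbose=0)  # per-class probabilities
```

Each row of `probs` is a softmax distribution over the seven crop classes, so the predicted class for a pixel is the argmax of its row.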

CONCLUSION: In this iteration, the simple TensorFlow model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Crop Mapping with Fused Optical Radar Data

Dataset ML Model: Multi-class classification with numerical attributes

Dataset Reference: https://archive-beta.ics.uci.edu/ml/datasets/crop+mapping+using+fused+optical+radar+data+set

The HTML-formatted report can be found on GitHub.