Multi-Class Model for Crop Mapping with Fused Optical and Radar Data Using Scikit-learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and to document the end-to-end steps using a template. The Crop Mapping with Fused Optical Radar Data dataset presents a multi-class modeling situation in which we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: This dataset combines optical and PolSAR (polarimetric synthetic aperture radar) remote sensing images for cropland classification. The images were collected by RapidEye satellites (optical) and the Unmanned Aerial Vehicle Synthetic Aperture Radar (UAVSAR) system (radar) over an agricultural region near Winnipeg, Manitoba, Canada, in 2012. There are two sets of 49 radar features and two sets of 38 optical features, captured on 5 July and 14 July 2012, for 174 features in total. Seven crop type classes exist for this dataset: 1-Corn; 2-Peas; 3-Canola; 4-Soybeans; 5-Oats; 6-Wheat; and 7-Broadleaf.
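
As a hedged illustration of the dataset's structure, the snippet below loads the table with pandas and maps the integer class codes to crop names. The file name WinnipegDataset.txt and the label column name are assumptions based on the UCI distribution, not details confirmed by this project, and may need adjusting to match the actual download.

```python
# Minimal sketch: load the fused optical-radar table and inspect its structure.
# ASSUMPTIONS: the file name "WinnipegDataset.txt" and the "label" column name
# follow the UCI distribution; adjust both to match the actual download.
import pandas as pd

CROP_CLASSES = {
    1: "Corn", 2: "Peas", 3: "Canola", 4: "Soybeans",
    5: "Oats", 6: "Wheat", 7: "Broadleaf",
}

df = pd.read_csv("WinnipegDataset.txt")   # 2 x 49 radar + 2 x 38 optical = 174 features
X = df.drop(columns=["label"])            # numerical attributes only
y = df["label"]                           # integer class codes 1-7

print(X.shape)
print(y.map(CROP_CLASSES).value_counts())
```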

ANALYSIS: The machine learning algorithms achieved an average accuracy benchmark of 0.9908 on the training dataset. We selected Extra Trees as the final model because it achieved an accuracy score of 0.9975 on the training dataset. When we processed the test dataset with the final model, it achieved an accuracy score of 0.9976.
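
The sketch below illustrates the kind of workflow these numbers imply: spot-checking several algorithms with cross-validation on the training split, then fitting an Extra Trees model and scoring the held-out test split. It assumes the X and y objects from the loading sketch above; the candidate algorithms, hyperparameters, and split ratio are illustrative assumptions, not the project's exact settings.

```python
# Hedged sketch of the benchmarking and final-model steps; not the project's
# exact configuration (candidate models, hyperparameters, and split are assumed).
from sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.metrics import accuracy_score

# Hold out a test set; stratify to keep the seven crop classes balanced.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=7
)

candidates = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=7),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=100, random_state=7),
}

# Spot-check each candidate with 10-fold cross-validation on the training data.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
for name, model in candidates.items():
    scores = cross_val_score(model, X_train, y_train, cv=cv, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {scores.mean():.4f}")

# Fit the selected Extra Trees model and evaluate it on the held-out test set.
final_model = ExtraTreesClassifier(n_estimators=100, random_state=7)
final_model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, final_model.predict(X_test)))
```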

CONCLUSION: In this iteration, the Extra Trees model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Crop Mapping with Fused Optical Radar Data

Dataset ML Model: Multi-class classification with numerical attributes

Dataset Reference: https://archive-beta.ics.uci.edu/ml/datasets/crop+mapping+using+fused+optical+radar+data+set

The HTML-formatted report can be found here on GitHub.