Video: Using Convolutional Neural Networks to Automatically Analyze Aerial and Satellite Imagery

In this recording of our most recent Technical Staff Meeting, we walk through our team’s work on Raster Vision, a set of open source tools for automatically analyzing aerial and satellite imagery using convolutional neural networks.

As part of Raster Vision, we have implemented approaches to tagging (predicting a set of tags for each image) and semantic segmentation (predicting the category of each pixel in an image). We’re also working on methods for object detection (localizing objects of interest in imagery).
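To make these output types concrete, here is a minimal NumPy sketch of what each kind of prediction looks like. The shapes and class counts are illustrative assumptions (17 tags roughly matching the Planet contest, 6 classes roughly matching the ISPRS contest), not Raster Vision’s actual API:

```python
import numpy as np

# A hypothetical batch of 4 RGB image chips, 256x256 pixels each.
images = np.zeros((4, 256, 256, 3), dtype=np.float32)
n = images.shape[0]

# Tagging: one score per tag per image (17 tags here); thresholding
# the scores yields the predicted tag set for each image.
tag_scores = np.random.rand(n, 17)
predicted_tags = tag_scores > 0.5            # shape (4, 17), boolean

# Semantic segmentation: one score per class per pixel (6 classes
# here); the argmax gives a category label for every pixel.
class_scores = np.random.rand(n, 256, 256, 6)
pixel_labels = class_scores.argmax(axis=-1)  # shape (4, 256, 256)

# Object detection: a variable-length list of (box, class, score)
# records per image, with boxes as (ymin, xmin, ymax, xmax) pixels.
detections = [{"box": (10, 20, 50, 80), "class": "car", "score": 0.92}]
```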

Video Outline

  • a review of convolutional neural networks
  • our approaches to tagging and semantic segmentation for two machine learning contests
  • a demo of a tool for visualizing output on an interactive map
    • Seeing the results on a map gives a good sense of where the algorithms succeed, where they fail, and where they amusingly struggle (e.g. a large food truck: is it a car or a building?)
  • our use of AWS Batch for running experiments (see the sketch after this list)
  • a preview of some new work on object detection
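For context on the AWS Batch item above, here is a minimal sketch of how a training run can be submitted as a Batch job with boto3. The job name, queue, job definition, and training command are hypothetical placeholders, not Raster Vision’s actual configuration:

```python
import boto3

batch = boto3.client("batch")

# Submit one experiment as a Batch job. The queue and job definition
# names (and the command inside the container) are placeholders.
response = batch.submit_job(
    jobName="rastervision-experiment-01",
    jobQueue="gpu-job-queue",
    jobDefinition="rastervision-train",
    containerOverrides={
        "command": ["python", "train.py", "--config", "experiment_01.json"]
    },
)
print("Submitted job:", response["jobId"])
```

Queuing each experiment as an independent containerized job makes it easy to run many configurations in parallel and lets AWS Batch handle scheduling and instance provisioning.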

Using the tools discussed in this talk, we obtained good results in two machine learning contests: the “ISPRS Semantic Labeling (2D)” contest and the “Planet: Understanding the Amazon from Space” contest. Eventually, we plan on integrating this functionality into Raster Foundry, an Azavea product that allows users to easily ingest, find, analyze, and visualize earth imagery.

Want to learn more? Read this post, which outlines our work on deep learning for semantic segmentation of aerial imagery in more detail.

Also, visit our Research and GitHub pages to learn about our other open source work.