Introducing: Raster Vision v0.20

This Raster Vision release brings new features, improved documentation, and an entirely new way to use Raster Vision. If you found Raster Vision too inflexible in the past, we invite you to give it another try!

We highlight some major changes below. For a full list of changes, see the changelog.

Highlights

Raster Vision is now also a library

Raster Vision has historically been a kind of low-code framework where users configure a full pipeline in one go and then execute it all at once (or in stages). While this has obvious advantages for some workflows, it also brings the usual drawbacks of frameworks: in Raster Vision’s case, the framework paradigm made its internals inscrutable and, thus, hard to debug, customize, and extend.

Based on these concerns, and on the trajectory of similar projects, we undertook a major refactoring effort to make Raster Vision also usable as a library from which users can pick individual components and combine them to build new things.

One major benefit (and indeed one of the motivations) of this is that Raster Vision can now be used in Jupyter notebooks!
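
For example, reading a window of pixels from a (possibly remote) GeoTIFF now takes just a few lines. A minimal sketch, assuming a hypothetical image URI:

```python
from rastervision.core.box import Box
from rastervision.core.data import RasterioSource

# 'image.tif' is a hypothetical URI; remote (e.g. s3:// or https://) URIs
# also work, optionally with allow_streaming=True to avoid a full download.
raster_source = RasterioSource('image.tif', channel_order=[0, 1, 2])

# Read a 256x256 RGB window as a numpy array of shape (256, 256, 3).
chip = raster_source.get_chip(Box(0, 0, 256, 256))
```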

Another, even bigger, benefit is that you can now combine Raster Vision with Lightning (or other machine learning libraries), letting you use the latest and greatest machine learning techniques while Raster Vision handles reading and writing geospatial data.
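
A sketch of what this can look like, assuming hypothetical training URIs; the from_uris() arguments follow the 0.20 tutorials, but check the API reference for the exact signature:

```python
from torch.utils.data import DataLoader
from rastervision.core.data import ClassConfig
from rastervision.pytorch_learner import (
    SemanticSegmentationSlidingWindowGeoDataset)

class_config = ClassConfig(names=['background', 'building'])

# A GeoDataset reads georeferenced imagery and labels but behaves like any
# other PyTorch Dataset, so it plugs directly into a DataLoader.
train_ds = SemanticSegmentationSlidingWindowGeoDataset.from_uris(
    class_config=class_config,
    image_uri='train_image.tif',    # hypothetical
    label_raster_uri='labels.tif',  # hypothetical
    size=256,
    stride=256)
train_dl = DataLoader(train_ds, batch_size=8, shuffle=True)

# From here, train with any LightningModule, e.g.:
#   trainer = pytorch_lightning.Trainer(max_epochs=5)
#   trainer.fit(my_module, train_dl)
```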

[Image: a code snippet above a side-by-side of an input satellite image of houses and the corresponding ground-truth building polygons]
Example showing how parts of Raster Vision can now be imported and used in Jupyter notebooks.

Improved documentation and tutorials

In this release, we have also significantly improved Raster Vision’s documentation. Most notably, it now contains detailed tutorial notebooks as well as a full API reference. The documentation for the Raster Vision pipeline, which made up most of the project’s documentation in previous versions, now lives in The Raster Vision Pipeline section.

[Image: thumbnails of the tutorial notebooks, including reading raster data, reading vector data, reading labels, sampling training data, scenes and AOIs, plotting samples from datasets using visualizers, training a model, and prediction and evaluation]
The documentation now contains multiple tutorials. Check them out here.

Support for multiband imagery and external models extended to all computer vision tasks

In version 0.13, we introduced the ability to use multiband imagery for semantic segmentation. This functionality has now been extended to chip classification and object detection as well.
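
At the data-reading level, this is just a matter of telling the raster source which bands to keep. A minimal sketch with a hypothetical 4-band image:

```python
from rastervision.core.data import RasterioSourceConfig

# Use all four bands of an RGB+IR GeoTIFF ('rgbir.tif' is hypothetical).
raster_source = RasterioSourceConfig(
    uris=['rgbir.tif'],
    channel_order=[0, 1, 2, 3])
```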

Similarly, the ability to use external models has been extended to object detection, where it was previously missing.
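
External models are specified with a torch.hub-style config. The sketch below follows the pattern used for external semantic segmentation models in earlier releases; the repo and entrypoint are placeholders, and the exact config class names should be checked against the API reference:

```python
from rastervision.pytorch_learner import (
    ExternalModuleConfig, ObjectDetectionModelConfig)

model = ObjectDetectionModelConfig(
    external_def=ExternalModuleConfig(
        github_repo='user/repo:branch',  # placeholder repo
        name='my_model',                 # name to import the module as
        entrypoint='make_model',         # hub entrypoint defined in the repo
        entrypoint_kwargs={'num_classes': 2}))
```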

[Image: two side-by-side aerial images of a car on a road, each with a red detection box around the car; the left panel is labeled "RGB" and the darker right panel "IR"]
You can now do object detection on multiband imagery. This example shows the use of a 4-band (RGB plus IR) image.

Improved data fusion

Raster Vision can now combine bands from multiple sources of raster data even if they have different resolutions and extents.
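
This is handled by MultiRasterSource, which stacks the bands of its sub-sources on the fly. A minimal sketch with hypothetical Sentinel-2 band URIs:

```python
from rastervision.core.box import Box
from rastervision.core.data import MultiRasterSource, RasterioSource

b04 = RasterioSource('B04.tif')  # red, 10 m (hypothetical URI)
b08 = RasterioSource('B08.tif')  # NIR, 10 m
b11 = RasterioSource('B11.tif')  # SWIR, 20 m

# Bands are aligned and stacked on read; coarser-resolution sources are
# resampled so that everything lines up geographically.
raster_source = MultiRasterSource([b04, b08, b11])
chip = raster_source.get_chip(Box(0, 0, 256, 256))  # (256, 256, 3) array
```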

[Image: chips of the 13 Sentinel-2 bands shown side by side, with varying degrees of blurriness]
Example of a chip sampled from a MultiRasterSource that combines the 13 Sentinel-2 bands. Despite coming from 13 different GeoTIFFs with varying resolutions, the bands can be seen to be geographically aligned.

Cleaner semantic segmentation output

You can now discard edges of predicted chips in semantic segmentation in order to reduce boundary artifacts. This technique can be used in addition to the previously introduced ability to average overlapping regions in adjacent chips.
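
At the library level, both options are exposed when stitching predicted chips back together. A sketch, assuming the from_predictions() signature from the 0.20 prediction tutorial (ds, predictions, and class_config come from earlier steps):

```python
from rastervision.core.data import SemanticSegmentationLabels

pred_labels = SemanticSegmentationLabels.from_predictions(
    ds.windows,                     # sliding windows the chips came from
    predictions,                    # per-chip class probabilities
    smooth=True,                    # average overlapping regions
    extent=ds.scene.extent,
    num_classes=len(class_config),
    crop_sz=32)                     # discard a 32-pixel border of each chip
```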

[Image: four side-by-side semantic segmentation outputs with a blue background and bright splotches]
Example showing how you can control the quality of semantic segmentation predictions with a couple of parameters. The first image shows the default behavior, with very obvious boundary artifacts; the second shows an improvement from using a smaller stride and averaging; the third shows an improvement from edge cropping; and the fourth shows a combination of averaging and cropping.

Miscellaneous quality-of-life improvements

We have added progress bars to all downloads and uploads, as well as other operations that take longer than 5 seconds, so you no longer have to wonder whether Raster Vision is stuck. You will also notice significant speedups compared to older versions, thanks to better caching that avoids redundant downloads.

For more information, and to get started using Raster Vision, view the resources below.
Website: https://rastervision.io/
GitHub repo: https://github.com/azavea/raster-vision
Documentation: https://docs.rastervision.io/en/0.20/index.html
Tutorials: https://docs.rastervision.io/en/0.20/usage/tutorials/index.html
Changelog: https://docs.rastervision.io/en/0.20/changelog.html#raster-vision-0-20