Serving Tiles with GeoTrellis, Lambda, and API Gateway

Serverlessly rendered tiles in Potsdam

Tile servers are responsible for querying huge quantities of indexed geospatial data, turning sections of it into images, and returning those images to users. Most of the time a tile server has nothing to do, but when it has anything to do, it has to do a lot all at once. For example, loading the world map on fires off 20 to 30 nearly simultaneous requests for tiles, and so does every zoom or long pan.

This pattern has a few solutions. Conservatively, you could keep extra resources around at all times. As long as you’re correct about what counts as “extra”, this strategy is low risk, and its only cost is the machines themselves. If each instance of your tile server requires one gigabyte of RAM and one processor core, you can add t2.micro instances in Amazon Web Services for about $9/month each and keep them up whether they’re doing anything or not. You could be smarter about your fleet of tile servers with scaling policies, but scaling isn’t instantaneous, so you’ll have to hope you get the alarms and policies right, or peak load could still crush your servers.

Instead, you could try a serverless solution (so hot right now). With serverless solutions, you pay only for resources used, and the provider (Google, Amazon, Microsoft, etc.) is responsible for automatic scaling. Because the functions execute in response to events, there’s no additional overhead to add executions in response to more events, and when functions aren’t running, they cost nothing. For example, with AWS Lambda, $9 would get you 22.5 million requests and 6.25 total days of run-time with a 512 MB function, without needing to know your runtime needs in advance.
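Those numbers follow directly from Lambda’s pay-per-use pricing. As a back-of-envelope check — the rates below are roughly the ones in effect around the time of writing ($0.20 per million requests, about $0.0000167 per GB-second) and may have changed since:

```scala
// Back-of-envelope check of the $9 figure, assuming the Lambda pricing
// current when this was written and no free tier applied.
object LambdaCost {
  val requestPricePerMillion = 0.20          // dollars per 1M requests
  val pricePerGbSecond       = 0.0000166667  // dollars per GB-second
  val functionMemoryGb       = 0.5           // a 512 MB function

  val requests    = 22.5e6
  val requestCost = requests / 1e6 * requestPricePerMillion // $4.50

  val remaining  = 9.0 - requestCost            // $4.50 left for compute
  val gbSeconds  = remaining / pricePerGbSecond // ~270,000 GB-seconds
  val runSeconds = gbSeconds / functionMemoryGb // ~540,000 seconds
  val runDays    = runSeconds / 86400.0         // ~6.25 days
}
```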

I built a serverless tile server with the Serverless framework, AWS Lambda, AWS API Gateway, and GeoTrellis to see how this could work.


Zooming out and fetching new tiles over the Rockies

The dataset in the gif above is the National Elevation Dataset, which contains ground surface elevation for the entire U.S. at roughly 3-meter resolution. In AWS S3, the total size on disk is about 690 GB. Each tile was generated on-the-fly using a Lambda function that’s invoked in response to a request to an API Gateway endpoint. The API is entirely serverless, and caching is disabled.

Technical Overview

Lambda currently provides four runtimes: Python, Node.js, C#, and Java. While Scala isn’t on that list, packaging Scala code into jars makes it indistinguishable from Java code from Lambda’s perspective. That’s great, because it means I can use GeoTrellis.

Since Lambda handles horizontal scaling and API Gateway handles routing, the task was effectively “take good data in and send tiles out”. That should be easy! I also wanted to support different rendering options and to parameterize the input data’s location in the URL. This problem breaks down into four parts:

  • parse the request into a useful case class
  • use that case class to fetch and render a tile
  • return that tile to the requester
  • deploy the solution
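The first three steps above can be sketched as a toy pipeline (step four, deployment, happens outside the code). Every function here is a placeholder stub, not the actual handler:

```scala
// Toy end-to-end pipeline: parse, fetch/render, encode. All names and
// types here are illustrative stand-ins, not the real implementation.
case class Request(z: Int, x: Int, y: Int)

// 1. parse the request into a useful case class
def parseRequest(raw: String): Request = {
  val Array(z, x, y) = raw.split("/").map(_.toInt)
  Request(z, x, y)
}

// 2. fetch data and render a tile (a string stands in for a PNG here)
def fetchAndRender(req: Request): Array[Byte] =
  s"tile-${req.z}-${req.x}-${req.y}".getBytes("UTF-8")

// 3. Base64-encode so API Gateway can return the tile to the requester
def encodeForApiGateway(png: Array[Byte]): String =
  java.util.Base64.getEncoder.encodeToString(png)

def handle(raw: String): String =
  encodeForApiGateway(fetchAndRender(parseRequest(raw)))
```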

The “good data” this function takes in is in the form of geospatial data that has been through the GeoTrellis ingest process. The ingest process transforms raw geographic data into data layers indexed for fast reads at specific locations. The result is a dataset that can be queried for images that “look good” at specific zoom levels and (x, y) coordinates.
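The zoom level and (x, y) coordinates follow the standard web-map addressing scheme: at zoom z the world is a 2^z by 2^z grid of tiles. As a sketch of that math (general slippy-map arithmetic, not a GeoTrellis API):

```scala
// Standard Web Mercator (slippy-map) tile addressing: find the tile
// covering a given longitude/latitude at zoom z.
object TileMath {
  def tileFor(lon: Double, lat: Double, z: Int): (Int, Int) = {
    val n = 1 << z // tiles per side at this zoom
    val x = ((lon + 180.0) / 360.0 * n).toInt
    val latRad = math.toRadians(lat)
    val y =
      ((1.0 - math.log(math.tan(latRad) + 1.0 / math.cos(latRad)) / math.Pi) / 2.0 * n).toInt
    (x, y)
  }
}
```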

Parsing the request

I chose to use Circe for JSON parsing. Circe is convenient because of auto-derivation and because decoding incoming JSON returns an Either. That Either makes it possible to return more useful error messages to end users than the bean-based JSON deserialization that Lambda uses by default. The cost is a larger deployment package.

I parsed requests into one of two types of requests: EmptyRequests or DefaultRequests. EmptyRequests are boring, so that’s enough about them.

DefaultRequests contain the information necessary to fetch data from S3 and convert it into a PNG.

@JsonCodec
case class DefaultRequest(
  x: Int,
  y: Int,
  z: Int,
  bucket: String,
  prefix: String,
  layerName: String,
  vizType: String
)

Decorating with @JsonCodec tells Circe enough to generate codecs for encoding and decoding the case class. In a DefaultRequest, x, y, and z are location and zoom level, bucket refers to an S3 bucket, prefix refers to a prefix in S3 to an ingested layer, layerName picks out a specific layer, and vizType sets visualization parameters, like RGB or viridis.
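Circe’s decoding returns an `Either`, which makes the “try a DefaultRequest, fall back to an EmptyRequest” logic easy to express. The sketch below hand-rolls the decoders over a parameter map so it stands alone — these are stand-ins for Circe’s generated codecs, not the real ones:

```scala
// Stand-in for Circe's Either-returning decoders, with the request
// types repeated locally so the sketch is self-contained.
sealed trait TileRequest
case object EmptyRequest extends TileRequest
case class DefaultRequest(
  x: Int, y: Int, z: Int,
  bucket: String, prefix: String, layerName: String, vizType: String
) extends TileRequest

def decodeDefault(params: Map[String, String]): Either[String, TileRequest] = {
  def int(k: String) =
    params.get(k).flatMap(s => scala.util.Try(s.toInt).toOption).toRight(s"missing or bad $k")
  def str(k: String) = params.get(k).toRight(s"missing $k")
  for {
    x      <- int("x");      y      <- int("y");      z   <- int("z")
    bucket <- str("bucket"); prefix <- str("prefix")
    layer  <- str("layerName"); viz <- str("vizType")
  } yield DefaultRequest(x, y, z, bucket, prefix, layer, viz)
}

// Anything that fails to decode as a DefaultRequest is an EmptyRequest.
def decodeRequest(params: Map[String, String]): TileRequest =
  decodeDefault(params).getOrElse(EmptyRequest)
```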

Fetching and rendering tiles

The tiles to return are PNGs from ingested rasters stored on S3. The ingest process transforms geospatial data into an indexed dataset that GeoTrellis can read at specific zoom levels and spatial indices. GeoTrellis has built-in support for S3 as a datastore:

private def fetchValue[T: AvroRecordCodec](default: => T)(implicit logger: LambdaLogger): T = {
  val p = URLDecoder.decode(prefix, "UTF-8")
  val l = URLDecoder.decode(layerName, "UTF-8")
  val store = S3AttributeStore(bucket, p)
  val layerId = new LayerId(l, z)
  val reader = new S3ValueReader(store).reader[SpatialKey, T](layerId)
  try {
    reader.read(SpatialKey(x, y))
  } catch {
    case e: ValueNotFoundError =>
      logger.log(s"Empty tile: ${bucket} ${p} ${l} ${e}")
      default
  }
}
That block reads the value at a specific spatial key (the x/y in z/x/y) from S3 and returns either a Tile or a MultibandTile, which are two GeoTrellis types.

Rendering a tile of either type is straightforward once it’s been fetched: tiles have a renderPng method that optionally takes a colormap.
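Conceptually, a colormap maps numeric cell values to packed RGBA pixel colors. The toy version below illustrates the idea with a few viridis-like breakpoints; it is not the GeoTrellis ColorMap API, just the shape of the mapping:

```scala
// Toy value-to-color mapping in the spirit of a colormap: bucket each
// cell value by threshold and assign a packed 0xRRGGBBAA integer.
object ToyColorMap {
  // (upper bound, color) pairs, checked in order; colors are
  // viridis-like hexes chosen for illustration.
  val ramp: Seq[(Double, Int)] = Seq(
    (1000.0, 0x440154ff), // low elevations: dark purple
    (2000.0, 0x21918cff), // mid elevations: teal
    (4000.0, 0xfde725ff)  // high elevations: yellow
  )
  val noData = 0x00000000 // fully transparent

  def colorFor(value: Double): Int =
    ramp.collectFirst { case (max, color) if value <= max => color }.getOrElse(noData)
}
```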

Return the tile to the requester

As of November last year, API Gateway can return binary data to requesters, so long as the data are Base64-encoded bytes.

When GeoTrellis creates a PNG, the resulting object has a bytes attribute. Encoding the byte array to Base64 is trivial. Writing them to an OutputStream parameter that API Gateway and Lambda understand sends them back to the user.
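The encode-and-write step needs nothing beyond the JDK. A minimal sketch, assuming the PNG bytes have already been extracted from the rendered tile:

```scala
import java.io.OutputStream
import java.util.Base64

// Base64-encode PNG bytes and write them to the Lambda handler's
// OutputStream, so API Gateway can convert the response back to binary.
def writePng(pngBytes: Array[Byte], output: OutputStream): Unit = {
  val encoded = Base64.getEncoder.encode(pngBytes)
  output.write(encoded)
  output.flush()
}
```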

Deploy the solution

The short deployment story is serverless deploy, then go home. In practice I dealt with a bit of extra complication. API Gateway uses a “body mapping template” to map the incoming request into the event that it sends to Lambda functions. By default, you declare this template in yaml strings in the serverless.yml configuration file or in single-line json in additional files. I found this setup error prone (I was error prone, at least) and annoying to edit, so I moved my template into a separate json file and wrote some tooling to add it to the serverless.yml file right before deployment. I wrapped that tooling up in the publish script.
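The core of that tooling is just re-indenting the JSON so it nests under a key in serverless.yml as a YAML block scalar. A minimal sketch — the key name and indent depth are assumptions, not what the actual publish script uses:

```scala
// Splice a JSON template under a YAML key as a block scalar by
// indenting every line. Key and indent are illustrative.
def yamlBlock(key: String, json: String, indent: Int): String = {
  val pad  = " " * indent
  val body = json.linesIterator.map(pad + "  " + _).mkString("\n")
  s"$pad$key: |\n$body"
}
```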

After calling scripts/publish, the last deployment step is to let API Gateway know to convert responses to binary. I never found a way to automate that, so that step is manual in the repository’s README.

Lessons Learned

Under the right conditions, serverless applications can fly. The serverless tile server didn’t require much code, and it has no standing maintenance costs when it’s not in use. These two attributes make it an excellent layer viewer if you have ingested data in S3.

However, the costs can bite sometimes. Lambda limits the size of each function’s deployment package to 50 MB. The cost of including extra dependencies like Circe in the basic tile server above is that I can’t use that space on other dependencies I might need elsewhere. For example, I tried integrating the serverless tile server into the Raster Foundry application, and in conjunction with the dependencies baked into that application’s existing tile server, I couldn’t find a way to make the deployment package small enough.


All code for the project is available on GitHub. If you have an AWS account and some ingested data, you can have your own serverless tile server up and running in minutes.