Azavea Labs

Where software engineering meets GIS.

Converting Mapbox Studio Vector Tiles to Rasters

Pirate map by AJ Ashton on Mapbox.com

If you’ve tried to make your own custom map styles before, you’ve probably used Mapbox Studio or its predecessor, Tilemill. Mapbox is doing a huge amount of work around custom maps and map data. As part of this, they’ve developed many open source tools and some file specifications as well. It’s hard not to be impressed by the quality and usefulness of products like Mapbox Studio.

Recently, I had a tricky task that involved working with map tiles generated by Mapbox Studio. We developed an application for use on tablets that needed to use a custom map AND work offline. This ruled out many common options for tile serving. Still, we developed our custom map style in Mapbox Studio because it is an excellent tool. The challenge was taking the tiles rendered on the fly from Mapbox Studio vectors and getting them into a format that could be packaged with our application.

Some Background

Customizing the presentation of map data isn’t a simple task. To make this task a little more pleasant and intuitive for developers accustomed to style languages such as CSS, Mapbox developed CartoCSS, which looks much like CSS. To create a fast feedback cycle similar to the experience of developing HTML/CSS, the Mapbox developers created Tilemill, which allows users to set up a map, write some CartoCSS, and see the changes nearly immediately. It’s great software, and I think it has fostered a proliferation of smart and beautiful cartography. Technical.ly Philly recently celebrated a blueprint-styled map created by Lauren Ancona.

Lauren Ancona’s Blue Print map served from Mapbox.com

Tilemill could upload your tiles to Mapbox, which would act as a tile server for your projects. This service is part of the economic model for Mapbox. For most users, offloading tile serving to a specialized and highly performant cluster of Mapbox servers is an obvious choice and well worth the cost. However, being a very open platform from a company with roots in open source software, Tilemill also allowed users to save the tiles directly out to disk as an mbtiles file. This is a special file format, also developed by Mapbox; it’s actually just a SQLite database with a known schema. A small fraction of users opted to self-host their map tiles this way.
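
Since it really is just SQLite, you can poke around inside an mbtiles file with the stock sqlite3 client. Here is a quick sketch, assuming you have a file named tiles.mbtiles on hand (the file name is a placeholder; the metadata and tiles tables come from the MBTiles spec):

sqlite3 tiles.mbtiles ".tables"
sqlite3 tiles.mbtiles "SELECT name, value FROM metadata;"

The first command lists the tables, and the second dumps the map’s metadata (name, bounds, zoom range, and so on).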

Mapbox Studio is the next iteration of Tilemill. It has many of the same features and many improvements. One of the biggest changes between Tilemill and Mapbox Studio is that Tilemill relied on raster images while Mapbox Studio relies on vector tiles. This is a big shift in how map images are generated. In the raster model, the server either pre-renders images for each tile of the map and saves them to disk, or it generates images on the fly as they are requested and sends those back in the response. In either case, the data transferred to the users viewing the map is images. Images are large and somewhat clunky from a data perspective. If you want to change the map style, you have to regenerate all those images. That takes a lot of computing power, and all of that load is on the server.

Mapbox has been moving to a vector model for tile serving and they’ve developed specifications on how to do this. The idea in the vector tile model is that the server sends the data that goes on a map tile to the user. This means that the names of roads, the shapes of buildings, and the positions of rivers are accounted for, but not the colors to paint those shapes or the fonts to use for labels. In most cases sending back only the raw data and not the rendered tile is much faster and puts less load on the server. The vector data is then mixed with the style data by the client, where it is rendered. This is similar to how CSS and HTML work: the server sends the style information along with the content data, and the browser creates the visual presentation. The work of generating the presentation has been offloaded to the user. Distributed computing at its best!

Because Mapbox is moving to a vector-based model for tiles, the ability to export the rendered tiles to an mbtiles file was removed in Mapbox Studio. Under most circumstances this is fine: if you really need raster functionality, you can still use Tilemill, and if you really want to serve your own tiles, you can actually serve your own vector tiles. One confusing aspect of this change from Tilemill to Mapbox Studio, and from raster to vector tiles, is that Mapbox Studio can still export an mbtiles file, but it stores vector data rather than raster. Same file extension, different (and incompatible) data.
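
A quick way to tell the two kinds of mbtiles apart is the format row in the metadata table; assuming the file follows the MBTiles metadata conventions, vector files report pbf while raster files report png or jpg:

sqlite3 mapproject.mbtiles "SELECT value FROM metadata WHERE name = 'format';"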

Sometimes you just need rasters

Screen shot of Mapbox’s Pencil Map (https://www.mapbox.com/blog/pencil-drawn-style/)

Since our application needed to work offline, and since our map didn’t cover a large area, packaging the tiles with the application was the most reasonable option. Our Data Analytics team worked up a nice map style in Mapbox Studio based on the pencil style developed by Mapbox, so backporting it to Tilemill wasn’t an option. I started looking for ways to convert Mapbox Studio vector tiles to raster tiles. There’s surprisingly little information out there about how to do this. Luckily, because it’s open source, many of the features of Mapbox Studio are separated into their own libraries, and there are node.js modules that use them or add extra features to them. I found a small module called “tl”, which I presume is an abbreviation of “tilelive”, the library it provides features for. It’s a command line utility that will grab the raster tiles that Mapbox Studio’s rendering server delivers to the client and stream them to an old-school, raster-based mbtiles file. It takes a long time to do this, so it’s really only practical for a small area, but that’s all we needed for our application.

I also found a tile server called tessera (from the same developer as tl), which can serve tiles in just about any format you can think of. I decided to use tessera to start debugging. I figured if I could get tessera to serve my tiles, then at least I would know they work and I could figure out how to get tl to save them.

This is where you can start following along if you are looking for a tutorial. If you want the spoilers, however, you can skip to the end, where I link to an Ansible role that provisions a Vagrant/VirtualBox server and gets everything ready for you.

Try it yourself

These projects are all node.js programs, so if you don’t have node.js installed already, you need to get it. I recommend grabbing the latest from the v0.10 series; I did all my development and testing on v0.10.28. I should also mention that the commands listed are for Linux/Unix/OS X. If you are a Windows user, you’ll have to modify the commands slightly or use a Linux virtual machine (which I recommend anyway).
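
You can sanity-check your setup before going further (the exact patch version shouldn’t matter, as long as it’s in the v0.10 series):

node --version
npm --version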

Mapbox Studio stores all your projects in folders that end in .tm2 (for Tilemill 2), all inside your user directory. Getting tessera to serve your Mapbox Studio tm2 project is supposed to be as easy as installing tessera and the tilelive providers (APIs for different kinds of tile and style data) and then running a command to kick off the server with your tm2 project.

npm install -g tessera tl mbtiles mapnik tilelive tilelive-file tilelive-http tilelive-mapbox tilelive-mapnik tilelive-s3 tilelive-tmsource tilelive-tmstyle tilelive-utfgrid tilelive-vector tilejson

tessera tmstyle:///path/to/your/mapproject.tm2

You might need to run the npm command as root or prefix it with sudo (sudo npm…), but that is supposed to work. For me, it did not.

First, I got complaints that the network request couldn’t be completed. The reason is that Mapbox Studio streams data from mapbox.com using an API token; the reason you need an account with Mapbox just to use the software is that they associate that API token with your account. To make the requests from the command line, you need to provide the token as a variable that is passed with each request. It took some digging, but I found that this variable is named MAPBOX_ACCESS_TOKEN. Log on to mapbox.com and visit the projects page; you should see your token at the top of the page. You can supply it as an environment variable for your current session by issuing the following command:

export MAPBOX_ACCESS_TOKEN=mytokenhere

Make sure it worked by having it echo back to you:

echo $MAPBOX_ACCESS_TOKEN

This should show you your token. At this point I tried running the tessera command again, but it still failed, this time with an error message about missing fonts. Mapnik, the real-time map tile renderer, needs to know the location of all the fonts used in the project. Again, it took some digging, but I found that this can also be supplied via an environment variable, MAPNIK_FONT_PATH. I moved all the fonts into a known directory and then issued the following command:

export MAPNIK_FONT_PATH=/path/to/font/directory

In my case I just used Ubuntu’s global font directory (/usr/share/fonts), which gave Mapnik access to all my fonts.

After this, tessera worked.

tessera tmstyle:///path/to/your/mapproject.tm2

This started a server on port 8080. Visiting localhost:8080 gave me a Leaflet map that I could explore.

Next, I tried using tl to export the tiles to an mbtiles file. tl has a copy command; you’ll need to supply information about what part of the map to copy.

tl copy -z 17 -Z 18 -b "-75.171375 39.945049 -75.15554 39.956991" tmstyle:///absolute/path/to/project.tm2 mbtiles:///path/to/save/tiles.mbtiles

This grabs the tiles for zoom levels 17 and 18 covering the area of Philadelphia around City Hall and saves them to a classic raster mbtiles file. Lowercase “z” is the starting zoom, uppercase “Z” is the ending zoom, and “b” is the bounding box that constrains the map.
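
If you’re unsure about your bounding box or zoom range, a cheap dry run is worthwhile first; each additional zoom level roughly quadruples the tile count, so a single low zoom renders in seconds instead of hours. Something along these lines works, with an arbitrary zoom level and a throwaway output file:

tl copy -z 12 -Z 12 -b "-75.171375 39.945049 -75.15554 39.956991" tmstyle:///absolute/path/to/project.tm2 mbtiles:///tmp/preview.mbtiles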

Finally, with an mbtiles file in hand, I used mbutil, another of Mapbox’s great libraries, to extract the tiles and embed them in our application.
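
That last step looked roughly like this (the paths are placeholders; the mb-util command ships with the mbutil package):

mb-util tiles.mbtiles tiles/

This unpacks the mbtiles database into a z/x/y directory tree of image files that can be bundled directly with an application.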

I packaged all this up into an Ansible role and Vagrantfile; the result is a tile-converter virtual machine. To use it you’ll need Ansible, VirtualBox, and Vagrant installed. Once that’s done, follow the instructions in the readme and you should be good to go.

Selecting a NAT Instance Size on EC2

We’ve been using the Amazon Web Services (AWS) Virtual Private Cloud (VPC) functionality to create an isolated and secure hosting environment for our SaaS product, HunchLab.  When EC2 servers in a VPC have only private IP addresses, their access to S3 (or to the Internet) must be routed through a NAT instance.  This architecture provides increased security by reducing the external surface area of the application.
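
For context, wiring up a NAT instance amounts to two steps: disabling the source/destination check on the instance and pointing the private subnet’s default route at it. With the AWS CLI, that looks something like the following sketch (the instance and route table IDs are placeholders):

aws ec2 modify-instance-attribute --instance-id i-0123abcd --no-source-dest-check
aws ec2 create-route --route-table-id rtb-0123abcd --destination-cidr-block 0.0.0.0/0 --instance-id i-0123abcd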

There are many resources about setting up a NAT instance in AWS.  Many examples set up NAT instances at the m1.small or t2.micro instance sizes.  Both are low-cost and so a natural starting point for experimentation.

The m1.small is a prior-generation EC2 instance type, with Amazon recommending an upgrade path to the m3 instance family.  The m3 family does not, however, have a small instance for cases where only a limited amount of memory is required.  The t2 instances seem like a natural fit from a cost perspective, but Amazon lists their network performance as ‘low to moderate’, which wasn’t very reassuring given that the primary purpose of a NAT instance is to provide network connectivity to the rest of the servers within the application.

Given that EC2 does not provide a network focused instance family like they do with compute, memory, and storage optimized families, my question was:

Which NAT instance size should we use in production?

I decided to answer this question by benchmarking several instance sizes.  I tested the m1.small instance size and its closest replacement, the m3.medium. I also tested all three t2 instances (t2.micro, t2.small, t2.medium) because they are low cost and a new instance family, which likely benefits from the latest back-end EC2 architecture improvements.

AWS rates the network performance of each instance type as low, moderate, high, or 10 Gigabit. To cover instances with “enhanced networking” enabled, I also tested the c3.large and c3.2xlarge instance sizes.  Enhanced networking is designed to improve packets per second and reduce latency through better virtualization. The c3.2xlarge is also rated as having high network performance.  For all instance types I used the latest stock NAT AMI provided by AWS.

One component of our application generates large files that we store within S3.  To benchmark the throughput of the different NAT instances, I stored the Ubuntu 14.04 Server ISO file within an S3 bucket in the same region as our servers. For each instance size, I downloaded the ISO file 10 times using wget from a server behind the NAT instance and recorded the throughput in MBps for each sample.  I then calculated the median bandwidth and a TP80 metric (the minimum bandwidth across the fastest 80% of samples).
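
The collection itself was nothing fancy; a loop along these lines, run from a server behind the NAT instance, captures the throughput wget reports for each download (the bucket name is made up, and the units in wget’s summary line may vary with speed):

for i in $(seq 1 10); do
  wget -O /dev/null https://s3.amazonaws.com/example-benchmark-bucket/ubuntu-14.04-server-amd64.iso 2>&1 | grep -o '([0-9.]* MB/s)'
done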

I also recorded the price per hour to run each instance type in our region, using reservation pricing for instances that are part of current generations.  Finally, I calculated the bandwidth per unit of cost to find the sweet spot along the performance-cost curve (for example, the t2.small’s 13.9 MBps median divided by its 1.72 cents per hour works out to 8.08 MBps per cent).  Here are the results.

Results

NAT Instance   Median Bandwidth   TP80 Bandwidth   Cents / Hour   Median Bandwidth / Cost   TP80 Bandwidth / Cost
m1.small       8.3 MBps           3.5 MBps         4.40 cents     1.88 MBps / cent          0.80 MBps / cent
t2.micro       2.7 MBps           1.7 MBps         0.86 cents     3.14 MBps / cent          1.98 MBps / cent
t2.small       13.9 MBps          10.2 MBps        1.72 cents     8.08 MBps / cent          5.92 MBps / cent
t2.medium      20.7 MBps          19.14 MBps       3.45 cents     6.00 MBps / cent          5.55 MBps / cent
m3.medium      20.4 MBps          16.6 MBps        4.25 cents     4.79 MBps / cent          3.91 MBps / cent
c3.large       43.2 MBps          32.76 MBps       6.19 cents     6.98 MBps / cent          5.29 MBps / cent
c3.2xlarge     43.3 MBps          39.02 MBps       24.77 cents    1.75 MBps / cent          1.58 MBps / cent

The m1.small instance, which most examples utilize, offers quite limited bandwidth and is not a good choice for a production environment.   The t2.micro instance is even worse. The t2.small and t2.medium instances seem like good fits for production environments where cost is a concern. The c3 instances with enhanced networking clearly realize a performance boost compared to the other instances but come at a higher cost.   For a single simultaneous transfer from S3 the c3.2xlarge instance does not realize much of an improvement over the c3.large, but I imagine that more concurrent transfers would realize a higher overall throughput.

This benchmark is of course subject to the particular hosts that I landed on during my testing.  If I repeated the test, I would expect variability in the benchmarks for the t2 family due to their burstable design.  For our use case, the t2.medium seems like a good choice.


Running Vagrant with Ansible Provisioning on Windows

At Azavea we use Ansible and custom Ansible roles quite a bit.

We’ve also been using Vagrant for quite some time to create project-specific development environments.  Adding Ansible as a provisioner makes setting up a development environment wonderfully smooth.

Unfortunately, Ansible is not officially supported with Windows as the control machine.

It is possible to get Ansible running in a Cygwin environment.  With a bit of work, you can get it running from Vagrant too!

Installing Cygwin

The first step to getting Ansible running is installing Cygwin.  You can follow the normal installation instructions for Cygwin if you’d like to, or if you already have a Cygwin environment set up that’s great too!

We’re using babun instead of Cygwin’s normal installer for a simpler installation and package-management process.  If you’re new to Cygwin or having trouble with the standard installer, I’d recommend it.

Setting up Ansible

Once you’ve got Cygwin installed, you’ll want to open up a terminal. You’ll need to use a Cygwin terminal, and not cmd.exe, whenever you want to run ansible-playbook or vagrant.

You’ll need to install pip to be able to install Ansible. You’ll also need some packages Ansible needs to run that can’t be installed by pip. If you’re using the standard Cygwin installer, run it again and make sure python, python-paramiko, python-crypto, gcc-g++, wget, openssh, and python-setuptools are all installed. We need gcc-g++ to compile source code when installing PyYAML from PyPI.

If you’re using babun, this is:

pact install python python-paramiko python-crypto gcc-g++ wget openssh python-setuptools

You might get the following error if you try to run python: ImportError: No module named site.
If you see that error, add the following to your ~/.bashrc or ~/.zshrc (in your Cygwin home folder) and source it:

export PYTHONHOME=/usr
export PYTHONPATH=/usr/lib/python2.7

Next, let’s get pip installed, and then install Ansible itself.

python /usr/lib/python2.7/site-packages/easy_install.py pip
pip install ansible

Making Ansible Run From Vagrant

Once that is done, you should be able to run ansible-playbook from bash or zsh.

However, that isn’t enough to use Ansible as a Vagrant provisioner. Even if you call vagrant from bash or zsh, vagrant won’t be able to find ansible-playbook, because it isn’t on the Windows PATH. But even if we put ansible-playbook on the Windows PATH, it won’t run, because it needs to use the Cygwin Python.

To ensure we’re using the Python in our Cygwin environment, we need a way to run ansible-playbook through bash. The solution we came up with was to create a small Windows batch file and place it somewhere on the Windows PATH as ansible-playbook.bat:

@echo off

REM If you used the standard Cygwin installer this will be C:\cygwin
set CYGWIN=%USERPROFILE%\.babun\cygwin

REM You can switch this to work with bash with %CYGWIN%\bin\bash.exe
set SH=%CYGWIN%\bin\zsh.exe

"%SH%" -c "/bin/ansible-playbook %*"

This is enough to let Vagrant find ansible-playbook and run the Ansible provisioner.

You’ll likely run into the following error when you try and provision your first Vagrant VM:

GATHERING FACTS ***************************************************************
fatal: [app] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue

To get around this, we had to create a ~/.ansible.cfg in our Cygwin home directory (this can also go in your project directory as ansible.cfg) changing what the ssh ControlPath was set to:

[ssh_connection]
control_path = /tmp

And with that you should be ready to provision using Ansible!

If you want to run other Cygwin programs from your Vagrantfile, such as ansible-galaxy, you’ll have to make another batch file. For an example of how to easily generate a bunch of wrapper batch files, check out this gist.

Creating Ansible Roles from Scratch: Part 2

In part one of this series, we created the outline of an Ansible role to install Packer with ansible-galaxy, and then filled it in. In this post, we’ll apply the role against a virtual machine, and ultimately, install Packer!

A Playbook for Applying the Role

After all of the modifications from the previous post, the directory structure for our role should look like:

├── README.md
├── defaults
│   └── main.yml
├── meta
│   └── main.yml
└── tasks
    └── main.yml

Now, let’s alter the directory structure a bit to make room for a top level playbook and virtual machine definition to test the role. For the virtual machine definition, we’ll use Vagrant.

To accommodate the top level playbook, let’s move the azavea.packer directory into a roles directory. At the same level as roles, let’s also create a site.yml playbook and a Vagrantfile. After those changes are made, the directory structure should look like:

├── Vagrantfile
├── roles
│   └── azavea.packer
│       ├── README.md
│       ├── defaults
│       │   └── main.yml
│       ├── meta
│       │   └── main.yml
│       └── tasks
│           └── main.yml
└── site.yml

The contents of the site.yml should contain something like:

---
- hosts: all
  sudo: yes
  roles:
    - { role: "azavea.packer" }

This instructs Ansible to apply the azavea.packer role to all hosts using sudo.

And the contents of the Vagrantfile should look like:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "site.yml"
  end
end

Here we’re making use of the ubuntu/trusty64 box on Vagrant Cloud, along with the Ansible provisioner for Vagrant.

Running vagrant up from the same directory that contains the Vagrantfile should bring up an Ubuntu 14.04 virtual machine and then attempt to use ansible-playbook to apply site.yml. Unfortunately, that attempt will fail, and we’ll be met with the following error:

ERROR: cannot find role in /Users/hector/Projects/blog/roles/azavea.unzip or
/Users/hector/Projects/blog/azavea.unzip or /etc/ansible/roles/azavea.unzip

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Where is this reference to azavea.unzip coming from? Oh, that’s right, we had it listed as a dependency in the Packer role metadata…

Role Dependencies

Role dependencies are references to other Ansible roles needed for a role to function properly. In this case, we need unzip installed in order to extract the Packer binaries from packer_0.7.1_linux_amd64.zip.

To resolve the dependency, azavea.unzip needs to exist in the same roles directory that currently houses azavea.packer. We could create that role the same way we did azavea.packer, but azavea.unzip already exists within Ansible Galaxy (actually, so does azavea.packer).

In order to install azavea.unzip into the roles directory, we can use the ansible-galaxy command again:

$ ansible-galaxy install azavea.unzip -p roles
 downloading role 'unzip', owned by azavea
 no version specified, installing 0.1.0
 - downloading role from https://github.com/azavea/ansible-unzip/archive/0.1.0.tar.gz
 - extracting azavea.unzip to roles/azavea.unzip
azavea.unzip was installed successfully

Now, if we try to reprovision the virtual machine, the Ansible run should complete successfully:

$ vagrant provision
==> default: Running provisioner: ansible...

PLAY [all] ********************************************************************

GATHERING FACTS ***************************************************************
ok: [default]

TASK: [azavea.unzip | Install unzip] ******************************************
changed: [default]

TASK: [azavea.packer | Download Packer] ***************************************
changed: [default]

TASK: [azavea.packer | Extract and install Packer] ****************************
changed: [default]

PLAY RECAP ********************************************************************
default                    : ok=4    changed=3    unreachable=0    failed=0

Before we celebrate, let’s connect to the virtual machine and ensure that Packer was installed properly:

$ vagrant ssh
vagrant@vagrant-ubuntu-trusty-64:~$ packer
usage: packer [--version] [--help] <command> [<args>]

Available commands are:
    build       build image(s) from template
    fix         fixes templates from old versions of packer
    inspect     see components of a template
    validate    check that a template is valid

Globally recognized options:
    -machine-readable    Machine-readable output format.

Excellent! The Packer role we created has successfully installed Packer!

Creating Ansible Roles from Scratch: Part 1

Within Ansible there are two techniques for reusing a set of configuration management tasks: includes and roles. Although both techniques function in similar ways, roles appear to be the official way forward. Ansible Galaxy was built as a repository for roles, and as we’ll see in this post, ansible-galaxy exists to aid in installing and creating them.

Creating a New Role

Let’s start off by creating a role for Packer.

Packer is a useful tool for producing different machine image types with the same set of configuration management tasks. For example, Packer can be used to take a set of Ansible instructions, funnel them through itself, and produce both an AMI and a Docker image.

Enough about Packer though, let’s get back to creating an Ansible role for installing Packer.

The first step in creating a role is creating its directory structure. In order to create the base directory structure, we’re going to use a tool bundled with Ansible (since 1.4.2) called ansible-galaxy:

$ ansible-galaxy init azavea.packer
azavea.packer was created successfully

That command will create an azavea.packer directory with the following structure:

├── README.md
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
└── vars
    └── main.yml

Explaining the Role Directory Structure

A role’s directory structure consists of defaults, vars, files, handlers, meta, tasks, and templates. Let’s take a closer look at each:

defaults

Within defaults, there is a main.yml file with the default variables used by a role. For the Packer role, there is only a packer_version default variable. As of this post, the most recent version of Packer is 0.7.1, so we’ll set it to that:

---
packer_version: "0.7.1"

vars

vars and defaults house variables, but variables in vars have a higher priority, which means that they are more difficult to override. Variables in defaults have the lowest priority of any variables available, which means they’re easy to override. Placing packer_version in defaults instead of vars is desirable because now it is easier to override when you want to install an older or newer version of Packer:

---
- hosts: all
  sudo: yes
  roles:
    - { role: "azavea.packer", packer_version: "0.7.0" }

All of that said, we’re set with packer_version in defaults, so the vars directory is not needed and can be deleted.

files

files is where you put files that need to be added to the machine being provisioned, without modification. Most of the time, files in files are referenced by copy tasks.
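
As a hypothetical example, a role that shipped a static /etc/motd would keep the file at files/motd and reference it from a copy task like this (none of this exists in the Packer role):

- name: Install message of the day
  copy: src=motd dest=/etc/motd owner=root group=root mode=0644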

The Packer role has no need for files, so we’ll delete that directory.

handlers

handlers usually contain targets for notify directives, and are almost always associated with services. For example, if you were creating a role for NTP, you might have an entry in handlers/main.yml for restarting NTP after a task finishes altering the NTP configuration file.
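
Sticking with the NTP example, handlers/main.yml might contain a single restart handler, which tasks then reference by name in a notify directive (this snippet is illustrative, not part of the Packer role):

---
- name: restart ntp
  service: name=ntp state=restarted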

Packer isn’t a service, so there is no need for the handlers directory.

meta

meta/main.yml houses one of the biggest differences between includes and roles: metadata. The metadata of an Ansible role consists of attributes such as the author, supported platforms, and dependencies. Most of this file is commented out by default, so I usually go through and fill in or uncomment the relevant attributes, then delete everything else.

For the Packer role, I trimmed things down to:

---
galaxy_info:
  author: Hector Castro
  description: An Ansible role for installing Packer.
  company: Azavea Inc.
  license: Apache
  min_ansible_version: 1.2
  platforms:
  - name: Ubuntu
    versions:
    - trusty
  categories:
  - cloud
  - system
dependencies:
  - { role: "azavea.unzip" }

Ignore the dependencies bit for right now. We’ll come back to it later.

tasks

tasks houses the series of Ansible tasks that install, configure, and run software. For Packer, we need to download a specific version and, since it’s packaged as a compiled binary in a ZIP archive, extract it. Accomplishing that with Ansible’s built-in get_url and unarchive modules looks like this:

---
- name: Download Packer
  get_url: >
   url=https://dl.bintray.com/mitchellh/packer/packer_{{ packer_version }}_linux_amd64.zip
   dest=/usr/local/src/packer_{{ packer_version }}_linux_amd64.zip

- name: Extract and install Packer
  unarchive: src=/usr/local/src/packer_{{ packer_version }}_linux_amd64.zip
             dest=/usr/local/bin
             copy=no

templates

templates is similar to files except that templates support modification as they’re added to the machine being provisioned. Modifications are achieved through the Jinja2 templating language. Most software configuration files become templates.
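
Continuing the hypothetical NTP example, a templates/ntp.conf.j2 containing Jinja2 expressions like {{ ntp_servers }} would be rendered onto the machine with a template task, with a notify entry pointing at the handler sketched earlier (again, not part of the Packer role):

- name: Configure NTP
  template: src=ntp.conf.j2 dest=/etc/ntp.conf
  notify: restart ntp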

Packer takes most of its configuration parameters via command-line arguments, so the templates directory is not needed.

Conclusion

Congratulations! You now have all of the components necessary for an Ansible role. In part two of this series, we’ll take a look at creating a small playbook to apply the role against a local virtual machine. We’ll also take a closer look at the dependencies listed in the role metadata.