Azavea Atlas

Maps, geography and the web

Analyzing OpenTreeMap Data with PostGIS

Screenshot of Alberta's OpenTreeMap

In this post, I’ll continue the work I started in Loading Spatial Data into PostGIS with QGIS. Last time, I showed how QGIS, with its DB Manager and Boundless’ OpenGeo Suite, can be used to load data into a PostGIS database quickly.

This time we’ll investigate the distribution of mapped trees on Edmonton’s OpenTreeMap site, yegTreeMap, with simple PostGIS queries. Each city that participates in OpenTreeMap has access to a wealth of insights. All it takes to unlock them is a spatial database and a few lines of code.

Requirements

This tutorial assumes that you have the OpenTreeMap data for Edmonton in your PostGIS database. My previous blog entry in this series should get you up to speed downloading the data and loading it into PostGIS.

Note that while the file and field names may differ, just about any overlapping point and polygon data will work for these queries; it just has to be in a PostGIS database. This also means you can use CartoDB for the analysis, since it stores its data in PostGIS.

Getting Started in QGIS

First, make sure your PostGIS database is running; usually that just means launching the PostGIS application on your machine. If you’re using CartoDB to follow along, simply open the SQL tab instead. In QGIS, open up the DB Manager:

The DB Manager in QGIS

Once in DB Manager, click the SQL Query icon (a sheet of paper with a wrench) to open the query editor. The top box is where you write SQL queries, and the bottom box shows the results.

The SQL query editor

Spatial Queries with PostGIS

Each of the following queries uses what’s known as a spatial join. Spatial joins append information from one layer to another based on overlap, and they are one of the key features of a spatial database. In a sense, the shared key for the join is space rather than an ID field. The queries below assume you have two datasets in your database: Edmonton trees and neighborhoods.

PostGIS database with files successfully loaded via Spit

Count Trees and Display by Neighborhoods

We’ll start with a simple query that counts the number of trees per neighborhood using the standard SQL COUNT() function. The spatial part is the PostGIS function ST_Intersects(), which looks for overlap in the geometry stored in a field called the_geom. CartoDB uses an additional geometry field called the_geom_webmercator; CartoDB’s documentation explains more about how to use it. Using the top SQL window, run the following query:

SELECT hoods.hoodname,
       COUNT(trees.gid) AS tree_count
FROM edmonton_trees AS trees
JOIN neighborhoods AS hoods
  ON ST_Intersects(trees.the_geom, hoods.the_geom)
GROUP BY hoods.hoodname
ORDER BY tree_count DESC;

The query should result in a table of the number of trees per neighborhood in descending order:

Resulting count of trees per neighborhood

We can see that the neighborhood with the most trees overall is Summerside, followed by River Valley Rundle and then Terwillegar Towne.
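
One caveat on the query above: an inner join drops neighborhoods with no mapped trees from the result entirely. If you want those listed with a count of zero, a LEFT JOIN variant does the trick (a sketch reusing the same tables; COUNT(trees.gid) ignores NULLs, so empty neighborhoods report 0):

-- Keep every neighborhood, even those with no mapped trees
SELECT hoods.hoodname,
       COUNT(trees.gid) AS tree_count
FROM neighborhoods AS hoods
LEFT JOIN edmonton_trees AS trees
  ON ST_Intersects(trees.the_geom, hoods.the_geom)
GROUP BY hoods.hoodname
ORDER BY tree_count DESC;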

Calculate Tree Density Per Neighborhood

Finding the number of trees per neighborhood is interesting, but larger neighborhoods might contain more trees simply by virtue of their size. This next query normalizes the count by each neighborhood’s area: COUNT() tallies each neighborhood’s trees, the total is divided by the neighborhood’s area, and ST_Intersects() again performs the spatial join. The result is trees per square kilometer, aliased in this query as trees_km2.

SELECT
  hoods.gid,
  hoods.the_geom,
  hoods.hoodname,
  COUNT(trees.gid) / hoods.area_km2 AS trees_km2
FROM edmonton_trees AS trees
JOIN neighborhoods AS hoods
  ON ST_Intersects(trees.the_geom, hoods.the_geom)
GROUP BY hoods.hoodname, hoods.gid, hoods.the_geom
ORDER BY trees_km2 DESC;

Resulting table of tree density per neighborhood

The resulting table shows that the two most tree-dense areas are Virginia Park and Mill Woods Park, with 3,100 and 2,693 trees per square kilometer, respectively. Summerside and River Valley Rundle are no longer in the top ten.
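
The query above assumes the neighborhoods table carries a precomputed area_km2 field. If yours doesn’t, PostGIS can compute the area on the fly. A minimal sketch, assuming the_geom is stored in a lat/long CRS such as WGS 84 (casting to geography makes ST_Area return square meters):

SELECT
  hoods.gid,
  hoods.hoodname,
  -- ST_Area on geography returns square meters; divide by 1,000,000 for square kilometers
  COUNT(trees.gid) / (ST_Area(hoods.the_geom::geography) / 1000000) AS trees_km2
FROM edmonton_trees AS trees
JOIN neighborhoods AS hoods
  ON ST_Intersects(trees.the_geom, hoods.the_geom)
GROUP BY hoods.gid, hoods.hoodname, hoods.the_geom
ORDER BY trees_km2 DESC;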

Find the Widest Tree by Neighborhood

Next, let’s refine our queries a bit and find distinct features within each neighborhood. What if, rather than summarizing all the trees in a neighborhood, we wanted to find only the widest ones? This query once again employs ST_Intersects(), and the standard SQL MAX() function pulls the largest value from the diameter field. Because species__c is part of the grouping, the result lists the widest tree of each species in each neighborhood, with the widest trees overall sorted to the top.

SELECT
  hoods.hoodname,
  trees.species__c AS treespecies,
  MAX(trees.diameter) AS tree_diam
FROM edmonton_trees AS trees
JOIN neighborhoods AS hoods
  ON ST_Intersects(trees.the_geom, hoods.the_geom)
GROUP BY hoods.hoodname, hoods.gid, hoods.the_geom, trees.species__c
ORDER BY tree_diam DESC;

 

Widest single tree in each neighborhood

The resulting table shows that the widest single tree of any species in the OpenTreeMap database for Edmonton is a willow in Westbrook Estates. Willow, maple, elm, and ash round out the rest of the largest trees.

Widest Tree By Neighborhood By Species

Let’s take the previous query one step further. What if, rather than just the name and diameter of the widest tree of each species in each neighborhood, we wanted the full record for each of those trees? This query places ST_Intersects() in a subquery. Type it into the SQL editor and run it; it may take 10-20 seconds to complete. Note that we’re also selecting everything (* is SQL shorthand for every column), so the returned table has all the original fields.

SELECT *
FROM edmonton_trees,
  (SELECT
     MAX(trees.gid) AS treeid,
     hoods.hoodname,
     trees.species__c,
     MAX(trees.diameter) AS tree_diam
   FROM edmonton_trees AS trees
   JOIN neighborhoods AS hoods
     ON ST_Intersects(trees.the_geom, hoods.the_geom)
   GROUP BY hoods.hoodname, trees.species__c) AS sub
WHERE edmonton_trees.gid = sub.treeid;
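
One caveat about this pattern: MAX(trees.gid) returns the highest tree ID in each group, which is not guaranteed to belong to the tree with the maximum diameter, so the joined record may not actually be the widest tree. PostgreSQL’s DISTINCT ON offers a more reliable way to pull the full record of the widest tree per neighborhood and species; a sketch using the same tables:

-- One row per (neighborhood, species): the full record of its widest tree
SELECT DISTINCT ON (hoods.hoodname, trees.species__c)
       trees.*,
       hoods.hoodname
FROM edmonton_trees AS trees
JOIN neighborhoods AS hoods
  ON ST_Intersects(trees.the_geom, hoods.the_geom)
ORDER BY hoods.hoodname, trees.species__c, trees.diameter DESC NULLS LAST;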

Either way, the resulting table should have around 8,600 records. While species repeat across neighborhoods in the table, they do not repeat within the same neighborhood. The resulting data also has a secondary function: by finding the widest tree of each unique species per neighborhood, the result can act as a proxy for the tree biodiversity within each neighborhood. Azavea’s OpenTreeMap team discusses this more in their blog post, Seeing the Forest for the Trees: Interpreting Data from Tree Inventories (Part One).
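
If biodiversity is what you’re after, you can also measure it more directly. A minimal sketch counting distinct species per neighborhood:

-- Distinct species per neighborhood as a simple diversity measure
SELECT hoods.hoodname,
       COUNT(DISTINCT trees.species__c) AS species_count
FROM edmonton_trees AS trees
JOIN neighborhoods AS hoods
  ON ST_Intersects(trees.the_geom, hoods.the_geom)
GROUP BY hoods.hoodname
ORDER BY species_count DESC;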

Export Query Results as Maps

Next, let’s turn query results into map layers. The SQL window in the DB Manager of QGIS allows users to load query results directly into QGIS as a new layer. Run a query that returns both a unique ID and a geometry column (the tree density query above returns gid and the_geom), then look for the option labeled ‘Load as new layer’ below the result pane. This should expand a menu.

Click the ‘Retrieve columns’ button on the right and it should automatically pick up the gid and the_geom columns. Finally, give the layer a name and press ‘Load now!’ This should add the layer to the QGIS table of contents.

If it doesn’t, you can work around the issue with pgAdmin, which should have come with the OpenGeo Suite installed in the prior installment of this guide. Use pgAdmin to connect to your PostGIS database and run one of the queries above in its SQL editor. Test that it works, and then run it with the ‘Execute query, write result to file’ option, which saves the resulting table as a CSV. The CSV can then be loaded back into QGIS, ArcGIS, or CartoDB as a spatial layer.
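
One wrinkle with the CSV route: the_geom exports as well-known binary, which none of those tools will read as coordinates. For point results like the trees, exposing longitude and latitude as plain columns first makes the CSV load cleanly as a point layer. A sketch of the idea:

SELECT trees.gid,
       trees.species__c,
       trees.diameter,
       ST_X(trees.the_geom) AS lon,  -- longitude of the point
       ST_Y(trees.the_geom) AS lat   -- latitude of the point
FROM edmonton_trees AS trees;

(For polygon results such as neighborhoods, ST_AsText(the_geom) produces WKT, which QGIS and CartoDB can both ingest from a CSV.)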

Style a Map

Once you’ve managed to either load your query result as a new layer in QGIS or export it as a CSV with pgAdmin, you should make a map and show off your hard work! For example, I loaded and styled the data in CartoDB for the OpenTreeMap team.

Next Steps

These queries only scratch the surface of what PostGIS can do. PostGIS can reduce a traditionally labor-intensive GIS task to a few lines of code that can be reused over and over. Each of the snippets above can be quickly repurposed for just about any point and polygon data.

If you’d like to take your PostGIS skills a bit further, Chris Whong of CartoDB recently published a blog post about PostGIS queries that replicate many common GIS geoprocessing tasks found in QGIS. Try PostGIS on your own machine or in CartoDB and see how much time you can save on common geographic summary and processing tasks.

Loading Spatial Data into PostGIS with QGIS

I recently had the opportunity to work with Azavea’s OpenTreeMap team to analyze tree planting data for Edmonton, Alberta. You can read about some of the results in Seeing the Forest for the Trees: Interpreting Data from Tree Inventories, on OpenTreeMap’s blog. Given the number of records in the data, I decided to use PostGIS to complete the analysis. As a long-time GIS user, I wasn’t very familiar with PostGIS, and working through the analysis process taught me quite a bit. This article will share how to set up a PostGIS database and load in spatial data contained in a CSV text file using QGIS. The next installment will detail different spatial queries to analyze the data.

PostGIS is a spatial extension for PostgreSQL databases, which means the database gains the ability to store and manipulate spatial data. PostGIS databases allow users to model relationships, query information, and create repeatable analyses. With experience, users can analyze large datasets faster than with traditional GIS systems.
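
For reference, if you already have a PostgreSQL database (version 9.1 or later), enabling PostGIS yourself takes a single command:

-- Run once per database to add spatial types, functions, and metadata tables
CREATE EXTENSION postgis;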

A common hurdle for many would-be PostGIS users is knowing where to start. Thankfully, Boundless created the OpenGeo Suite, an open-source geospatial software bundle that includes a PostgreSQL database and the PostGIS extension.

 

Download the Required Software

This tutorial will use QGIS to load data into the PostGIS database. QGIS is an open-source geographic information system (GIS) with an active developer and support community. If you don’t already have QGIS, download and install a free copy. Next, head over to Boundless and download the OpenGeo Suite. You’ll have to register an email address, but otherwise it’s also free. The OpenGeo Suite should install everything necessary for setting up a PostGIS database.

Start the PostGIS Database

Run the PostGIS application (called, simply, ‘PostGIS’) that the OpenGeo Suite installed. This will start the database and keep it running in the background. Follow Boundless’ directions to create and name a spatial database with pgAdmin, another piece of software included in the OpenGeo Suite. I called mine ‘otm_edmonton’. If everything works, you should have a working connection to a PostGIS database.

Download Location Data

I’ll be using a CSV of tree locations from Edmonton, Alberta’s instance of OpenTreeMap, yegTreeMap. The file has about 275,000 records, so the download preparation may take a few minutes.

Add the CSV locations to QGIS

It’s possible to add data to a PostGIS database through the command line, but QGIS is more user-friendly for people familiar with desktop GIS. Open QGIS and add the CSV as a delimited text layer under Layer > Add Layer > Add Delimited Text Layer. Configure your settings similar to the image below, and set the coordinate reference system (CRS) to WGS 84 (EPSG:4326) when prompted:

How to add a CSV with lat-long to QGIS

The trees should now show up on the map! Some trees with incorrect coordinates will pull the extent out pretty far, so zoom in to the large cluster of trees to view in more detail.

275,000 trees in Edmonton mapped with OpenTreeMap

Add a Boundary Layer to QGIS

Geographic boundary layers add context to maps. Download the Edmonton neighborhoods shapefile and load it into QGIS with the ‘Add Vector Layer’ option. Now we can see a map of trees by neighborhood in Edmonton.

A map of trees with the underlying Edmonton neighborhoods shapefile.

Connect to PostGIS Through QGIS

Now that both files are in QGIS, this is no different from a standard GIS project. So how do we go about loading them into PostGIS? Easy: use the ‘Database’ drop-down menu in QGIS and open DB Manager. This menu should look like pgAdmin’s database interface. If your PostGIS database is still running in the background, it should show up here. Click on PostGIS and it should expand to show the database you created when you ran pgAdmin (mine is called ‘otm_edmonton’). It will have some tables inside, but not our spatial data. Let’s fix that.

Spitting Files into the Database

Open the ‘Database’ drop-down menu again, open ‘Spit’ and click ‘Import Shapefiles to PostgreSQL’. Select the database you created earlier and hit connect, entering the password you created (if you didn’t create one, leave it blank). Once connected, press ‘Add’; a prompt will open for you to navigate to your shapefiles. If you haven’t yet saved the Edmonton trees that you mapped as a shapefile, now is the time. Once you’ve added shapefiles for both trees and neighborhoods, hit OK.

Note: When I performed this step, I got an error that the Edmonton neighborhoods shapefile was a multipolygon when it should be a polygon. I fixed this by running the QGIS tool called ‘Multiparts to Singleparts’ (under Vector > Geometry Tools) on the neighborhoods file. Once fixed, both files should load into the database with Spit. Check the DB Manager again, and both files should be visible.
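
If a multipolygon version does end up in your database by some other route, PostGIS can perform the same multipart-to-singlepart conversion in SQL. A sketch using ST_Dump, with the table and column names assumed throughout this series:

-- Explode each multipolygon row into one row per component polygon
CREATE TABLE neighborhoods_single AS
SELECT gid,
       hoodname,
       (ST_Dump(the_geom)).geom AS the_geom
FROM neighborhoods;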

PostGIS database with files successfully loaded via Spit

What Happens Next?

Congratulations! You’ve just loaded spatial data into a PostGIS database! Here’s a tip for spinning up a PostGIS database even faster: try uploading your shapefiles to CartoDB. CartoDB is a mapping and analysis platform built on PostGIS, and any data you upload goes into a database. There are advantages to running your own database, but it’s hard to beat the ease of uploading data to CartoDB and watching your shapefile (or spreadsheet) land in PostGIS in seconds.

In the next post in this series we’ll show you some spatial queries you can perform with PostGIS (either in your PostGIS database or CartoDB) to examine the distribution of trees in Edmonton.

The Struggle for Power in Civic Tech: Highlights from #PDF15

Among the numerous talks at this year’s Personal Democracy Forum, one word stood out: “Power” (not the kind that is currently keeping your computer on). Power in Civic Tech was discussed both as an impediment to achievement and as a by-product of having the right tools at your disposal. Below, several compelling perspectives:

Picture of the stage at Personal Democracy Forum 2015

Let’s Talk About Power

Eric Liu’s excellent first-day talk framed this discussion, defining Power, simply, as “a capacity to have others do what you would like them to do.” Civic Power, therefore, is that capacity “as applied to the common good,” or the many. This leads to the core question of civic Power, which is, for the many, “who decides?” Eric thinks we should strive to democratize our understanding of how Power works, and be responsible with the pile of tools, skills, and ideas we have in our possession. This is especially important in Civic Tech, where we have the opportunity to design tools for the public good and, often, to choose how they are used and distributed.

 

Knowledge is Power

Both Harold Feld and Dave Troy gave incredible presentations about the internet and social media as public utilities, approached from two very different angles. Harold discussed Verizon’s attempt to avoid rebuilding on Fire Island after Hurricane Sandy. He argued that the internet’s power to enable open communication is as crucial as any other public utility, and that it should be made available and affordable to all.

People Map of St. Louis, via Peoplemaps.org, created by Dave Troy

Dave Troy, a social media cartographer, presented a new project called Peoplemaps.org. He used Twitter to create a people map of St. Louis (including Ferguson) that shed light on the segregation in that community far more meaningfully than geographic maps alone. Dave used to do this with Facebook, which unfortunately no longer allows access to the API; many other sites do the same, and even Twitter allows itself to be censored in other countries. These tools are also public utilities, Dave pointed out, and blocking access to them is a direct attack on knowledge.

Tools are Power

Many PDF speakers agreed that having the right tools impacts Power. Cathy McMorris Rodgers (pictured right), Congressional Chair of the House Republican Conference, gave an honest speech about her efforts to integrate Congress with the technology of the future. “Policy makers should be innovators,” she said, but instead, “Congress is more like the DMV than Uber.” She also summarized a recent trip to Ukraine in which the Mayor of Kiev stated that “media is more powerful than bullets” when it comes to revolution.

Danny O’Brien highlighted the successes of the movement to stop mass surveillance with the Patriot Act, but asked “Did we win because we were right, or because we had cool tools?” He ended with this message: “Tools need to be used to distribute Power, not aggregate it.”

 

Control is Power

Dante Barry’s passionate speech about the Black Lives Matter movement recognized the extraordinary collaboration that has resulted from the open internet. This Powerful platform has led to a Powerful movement, but he warned, “The internet is only as good as the people who control it.” In an age where phones have become defense mechanisms, rules that keep our content in our hands are critical.

 

Politics is Power

Speaking over Skype, Birgitta Jonsdottir (pictured below), Leader of the Pirate Party in Iceland, detailed her efforts to effect change in her country, which ultimately led to the formation of a new political party. “People are always telling us we don’t have the power to change. It’s a lie,” she said. Political movements should not be triangular, but rather a circle of shared Power. “If you don’t become the power, the power can’t control you.”

During a breakout session on Designing the Digital Legislature, New York City Councilmember Ben Kallos also agreed that politics is often the road to Power: “You can have the best ideas in the world, but you still need someone in government to pass the law.” This sentiment was echoed by Santiago Siri, who created the Net Party in Argentina. It is also a theme well understood here at Team Cicero: our database of legislative districts and elected officials is often used by organizations to advance advocacy campaigns through direct contact between constituents and their legislators.

 

Power to the People

Jess Kutch’s talk on coworker.org proved what people can do when they have the right tools and skills available to them. In a triumph for Starbucks baristas, one woman launched a social media campaign to change the corporate tattoo policy. What resulted was not only a win for tattoos, but also a blueprint for how the internet can fuel a movement.

With so many different channels, where does Power come from in Civic Tech? It does not flow solely from knowledge, or tools, or control, or politics. As Andrew Rasiej and Micah Sifry (organizers and founders of PDF) seemed to know when choosing this year’s theme, it comes from the people. And as people working within Civic Tech, we should use that power consciously.

Unicorns, Ducks, and Things that are Big: A recap of Visualized Political Data

Last week I attended the Visualized: Political Data conference in Washington, D.C. This was an offshoot of the popular Visualized conference, focusing entirely on political data and visualizations, and covering themes such as open data, communication, journalism, maps, things that are big, and not-maps. With thirteen presentations in eight hours, it was an ambitious agenda. But despite that diverse list of talking points, there was a clear sense of continuity among the sessions. Below, some themes from the day:

The podium at the Jack Morton Auditorium at GW

The Bridge

Jamie Chandler (self-described “lecturer” and professor from GW) started off the day with this statement: there is no bridge between Data Science and Communications. The problem, he said, is that Data Scientists can compute the numbers and journalists can tell the story, but they often don’t do those things together. As Chris Cillizza from the Washington Post (pictured left) later confirmed, the media is usually the last to catch on in terms of innovation.

The people who calculate the data need to be able to share it in a meaningful way. Jamie’s solution was for Data Scientists to partner with Data Journalists to get the story out. Chris’s solution, one that other speakers referenced throughout the day, was more unicorns. Elusive beings that exist only in small numbers, unicorns are people who can crunch the numbers, make the visualizations, and talk about them in a way that makes sense. Are unicorns the journalists of the future? Chris thinks so. And he would hire hundreds of them if he could.

The Scaffolding

Derek Willis (NY Times Upshot) and Rebecca Williams (Data.gov) discussed our data infrastructure problem and tools to encourage open data, respectively. Derek’s focus was on the dead ends and errors encountered when one goes searching for data from the government. He highlighted incorrect Congressional swearing-in dates, Congressional Leave of Absence records that were surprisingly absent, and a successful hack of Rand Paul’s donations (which he pretty easily scraped from the campaign website). Before building, you need to have a strong frame in place, which is something we desperately lack. Derek encouraged contributing to the @unitedstates project and sharing our individual efforts as a group to help prevent wasted time.

Rebecca, a current government employee, offered several useful solutions when encountering data you need but cannot access:

  1. Vote on it: This vote “may be more important than your annual November vote.”
  2. Engage with it: via Project Open Data
  3. Edit it: via Github
  4. And when all else fails, email the government (under: “human capital”) and they will email back*!

(*Personal anecdote: When doing research about legislative officials for the Cicero Database, I default to emailing frequently. Sometimes it works!)

Maps and Not-Maps

This was the title of the presentation by Alyson Hurt (NPR Visuals), and it also summed up the contents of several other presenters’ material. Alyson discussed the “Geo-Map” (or just “map”), which has long been the method used for representing state data. Alicia Parlapiano (NY Times) also showed election maps, some dating back to the late 1800s and early 1900s, with varying levels of usefulness. One of these maps depicted ducks with and without hats to represent the Senate and Congress. But most looked incredibly similar to those we still use today. This raised the question: how effective is this image as a means to convey data?

Though maps are recognizable and loved by the public, they are also fraught. Maps don’t give smaller states fair representation, particularly when showing electoral results. Sometimes a map is not the best way to represent geographic data, Alicia said. Sometimes a table is.

Jonathan Schwabish (Urban Institute, PolicyViz, and co-host of the conference) made this same point in the workshops the day before the conference. Before you make a map, he said, ask yourself these four things:

  1. Should it be a bar chart?
  2. Should it be a scatterplot?
  3. Should it be a table? OR
  4. Should it just be a sentence?

Alicia noted that cartograms often solve the map problem, though the loss of geography can be confusing. But “sometimes geography is irrelevant,” said Alyson. The answer, as it turns out, is in your audience.

Fairly bad photo from Alicia’s presentation: New York Times Cartogram of the United States in 1929

The Human Element

At a conference focused on political data, many of the talks were geared toward the people. Jamie Chandler’s presentation on Communicating Data to Mass Publics emphasized the consumers. His first key step toward making an impactful visualization was to “understand your audience.” This was a theme that others carried along as well. Ben Casselman (FiveThirtyEight) asked, “How do we reach readers who are non-data junkies, while not disappointing those who are?” It’s important, he said, to choose your complexity wisely. “Readers’ eyes glaze over when they see a lot of numbers that don’t need to be in the piece.”

Jonathan Schwabish’s presentation (pictured right) made a similar point. He dissected hilarious comparisons that have been made in the media (3,000 DWPF canisters to 24 Empire State Buildings, the Olympic luge course inside Times Square, 90 tons of CO2 next to wherever this is). Comparing something big to something bigger doesn’t usually work. If a person can’t actually imagine what something looks like, it’s probably not a useful tool for relaying information. We can do better, he said. We’re human. Make people feel. Have a soul.

The Takeaway

Rebecca Williams started off her presentation by saying “All Politics is Data.” In the age of information, we not only need to be careful about what we choose to represent; we also need to represent it using the correct tool, for the correct audience, in the correct space. The goal of a successful visualization is not to show “all of the things,” as Alyson Hurt said. Instead, it is to clearly, and fairly, represent one angle of the story.

Anyone can be a journalist today. Anyone can make the decision to post a graphic of something big next to X number of football stadiums, which shows how Y politician is not the person any of you should be voting for. Anyone can put ducks on a map. But what would happen if we thought about the impact first? What would happen if we just stuck with what was really and truly important?

 

 

Balloon Mapping: A Citizen Science Exercise

Philly Tech Week, now in its fourth year, brought numerous creative and inspirational events to Philadelphia. GeoPhilly, Philadelphia’s meetup for spatial and mapping enthusiasts, hosted a fun, educational event during the festival. Balloon mapping is an affordable, DIY technique for capturing high-resolution aerial imagery, and it puts the power of data collection in the hands of the data consumers.

This approach can be particularly helpful in times of disaster or urgency, when it is too costly to capture imagery in traditional ways (satellites and low-flying aircraft). You can learn more about these methods in my 2013 blog post on mapping techniques during emergencies and on the Public Lab page about balloon mapping during the 2010 Deepwater Horizon oil spill. Balloon mapping can also be used to photograph crowds at gatherings, community events, or protests such as Occupy Wall Street, or to map public spaces such as the Benjamin Franklin Parkway in Philadelphia.

Engaging the Community

Balloon mapping is an easy way to engage the community by inviting participants to learn the methods and conduct their own balloon mapping exercise. The unique thing about balloon mapping, compared to drone mapping, is that it is very transparent: a large red balloon tethered to a person clearly shows that the mapping process is public, not hidden or secretive. This was evident at our balloon mapping event, where numerous passersby commented and inquired about the exercise. In this way, it engages community members and neighbors in the activity.

How does balloon mapping work?

A 5 ft diameter balloon is filled with helium (an 80CU tank is sufficient for one balloon) and attached to approximately 1,000 ft of cord, which is tethered to a mooring (either a very heavy object or a person). A camera rig attached to the base of the balloon holds the camera steady, and the camera is programmed to automatically snap photos at regular intervals. The balloon is raised into the air, always tethered, and continuously collects high-resolution photographs of the area. Kite or pole mapping can also be used to capture photos from above. The photos are later downloaded and stitched together, using reference images, with the open-source software MapKnitter.

The images captured by our balloon mapping exercise are free for anyone to access and enjoy. You can find all the photos taken during GeoPhilly Balloon Mapping on Google Drive, along with instructions, a video, and other resources.

You can still get involved, even if you don’t organize a balloon mapping activity of your own. The open-source software MapKnitter can be used by anyone to stitch together the photos taken at other DIY aerial mapping activities, and you can contribute to other projects by helping out with the stitching.

 

A special thank you goes out to the Delaware River Waterfront, which allowed us to use their space for this activity; to Michelle Schmitt for co-hosting the event and providing ample moral support; to Azavea for funding this event; and to all the attendees who participated and helped stitch the photos together.