Esri enriches maps with Tweets and the Streaming API

New Idea

On March 11, 2011, a 9.0 magnitude earthquake struck near the east coast of Honshu, Japan. As the news broke, many turned to Twitter to get the latest updates from people on the ground who experienced the disaster first hand. Within hours of learning about the devastating event, the @Esri team used Twitter to launch an interactive map that combined trusted sensor data with Tweets and other social feeds like Flickr and YouTube. The team layered Tweets over an information-rich map that showed earthquake location, a shakemap, and aftershocks from USGS. The resulting product helped the world understand the impact of the earthquake and resulting tsunami.

Adding Tweets to mapping technology gives insights into what people are saying and where they are saying it. It can highlight spatial trends in the conversation. Combining a Twitter conversation with authoritative data sources, like 911 calls, insurance claims, demographics, weather reports, and earthquake feeds, can provide a human perspective on the situation. By adding social intelligence to its mapping and analyzing the Twitter conversations, Esri visualizes the most engaging Tweets over space and time to get a better understanding of how a crisis event spreads and where resources are needed. Esri maps can even help predict the weather; just take a look below.


@Esri's proof of concept uses public Tweets to tell the story of what is happening on the ground in real time. This has been hugely successful and has allowed the team to generate more projects that use Twitter in decision-making platforms for retail, public safety, and insurance customers.

Esri’s Twitter mapping work began with the Twitter Search API. Although the API provided a low barrier to entry for accessing geo-tweets filtered by topic, Esri quickly realized that the quantity of information available through the Search API could not support decision making. Esri turned to Twitter’s data partner Gnip to provide a source of data that it combined with its spatial analysis engine. Esri has prototyped the Streaming API and will switch to using it to find Tweets for all public-facing applications, like this Severe Weather Map.


The Japan earthquake map was picked up by news organizations including CNN, ABC, Al Jazeera, and Wired, resulting in over 500,000 page views to the application in the days following the event. Typical page views for disaster response pages are around 5,200 per week. Esri customers and partners have also had great success implementing Twitter in their maps. Esri Spain’s map of the 2011 elections, combining demographics, polling locations, and social conversation, received 4 million requests per hour at its peak and was linked from the home page of Spain’s largest media organization.

As a side note, Esri built the Public Information Map application featured in the maps in this case, and encourages readers to download the code and begin exploring social media mapping for their own workflows.

Using the Twitter Search API


  • The Search API is not a complete index of all Tweets, but instead an index of recent Tweets. At the moment that index includes between 6 and 9 days of Tweets.
  • You cannot use the Search API to find Tweets older than about a week.
  • Queries can be limited due to complexity. If this happens the Search API will respond with the error: {"error":"Sorry, your query is too complex. Please reduce complexity and try again."}
  • Search is focused on relevance and not completeness. This means that some Tweets and users may be missing from search results. If completeness matters, you should consider using the Streaming API instead.
  • The near operator is not supported by the Search API. Use the geocode parameter instead.
  • Queries are limited to 1,000 characters in length, including any operators.
  • When performing geo-based searches with a radius, only 1,000 distinct subregions will be considered when evaluating the query.
  • In API v1.1, the Search API requires some form of authentication — either OAuth 1.0A or app-only auth
  • Recent Enhancements

    • API v1.1’s Search methods return tweets in the same format as other REST API methods.
    • Classic pagination is not offered in API v1.1. You must use since_id and max_id to navigate through results.
    • The user IDs returned in the Search API now match the user IDs utilized in the Twitter REST & Streaming APIs. You no longer need to maintain a mapping of “search IDs” and “real IDs.”
    • In v1, use include_entities=true to have Tweet Entities included for mentions, links, media, and hashtags.
    • in_reply_to_status_id and in_reply_to_status_id_str are now included with @replies, allowing you to know the replied-to status ID, which can be looked up using GET statuses/show/:id.
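Since classic pagination is gone in v1.1, clients page backwards through results by threading max_id from one request into the next. A minimal sketch (the helper function is hypothetical, not from Esri or Twitter; each response is assumed to be a list of Tweet dicts with numeric id fields):

```python
def next_page_params(base_params, tweets):
    """Given the Tweets from the previous search/tweets response,
    build the query parameters for the next (older) page.

    max_id is inclusive, so subtract 1 from the lowest id seen to
    avoid re-fetching the last Tweet of the previous page.
    """
    if not tweets:
        return None  # no more results to page through
    lowest_id = min(t["id"] for t in tweets)
    params = dict(base_params)          # copy, don't mutate the caller's dict
    params["max_id"] = lowest_id - 1
    return params

# Example: the first page returned Tweets with ids 105, 103, 101.
page = [{"id": 105}, {"id": 103}, {"id": 101}]
params = next_page_params({"q": "earthquake", "count": 100}, page)
# params is {"q": "earthquake", "count": 100, "max_id": 100}
```

Passing the returned params to the next search/tweets call walks the index toward older Tweets until an empty page comes back.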

    Rate Limits

    Rate Limiting on API v1.1’s search/tweets

    GET search/tweets is part of the Twitter REST API v1.1 and is rate limited similarly to other v1.1 methods. See REST API Rate Limiting in v1.1 for information on that model. At this time, users represented by access tokens can make 180 requests per 15 minutes. Using application-only auth, an application can make 450 requests per 15 minutes on its own behalf without a user context.
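Those window budgets translate directly into a safe polling interval. A small sketch (the helper is illustrative, not part of any Twitter client library):

```python
WINDOW_SECONDS = 15 * 60  # v1.1 rate-limit windows are 15 minutes long

def min_poll_interval(requests_per_window):
    """Seconds to wait between requests to stay inside one window."""
    return WINDOW_SECONDS / requests_per_window

user_interval = min_poll_interval(180)  # user auth: one request every 5.0 s
app_interval = min_poll_interval(450)   # app-only auth: one every 2.0 s
print(user_interval, app_interval)      # 5.0 2.0
```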

    Rate Limiting on the deprecated Search API

    The rate limits for the legacy Search API are not the same as for the REST API. When using the Search API you are not restricted by a certain number of API requests per hour, but instead by the complexity and frequency of your queries.

    As requests to the Search API are anonymous, the rate limit is measured against the requesting client IP.

    To prevent abuse the rate limit for Search is not published. If you are rate limited, the Search API will respond with an HTTP 420 Error. {"error":"You have been rate limited. Enhance your calm."}.
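Because the legacy limit is unpublished, the usual defensive pattern is to treat HTTP 420 as a signal to back off exponentially before retrying. A hedged sketch (do_search is a hypothetical caller-supplied function returning a status code and body; this is not an official client pattern from Twitter's docs):

```python
import time

def search_with_backoff(do_search, query, max_retries=5, base_delay=2.0):
    """Call do_search(query); on an HTTP 420 response, back off
    exponentially (2 s, 4 s, 8 s, ...) before retrying.

    do_search is assumed to return a (status_code, body) tuple.
    """
    for attempt in range(max_retries):
        status, body = do_search(query)
        if status != 420:
            return body
        time.sleep(base_delay * (2 ** attempt))  # grow the wait each retry
    raise RuntimeError("still rate limited after %d retries" % max_retries)
```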


    On July 20, 2012 by Nate Ricklin

    Twitter Big Data + Geolocation = Massive Insight

    It’s a simple idea: Twitter + Geo.  What are people saying and where are they saying it?  These are basic questions, but getting the answers is surprisingly difficult.  In this blog post I’ll talk about some of the shortcomings with the Geo layer in Twitter’s API offerings, and what we built to get the functionality that we needed.

    At first glance, using Twitter’s built-in “geo” functionality seems pretty straightforward, but dive into it and you’ll soon realize that there’s a lot to be desired.  In fact, talking with my good buddy Charles at Gnip confirmed it: catching a stream of tweets coming from a geographic area is notoriously difficult, and there’s no clear good way to do it yet.

    Twitter’s Built-in Geo-tags

    The first thing you might try when putting tweets on the map is looking at the “geo” field in tweets returned from the Twitter Search API.  Go ahead, try it out.  Look at the “geo” fields in the returned results, and you’ll see that exactly zero tweets have embedded geotags.  In our own internal testing, we typically see that only about a quarter to half a percent of tweets actually have embedded geotags.  I’ve never seen it above 1%.  That’s a lot of data flying around without a home.
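That quarter-to-half-percent figure is easy to measure yourself once you have a batch of Tweet objects. A minimal sketch (the sample data below is hypothetical, built to land inside the range observed above):

```python
def geotagged_fraction(tweets):
    """Fraction of Tweets whose 'geo' field actually carries coordinates."""
    if not tweets:
        return 0.0
    tagged = sum(1 for t in tweets if t.get("geo"))
    return tagged / len(tweets)

# Hypothetical batch: 1 geotagged Tweet out of 400, i.e. 0.25%.
sample = [{"geo": None}] * 399 + [{"geo": {"coordinates": [35.7, 139.7]}}]
print(geotagged_fraction(sample))  # 0.0025
```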

    Search API Location Information

    Twitter’s Search API allows you to specify a location (in lat/lng) and a radius that those tweets should originate within, and it goes by both actual embedded geotags in tweets as well as the “location” field that people fill out in their Twitter profiles in free-form fashion.  Problem solved, right?

    Well, not quite.  There are still a couple of problems with this:

    If you’re monitoring many keywords, the number of searches you need to perform (number of keywords you care about times the number of locations you care about) starts to really add up and you quickly run into API limits trying to track it all.
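The blowup is just multiplication, but it is worth making concrete against the 180-requests-per-15-minutes user budget quoted earlier in this piece (the monitoring numbers below are illustrative, not from the post):

```python
def queries_per_cycle(num_keywords, num_locations):
    """One Search API call is needed per (keyword, location) pair."""
    return num_keywords * num_locations

needed = queries_per_cycle(30, 12)  # e.g. 30 keywords across 12 metro areas
budget = 180                        # user-auth requests per 15-minute window
windows = -(-needed // budget)      # ceiling division: windows to drain the queue
print(needed, windows)              # 360 2  -> a full refresh takes 30 minutes
```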

    The second problem is that Twitter’s Search API location search has been, and remains, buggy and inconsistent.  Here are a few of the issues that I’ve been following on this front:

    • General Location Bugginess
    • Issue #98: Geocode search results fall out of specified radius (allegedly fixed as of 2012-05-17)
    • Issue #141: Geocode search volume lower than expected (not resolved)

    But check this out: Perform this geo-search out in the Nevada desert:

    Streaming API Location Information

    Well maybe the Twitter Streaming API offers a solution? The main problem with the Streaming API is that you can filter by keywords OR filter by location, but not both at the same time.  Here’s the official word from the Twitter documentation:
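Given that keywords and locations cannot be combined server-side, a common workaround (a sketch of our general approach, not an official Twitter recipe) is to filter by location on the stream and apply the keyword test client-side. The helper below is hypothetical; it assumes Tweet dicts whose geo field, when present, holds [lat, lng] coordinates, and a bounding box in the (min_lng, min_lat, max_lng, max_lat) order the Streaming API's locations parameter uses:

```python
def matches(tweet, keywords, bbox):
    """Client-side check on a location-filtered Tweet: does it mention
    one of our keywords, and do its coordinates (if any) fall in bbox?
    """
    text = tweet.get("text", "").lower()
    if not any(k.lower() in text for k in keywords):
        return False
    coords = (tweet.get("geo") or {}).get("coordinates")
    if not coords:
        return True  # no point geotag; trust the server-side location filter
    lat, lng = coords  # the geo field is [latitude, longitude]
    min_lng, min_lat, max_lng, max_lat = bbox
    return min_lng <= lng <= max_lng and min_lat <= lat <= max_lat

tweet = {"text": "Big earthquake here", "geo": {"coordinates": [35.7, 139.7]}}
print(matches(tweet, ["earthquake"], (138.0, 34.0, 141.0, 37.0)))  # True
```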


Procedural City Modeling

Esri’s CityEngine has brought procedural (rule-based) modeling to the urban scale. Built on software originally developed for the film industry (yet again: see Maya) to create and simulate vast cityscapes, CityEngine allows the designer to deploy city form and organization through the generation of rule sets and a variety of base geometries. This opens up the very real possibility of speculative city design at a completely new scale and scope.

CityEngine also allows for the integration of 2D GIS data (shapefiles, geodatabases, etc.) into a 3D modeling environment. On-the-fly updating of GIS data through the CityEngine platform is also possible, as well as output to an interactive web format. This provides a rich platform for running scenarios in the city: working from real-world, real-time data, comprehensive rule sets, intuitive design, interactive commenting and reporting, etc. These rule sets have the potential to encompass a wide range of existing and speculative urban planning paradigms. Esri has already developed a number of schemas, including the use of “Urban Transects,” a model developed by New Urbanist Andrés Duany:

YouTube tutorials channel:



Links (free sign-in required):

Overview Video Workshop

Tutorial Gallery

Essential Skills Tutorial

Basic Tutorial on Integrating GIS data into CityEngine