Visualizing 100 Years of Earthquakes

My last post was about the 1960 Chile megathrust earthquake, and how much energy it released (about 1/3 of all seismic energy on earth over the last 100 years). I used data from USGS on all earthquakes greater than magnitude 6 from 1915-2015. Since I had this nice dataset (about 10,500 quakes), I could not resist playing around in CartoDB to make some nice visualizations.

This is an animated map of all earthquakes since 1915 using the Torque function in CartoDB. I know this has been done many times, but it makes such a striking image it’s hard to resist. If you watch closely you’ll notice that the earthquakes seem to occur more frequently towards the end of the time lapse (starting in the 1960s). That’s because seismologists got better at measuring and recording earthquakes, not because the quakes actually became more frequent.

This is a heatmap of all quakes in the dataset. The Pacific ring of fire (the arcs of subduction zones encircling much of the Pacific Ocean) dominates the global pattern. The mid-ocean spreading centers are also visible, but not as pronounced as the ring of fire. There are fewer big earthquakes in the extensional spreading centers than in the compressional subduction zones. There is also a broad zone of earthquake activity that stretches from Italy and Greece through Asia Minor, Iran, Central Asia, the Himalayas, and into China. This is a huge zone of compression caused by the African, Indian, and other smaller plates colliding with Eurasia.

This map shows earthquake depth, with deep earthquakes in red, intermediate depth in orange, and shallow in yellow. Plotting earthquake depth on a map illustrates the geometry of subduction zones. For example, in South America, the ocean crust of the Nazca plate (under the Pacific Ocean) is subducting under the South American plate. As the Nazca plate plunges eastward at an angle, the earthquakes produced get deeper with distance to the east.

You can pan and zoom right in the embedded maps if you are keen to explore. You can also make the maps full screen using the button on the upper left.

The 1960 Chile Earthquake Released Almost a Third of All Global Seismic Energy in the Last 100 Years

I just saw a trailer for the movie San Andreas. It looks preposterous, but I love geology disaster movies, so I'll probably see it. In the film, a series of earthquakes destroys California, culminating in a giant magnitude 9.5 quake. Fortunately the Rock is on scene to help save the day.

The largest earthquake ever recorded in real life struck central Chile on May 22, 1960. With a magnitude of 9.6 (some estimates say 9.5), this was a truly massive quake, more than twice as powerful as the next largest (Alaska 1964), and 500 times more powerful than the April 2015 Nepal quake. The seismic energy released by the 1960 Chile quake was equal to about 20,000 Hiroshima atomic bombs. Thousands were killed. It also triggered a tsunami that traveled 17,000 km across the Pacific Ocean and killed hundreds in Japan.

But I think the most striking thing about this quake is that it accounts for about 30% of the total seismic energy released on earth during the last 100 years. To illustrate this, I calculated the seismic moment (a measure of the energy released by an earthquake) of all earthquakes greater than magnitude 6 and plotted the global cumulative seismic moment over the last 100 years.

Global Cumulative Seismic Moment 1915-2015

Click for interactive version

This plot clearly shows how the 1960 Chile quake (and to a lesser extent the 1964 Alaska event) dominates the last 100 years in terms of total energy released. This is easy to underestimate because the earthquake magnitude scale is logarithmic: a magnitude 9.6 releases about twice as much energy as a 9.4, and about 250 times as much as an 8.0.

Technical notes: To make this plot I downloaded data from the USGS archive on all earthquakes greater than magnitude 6 from 1915-2015. There are about 10,500 of them.

I calculated the seismic moment for each quake relative to a magnitude 6 (the smallest in the database) using

\Delta M_{0} = 10^{\frac{3}{2}(m_{1}-m_{2})}

where m1 is the magnitude of each quake and m2 = 6.

So a mag 9.6 is about 250,000 times more powerful than a mag 6.0. (Note that this refers to energy released, not necessarily ground shaking, which is influenced by many factors, such as earthquake depth).
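As a sanity check, the formula is easy to evaluate directly. This is a quick Python sketch (the original analysis was done in R; the function name here is mine):

```python
def relative_moment(m1, m2=6.0):
    """Seismic moment of a magnitude-m1 quake relative to a magnitude-m2 quake."""
    return 10 ** (1.5 * (m1 - m2))

print(relative_moment(9.6))        # ~250,000x a magnitude 6.0
print(relative_moment(9.6, 9.4))   # ~2x a 9.4
print(relative_moment(9.6, 8.0))   # ~250x an 8.0
```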

Then I summed all the relative moments, normalized to 1, and plotted the cumulative seismic moment over the time period.
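The steps above (relative moments, sum, normalize, cumulative total) can be sketched in a few lines. This is a Python illustration rather than the original R code, and the four quakes here are made-up stand-ins for the ~10,500-event USGS catalog:

```python
from itertools import accumulate

# Hypothetical stand-in for the USGS catalog: (year, magnitude) pairs
quakes = sorted([(1950, 7.1), (1964, 9.2), (1960, 9.6), (2011, 9.1)])

# Moment of each quake relative to magnitude 6 (the smallest in the database)
rel_moment = [10 ** (1.5 * (m - 6.0)) for _, m in quakes]

# Running total, normalized so the final value is 1
total = sum(rel_moment)
cum = [c / total for c in accumulate(rel_moment)]

for (year, mag), c in zip(quakes, cum):
    print(year, mag, round(c, 3))
```

Even in this toy catalog, the 1960 event accounts for most of the cumulative moment on its own, which is the same effect the real plot shows.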

A few caveats. First, the quality of the magnitude measurements has improved over time, so that the data from the earlier part of the 20th century is not as reliable as the more current data.

Second, this analysis only looks at earthquakes larger than magnitude 6.0. Of course there are many, many smaller earthquakes. However, the cumulative amount of seismic energy released by these smaller quakes is very small compared to that of the larger ones (again, remember the logarithmic scale).
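One way to see why the sub-magnitude-6 quakes can be safely ignored: under the Gutenberg-Richter relation (assuming the commonly used b-value of 1, which is a textbook assumption, not something fitted to this dataset), each step down in magnitude brings roughly 10 times as many quakes, but each quake releases about 31.6 times less energy, so the total energy per magnitude band drops by a factor of about 3.16:

```python
# Energy per magnitude band, relative to the magnitude-6 band, assuming
# Gutenberg-Richter with b = 1: count ~ 10^(6 - m), energy each ~ 10^(1.5(m - 6))
band_totals = []
for m in [6, 5, 4, 3]:
    count = 10 ** (6 - m)                # relative number of quakes in the band
    energy_each = 10 ** (1.5 * (m - 6))  # energy relative to a magnitude 6
    band_totals.append(count * energy_each)

print(band_totals)  # each band contributes ~3.16x less than the one above
```

Summing the geometric series, everything below magnitude 6 combined contributes well under half of what the magnitude-6 band alone does.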

Third, the magnitudes listed in the USGS archive are calculated in different ways. The majority are moment magnitude or weighted moment magnitude, which is what the equation above is meant for. Other magnitude measurements, such as surface wave magnitude, relate to total energy release in slightly different ways. This may introduce some inaccuracies; however, they will be small relative to the total energy released.

If any seismologists would like to weigh in, I would be most grateful.

More information on calculating magnitude and seismic moment here and here.

Data and R code here. Graph made with

These Visualizations Will Spice Up Your Football Viewing this Weekend

Millions of Americans and a few dozen people from other regions of the globe will sit down this weekend to watch the NFC and AFC Championship games. Both games should be pretty good, but no matter how interesting they are, you’ll still need something to do during the commercials besides go for chips and beer and bathroom breaks. I’ll share with you two companions that I plan to have with me during the games. And they both involve attractive visualizations.

The first is the New York Times 4th Down Bot. This is a web site that compares every 4th down situation in the game with a model developed by Brian Burke of The Bot will then tell you whether the coach should choose to go for it, punt, or kick a field goal. The model was built from 10 years' worth of football statistics and calculates how each decision impacts the number of expected points for that team. The idea is that coaches should be trying to maximize expected points (how many points they score minus how many points the other team scores) when they make their 4th down decision. This sounds incredibly obvious, but according to the 4th Down Bot, coaches are much more conservative than the model would predict.

For example, look at the graphic below. For each position on the field and 4th down distance to go, the graphic shows what decision would maximize expected points. If you are on the opponent's 20-yard line and it's 4th and 15, you should kick the field goal. So far so good. But look at what the model recommends for 4th and 1 at your own 11-yard line. It says you should go for it! I don't think there's ever been an NFL coach who's gone for it in that situation, unless it's very late in the game and his team is behind. You can see how much more conservative actual coaches are by looking at the right side of the graphic.

4th down

Click on the image to view the interactive version and learn more about the model used to develop it

One explanation that’s commonly given for this discrepancy is that coaches are not simply trying to maximize the chances of winning. They are also risk averse and fear making a controversial decision to go for it, which, if it fails, would incite the rage of the fans and media. There is something to this, but I don’t think it can explain the whole phenomenon. You would think that a maverick coach who starts going for it on 4th and 1 deep in his own territory would eventually start winning more games, and other coaches would feel safer and start copying him.

So I don’t know why coaches seem to play more conservatively than models would suggest they should. But as a fan I can say that intuitively I do think my team should go for it more often on 4th.

It's fun to watch the game, and when a 4th down comes up, pretend you're the coach and decide what to do. Then check what the Bot says. You can follow it on Twitter at @NYT4thDownBot.

The second tool is seismic analysis of the vibrations caused by the crowd at the Seattle Seahawks stadium. The Pacific Northwest Seismic Network installed three seismometers under the stadium, which is legendary for its crowd noise. They are planning to make near real-time seismographs available during the NFC Championship game, so you can follow the shaking as it happens. If you're a Seahawks fan, but you get too nervous to watch the game, you can just wait until you see a big spike in the seismometer, and then turn on the game to watch the replay.

We know that it is possible to pick up seismic waves produced by the roar of the crowd in Seattle because of the famous Beast Quake. This event was measured during Marshawn Lynch’s ridiculous touchdown run against the Saints in the 2011 playoffs. Here it is:

beast quake


And here’s the run: