Unlocking the secrets of great earthquakes with deep learning

Great earthquakes, with magnitude ≥ 8.0, can unleash forces equivalent to detonating several trillion kilograms of explosives, with impacts on societies and economies that can be felt for decades.

Japan’s magnitude-9.0 earthquake in 2011 was followed by a huge tsunami and triggered the Fukushima nuclear meltdown.

Rescue teams search for missing people in Natori, Japan, which was devastated by a great earthquake in 2011.
Source: RIA Novosti archive, image #882887. Originally taken by Iliya Pitalev / Илья Питалев.

Thankfully, these ‘great earthquakes’ are also exceedingly rare: of the 270,000 earthquakes with magnitude ≥ 4.5 over the past 300 years, only 100 were catalogued as great earthquakes by the United States Geological Survey (USGS).

However, this rarity means earth scientists don’t have enough data to fully understand what causes these gigantic geological events, so there is as yet no way to predict them.

One theory is that great earthquakes do not behave like scaled-up versions of smaller earthquakes, but are entirely different beasts, shaped by the geophysical features of their surroundings.

Creating a model manually to test this theory is hugely complex. We know most great earthquakes occur at the ‘subduction zone’—the place where two tectonic plates meet. But there are many characteristics, such as velocity, curvature and thickness, that could contribute to a region’s capacity for great earthquakes.

Deep learning allows us to overcome this hurdle.

Over the last year I have had the pleasure of supervising two internship teams from Monash DeepNeuron who took on the challenge, in collaboration with researchers from the Monash School of Earth, Atmosphere and Environment (Monash EAE) and the Monash Data Science and AI platform (DSAI).

Our goal was to train a series of models to accurately infer whether a region could produce a great earthquake, based purely on its geophysical characteristics.

We trained a set of artificial neural network models that analyse over 200 geophysical features, using historical data about past earthquakes from the USGS catalogue. This is far more characteristics than is practical to analyse without machine learning, allowing for more complex analysis.
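To give a flavour of this setup, here is a minimal sketch of a binary classifier over 200 features in PyTorch (the library the project used). The layer sizes, feature count, and synthetic stand-in data are illustrative assumptions, not the team's actual architecture or inputs:

```python
import torch
from torch import nn

N_FEATURES = 200  # illustrative: one value per geophysical characteristic

# A small feed-forward network producing one logit:
# "can this region produce a great earthquake?"
model = nn.Sequential(
    nn.Linear(N_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy on the raw logit
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in data; the real inputs were derived from the USGS catalogue.
X = torch.randn(32, N_FEATURES)
y = torch.randint(0, 2, (32, 1)).float()

for _ in range(5):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

probs = torch.sigmoid(model(X))  # per-region probabilities in [0, 1]
```

In practice each row would hold a region's measured characteristics (velocity, curvature, thickness, and so on) rather than random numbers.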

After much hard work by the team, the final models achieved validation accuracies of up to 99.2%.

The Monash EAE earth science researchers on our team will now analyse these models, searching for the patterns of features the models most associate with great earthquakes. This may help confirm the theory that great earthquakes differ fundamentally from their lesser counterparts.

The Ring of Fire is known for its volcanoes, but also for its trenches. The historical data we used to train the models came from the following regions: Alaska-Aleutian, Central America, Izu-Bonin-Mariana, Tonga-Kermadec, Kamchatka-Kuriles-Japan, Ryukyu-Nankai, South America, and Sumatra/Southeast Asia. Source: US Geological Survey

Working with researchers from other disciplines was a new experience for many of us on the team. 

“Working on this project has allowed me to collaborate with people in fields I never expected to, and has increased my understanding of how machine learning can be used to solve unconventional problems”, says Monash DeepNeuron intern Lachlan Burne.

Setting up the data was probably the hardest part of the project, and we experimented with different datasets and methods of sampling the data until we were satisfied. 
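One reason sampling needed experimentation: with only around 100 great earthquakes among roughly 270,000 catalogued events, the classes are extremely imbalanced. A hypothetical sketch of one common remedy, down-sampling the majority class with pandas (column names and the tiny example catalogue are made up for illustration; this is not the team's exact method):

```python
import pandas as pd

# Toy stand-in for the USGS catalogue; real data has ~270,000 rows.
catalogue = pd.DataFrame({
    "magnitude": [8.2, 9.0, 5.1, 4.6, 6.3, 8.5, 4.9, 5.7],
    "region": ["Sumatra", "Japan", "Tonga", "Alaska",
               "Chile", "Kamchatka", "Ryukyu", "Mariana"],
})
catalogue["great"] = catalogue["magnitude"] >= 8.0

great = catalogue[catalogue["great"]]
# Down-sample the much larger non-great class to match, fixing the
# random seed so each experiment sees the same split.
others = catalogue[~catalogue["great"]].sample(n=len(great), random_state=0)

# Shuffle the combined, now-balanced training set.
balanced = pd.concat([great, others]).sample(frac=1, random_state=0)
```

Alternatives such as over-sampling the rare class or weighting the loss function trade off differently, which is part of why we tried several approaches.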

“During the project, we had many fun brainstorming sessions where a lot of knowledge exchange happened between the teams”, says Thyagarajulu Gollapalli, a Monash EAE PhD student. 

“In particular, extracting inputs to the models from the geophysical datasets is challenging. In future, we would like to improve these models to understand the dynamics of the earthquakes by applying explainable AI techniques.”

This work was a collaboration between the Data Science and AI platform (Mitchell Hargreaves), Monash DeepNeuron (Darren Tan, Ravindu Nanayakkara, Alexander Gallon, Lachlan Burne), and the Monash School of Earth, Atmosphere and Environment (Fabio Capitanio, Juan Carlos Graciosa, Thyagarajulu Gollapalli). This work is enabled by the ongoing collaboration between the Monash Data Science and AI Platform and Monash DeepNeuron (Komathy Padmanabhan, Mitchell Hargreaves and Will McLean).

Analysis and modelling were performed using Python, Pandas, PyTorch and Weights & Biases on local machines and Monash’s MASSIVE high-performance computing infrastructure.
