Morgan T. Page

Research Geophysicist

U.S. Geological Survey

Fault Roughness at Seismogenic Depths

We analyze over 18,000 earthquakes in the 2016-2018 Cahuilla, California swarm and, for the first time, use these high-resolution earthquake locations to map the roughness across an active fault surface at depth. We measure roughness at multiple scales and find that the fault is self-affine with a Hurst exponent of 0.52, consistent with a Brownian surface. In addition, we find that the fault is 50% rougher in the slip-perpendicular direction than parallel to slip, consistent with evidence from exhumed faults.

Cochran, Elizabeth S., Morgan T. Page, Nicholas J. van der Elst, Zachary E. Ross, and Daniel T. Trugman, Fault Roughness at Seismogenic Depths and Links to Earthquake Behavior, The Seismic Record (2023), 3 (1): 37-47. doi: 10.1785/0320220043
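
To make the measurement concrete, here is a minimal sketch of one standard way to estimate a Hurst exponent from a one-dimensional profile: compute the RMS deviation from a local linear trend at several window lengths and fit the log-log slope. The synthetic random-walk profile below is illustrative and is not the paper's data or exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fault profile: a 1-D random walk is self-affine with
# Hurst exponent H = 0.5 (a Brownian profile).
n = 4096
profile = np.cumsum(rng.normal(size=n))

# RMS deviation from a local linear trend as a function of window length L;
# for a self-affine profile, rms(L) ~ L^H.
window_lengths = [16, 32, 64, 128, 256, 512]
rms = []
for L in window_lengths:
    devs = []
    for start in range(0, n - L + 1, L):
        w = profile[start:start + L]
        x = np.arange(L)
        trend = np.polyval(np.polyfit(x, w, 1), x)  # detrend each window
        devs.append(np.sqrt(np.mean((w - trend) ** 2)))
    rms.append(np.mean(devs))

# The log-log slope of rms vs. window length estimates the Hurst exponent.
H, _ = np.polyfit(np.log(window_lengths), np.log(rms), 1)
print(f"Estimated Hurst exponent: {H:.2f}")  # ~0.5 for a Brownian profile
```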

Background Earthquakes Inform Future Aftershock Locations

We find that aftershocks preferentially occur in previously active areas. However, over the entire aftershock sequence, more active areas do not have more aftershocks in total. We reconcile these two seemingly disparate observations within the context of rate-and-state friction.

Page, Morgan T., and Nicholas J. van der Elst, Aftershocks Preferentially Occur in Previously Active Areas, The Seismic Record (2022), 2 (2): 100-106. doi: 10.1785/0320220005
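
For intuition, here is a minimal sketch of the Dieterich (1994) rate-and-state response of seismicity rate to a sudden stress step, the kind of calculation that underlies this reconciliation. All parameter values are illustrative placeholders.

```python
import numpy as np

# Dieterich (1994) seismicity-rate response to a sudden stress step:
# R(t) = r / (1 + (exp(-dtau/(a*sigma)) - 1) * exp(-t/t_a)),  t_a = a*sigma/sdot.
r = 1.0          # background rate (events/yr)
a_sigma = 0.2    # a*sigma (MPa); illustrative value
sdot = 0.01      # background stressing rate (MPa/yr)
dtau = 0.5       # stress step (MPa)
t_a = a_sigma / sdot   # aftershock duration (yr)

for t in (0.1, 1.0, 10.0, 100.0):
    R = r / (1.0 + (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a))
    print(f"t = {t:6.1f} yr: rate = {R:6.2f} x background")
# The rate jumps by exp(dtau/(a*sigma)) immediately after the step and
# relaxes back to r over t_a -- recently stressed (recently active) patches
# are transiently more productive, even though integrated counts need not differ.
```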

More, Not Less, Connectivity Needed in PSHA

Did the third Uniform California Earthquake Rupture Forecast (UCERF3) go overboard with multifault ruptures? Contrary to the claims of Schwartz (2018), the UCERF3 rupture-length distribution matches empirical data. Only 0.47% (not 32%, as claimed by Schwartz, 2018) of UCERF3 surface-rupturing earthquakes, by rate, are >500 km. In fact, the UCERF3 model could be improved by adding more connectivity to the fault system, which would reduce model misfits to data, potentially improve aftershock forecasts, and reduce model sensitivity to inadequacies and unknowns in the modeled fault system.

Page, Morgan T., More Fault Connectivity Is Needed in Seismic Hazard Analysis, Bull. Seism. Soc. Am. (2020), doi:10.1785/0120200119
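
The phrase "by rate" matters for statistics like the 0.47% figure: each rupture is weighted by how often it occurs, not counted once per model entry. A toy sketch with hypothetical rupture lengths and rates (not the actual UCERF3 rupture set):

```python
import numpy as np

# Hypothetical rupture set: lengths (km) and long-term rates (events/yr).
lengths_km = np.array([40.0, 120.0, 310.0, 480.0, 650.0])
rates_per_yr = np.array([2e-3, 8e-4, 1e-4, 2e-5, 1e-6])

# "By rate" means each rupture is weighted by its occurrence rate.
frac_by_rate = rates_per_yr[lengths_km > 500.0].sum() / rates_per_yr.sum()
print(f"Fraction of rupture rate on >500 km ruptures: {100 * frac_by_rate:.2f}%")
```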

Peak Ground Displacement Saturates Exactly When You'd Expect

Do large earthquakes begin more impulsively than small earthquakes? It appears not. We analyze 140,000 waveforms from M4.5 to M9 earthquakes near Japan and find that once duration variability is accounted for, peak ground displacement saturation is well fit by a non-deterministic model of rupture growth. Our model of saturation can be used in a Bayesian framework to estimate posterior uncertainties of magnitude that take into account all available information in real time, including the prior information of Gutenberg-Richter magnitude scaling.

Trugman, Daniel T., Morgan T. Page, Sarah E. Minson, and Elizabeth S. Cochran (2019), Peak Ground Displacement Saturates Exactly When Expected: Implications for Earthquake Early Warning, JGR - Solid Earth 124, doi: 10.1029/2018JB017093
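
A minimal sketch of the Bayesian logic: combine a Gutenberg-Richter prior on magnitude with a likelihood for an observed peak ground displacement. The scaling relation and scatter below are hypothetical placeholders, not the paper's calibrated values.

```python
import numpy as np

# Grid of candidate magnitudes.
M = np.arange(4.0, 9.01, 0.01)

# Gutenberg-Richter prior: p(M) ~ 10^(-b*M), with b = 1.
b = 1.0
log_prior = -b * M * np.log(10)

def log_likelihood(log_pd_obs, M, sigma=0.4):
    """Gaussian misfit of observed log10 peak displacement against a
    hypothetical linear scaling relation (placeholder coefficients)."""
    log_pd_pred = -5.0 + 0.8 * M
    return -0.5 * ((log_pd_obs - log_pd_pred) / sigma) ** 2

# Posterior = prior x likelihood, normalized on the magnitude grid.
log_post = log_prior + log_likelihood(log_pd_obs=0.2, M=M)
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, M)

print(f"Posterior mean magnitude: {np.trapz(M * post, M):.2f}")
```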

Faulty Intuition about Earthquakes near Major Faults

When it comes to smaller earthquakes, are major faults special? Previous work showed that earthquakes near the faults in the SCEC Community Fault Model (CFM) have a lower b-value than earthquakes elsewhere in Southern California. However, the correlation between earthquake size and proximity to major faults is not present in data collected after this version of the CFM was completed. This indicates that, to some degree, the CFM is overtuned to past seismicity, with some structures related to transient features in seismicity rather than persistent geologic features. We also search for differences in aftershock productivity and foreshock statistics near faults and find that they are also “fault-tolerant,” that is, insensitive to distance from major faults. Our results suggest that the fault system in Southern California is highly connected, since the chance of an earthquake nucleating on or near a major fault versus on a secondary structure is independent of its final size.

Page, Morgan T. and Nicholas J. van der Elst (2018), Fault-tolerant b-Values and Aftershock Productivity, JGR - Solid Earth 123, doi: 10.1029/2018JB016445
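
For reference, a sketch of the standard maximum-likelihood b-value estimator (Aki, 1965, with Utsu's correction for binned magnitudes) that underlies comparisons like these, demonstrated on a synthetic Gutenberg-Richter catalog; this is not necessarily the exact estimator used in the paper.

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965) with Utsu's correction
    for magnitudes binned at width dm."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic Gutenberg-Richter catalog: M - Mc is exponential with
# scale log10(e)/b, so b = 1 here.
rng = np.random.default_rng(42)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=5000)
mags = np.round(mags / 0.1) * 0.1   # bin to 0.1 magnitude units
print(f"b = {b_value_mle(mags, m_c=2.0):.2f}")  # ~1.0
```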

Testing Earthquake Forecast Models

Mathematician Alan Turing devised his famous test to determine whether a machine could successfully imitate the language behavior of a human. Here, we present a series of tests to determine whether a model -- in this case, UCERF3 -- can produce synthetic earthquake catalogs that successfully mimic the statistical behavior of the observed earthquake catalog in California.

Page, Morgan T. and Nicholas J. van der Elst (2018), Turing-Style Tests for UCERF3 Synthetic Catalogs, BSSA 108, 2, doi: 10.1785/0120170223
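
Schematically, each such test asks whether a statistic computed from a synthetic catalog is distinguishable from the same statistic computed from the observed catalog. A toy version using a two-sample Kolmogorov-Smirnov test on interevent times, with synthetic stand-in data rather than the paper's actual test statistics:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Stand-ins for an observed catalog and one synthetic catalog:
# interevent times in days (illustrative distributions only).
observed_dt = rng.exponential(scale=10.0, size=1000)
synthetic_dt = rng.exponential(scale=10.0, size=1000)

# Turing-style question: can this statistic tell the catalogs apart?
stat, p = ks_2samp(observed_dt, synthetic_dt)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
# A small p-value would flag a statistic on which the model fails
# to imitate the real catalog.
```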

Aftershock Forecasting

Following a large earthquake, seismic hazard can be orders of magnitude higher than the long-term average as a result of aftershock triggering. Because of this heightened hazard, emergency managers and the public demand rapid, authoritative, and reliable aftershock forecasts. We estimate aftershock parameters for sequences within global tectonic regions, taking into account short-term aftershock incompleteness and intersequence variability.

Page, Morgan T., Nicholas van der Elst, Jeanne Hardebeck, Karen Felzer, and Andrew J. Michael (2016), Three Ingredients for Improved Global Aftershock Forecasts: Tectonic Region, Time-Dependent Catalog Incompleteness, and Intersequence Variability, BSSA 106, 5. doi: 10.1785/0120160073 and Electronic Supplement
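
These forecasts combine Omori-Utsu decay with Gutenberg-Richter scaling, as in the Reasenberg-Jones (1989) rate model. A minimal sketch with generic placeholder parameters rather than the regional values estimated in the paper:

```python
import numpy as np

def rj_rate(t_days, m, mainshock_m, a=-1.8, b=1.0, c=0.05, p=1.08):
    """Reasenberg-Jones (1989) rate of M >= m aftershocks at time t after
    a mainshock: 10^(a + b*(Mm - m)) / (t + c)^p. Placeholder parameters."""
    return 10.0 ** (a + b * (mainshock_m - m)) * (t_days + c) ** (-p)

# Expected number of M >= 5 aftershocks in the week after an M7 mainshock,
# by numerically integrating the rate over time.
t = np.linspace(0.01, 7.0, 10000)
n_expected = np.trapz(rj_rate(t, m=5.0, mainshock_m=7.0), t)
print(f"Expected M>=5 aftershocks in 7 days: {n_expected:.2f}")
```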

In the paper below, we take a different approach to aftershock forecasting -- one that is empirical and nonparametric. To predict aftershock probabilities in an ongoing sequence, we search for past sequences that are similar to what has been observed so far. This simple "similarity" method returns realistic uncertainty bounds and can, in principle, predict deviations from Omori behavior.

van der Elst, Nicholas J. and Morgan T. Page, (2017), Nonparametric Aftershock Forecasts Based on Similar Sequences in the Past, SRL 89, 1. doi: 10.1785/0220170155
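
A toy sketch of the idea: match an ongoing sequence to past sequences with comparable early activity, then report the empirical distribution of what those sequences did next. The data below are synthetic placeholders, and the real method's similarity measure is more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical library of past sequences: aftershock counts on day 1
# and counts over the following week (synthetic stand-in data).
day1_counts = rng.poisson(20, size=500)
week_counts = day1_counts * 3 + rng.poisson(10, size=500)

# Ongoing sequence: 18 aftershocks observed on day 1.
observed_day1 = 18

# "Similarity" forecast: pool the outcomes of the most similar past
# sequences and report the empirical distribution of what came next.
similar = np.abs(day1_counts - observed_day1) <= 2
lo, med, hi = np.percentile(week_counts[similar], [5, 50, 95])
print(f"Next-week forecast: {med:.0f} events ({lo:.0f}-{hi:.0f}, 90% range)")
```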

Potentially Induced Earthquakes in Los Angeles

In the early twentieth century, Los Angeles was one of the leading oil producers in the world, accounting for nearly 20 percent of the total global production by 1923. We find evidence that several damaging earthquakes – the 1920 Inglewood, 1929 Whittier, 1930 Santa Monica, and 1933 Long Beach earthquakes – may have been caused by oil and gas production.

Hough, Susan E. and Morgan Page (2016), Potentially Induced Earthquakes during the Early Twentieth Century in the Los Angeles Basin, BSSA 106, 6. doi: 10.1785/0120160157

How Large can Induced Earthquakes Get?

Deterministic limits on induced earthquake magnitudes have been proposed based on the size of the reservoir or the volume of fluid injected. However, if induced earthquakes occur on tectonic faults oriented favorably with respect to the tectonic stress field, then they may be limited only by the regional tectonics and connectivity of the fault network. In this study, we show that the largest magnitudes observed at fluid injection sites are consistent with the sampling statistics of the Gutenberg-Richter distribution for tectonic earthquakes, assuming no upper magnitude bound.

van der Elst, Nicholas J., Morgan T. Page, Deborah A. Weiser, Thomas H. W. Goebel, and S. Mehran Hosseini (2016), Induced earthquake magnitudes are as large as (statistically) expected, JGR - Solid Earth 121, 6. doi:10.1002/2016JB012818
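
The relevant sampling statistics follow directly from the Gutenberg-Richter distribution: the largest of N independent events has cumulative distribution F(m)^N. A minimal sketch with illustrative numbers:

```python
def p_max_mag_below(m, n_events, m_c, b=1.0):
    """Probability that the largest of n_events earthquakes drawn from an
    unbounded Gutenberg-Richter distribution (completeness m_c) is <= m."""
    cdf = 1.0 - 10.0 ** (-b * (m - m_c))
    return cdf ** n_events

# Example: with 1000 induced events above M0, how surprising is it that
# none exceeded M4? (Illustrative numbers, not a specific site.)
print(f"P(Mmax <= 4 | N=1000, Mc=0) = {p_max_mag_below(4.0, 1000, 0.0):.3f}")
```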

Induced Earthquakes in Oklahoma: Nothing New?

Seismicity rates have increased sharply since 2009 in the central and eastern United States, with especially high rates of activity in the state of Oklahoma. Some industry representatives have pointed to previous clusters of seismicity in Oklahoma as evidence that the recent activity could be a naturally occurring phenomenon. However, the spatial and temporal correspondence of historical seismicity in Oklahoma with waste-water injection wells provides compelling evidence that these earthquakes, like recent seismicity, were induced by oil production activities.

Hough, Susan E., and Morgan Page (2015), A Century of Induced Earthquakes in Oklahoma?, BSSA 105, 6. doi: 10.1785/0120150109

Hough, Susan E., and Morgan Page (2015), The Petroleum Geologist and the Insurance Policy, Seis. Res. Lett. 87, 1. doi:10.1785/0220150218

The Predictive Power of Foreshocks

The standard model for the origin of foreshocks is that they are earthquakes that trigger aftershocks larger than themselves (Reasenberg and Jones, 1989). This can be formally expressed in terms of a cascade model, in which aftershock magnitudes follow the Gutenberg-Richter magnitude-frequency distribution, regardless of the size of the triggering earthquake, and aftershock timing and productivity follow Omori-Utsu scaling. An alternative hypothesis is that foreshocks are triggered incidentally by a nucleation process, such as pre-slip, that scales with mainshock size. If this were the case, foreshocks would potentially have predictive power for the mainshock magnitude. Recently, Bouchon et al. (2013) claimed that the acceleration seen in stacked foreshock sequences is higher prior to M≥6.5 interplate mainshocks than prior to smaller mainshocks. Our re-analysis fails to support the statistical significance of their results.

Felzer, Karen R., Morgan T. Page, and Andrew J. Michael (2015), Artificial seismic acceleration, Nature Geoscience 8, 82-83. doi: 10.1038/ngeo2358
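
Under the cascade model, the probability that an earthquake turns out to be a foreshock is simply the probability that one of its Gutenberg-Richter-distributed aftershocks exceeds its own magnitude. A toy Monte Carlo sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Cascade-model view: aftershock magnitudes follow Gutenberg-Richter
# regardless of the trigger's size, so a "foreshock" is just a trigger
# whose largest aftershock exceeds it.
b, m_c = 1.0, 2.0
trigger_m = 4.0
n_aftershocks = 50   # illustrative productivity per trigger
# M - Mc is exponential with scale log10(e)/b under Gutenberg-Richter.
mags = m_c + rng.exponential(np.log10(np.e) / b, size=(100000, n_aftershocks))
p_foreshock = np.mean(mags.max(axis=1) > trigger_m)
print(f"P(M{trigger_m} with {n_aftershocks} aftershocks is a foreshock) "
      f"= {p_foreshock:.3f}")
```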

Are Faults Gutenberg-Richter or Characteristic?

The Gutenberg-Richter (G-R) magnitude-frequency distribution is known to describe the distribution of earthquake sizes within large regions. There is controversy, however, as to whether this distribution also applies at the scale of individual faults. An alternative hypothesis, the characteristic earthquake hypothesis, posits that large earthquakes on major faults occur at a higher rate than a Gutenberg-Richter extrapolation from small events predicts (Wesnousky et al., 1983; Schwartz and Coppersmith, 1984). The primary evidence for such a scaling break is an apparent mismatch between instrumental and paleoseismic earthquake rates for several major fault zones. This mismatch, however, can also be explained as a rate change rather than a deviation from G-R statistics.
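
The quantitative crux is whether the large-earthquake rate on a fault exceeds the Gutenberg-Richter extrapolation from small events. A minimal sketch of that extrapolation, with illustrative numbers only:

```python
# Gutenberg-Richter extrapolation: predict the rate of large events on a
# fault from the rate of small ones. Numbers are illustrative only.
b = 1.0
rate_m3 = 0.5   # observed rate of M >= 3 events per year near the fault
rate_m7 = rate_m3 * 10.0 ** (-b * (7.0 - 3.0))
print(f"G-R extrapolated M>=7 rate: {rate_m7:.2e} per yr "
      f"(recurrence ~ {1.0 / rate_m7:.0f} yr)")
# The characteristic hypothesis posits that the true M >= 7 rate exceeds
# this extrapolation; comparing it with paleoseismic recurrence intervals
# is the crux of the debate.
```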

SSA Earthquake Debate Videos: Do Large Earthquakes on Faults Follow a Gutenberg-Richter or Characteristic Distribution?

Page, Morgan T., David Alderson, and John Doyle (2011), The Magnitude Distribution of Earthquakes near Southern California Faults, JGR - Solid Earth 116, B12309. doi:10.1029/2010JB007933

Page, Morgan and Karen Felzer (2015), Southern San Andreas Fault Seismicity is Consistent with the Gutenberg-Richter Magnitude-Frequency Distribution, BSSA 105, 4. doi:10.1785/0120140340

UCERF3

The 3rd Uniform California Earthquake Rupture Forecast (UCERF3) uses an inversion methodology to derive rupture rates consistent with fault slip rates, paleoseismic data, and regional seismicity rates. In this system-level approach, rupture rates, rather than being prescribed by experts as in past models, are derived directly from data. This new methodology enables the relaxation of fault segmentation and allows for the incorporation of multi-fault ruptures, which are needed to remove magnitude-distribution misfits that were present in the previous model, UCERF2.
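
A toy sketch of the inversion concept: solve a linear system A x = d for nonnegative rupture rates x, where each row of A encodes a data constraint. The tiny system and non-negative least-squares solver below are illustrative; UCERF3 itself solves a vastly larger system with simulated annealing.

```python
import numpy as np
from scipy.optimize import nnls

# Toy "grand inversion": rows of A encode data constraints on the
# nonnegative rupture rates x. In a real model the coefficients would
# involve per-rupture slip and magnitude; here they are unit placeholders.
A = np.array([
    [1.0, 1.0, 0.0],   # slip-rate constraint on section 1 (ruptures 1, 2 use it)
    [0.0, 1.0, 1.0],   # slip-rate constraint on section 2 (ruptures 2, 3 use it)
    [1.0, 1.0, 1.0],   # regional rate constraint on all ruptures
])
d = np.array([2e-3, 3e-3, 4e-3])   # target data values (illustrative)

rates, misfit = nnls(A, d)
print("Rupture rates (per yr):", rates, " misfit:", misfit)
```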

Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) - The Time-Independent Model

The UCERF3 Grand Inversion: Solving for the Long-Term Rate of Ruptures in a Fault System

UCERF3 Hazard Curves

UCERF3 Supplementary Material

Long-Term Time-Dependent Probabilities for the Third Uniform California Earthquake Rupture Forecast (UCERF3)

UCERF3 Fact Sheet

A Spatiotemporal Clustering Model for the Third Uniform California Earthquake Rupture Forecast (UCERF3‐ETAS): Toward an Operational Earthquake Forecast

A Synoptic View of the Third Uniform California Earthquake Rupture Forecast (UCERF3)

The New Madrid Seismic Zone

Some researchers have suggested that current seismicity in the New Madrid region could represent the tail end of a long-lived aftershock sequence following the 1811-1812 earthquakes. We examined historical and instrumental seismicity in the New Madrid region and reject this hypothesis: it is not consistent with Omori's Law. This agrees with recent work finding low but nonzero strain rates in the region (Frankel et al., 2012). Low strain accrual is also consistent with a reanalysis of historical intensity data, which suggests that the principal events in the 1811-1812 sequence had magnitudes around 7.0.

Page, Morgan T. and Susan E. Hough (2014), The New Madrid Seismic Zone: Not Dead Yet, Science 343, 6172. doi:10.1126/science.1248215

Hough, Susan E. and Morgan Page (2011), Toward a Consistent Model for Strain Accrual and Release for the New Madrid Seismic Zone, Central United States, JGR - Solid Earth 116, B03311. doi:10.1029/2010JB007783
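
The hypothesis test amounts to extrapolating Omori decay two centuries forward and comparing with observed rates, as in this minimal sketch with illustrative Omori parameters:

```python
# Omori extrapolation: if present-day New Madrid seismicity were the tail
# of an 1811-1812 aftershock sequence with rate ~ K / (t + c)^p, the rate
# ~200 years on is pinned down by the rate in the early decades.
K, c, p = 1000.0, 0.1, 1.0   # illustrative parameters (events/yr, yr, -)
for t in (10.0, 50.0, 200.0):
    print(f"t = {t:5.0f} yr: predicted rate = {K / (t + c) ** p:7.2f} events/yr")
# Testing the hypothesis asks whether observed rates follow this predicted
# decay; Page & Hough (2014) find that they do not.
```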

Uncertainty in Earthquake Source Inversions

I am interested in the resolving power of seismic data. Kinematic inversions are routinely used to image the rupture process at depth, but differences between various slip inversions make it clear that the uncertainties in these inversions can be quite large. Data fit is not necessarily a good measure of the error in the final slip model. Many quantities we would like to extract (for example, rupture area, maximum slip, and the size and location of asperities) do not appear to be robust features of earthquake source models. A better understanding of which features are robust will allow source inversions to be applied in ways commensurate with the information they actually provide.

Page, M. T., S. Custódio, R. J. Archuleta, and J. M. Carlson (2009), Constraining Earthquake Source Inversions with GPS Data 1: Resolution Based Removal of Artifacts, JGR - Solid Earth 114, B01314. doi:10.1029/2007JB005449

Source Inversion Validation (SIV) Project

Seismic Hazard Analysis

Rigorous methodology in Probabilistic Seismic Hazard Analysis (PSHA) requires fully accounting for model uncertainty. PSHA is characterized by deep uncertainty: not only is there parameter uncertainty in the values needed to estimate hazard, there is also model uncertainty stemming from incomplete knowledge of the mechanisms that generate hazard. The ability of logic trees, as currently implemented in PSHA, to correctly capture model uncertainty is limited. Logic trees are most easily implemented when a) the branches at each level of the logic tree are mutually exclusive, and b) the uncertainties sampled at each level of the logic tree are uncorrelated.

Page, M. T. and J. M. Carlson (2006), Methodologies for Earthquake Hazard Assessment: Model Uncertainty and the WGCEP-2002 Forecast, Bull. Seism. Soc. Am. 96, 5, doi: 10.1785/0120050195
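
A minimal sketch of logic-tree aggregation with hypothetical branch weights and exceedance rates, illustrating why conditions a) and b) matter for the weighted mean to be meaningful:

```python
import numpy as np

# Minimal logic-tree combination: each branch supplies an annual rate of
# exceeding some ground-motion level, and branch weights sum to one.
# (Hypothetical branches and values, for illustration only.)
branch_rates = np.array([1e-3, 4e-4, 2e-3])   # exceedance rates per branch
weights = np.array([0.5, 0.3, 0.2])

mean_rate = np.sum(weights * branch_rates)
print(f"Weighted-mean exceedance rate: {mean_rate:.2e} per yr")
# This weighted mean is only meaningful if the branches are mutually
# exclusive alternatives and their uncertainties are uncorrelated --
# exactly conditions a) and b) above.
```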

Dynamic Rupture Modeling

Earthquake ground motions are the result of a complex and heterogeneous faulting process. Dynamic forward simulations, which model the physics of the rupture process, can incorporate realistic friction mechanisms and resolve frequencies important for building response. While kinematic models provide a direct connection to data, dynamic models, being constrained to be physically realizable, can help us better constrain the rupture processes that occur at the source.

Page, M. T., E. M. Dunham, and J. M. Carlson (2005), Distinguishing Barriers and Asperities in Near-Source Ground Motion, JGR - Solid Earth 110, B11302, doi:10.1029/2005JB003736.