Editors’ Vox is a blog from the AGU Publications Department.
Scientists have been measuring earthquakes for hundreds of years, and as our instruments have advanced, so has our understanding of how and why the ground moves. A recent article published in Reviews of Geophysics explains how the big data revolution is advancing the field of seismology. We asked some of the authors to explain how seismic waves are measured, how measurement techniques have changed over time, and how big data is being collected and used to advance the science.
In a nutshell, what is seismology and why is it important?
Seismology is the science of vibrational waves (“seismic waves”) that travel through the Earth. Seismic waves produce ground motions that are recorded by seismographs. Recorded ground motion can provide important clues about both the sources of the waves (earthquakes, volcanoes, explosions, etc.) and the properties of the Earth through which they pass. Seismology provides tools for understanding the physics of earthquakes, monitoring natural hazards, and revealing the inner structure of the Earth.
How do seismometers work, and what key advances have there been since their development?
Accurately measuring ground motion is surprisingly difficult, because any instrument that records ground motion must itself move with the ground (otherwise it would have to hover in the air, where ground motion cannot be recorded directly). To meet this challenge, seismometers contain a mass on a spring that remains stationary (an “inertial mass”) and measure the movement of the instrument relative to that mass. Early seismometers were entirely mechanical, but it is difficult to design a mechanical system that keeps the inertial mass stationary over the full frequency range of ground motion.
An important advance was the use of electronic feedback to hold the mass in place, which made it possible to record a much wider range of frequencies. An ideal seismometer would accurately record a wide frequency band and a wide range of ground-motion amplitudes without going off scale. That is easier said than done, but seismometers are getting better every year.
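To make the inertial-mass idea concrete, here is a minimal sketch (our illustration, not from the article) that models a seismometer as a damped harmonic oscillator and evaluates how faithfully the motion of the mass relative to the frame tracks ground displacement at different frequencies. The 120-second natural period and the damping ratio are assumed, illustrative values:

```python
# Model the seismometer as a damped harmonic oscillator: above the natural
# frequency the response is flat, i.e., the inertial mass effectively stays
# still while the instrument frame moves around it with the ground.
import numpy as np

f0 = 1.0 / 120.0   # natural frequency, Hz (120 s broadband sensor, assumed)
zeta = 0.707       # damping ratio (a common near-critical choice, assumed)
w0 = 2 * np.pi * f0

def relative_response(f):
    """|relative mass displacement / ground displacement| at frequency f (Hz)."""
    w = 2 * np.pi * f
    return w**2 / np.sqrt((w0**2 - w**2)**2 + (2 * zeta * w0 * w)**2)

for f in [0.001, 0.01, 0.1, 1.0, 10.0]:
    print(f"{f:7.3f} Hz -> response {relative_response(f):.4f}")
```

Running this shows the response falling far below 1 at frequencies below the natural frequency and flattening to 1 above it, which is why extending the usable band downward was such an important engineering problem.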
What is the difference between passive seismology and exploration seismology?
Passive seismology is the recording of seismic waves produced by natural or existing sources, such as earthquakes. Passive seismologists typically deploy instruments for long periods of time to collect sufficient data from naturally occurring sources of seismic waves. Exploration seismologists, by contrast, use man-made sources such as explosions, air guns, and vibrating trucks to generate their own seismic waves. Because they control the timing and location of the sources, exploration seismologists typically deploy a large number of instruments for a short period of time. Exploration seismology is most widely used in the oil industry, but it can also be used for more general scientific purposes when high-resolution imaging is required.
How have advances in seismic techniques improved subsurface imaging?
Advances in seismic imaging technology have enabled seismologists to greatly improve the resolution of subsurface images. A particularly powerful technique for high-resolution imaging is called full waveform inversion (FWI). Rather than relying only on simplified measurements such as travel times, FWI uses the full seismic record for imaging, attempting to match the data and model predictions “wiggle by wiggle,” which yields much better image resolution. For this reason the method has been widely adopted by the exploration seismology community, and it is now becoming more popular in the passive seismology community as well.
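As a toy illustration of the “wiggle by wiggle” idea (a sketch with assumed parameters, not the authors’ method), the following compares an observed waveform against synthetics sample by sample and scans a single velocity parameter for the best fit. Real FWI minimizes the same kind of waveform misfit, but over millions of model parameters using adjoint-state gradients:

```python
import numpy as np

def ricker(t, t0, f=1.0):
    """Ricker wavelet centered at time t0 with peak frequency f (Hz)."""
    a = (np.pi * f * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

t = np.linspace(0.0, 4.0, 2001)   # time axis, s
dist = 6.0                        # source-receiver distance, km (assumed)
v_true = 3.0                      # "true" velocity, km/s (assumed)
observed = ricker(t, dist / v_true)

def misfit(v):
    """Sample-by-sample ("wiggle by wiggle") L2 misfit for a trial velocity."""
    return np.sum((observed - ricker(t, dist / v)) ** 2)

# One-parameter "inversion": scan trial velocities and keep the best fit.
v_grid = np.linspace(2.0, 4.0, 401)
v_best = v_grid[np.argmin([misfit(v) for v in v_grid])]
print(f"best-fitting velocity: {v_best:.2f} km/s (true: {v_true} km/s)")
```

The key point is that the misfit uses every sample of the trace, not just a picked arrival time, which is where FWI gains its resolution.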
Another key innovation in imaging is the use of persistent ambient noise sources, such as ocean waves, for subsurface imaging. This is especially useful for short-term deployments, where there is often not enough time to wait for natural sources such as earthquakes to occur.
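Here is a minimal sketch of the principle (an idealized single noise source and assumed parameters; real ambient-noise studies stack correlations of a diffuse noise field over months): cross-correlating long noise records from two stations recovers the travel time between them even though no earthquake ever occurs:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0            # sampling rate, Hz (assumed)
n = 20_000            # 200 s of continuous noise
delay = 230           # true inter-station travel time, samples (2.3 s)

source = rng.standard_normal(n + delay)   # persistent random noise source
rec_a = source[delay:]                    # station A hears each wiggle first
rec_b = source[:n] + 0.5 * rng.standard_normal(n)  # station B: delayed + local noise

# Cross-correlate the two long records; the peak lag is the travel time.
xcorr = np.correlate(rec_b, rec_a, mode="full")
lag = np.argmax(xcorr) - (n - 1)
print(f"recovered travel time: {lag / fs:.2f} s (true: {delay / fs:.2f} s)")
```

Even with the added incoherent noise at station B, the correlation peak stands out clearly, which is why the technique works so well for short deployments.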
What is “big data” and how is it used in seismology?
“Big data” is a term for data that comes in greater variety, in greater volumes, or at higher velocity, and that therefore requires different analysis methods and techniques than “small data.” In seismology, the amount of data obtained from individual experiments currently reaches hundreds of terabytes for passive seismology and petabytes for exploration seismology; for perspective, a typical laptop has less than 1 terabyte of disk storage. Data velocity is the speed at which data is acquired or analyzed. In seismology, a new measurement technique called distributed acoustic sensing (DAS) fills a 1-terabyte hard drive in about 14 hours. The variety of data used for seismic research is also increasing, and it is now common to combine seismic data with complementary data types such as GNSS, barometric pressure, and infrasound.
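A quick back-of-the-envelope check of that figure, using assumed but plausible DAS acquisition parameters (channel count, sample rate, and sample size are our illustrative choices, not from the article):

```python
# How fast does a DAS interrogator fill a 1 TB drive?
channels = 5000          # sensing channels along the fiber (assumed)
sample_rate = 1000       # samples per second per channel (assumed)
bytes_per_sample = 4     # 32-bit samples (assumed)

rate = channels * sample_rate * bytes_per_sample   # bytes per second
seconds_to_fill = 1e12 / rate                      # 1 TB = 1e12 bytes
print(f"data rate: {rate / 1e6:.0f} MB/s")
print(f"time to fill 1 TB: {seconds_to_fill / 3600:.1f} hours")
```

With these values the drive fills in roughly 14 hours, consistent with the figure quoted above.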
What are the main drivers of big data seismology?
There are three main drivers. First, innovations in sensing systems have enabled seismologists to conduct “big data” experiments. Second, new data-hungry algorithms, such as machine learning and deep neural networks, are enabling seismologists to scale up their data analysis and extract more meaning from large seismic datasets. Third, advances in computing have made it possible to apply these data-hungry algorithms to big data experiments. Parallel and distributed computing allow scientists to perform many computations simultaneously, often split across multiple machines, and cloud computing services give researchers access to on-demand computing power.
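As a minimal sketch of the parallel-computing idea (illustrative names and parameters, not from the paper), the following splits a long synthetic record into windows and runs a classic short-term/long-term average (STA/LTA) trigger on every window across all CPU cores:

```python
import numpy as np
from multiprocessing import Pool

def sta_lta(window, sta=50, lta=500):
    """Peak short-term/long-term average ratio of |amplitude| in a window."""
    x = np.abs(window)
    sta_avg = np.convolve(x, np.ones(sta) / sta, mode="valid")
    lta_avg = np.convolve(x, np.ones(lta) / lta, mode="valid")
    m = min(len(sta_avg), len(lta_avg))
    return float(np.max(sta_avg[:m] / (lta_avg[:m] + 1e-12)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    record = rng.standard_normal(1_000_000)   # stand-in for a long record
    record[600_000:600_300] += 8.0            # bury a synthetic "event"
    windows = np.array_split(record, 100)

    with Pool() as pool:                      # one worker per CPU core
        ratios = pool.map(sta_lta, windows)
    print(f"strongest trigger in window {int(np.argmax(ratios))}")
```

Because each window is processed independently, the same pattern scales from one laptop to a distributed or cloud cluster by swapping the worker pool.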
What are the challenges and opportunities facing big data seismologists in the future?
As for the challenges, the first concerns processing large volumes of data. Most seismologists are accustomed to easily accessing and sharing data via web services, with most of the processing and analysis taking place on their own computers. This workflow, and the infrastructure that supports it, does not scale well to big data seismology. Another challenge is acquiring the skills necessary to conduct research with large seismic datasets, which requires expertise not only in seismology but also in statistics and computer science. Those skills are not routinely included in most geoscience curricula, but they are becoming increasingly important for research at the cutting edge of big data seismology.
The opportunities are manifold, and while our paper details many of the opportunities for basic science discovery, it is difficult to predict every possible discovery; the past is our best guide. In seismology, advances in data have repeatedly preceded major discoveries. For example, seismometers sensitive enough to measure distant earthquakes were developed before the Earth’s layers were discovered. The first global seismic networks were developed before the discovery of global patterns of seismicity, which played an important role in the development of the theory of plate tectonics. And the first digital global seismic networks were followed by the first images of the convecting mantle. Using the past as a guide, we can expect the era of big data seismology to provide a platform for creative seismologists to make new discoveries.
—Stephen J. Arrowsmith (sarrowsmith@smu.edu)
Editor’s Note: It is the policy of AGU Publications to invite the authors of articles published in Reviews of Geophysics to write a summary for Eos Editors’ Vox.
Citation: Arrowsmith, S. J., D. T. Trugman, K. Bergen, and B. Magnani (2022), The big data revolution unlocks new opportunities for seismology, Eos, 103, https://doi.org/10.1029/2022EO225016. Published on 9 June 2022.
This article does not represent the opinion of AGU, Eos, or any of its affiliates. It is solely the opinion of the author(s).
Text © 2022. The authors. CC BY-NC-ND 3.0
Images are subject to copyright unless otherwise noted. Reuse without the express permission of the copyright owner is prohibited.