Why is earthquake prediction important?

Earthquake prediction has a checkered past: reputable scientists have concocted theories that, in hindsight, seem woefully misguided, if not downright wacky. There was, for example, the physicist from the U.S. Bureau of Mines who in the early 1980s sounded successive false alarms in Peru, basing them on the tenuous notion that rock bursts in underground mines were telltale signs of coming quakes. Still, in a field where scientists have struggled for decades and seen few glimmers of hope, machine learning may be their best shot.

Paul Johnson, a geophysicist at Los Alamos National Laboratory, is well aware of this checkered history. He also knows of the Italian seismologists who were convicted of manslaughter after the deadly 2009 L'Aquila earthquake (the convictions were later overturned). But Johnson also knows that earthquakes are physical processes, no different in that respect from the collapse of a dying star or the shifting of the winds. In his laboratory experiments, stress builds between blocks sliding past each other along a layer of granular material until the blocks suddenly slip. That slip, the laboratory version of an earthquake, releases the stress, and then the stick-slip cycle begins anew. When Johnson and his colleagues recorded the acoustic signal emitted during those stick-slip cycles, they noticed sharp peaks just before each slip.

Those precursor events were the laboratory equivalent of the seismic waves produced by foreshocks before an earthquake. At a meeting a few years ago in Los Alamos, Johnson explained his dilemma to a group of theoreticians. They suggested he reanalyze his data using machine learning — an approach that was well known by then for its prowess at recognizing patterns in audio data. Together, the scientists hatched a plan.

They would take the roughly five minutes of audio recorded during each experimental run — encompassing 20 or so stick-slip cycles — and chop it up into many tiny segments.

For each segment, the researchers calculated more than 80 statistical features, including the mean signal, the variation about that mean, and information about whether the segment contained a precursor event.
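To make the idea concrete, here is a minimal sketch (in Python, with NumPy) of what that windowing and feature computation could look like. The window length, hop size, and the handful of statistics shown are illustrative stand-ins, not the actual 80-plus features the team computed.

```python
import numpy as np

def window_features(signal, window_len=4096, hop=2048):
    """Chop a 1-D acoustic signal into overlapping windows and compute a few
    summary statistics for each one (an illustrative feature set only)."""
    feats = []
    for start in range(0, len(signal) - window_len + 1, hop):
        seg = np.asarray(signal[start:start + window_len], dtype=float)
        mean = seg.mean()
        std = seg.std() + 1e-12                 # guard against division by zero
        feats.append({
            "mean": mean,                       # average signal level
            "variance": seg.var(),              # fluctuation about the mean
            "skew": ((seg - mean) ** 3).mean() / std ** 3,
            "kurtosis": ((seg - mean) ** 4).mean() / std ** 4,
            "max_amp": np.abs(seg).max(),       # crude stand-in for precursor info
        })
    return feats

# Example with made-up noise standing in for the laboratory recording.
features = window_features(np.random.randn(100_000))
print(len(features), "windows,", list(features[0]))
```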

Because the researchers were analyzing the data in hindsight, they also knew how much time had elapsed between each sound segment and the subsequent failure of the laboratory fault. Johnson and his co-workers chose a random forest algorithm to predict the time remaining before the next slip, in part because random forests, compared with neural networks and other popular machine learning algorithms, are relatively easy to interpret. A random forest is an ensemble of decision trees, each of which repeatedly splits the data set according to some statistical feature.

The trees thus preserve a record of which features the algorithm used to make its predictions, and of the relative importance of each feature in helping the algorithm arrive at those predictions.
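For readers who want to see the shape of such a pipeline, the sketch below trains scikit-learn's RandomForestRegressor on stand-in data and prints its feature importances. The feature names, the synthetic arrays, and the hyperparameters are all hypothetical placeholders, not the Los Alamos group's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Stand-in data: in the real experiment each row would hold the statistics
# computed for one audio window, and y the measured time (in seconds) from
# that window to the next laboratory slip. Everything here is synthetic.
rng = np.random.default_rng(0)
feature_names = ["mean", "variance", "skew", "kurtosis", "max_amp"]
X = rng.normal(size=(2000, len(feature_names)))
y = np.abs(rng.normal(size=2000))     # placeholder times-to-failure

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X, y)   # learn the mapping from window statistics to time-to-failure

# Each tree records which features it split on; averaging those records gives
# a rough ranking of which statistics drove the predictions.
for name, score in sorted(zip(feature_names, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```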

A polarizing lens shows the buildup of stress as a model tectonic plate slides laterally along a fault line in an experiment at Los Alamos National Laboratory.

When the Los Alamos researchers probed those inner workings of their algorithm, what they learned surprised them. The statistical feature the algorithm leaned on most heavily for its predictions was unrelated to the precursor events just before a laboratory quake.

Rather, it was the variance — a measure of how the signal fluctuates about the mean — and it was broadcast throughout the stick-slip cycle, not just in the moments immediately before failure. The variance would start off small and then gradually climb during the run-up to a quake, presumably as the grains between the blocks increasingly jostled one another under the mounting shear stress. Just by knowing this variance, the algorithm could make a decent guess at when a slip would occur; information about precursor events helped refine those guesses.
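A rolling variance of this kind is simple to compute. The sketch below does so for a synthetic signal whose fluctuations grow toward "failure"; the signal is only a cartoon of the laboratory data, included to show how the statistic climbs through the cycle.

```python
import numpy as np

def rolling_variance(signal, window_len=1024):
    """Variance of the signal inside a sliding window: the kind of single
    summary statistic the algorithm found most informative."""
    sig = np.asarray(signal, dtype=float)
    return np.array([sig[i:i + window_len].var()
                     for i in range(len(sig) - window_len + 1)])

# Cartoon signal only: noise whose amplitude grows toward "failure", so the
# rolling variance climbs steadily instead of spiking only at the end.
t = np.linspace(0.0, 1.0, 50_000)
signal = np.random.randn(t.size) * (0.1 + t)
print(rolling_variance(signal)[::10_000])    # variance ramps upward
```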

The finding had big potential implications. For decades, would-be earthquake prognosticators had keyed in on foreshocks and other isolated seismic events; the Los Alamos result suggested that the more telling signal might be the subtler information the fault broadcasts throughout the quiet stretches between those events.

The problem with earthquake predictions, as with any other type of prediction, is that they have to be accurate most of the time, not just some of the time.

We have come to rely on weather predictions because they are generally and increasingly accurate. Earthquake predictions are nowhere near that reliable, so efforts are currently focused on forecasting earthquake probabilities rather than predicting their occurrence.

There was great hope for earthquake prediction late in the 20th century, when attention was focused on a part of the San Andreas Fault at Parkfield, south of San Francisco. Between 1881 and 1966 there were five earthquakes at Parkfield, spaced on average a little over 20 years apart, all confined to the same 20 km-long segment of the fault, and all very close to magnitude 6. Both the 1934 and 1966 earthquakes were preceded by small foreshocks exactly 17 minutes before the main quake.

The U.S. Geological Survey recognized this as an excellent opportunity to understand earthquakes and earthquake prediction, so it equipped the Parkfield area with a huge array of geophysical instruments and waited for the next quake, which was expected to happen around 1988. Nothing happened!

The anticipated quake did not arrive until 2004, 38 years after the previous one. Fortunately, all of the equipment was still in place, but it was no help from the perspective of earthquake prediction. There were no significant precursors to the 2004 Parkfield earthquake in any of the parameters measured, including seismicity, harmonic tremor, strain (rock deformation), magnetic field, rock conductivity, or creep, and there was no foreshock.

In other words, even though every available technique was used to monitor it, the earthquake came as a complete surprise, with no warning whatsoever. The hope for earthquake prediction is not dead, but it was hit hard by the Parkfield experiment. The current focus in earthquake-prone regions is to forecast the probability of an earthquake of a certain magnitude within a certain time period (typically a number of decades), while officials work to ensure that the population is educated about earthquake risks and that buildings and other infrastructure are as safe as they can be.
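As a toy illustration of what such a probabilistic forecast means, the sketch below computes the chance of at least one event in a coming time window under a deliberately simplified memoryless (Poisson) recurrence model. The 140-year mean recurrence interval and 30-year window are invented numbers, not a real hazard estimate for any fault, and real forecasts use far richer statistical and physical models.

```python
import math

def poisson_forecast(mean_recurrence_years, window_years):
    """Probability of at least one earthquake in the window, assuming events
    arrive as a memoryless Poisson process (a deliberate oversimplification)."""
    return 1.0 - math.exp(-window_years / mean_recurrence_years)

# Invented numbers: a fault segment with a 140-year mean recurrence interval,
# asked about the next 30 years.
print(f"{poisson_forecast(140, 30):.0%} chance of at least one event in 30 years")
```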

Shaking can also destabilize the ground itself: when the ground shakes, water near the surface can be forced upward and over-saturate the soil, making it extremely unstable (a process known as liquefaction).

Construction is a large factor in what happens during an earthquake. For example, far more people died in the 1988 Armenia earthquake, where many people lived in mud houses, than in the 1989 Loma Prieta earthquake in California, even though the two quakes were of similar magnitude.

The key to earthquake safety is the structures we live in, but the reason not all buildings are built to withstand earthquakes is cost. Sturdier structures are much more expensive to build, so communities must weigh how great the hazard is and what different building strategies cost, and then make an informed decision. Still, construction is the crucial factor in earthquake safety. When Salt Lake City has its expected magnitude 7 earthquake, the death toll should be comparatively small. But when Haiti experienced a magnitude 7.0 earthquake on January 12, 2010, tens of thousands of people died.

The difference: building codes. Skyscrapers and other large structures built on soft ground must be anchored to bedrock, even if it lies hundreds of meters below the surface. Large buildings must be able to sway, but not so much that they touch nearby buildings; counterweights and diagonal steel beams are used to limit that sway.

Large buildings can also be placed on rollers so that they move with the ground. Earthquake-prone areas should have building codes that require appropriate building materials: houses should be framed in wood and steel rather than brick and stone, because the structure needs to be able to bend and sway without failing.

New buildings can be set on layers of steel and rubber to absorb the shock of the waves. Connections, such as where the walls meet the foundation, must be made strong enough to withstand the shaking, and in a multi-story building the first story must be well supported. Elevated freeways and bridges can also be retrofitted so that they do not collapse. Fires often cause more damage than the shaking itself: they start because seismic waves rupture gas and electrical lines, while breaks in water mains make the fires difficult to fight.

Builders zigzag pipes so that they bend and flex when the ground shakes. In San Francisco, water and gas pipelines are separated by valves so that areas can be isolated if one segment breaks.


