
World of Science | Avalanches and artificial intelligence

What are the benefits of AI for avalanche research?

by Lea Hartl 03/28/2022
Artificial intelligence and machine learning have been the buzzwords of the moment in many fields for several years now. Corresponding techniques are also increasingly being used in snow and avalanche research. What exactly is AI, what are its concrete applications, and what are the benefits?

Artificial intelligence (AI) is more or less anything that enables computers to simulate human behavior or decisions. Machine learning usually refers to a subcategory of AI. Here, computers receive data and "learn" something from it without being explicitly told what exactly to do. Machine learning (ML), in turn, encompasses everything from linear regression, which you may remember from school, to complex neural networks.
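
To make the "learning" part concrete, here is a minimal sketch in Python using the scikit-learn library. The numbers are invented, and the example simply fits a linear regression, the simplest ML case mentioned above:

```python
# Minimal sketch: "learning" from data instead of explicit programming.
# Uses scikit-learn; the numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical observations: an input quantity vs. a measured response.
X = np.array([[10], [25], [40], [60]])  # input feature
y = np.array([0.2, 0.5, 0.9, 1.4])      # observed target values

model = LinearRegression()
model.fit(X, y)                  # the "learning" step: no rule is hard-coded
print(model.predict([[50]]))     # the model generalizes to unseen input
```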

Processing large data sets quickly

In the geosciences, ML is often used to process large amounts of data, or complex, multidimensional data sets, faster and more efficiently than would be possible with explicit programming. The current literature, for example, contains numerous studies that use ML methods in one way or another for terrain classification and the identification of avalanche paths. This is particularly useful in regions of the world where terrain data is only available at moderate resolution and such classifications do not already exist (some examples come from Iran, India, the Tian Shan, and Turkey). ML methods are also very useful for identifying avalanches in satellite data. The ML algorithms receive data sets in which the avalanches have been manually marked and use them to learn to find avalanches in the satellite images. This does not always work perfectly, but it works relatively well and is faster and much less labor-intensive than manually counting avalanches for entire regions ("Snow avalanche detection and mapping in multitemporal and multiorbital radar images from TerraSAR-X and Sentinel-1", "Snow Avalanche Segmentation in SAR Images With Fully Convolutional Neural Networks").
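
The cited segmentation studies use deep, fully convolutional networks on SAR data. As a rough illustration of the principle, not of the papers' actual models, here is a toy PyTorch sketch that trains a tiny fully convolutional network on randomly generated stand-in "images" and masks:

```python
# Toy sketch of avalanche segmentation: a tiny fully convolutional network
# trained on manually labeled masks. Real studies use deep architectures
# and real SAR channels; all data here is random, purely for illustration.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),   # 2 input channels
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                         # per-pixel logit
        )
    def forward(self, x):
        return self.net(x)

images = torch.randn(8, 2, 64, 64)                   # stand-in image tiles
masks = torch.randint(0, 2, (8, 1, 64, 64)).float()  # manual avalanche masks

model = TinyFCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()   # pixel-wise avalanche / no-avalanche loss

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)   # compare prediction to labels
    loss.backward()
    optimizer.step()
```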

AI for avalanche warning?

From a skiing perspective, the question naturally arises: what can artificial intelligence tell us about the avalanche danger? Do the computers know something that the avalanche warning services don't? Not at the moment - although they may be able to recognize multidimensional correlations that are too complex for the human brain. Operational applications are certainly conceivable in the future. Two current SLF preprints are moving in this direction (preprints are studies submitted to scientific journals that are accessible online but have not yet completed the full peer review process). In both cases, a so-called random forest model is used. A "random forest" is a "forest" of many decision trees and is often used for the automated classification of large data sets.
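
For readers who want to see what such a "forest of decision trees" looks like in practice, here is a minimal scikit-learn sketch with invented toy data:

```python
# A random forest is literally many decision trees voting. Minimal sketch
# with scikit-learn; the data set is synthetic toy data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Toy stand-in for a labeled data set (feature vectors + class labels).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)                 # each tree is fit on a random data subset
print(len(forest.estimators_))   # -> 100 individual decision trees
print(forest.predict(X[:3]))     # prediction = majority vote of all trees
```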

In "A random forest model to assess snow instability from simulated snow stratigraphy" it looks like this: Over 700 snow profiles serve as a database, for which, among other things, the result of a landslide block test is available. Depending on the slide block, the profiles are manually classified as stable, unstable or "something in between". With the help of weather data, all profiles were also simulated with a snowpack model - so there is an actual profile dug in the snow and a simulated profile. For each simulated profile, relevant weak layers and the overlying snow slab are described using 34 "features", including, for example, the grain size of the layer, the difference in cone size to the next layer, the shear strength, viscosity, density, etc.

The random forest algorithm then receives manually classified training data, i.e. profiles and corresponding features that have been identified as stable or unstable. The algorithm "learns" which features are decisive for stability or instability and sorts the simulated profiles into the "stable" or "unstable" category depending on the properties of the weak layer and the overlying slab. Comparison with the "real" profile and avalanche data shows that instability is recognized fairly reliably. The authors of the study see potential to incorporate such methods into operational avalanche warning in the near future.
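
A rough sketch of this workflow, with entirely hypothetical feature names and values (this is not the authors' code or data), might look like this:

```python
# Sketch of the study's setup, not the authors' code: each simulated
# profile is reduced to a feature vector; a random forest learns the
# stable/unstable mapping from manually classified profiles.
# Feature names and values here are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

profiles = pd.DataFrame({
    "grain_size_mm":      [0.5, 1.5, 2.0, 0.3, 1.2, 0.8],
    "shear_strength_kpa": [1.2, 0.4, 0.3, 1.5, 0.5, 0.9],
    "slab_density_kgm3":  [180, 120, 110, 220, 130, 160],
})
labels = ["stable", "unstable", "unstable", "stable", "unstable", "stable"]

X_train, X_test, y_train, y_test = train_test_split(
    profiles, labels, test_size=0.33, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)     # "learning" from manually classified profiles
print(clf.predict(X_test))    # classify profiles the model has not seen
```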

Interestingly, instability is usually recognized even though the simulated profiles of the snowpack model are generated with spatially interpolated weather data, i.e. they are somewhat blurred and do not reflect small-scale, microclimatic differences. The weather factors that are most decisive for weak layers are apparently nevertheless captured.

In addition, it is interesting to see which of the 34 features are most important for the stable/unstable classification. In the random forest algorithm, the following six are the most central:

  • The viscous deformation rate

  • The "critical cut length" (length that is sawn in a propagation saw test until breakage occurs)

  • Skier penetration depth (strongly dependent on the density of the top 30 cm of the snowpack)

  • The grain sphericity in the weak layer (how round are the grains?)

  • The ratio of the average snow slab density to the grain size

  • The grain size in the weak layer

All other features have little or no influence on the result, which will certainly prompt further research questions. The random forest may be "intelligent", but it cannot explain exactly why it chooses these features. Other approaches are needed to explain this in terms of snow physics.
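
For illustration, this is how such an importance ranking is typically read off a trained forest in scikit-learn. The data here is synthetic and the feature names merely echo the list above:

```python
# A trained random forest exposes per-feature importance scores.
# Toy data; the feature names are hypothetical labels for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

feature_names = ["deformation_rate", "critical_cut_length",
                 "penetration_depth", "sphericity",
                 "density_grain_ratio", "grain_size"]
X, y = make_classification(n_samples=400, n_features=6, n_informative=3,
                           random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances: one score per feature, summing to 1.
# They show *which* features the forest uses, not *why* they matter
# physically - hence the need for other explanatory approaches.
for name, score in sorted(zip(feature_names, clf.feature_importances_),
                          key=lambda p: -p[1]):
    print(f"{name}: {score:.2f}")
```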

The second current SLF preprint, "Data-driven automated predictions of the avalanche danger level for dry-snow conditions in Switzerland", also uses a random forest classification. Here, the avalanche danger level is determined automatically based on weather data and snowpack model calculations. This is successful in 70 to almost 80% of the cases examined, although there are certain regional differences and the approach works somewhat less well when an old snow problem prevails. Wet snow situations were excluded because of the different avalanche processes involved. The training data from which the model "learns" covers the winters 1997/98 to 2017/18; the model was tested with data from the two following winters (2018/19 and 2019/20).
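
The evaluation scheme, training on older winters and testing on held-out recent ones, can be sketched as follows. The data is randomly generated, so the printed score is meaningless; only the split logic matters:

```python
# Sketch of a winter-based train/test split: fit on older winters, test on
# the most recent ones, report the fraction of correctly assigned danger
# levels. All data is invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_days = 2000
winters = rng.integers(1997, 2020, n_days)  # hypothetical winter start years
X = rng.normal(size=(n_days, 12))           # weather/snowpack features
y = rng.integers(1, 5, n_days)              # dry-snow danger levels 1-4

train = winters <= 2017                     # winters up to 2017/18
test = ~train                               # the two most recent winters

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X[train], y[train])
print(accuracy_score(y[test], clf.predict(X[test])))  # hit rate, unseen winters
```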

Once again, numerous features and characteristic parameters are available to the random forest for assigning the appropriate danger level to a given day. And again, it turns out that only relatively few features significantly influence the classification: various new snow and drifting snow parameters, the snowfall rate, the skier penetration depth, the "critical cut length", as well as relative humidity, air temperature, and stability indices. These are essentially the same things that also weigh heavily in the "manual" avalanche forecast.

In both cases, the studies aim to use snow and weather models to make automated statements about the stability of the snowpack and the avalanche danger. The advantages are obvious: in the Alps, which are truly blessed with terrain data, such systems could further improve the spatial resolution of the information, and the warning services would have an additional tool at their disposal. In regions of the world with no or poorly equipped warning services, an operational application would be an even more significant gain in information. In this context, the question arises as to whether, and to what extent, AI-supported methods offer a practical gain in information compared to "classic" methods, since the overall database outside the Alps is usually much poorer. Under certain circumstances, it may not even be possible to work with variables such as sphericity or deformation rates; one may have to make do with wind and new snow from the weather forecast.

AI: Theoretically clever, practically not always

An important principle for any form of statistics, whether "intelligent" or not, is: garbage in, garbage out (GIGO). No matter how smart the ML algorithm is, if you feed it garbage, i.e. bad data, it will also spit out garbage. What we can learn from data with the help of machine learning methods is only as good as the data itself. On the one hand, ML algorithms reproduce errors that may be contained in the data; on the other hand, they may not learn what they are supposed to learn if the data leaves too much leeway. Many ML algorithms, including the random forests of the studies mentioned above, are also so-called black box models: we cannot fully understand how the result that is spit out at the end comes about.

The data used for the studies mentioned above is, of course, the opposite of garbage. However, snow profiles recorded by human observers, stability assessments, and danger levels determined by the avalanche warning service do contain a certain margin of interpretation. Forecasts are, by definition, not always completely accurate, and the "correct" determination of the danger level can be debated more or less endlessly. With this type of training data, it is no trivial task to define an absolute right and wrong, or stable and unstable. The computer is forced to draw such sharp boundaries by its inherently binary way of working, but that does not make them any more right or wrong.

The characteristics of the input data and data cleaning are an important topic in both of the preprints cited. For example, the danger level model was trained once with all the data and once with a cleaned data set, which presumably contains fewer false forecasts and also slightly less bias. The algorithm has "learned" how the weather and the simulated snowpack over the last 20 years relate to the forecast danger level in Switzerland. So if there are any Swiss peculiarities, the model has learned them too.
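
Such a full-versus-cleaned comparison can be sketched as follows. The data and the "suspect" flags marking potentially wrong labels are invented:

```python
# Sketch of a full-vs-cleaned comparison: train the same model twice,
# once with all training data and once without suspect labels, then
# compare accuracy on a common test set. Everything here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 10))
y = rng.integers(1, 5, 3000)
suspect = rng.random(3000) < 0.15   # hypothetical "doubtful label" flags

X_tr, X_te, y_tr, y_te, s_tr, _ = train_test_split(
    X, y, suspect, test_size=0.3, random_state=1)

for name, keep in [("all data", np.ones(len(y_tr), bool)),
                   ("cleaned", ~s_tr)]:
    clf = RandomForestClassifier(n_estimators=100, random_state=1)
    clf.fit(X_tr[keep], y_tr[keep])        # train with / without suspect rows
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```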

Avalanches are physically very complex and avalanche forecasting is a kind of prime example of the human brain's impressive ability to draw relevant conclusions from incomplete, multidimensional information. This is precisely why avalanche research as an AI field of application is very exciting, but also very challenging. We are excited to see where developments will take us in the coming years!

Many thanks to Stephanie Mayer and Frank Techel (both SLF) for their input on this article!

This article has been automatically translated by DeepL with subsequent editing. If you notice any spelling or grammatical errors or if the translation has lost its meaning, please write an e-mail to the editors.

