mountain knowledge

World of Science | Frank Techel (SLF) on people and models

Humans and models in avalanche warning - can artificial intelligence predict the danger levels of tomorrow?

02/26/2025 by Lea Hartl
Frank Techel is an avalanche forecaster and researcher at the SLF. He deals with the integration of models into operational avalanche forecasting and believes that there is great potential to improve avalanche warning when humans and machines work together. In the interview, he explains how "AI" supports the human SLF team in producing the bulletin and where the limits of the models lie.

In addition to the physical snowpack model SNOWPACK, which uses weather data to calculate the layering of the snowpack at a specific point, three machine learning ("AI") models are used at the SLF. These models have "learnt" statistical correlations between snow cover simulations, weather data and avalanche observations or hazard levels using training data sets. The learnt correlations are used to predict relevant avalanche parameters (probability of triggering, danger level) without the intervention of human forecasters.

PG: In your publications, you write that the trend in Swiss avalanche warning is moving away from a purely "expert-based approach" and towards a "data and model-driven approach". Is that a fundamental goal for you?

FT: I don't know whether a purely data- and model-driven approach is really the goal, but I see a stronger integration of ever-improving models into the forecasting process as a logical development. Until about five years ago, all we really had in avalanche warning were today's observations, today's measurements and a weather forecast for the following day. We also had the SNOWPACK snowpack model, but it was rarely used in our field. So weather models were the only models that really played a role. All the rest was done by the avalanche forecasters using their experience, their knowledge and their gut feeling. That's what I mean by the expert-based approach.

We have had a lot more model data available for a few years now. On the one hand, SNOWPACK, which we are increasingly using for forecasting, and on the other, the machine learning models that come on top of SNOWPACK, so to speak. A lot has happened since the last PowderGuide article on this topic was published. At that time, we had already partially tested the model chains, but they are now very stable and in operational use. In addition to all the programming work that was necessary for this, the training of the avalanche forecasters is also extremely important. We all have to learn what the models can do and, above all, what they can't do.

Are you also noticing this trend in neighbouring countries, towards models and away from purely human expertise? 

Yes, when I talk to colleagues in Tyrol, or in Norway or Canada, everyone realises that models offer a great opportunity. I'm not talking about AI, but about all models. There is certainly still a lot of implementation work to be done, as well as research, but models have great potential to improve avalanche warning. I am trying to promote this transfer - that the models really do come from research to us in avalanche forecasting. 

What does the "model-driven approach" look like in day-to-day operations?

I don't know if I would call it a model-driven approach yet. But the models already offer a valuable second opinion on how the existing data can be interpreted. So when I make a forecast, I continue to make exactly the same considerations as I did a few years ago, but I also have a model that tells me something about the probability of spontaneous avalanches, for example.


What are you hoping for from the models in the future?

The models help me to scrutinise my own forecast. If the model says something different to me, I think about whether I have missed something or whether the model is missing something. And then models can also help us to recognise spatial patterns, which can then also lead to more spatially accurate forecasts. We can already see that the avalanche forecast is slowly becoming more and more detailed. Ten years ago, we might have differentiated and described five different danger areas in the Swiss Alps. Now there are often ten areas. However, this trend has a cap, which is determined by what people can do. This is because we avalanche forecasters are limited in our capacity to process data. Models can calculate with almost infinite spatial and temporal resolution. 

In Switzerland, there is a bulletin twice a day, and the spatial resolution is defined by the so-called microregions.

Yes, that's about what we can do as humans. And even then we don't make an assessment for each of the hundred and forty microregions, but we summarise them. Theoretically, we could assess all one hundred and forty individually, but then we would also have to know how each region differs from neighbouring regions and then write something for each region individually. That's not possible with two or three people in the forecasting team. For something like this to be possible, the machine would have to be able to do a large part of it. But can it do that? And what resolution currently makes sense with the available data?

You might think that high resolution is great, more information is more information. But when is it too much, or no longer useful?

That's a good question and I don't have an answer at the moment. From what I can see operationally, it's quite clear to me that models don't make single-slope forecasts. A slightly higher resolution than what we have now is probably possible. Finding out what would be possible if humans and models worked together is extremely difficult.

Why don't the models make a single slope forecast?

My feeling is that we are still far, far away from that at the moment. In Switzerland, we work with snow cover simulations at weather stations, and then we interpolate between them. This means that everything we do in between is a kind of smearing of point information. We do take slope aspect and elevation into account, but no local topography, none at all.
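This "smearing of point information" between stations can be illustrated with a simple inverse-distance weighting. The interview does not say which interpolation scheme the SLF actually uses, so the function and the instability values below are only a generic sketch:

```python
import math

# Toy inverse-distance interpolation of a station-based quantity (e.g. a
# simulated instability index) to a target point between stations.
# The SLF's actual interpolation scheme is not specified in the interview.

def idw(target_xy, stations, power=2.0):
    """stations: list of ((x, y), value) pairs; returns the weighted mean,
    with weights falling off as 1 / distance**power."""
    num = den = 0.0
    for xy, value in stations:
        d = math.dist(target_xy, xy)
        if d == 0:
            return value  # exactly at a station: use its value directly
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Two hypothetical stations 10 km apart with different instability values:
stations = [((0.0, 0.0), 0.2), ((10.0, 0.0), 0.8)]
print(idw((5.0, 0.0), stations))  # midway between the stations -> 0.5
```

Note that such a scheme, like the operational setup described above, knows nothing about the local terrain between the stations; it only blends the point simulations.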

The terrain data for this would be available in Switzerland, wouldn't it? 

Yes, theoretically you can simulate the snow cover for any point or slope. But the extent to which these simulations can depict local effects or the variability of the snow cover is questionable, which is why I believe that a great deal of research is still needed.

Let's talk in more detail about the models you use. Can you briefly tell us what SNOWPACK does?

SNOWPACK takes weather data and uses it to calculate the snow cover, so to speak. Every weather event has an influence on the snowpack, be it an increase with a new layer of snow, or when meltwater penetrates, or anything else. SNOWPACK then simulates these processes for a specific point.

SNOWPACK is not an AI model, but a physical model that calculates what happens in the snowpack when it rains or snows on it, for example, based on our understanding of the process.

Exactly, yes.

You now have several AI models, including the danger level model, the instability model and the spontaneous avalanche model. What do they do?

As I said, we have the very complex simulations from SNOWPACK at hundreds of data points and for different aspects. All in all, that's a lot of data that first has to be made usable and interpretable for the avalanche warning service. With the AI models, we try to filter out the most relevant information.

All three AI models are trained to answer a very specific question. They can each do one specific thing. The danger level model, for example, has used data from the last twenty years to learn which combination of weather data and SNOWPACK snow layering correlates approximately with which danger level.

The instability model is different. It takes only the simulated layer profile and goes through every layer combination: what does this layer look like? Could it be a weak layer, and is there a slab of snow above it? The model has learnt these correlations from Rutschblock tests. It gives a probability that a given combination is a typical pairing of a weak layer and an overlying slab; based on past data, we then expect a more or less weak Rutschblock result. The most unfavourable weak-layer/slab combination in a simulated profile then classifies the whole snow profile as weak or stable.
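The scan over layer combinations can be sketched roughly as follows. This is emphatically not the SLF's model, which is a classifier trained on Rutschblock tests; the features, thresholds and scores here are invented purely to illustrate the "most unfavourable combination classifies the profile" logic:

```python
# Toy sketch of the layer scan the instability model performs.
# All features and thresholds below are invented for illustration.

def weak_layer_probability(layer: dict) -> float:
    """Hypothetical score: large grains and low density suggest a weak layer."""
    score = 0.0
    if layer["grain_size_mm"] > 1.5:
        score += 0.5
    if layer["density_kg_m3"] < 200:
        score += 0.4
    return min(score, 1.0)

def has_slab_above(profile: list, index: int) -> bool:
    """A denser, cohesive layer somewhere above the candidate weak layer
    (the list is ordered from the snow surface downwards)."""
    return any(l["density_kg_m3"] > 200 for l in profile[:index])

def profile_instability(profile: list) -> float:
    """Scan every layer; the most unfavourable weak-layer/slab
    combination classifies the whole profile."""
    worst = 0.0
    for i, layer in enumerate(profile):
        p = weak_layer_probability(layer)
        if p > 0 and has_slab_above(profile, i):
            worst = max(worst, p)
    return worst  # e.g. above some threshold -> "weak", else "stable"

profile = [
    {"grain_size_mm": 0.5, "density_kg_m3": 250},  # wind slab near the surface
    {"grain_size_mm": 2.0, "density_kg_m3": 150},  # buried facets: candidate weak layer
    {"grain_size_mm": 1.0, "density_kg_m3": 350},  # old, settled base
]
print(profile_instability(profile))  # 0.9
```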

The spontaneous avalanche model is based on the instability model and also takes fresh snow as a parameter. It has learnt from a historical data set of observed avalanches. We used the model operationally for the first time last winter and then analysed how its predictions correlate with radar-detected avalanches. The radar data comes from certain avalanche tracks where permanently installed sensors detect avalanches as they release. Such systems are used, for example, for automated road closures on exposed traffic routes. This is an interesting data set for evaluating the spontaneous avalanche model, because the radar registers avalanches virtually in real time. Human avalanche observations, on the other hand, are usually somewhat delayed, because we don't always notice avalanches as soon as they release, or because it's snowing and we simply don't see anything. The comparison with the radar detections confirmed our feeling that the model reacts with a slight delay. This is probably because the model was trained with human observations, which are themselves slightly delayed. It is therefore very important for all models that we know how they were trained, as this also determines their potential errors.

The AI-based, statistical models are trained with your forecasts and observations and the snowpack simulations from SNOWPACK. This means that we still need to understand the physical processes and can't just rely on the magic of AI somehow, right?

Exactly. I would very much like to have models that represent the physical processes as well as possible. I never just look at the output of the instability model, I always go back to the SNOWPACK simulation and look at some stations to see whether the layers look plausible. It is extremely important that we at least begin to understand what is happening in these long model chains. The AI models come at the very, very end and are actually just superimposed on everything that happens beforehand.

The AI models help to extract the relevant information. SNOWPACK spits out lots and lots of information and the AI then filters it for us. Can you summarise it like this?

Yes, you can summarise it like that. Ultimately, these are simply models that end up on top of SNOWPACK. But these "small" models have led to quite a big leap in the use of SNOWPACK in our forecasting service. This is because they suddenly make the complex output of the SNOWPACK model digestible by extracting relevant information from it that is relatively easy to understand. I therefore think it would be important if we could integrate the models even more easily when creating the forecast.


You have to give the AI models good training data so that they deliver good results, and the danger level model was trained with your avalanche forecasts. How do you know whether the forecasts were good and that the model learns the right things?

The danger level model was not simply trained with our forecasts, but with verified forecasts. However, there is always a human element involved, both in the forecast and retrospectively in the review, because it is always a person who determines a danger level. For the review, for example, we use the feedback from observers who make a risk assessment, or the observed avalanche activity. We also discuss in retrospect, when we have all the data for a day, whether the danger level was correct. But a reliable review of the forecast is really extremely difficult with the available data. [Further information from Frank on the verification of avalanche forecasts can be found in this PG article]

How do you interpret it when the model tells you something you don't expect? 

The most important signal for me is when the model says something different from what I expected. This means that either I have overlooked something or the model is missing information. For example, if an avalanche cycle begins and we hear about large avalanches, this information flows into our forecast for tomorrow. The danger level model does not have this information. It has no idea what avalanches have already occurred; it simply calculates on the basis of the weather. That would be a case where my assessment is probably more correct, because I know that large avalanches have already occurred.

Then there are also less clear cases, for example when we have an old snow problem and the danger is coming down from level three to two. The danger level model often steps down a little faster than we do. The big question is always: are we simply being too cautious, or does the model not recognise the weak layers? We don't yet have a solution as to whether and how we should take the model into account in such cases.

The danger level model is already operationally integrated, isn't it?

Yes, we now have an approach that integrates the hazard level model directly into the bulletin software. The other models flow in as additional information while we prepare the forecast. I think we're on the right track, but our processes aren't quite ideal yet. We also need to find out when the models are better than us and when they simply can't do it yet. These are the major challenges for using the models as effectively as possible.

In your workflow, all forecasters and the danger level model make a proposal for tomorrow's danger level. The model is treated like another team member who makes a forecast. The idea behind this is that the more ensemble members or suggestions you have, the better the result?

Exactly, we take the median of this ensemble. The basic condition for something like this to work is that each ensemble member is competent. That doesn't mean each one is always right, but that on average it is right more often than wrong. If you have three bad forecasters and calculate an average, the result may be even worse. If all three are pretty good on average, the median is usually a good starting point for the subsequent discussion.

We always work with two avalanche forecasters and the model. If the two people agree and the model differs, and you then form a median, the model doesn't really carry any weight. And if the two people disagree, the model can tip the balance. But we can also deliberately exclude the model if we are pretty sure that it is wrong. The idea is therefore to first determine a proposal that is as objective as possible and then discuss it. The idea is not for a strong personality to push through their proposal, but for us to discuss it as a team, based on a statistically relatively strong anchor.

How does that work in practice? I say three plus, you say four minus, the model says three plus. Then we form the median. What does the discussion look like then?

In this case, if we include the model, we would start with 3+ as a starting point. Let's assume that I am now very unhappy with giving the situation a three for tomorrow. Then my task would be to explain to the others why I don't think this is right. As data-based as possible, although this is of course difficult because our human interpretation is always involved in the end. In this discussion, decisions are likely to be made one way or the other. And that is also one of our weaknesses, because we humans are not always consistent.
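Worked through in code, this median step might look like the following sketch. The numeric encoding of sub-levels (e.g. 3+ as 3.3) is an assumption made for illustration; the interview only states that the median of the proposals is taken and that the model's vote can be excluded:

```python
from statistics import median

def encode(level: str) -> float:
    """Hypothetical numeric encoding of danger levels with sub-levels:
    "3-" -> 2.7, "3" -> 3.0, "3+" -> 3.3 (assumed, not SLF's actual scheme)."""
    base = float(level.rstrip("+-"))
    if level.endswith("+"):
        return base + 0.3
    if level.endswith("-"):
        return base - 0.3
    return base

def ensemble_proposal(proposals: list, include_model: bool = True) -> float:
    """Median of the forecaster and model proposals (model vote last);
    the model can be excluded when the team is confident it is wrong."""
    votes = proposals if include_model else proposals[:-1]
    return median(encode(p) for p in votes)

# The example from the interview: forecaster A says 3+, forecaster B 4-,
# the model 3+.  The median starting point for the discussion is 3+ (3.3).
print(ensemble_proposal(["3+", "4-", "3+"]))  # 3.3
```

With the model excluded, the two human votes 3+ and 4- would average out to 3.5, which illustrates why the model's vote "tips the balance" only when the two forecasters disagree.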

So it's still human?

It's clearly human. We are still primarily human avalanche forecasters. However, the models can support us and show us where we perhaps need to take a closer look today.

Will it stay that way in the future?

I don't know what an avalanche warning will look like in the future. Maybe in ten years' time the forecast will have a high resolution, maybe models will do a large part of the forecasting. Who knows? Maybe that's why it's all the more important that we can still provide explanations and communicate to the user: "Watch out right there..." In other words, that human voice that you bring in. What the avalanche forecast of the future will look like is still very open to me.

