Apparently, some people do not see any added value in science-based forecasts. Why is that? Are these predictions really useless? Would anyone dismiss as useless the prediction that spring will follow this winter?
How forecasts are made
Climate change, weather forecasts or assessing a pandemic – regardless of the subject, scientific forecasts in all fields are nearly always based on four elements: first, a model; second, data used to estimate unknown parameters; third, assumptions based on scenarios; fourth, expert knowledge. The weight of these four elements depends greatly on the specific problem.
The model describes our understanding of the dynamics of a particular system. The complexity of different systems varies greatly. Celestial mechanics determine the seasons so precisely that any uncertainty in the forecast is negligible. Much more difficult are biological systems that cannot be described with a simple equation, or systems that exhibit chaotic behaviour, such as the weather. Some complex processes surpass our understanding or our computational ability to model them directly, so they are described statistically or with approximations.
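How quickly predictability breaks down in a chaotic system can be illustrated with a textbook toy model (not one used in actual weather forecasting): the logistic map. Two runs that start almost identically soon diverge completely, which is why such systems can only be forecast a short way ahead. All numbers here are chosen purely for illustration.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x), a classic chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # same start, perturbed by one billionth
max_gap = max(abs(x - y) for x, y in zip(a, b))  # grows to order one
```

A difference of one part in a billion in the starting value is amplified step by step until the two trajectories have nothing to do with each other, the same qualitative behaviour that limits weather forecasts to days rather than months.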
Models: only as good as the underlying data
Second, a model requires data for calibration and verification. This is where climate models differ from epidemiological models: critical data and relationships in climate research have been collected systematically for decades, whereas with mutating SARS-CoV-2 variants the available data were limited or unrepresentative, and decisive factors shifted with changing test strategies and advances in treatment.
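What "calibration" means in practice can be sketched with a deliberately simple example (the case counts are invented, not Swiss data): an exponential growth rate is estimated from a week of daily case counts by a log-linear least-squares fit, and a doubling time follows from it.

```python
import math

cases = [100, 128, 161, 210, 265, 339, 430]  # invented daily case counts
logs = [math.log(c) for c in cases]          # exponential growth is linear in log space
days = range(len(cases))

# Ordinary least-squares slope of log(cases) against time
n = len(cases)
mean_x = sum(days) / n
mean_y = sum(logs) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, logs)) / \
        sum((x - mean_x) ** 2 for x in days)

doubling_time = math.log(2) / slope  # days until case numbers double
```

Real calibration involves many parameters and noisy, incomplete data, but the principle is the same: unknown model parameters are chosen so that the model reproduces what has been observed.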
Third, where decisions are made, they must also be factored into forecasts. Forecasts – i.e. predictions about developments in the real world – thus become projections or “what-if” scenarios in technical terms. Examples include the expected epidemiological trend under a certain combination of measures, or the degree of warming that will accompany a specific course of CO2 emissions. If the worst-case scenario does not occur, this often does not mean the model is wrong but rather that measures have been taken to prevent it.
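The “what-if” character of projections can be sketched with a minimal SIR-type epidemic model (all parameter values invented for illustration): the same model, run under two assumed contact rates, yields two scenarios rather than one forecast.

```python
def sir_peak(beta, gamma=0.1, s=0.99, i=0.01, days=150):
    """Daily-step SIR model; returns the peak infected fraction of the population."""
    peak = i
    for _ in range(days):
        new_inf = beta * s * i   # new infections this day
        new_rec = gamma * i      # new recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak

peak_unmitigated = sir_peak(beta=0.30)  # assumed contact rate without measures
peak_mitigated = sir_peak(beta=0.15)    # assumed contact rate with measures
```

Neither number is a prediction of what will happen; each answers the question “what if this contact rate held?”. If measures are taken and the unmitigated peak never materialises, the model was not wrong – the scenario simply did not occur.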
The nature of these two uncertainties is completely different: the scenarios present us with choices and are therefore ultimately a matter for policymakers. A scenario is not a prediction of what will happen, but it helps us to understand the system and identify its vulnerabilities. The uncertainty within a given scenario, on the other hand, reflects an incomplete understanding of how the system behaves, or limited data. It is the task of science to mitigate this.
Models are approximations of reality
Fourth and finally, expert knowledge is required to assess the imprecision of predictions caused by simplifications in the model or errors in the data, to put these limitations into context, and to communicate them. A model is never entirely accurate – it is and remains a model and at best provides a more or less precise depiction of reality. As the British statistician George Box once put it so well: “All models are wrong – but some are useful.” So the question is not whether a model is correct, because every model is a simplification of reality and therefore “wrong” in a strict sense. What matters most is whether the model is suitable for addressing a specific question.
Were the Omicron predictions really wrong?
This brings us back to the issue of forecasting the Omicron wave.
The progression of case numbers in January was predicted accurately, meaning the model was adequate. Hospitalisations, on the other hand, remained below even the most optimistic predictions. As these figures lag behind reported cases, it was impossible to predict them from Swiss data alone. It made sense to draw on laboratory data and data from other countries, but these clearly did not fully reflect the situation in Switzerland. Medical aspects may explain part of the gap: Omicron’s lower virulence, and a higher share of infections among recovered or vaccinated individuals whose existing basic immunity led to less severe disease. In addition, behavioural aspects that are difficult to quantify may have played a role, for instance more cautious behaviour among high-risk groups even where no measures mandated it. The experts will eventually present a final epidemiological assessment.
But one thing is clear: projections are not deliberately distorted by scientists. They reflect the data available and the state of knowledge at the time, as best as these can be represented quantitatively. As new knowledge becomes available, the projections are adjusted.