The crystal ball is always hazy

EDITORIAL

Reading time: approx. 3 minutes

Medical and Social Science & Practice


Forecasts are intriguing – sparking both hopes and fears. Homeowners about to put their house on the market may recoil at the news flash ‘Housing bubble about to burst!’. Investors who are choosing an equity fund and read the headline ‘Stock prices skyrocketing’ will keep reading with glee. And patients whose test results show ‘a 42 per cent risk of dementia’ will likely feel anxious unless they have nerves of steel.

However, as we all know, prognoses are often wrong – and not just about how a new virus will spread around the world. Many Swedes will remember a particular summer when the weather forecast promised glorious sunshine, only for the crispbread on the traditional Midsummer table to turn to rain-drenched mush. We wanted a detailed forecast with a precise prediction for our particular location – just the tiny meadow where the picnic table would stand, not the entire countryside – and for a given point in time: when we planned to be seated at the table, not two hours later. Ideally, we would have had that prediction at least a few days in advance. In reality, the forecast we received from meteorologists equipped with supercomputers, advanced mathematical models, extensive experience and dozens of measurements was way off.

Two Harvard researchers write* in the New England Journal of Medicine about the questions that must be asked of the mathematical models used to predict the spread of infection during the COVID-19 pandemic. First: for what purpose and over what time horizon was the model designed – to make a short-term prediction, or to investigate how different assumptions may lead to potential future scenarios in the long term? A single model is seldom equally good (or equally bad) at everything. On what fundamental assumptions does the model rest – for example, regarding immunity and disease spread via asymptomatic individuals? How are contact-tracing data used?

Another important question is how uncertainty is calculated and reported, for example as confidence intervals. In many cases, the more long-range a prediction, the greater the uncertainty. How reliable are the input data used in the calculation, and how different would the prediction be if those values changed somewhat, within seemingly reasonable intervals? Are the data based on confirmed or suspected cases of infection, or on documented deaths? If the model was developed from a database, was it national, regional or local? Is the model intended for general use or for a specific context – and if so, are the assumptions made when the model was constructed still valid in other contexts, where population density and contact patterns may differ?
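
To make the point about long-range uncertainty concrete, here is a minimal sketch, not drawn from the NEJM article: a toy discrete-time SIR model run with two reproduction numbers that differ only slightly. The model, the values 1.3 and 1.5, the seven-day infectious period and the population size are all illustrative assumptions chosen for the example, not figures from any real forecast.

    # Toy illustration only: a simple discrete-time SIR model with made-up
    # parameters, showing how two nearly identical assumptions about the
    # reproduction number give similar short-term projections but very
    # different long-term ones.

    def projected_cases(r0, days, population=10_000_000,
                        infectious_period=7.0, initial_infected=1_000):
        """Return cumulative infections per day under a simple SIR model."""
        beta = r0 / infectious_period      # daily transmission rate
        gamma = 1.0 / infectious_period    # daily recovery rate
        s, i = population - initial_infected, float(initial_infected)
        cumulative = []
        for _ in range(days):
            new_infections = beta * s * i / population
            s -= new_infections
            i += new_infections - gamma * i
            cumulative.append(population - s)   # everyone ever infected
        return cumulative

    low, high = projected_cases(1.3, 120), projected_cases(1.5, 120)
    for day in (14, 60, 120):
        print(f"day {day:3d}:  R0=1.3 -> {low[day - 1]:12,.0f}   "
              f"R0=1.5 -> {high[day - 1]:12,.0f}")
    # With these illustrative numbers the two scenarios sit roughly a thousand
    # cases apart after two weeks, but millions apart after four months.

Even this crude sketch reproduces the pattern the authors describe: it is the forecast horizon, far more than the arithmetic, that drives the uncertainty.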

When forecasts are very uncertain, one might question just how useful the underlying predictive model is. The forecast that Midsummer will be cloudy-if-not-gloriously-sunny-but-perhaps-stormy-with-torrential-rain – can that really be helpful? Of course it can, according to the NEJM authors – as long as we recognise, understand and take into account the uncertainty and the likelihood of local differences. This type of reasoning is reminiscent of the legendary Canadian physician Sir William Osler’s description of medicine as the ‘science of uncertainty and the art of probability’.

Ragnar Levi, Editor

* Holmdahl I, et al. Wrong but useful – what covid-19 epidemiologic models can and cannot tell us. NEJM 2020; May 15. DOI: 10.1056/NEJMp2016822
