Coding that led to lockdown was 'totally unreliable' and a 'buggy mess', say experts

Neil Ferguson

The Covid-19 modelling that sent Britain into lockdown, shutting the economy and leaving millions unemployed, has been slammed by a series of experts.

Professor Neil Ferguson's computer coding was derided as “totally unreliable” by leading figures, who warned it was “something you wouldn’t stake your life on”.

The model, credited with forcing the Government to make a U-turn and introduce a nationwide lockdown, is a “buggy mess that looks more like a bowl of angel hair pasta than a finely tuned piece of programming”, says David Richards, co-founder of British data technology company WANdisco.

“In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.”

The comments are likely to reignite a row over whether the UK was right to send the public into lockdown, with conflicting scientific models suggesting that people may already have acquired substantial herd immunity and that Covid-19 may have reached Britain earlier than first thought. Scientists have also been split over the fatality rate of Covid-19, which has resulted in vastly different models.

Up until now, though, significant weight has been attached to Imperial's model, which placed the fatality rate higher than others and predicted that 510,000 people in the UK could die without a lockdown.

It was said to have prompted a dramatic change in policy from the Government, causing businesses, schools and restaurants to be shuttered immediately in March. The Bank of England has predicted that the economy could take a year to return to normal, after facing its worst recession for more than three centuries.

The Imperial model works by using code to simulate transport links, population size, social networks and healthcare provisions to predict how coronavirus would spread. However, questions have since emerged over whether the model is accurate, after researchers released the code behind it, which in its original form was “thousands of lines” developed over more than 13 years.
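The mechanics of that kind of simulation can be illustrated with a toy sketch. This is purely hypothetical and far simpler than Imperial's actual code, which also models transport links, households and healthcare provision; it only shows the general shape of an individual-based epidemic simulation:

```python
import random

def simulate_outbreak(pop_size, contacts_per_day, p_transmit, days, seed):
    """Toy individual-based epidemic: each infected person meets random
    contacts daily and may transmit the virus. Illustrative only."""
    rng = random.Random(seed)     # fixed seed makes the run reproducible
    infected = {0}                # start with a single case
    recovered = set()
    for _ in range(days):
        newly_infected = set()
        for _person in infected:
            for _ in range(contacts_per_day):
                contact = rng.randrange(pop_size)
                if contact not in infected and contact not in recovered:
                    if rng.random() < p_transmit:
                        newly_infected.add(contact)
        recovered |= infected     # crude assumption: recovery after one step
        infected = newly_infected
    return len(recovered | infected)

print(simulate_outbreak(10_000, 5, 0.1, 30, seed=42))
```

All the parameter names and values here are invented for illustration. The relevant point is the `seed` argument: with it fixed, two runs of a well-behaved stochastic simulation must agree exactly.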

In its initial form, developers claimed the code had been unreadable, with some parts looking “like they were machine translated from Fortran”, an old coding language, according to John Carmack, an American developer, who helped clean up the code before it was published online. Yet, the problems appear to go much deeper than messy coding.

Neil Ferguson timeline

Many have claimed that it is almost impossible to reproduce the same results from the same data using the same code. Scientists from the University of Edinburgh reported such an issue, saying they got different results when they used different machines and, in some cases, even when they used the same machines.

“There appears to be a bug in either the creation or re-use of the network file. If we attempt two completely identical runs, only varying in that the second should use the network file produced by the first, the results are quite different,” the Edinburgh researchers wrote on the project's GitHub page.

After a discussion with one of the GitHub developers, a fix was later provided. This is said to be one of a number of bugs discovered in the system. The developers explained this by saying that the model is “stochastic” and that “multiple runs with different seeds should be undertaken to see average behaviour”.
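The developers' distinction is worth unpacking: a stochastic model is expected to vary between seeds, but a given seed must always reproduce the same output. A hypothetical sketch (the run function here is a stand-in, not the Imperial code) shows both properties:

```python
import random
import statistics

def stochastic_run(seed):
    """Stand-in for one model run: draws a final outbreak size from
    1,000 independent 30pc-probability infection events."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.3 for _ in range(1000))

# Reproducibility: identical seeds must give identical results --
# the basic test the Edinburgh researchers say the code failed.
assert stochastic_run(1) == stochastic_run(1)

# Stochasticity: different seeds vary, so average over many runs,
# as the developers recommend.
results = [stochastic_run(s) for s in range(100)]
print(statistics.mean(results))
```

Seed-to-seed variation is legitimate; same-seed variation, as reported, is a bug.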

However, it has prompted questions from specialists, who say “models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters...otherwise, there is simply no way of knowing whether they will be reliable.”

It comes amid a wider debate over whether the Government should have relied more heavily on numerous models before making policy decisions.

Writing for telegraph.co.uk, Sir Nigel Shadbolt, Principal of Jesus College, Oxford, said that “having a diverse variety of models, particularly those that enable policymakers to explore predictions under different assumptions, and with different interventions, is incredibly powerful”.

Like the Imperial code, a rival model by Professor Sunetra Gupta at Oxford University works on a so-called “SIR approach”, in which the population is divided into those who are susceptible, infected and recovered. However, while Gupta assumed that 0.1pc of people infected with coronavirus would die, Ferguson put that figure at 0.9pc.
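How much that single assumption matters can be seen in a textbook deterministic SIR model. This is an illustrative sketch only, not either group's actual model, and the R0 value and infectious period are assumptions chosen for the example; projected deaths scale directly with the infection fatality rate (IFR):

```python
def sir_deaths(pop, r0, ifr, days=365, infectious_period=7.0, dt=0.1):
    """Textbook SIR (susceptible-infected-recovered) model integrated
    with Euler steps; deaths = IFR x everyone ever infected."""
    gamma = 1.0 / infectious_period    # recovery rate
    beta = r0 * gamma                  # transmission rate
    s, i, r = pop - 1.0, 1.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / pop * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return ifr * (pop - s)             # pop - s = everyone ever infected

uk = 66_000_000
print(f"IFR 0.1pc: {sir_deaths(uk, r0=2.4, ifr=0.001):,.0f} deaths")
print(f"IFR 0.9pc: {sir_deaths(uk, r0=2.4, ifr=0.009):,.0f} deaths")
```

Because the epidemic's trajectory in this simple model does not depend on the IFR, a ninefold difference in the assumed fatality rate translates into exactly ninefold more projected deaths.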

That led to a dramatic reversal in government policy from attempting to build “herd immunity” to a full-on lockdown. Experts remain baffled as to why the government appeared to dismiss other models.

Imperial vs Oxford coronavirus studies

“We’d be up in arms if weather forecasting was based on a single set of results from a single model and missed taking that umbrella when it rained,” says Michael Bonsall, Professor of Mathematical Biology at Oxford University.

Particular concerns have been raised over Ferguson’s model, with Konstantin Boudnik, vice-president of architecture at WANdisco, saying his track record in modelling does not inspire confidence.

In the early 2000s, Ferguson’s models incorrectly predicted up to 136,000 deaths from mad cow disease, 200 million from bird flu and 65,000 from swine flu.

“The facts from the early 2000s are just yet another confirmation that their modeling approach was flawed to the core,” says Dr Boudnik. “We don't know for sure if the same model/code was used, but we clearly see their methodology wasn't rigorous then and surely hasn't improved now.”

A spokesperson for the Imperial College COVID19 Response Team said: “The UK Government has never relied on a single disease model to inform decision-making. As has been repeatedly stated, decision-making around lockdown was based on a consensus view of the scientific evidence, including several modelling studies by different academic groups.

“Multiple groups using different models concluded that the pandemic would overwhelm the NHS and cause unacceptably high mortality in the absence of extreme social distancing measures. Within the Imperial research team we use several models of differing levels of complexity, all of which produce consistent results. We are working with a number of legitimate academic groups and technology companies to develop, test and further document the simulation code referred to. However, we reject the partisan reviews of a few clearly ideologically motivated commentators.

“Epidemiology is not a branch of computer science and the conclusions around lockdown rely not on any mathematical model but on the scientific consensus that COVID-19 is a highly transmissible virus with an infection fatality ratio exceeding 0.5pc in the UK.”