Flattening the Curve of COVID Conviction
Byline Times’ Secret Scientist finds that science is being abused as much in the second wave of COVID-19 as it was in the first
I can’t be the only person experiencing déjà vu as Coronavirus cases and deaths creep up across the country, albeit at an ostensibly slower rate than in March and April, and with greater regional variation. The traditional and virtual airwaves are again brimming with the voices of commentators who claim to know exactly what is happening and what to do about it, occasionally backed up by the opinions of cherry-picked experts. They speak as if they have never been more certain of anything in their lives.
From where do they gain their confidence and certainty, I ask? The peer-reviewed science is, after all, still very much in its infancy, so it can’t be from there.
Still, pick any COVID-19-related intervention (e.g. testing, masks, social distancing, vaccines), and you will find extreme polarisation, reminiscent of the sharp Brexit divides from prior years, perhaps even sharper.
On all sides, positions appear to be so entrenched that it is as if there are no grounds for legitimate debate. From where I sit, this appears to be, in part, due to a collective unwillingness among experts and commentators alike to appreciate and reflect on the limits of scientific understanding and to acknowledge and examine the gaps in that understanding.
Data and Evidence Gaps
There are legitimate scientific knowledge gaps at the heart of many of these debates, which appear to be providing fertile ground for claims of conspiracies and cover-ups. Primarily, these gaps relate to the proportion of people who catch the virus who go on to die (i.e. the infection fatality rate, or IFR) and the number of people who are susceptible to infection.
The existing data still remain compatible with a wide range of scenarios – in part because there are significant questions that are extraordinarily difficult for science to precisely answer in real-time around the extent of asymptomatic transmission, and around both short- and long-term immune responses. There are also significant gaps in understanding of the potential long-term health consequences of the virus – with reports of poor outcomes even among those deemed to be at low risk (i.e. long COVID).
Given these uncertainties about the virus itself, it is impossible for science to precisely predict the course of the pandemic, and it is unrealistic to expect scientists to be capable of doing so. The uncertainty surrounding predictions is further exacerbated by the fact that so much of how the virus spreads depends on both spontaneous and enforced changes in human behaviour.
Science can provide, and has provided, objective and transparent evidence-based estimates of the risk to life posed by various strategies to various individuals under a range of potential IFR, susceptibility, and behavioural scenarios. Clearly, despite the uncertainty, it is preferable for decision-makers to have such estimates. The alternative would be to rely solely on opinion, with no way of objectively scrutinising or updating any underlying assumptions. The extent to which those estimates represent acceptable levels of risk to those individuals and their communities is, however, not a straightforwardly answerable scientific question.
Whilst war analogies are often used in the context of COVID-19, they fall short in that there is no pre-defined outcome for what a “win” would look like. Deaths are clearly important, but so are long-term health outcomes for infected patients, and the health and wellbeing of society at large. Nobody can legitimately claim to be an expert on all aspects of a pandemic response. Responding to a pandemic is inherently multi-disciplinary, requiring input from experts in virology, immunology, epidemiology, public health, general practice, intensive care, psychology, logistics, policy – and on and on the list goes. Importantly, expertise in one area doesn’t automatically confer expertise in another.
Good Decision Making
To add to the complexity, for maximum efficacy, policy decisions must be made at speed, on the basis of lead indicators such as rising cases, because key indicators such as hospitalisations and deaths lag weeks behind. Whatever the policy, there will be significant trade-offs, about which individual experts may legitimately and in good faith disagree with the wider consensus.
The wider consensus among evidence-based authoritative medical bodies – including the World Health Organisation, the Centers for Disease Control and Prevention, the British Medical Association, and the European Centre for Disease Prevention and Control – remains to suppress the spread of the virus using local test, trace, and isolate strategies, in combination with social distancing, and masks in settings where social distancing isn’t possible. These bodies also support more stringent measures, such as lockdowns, as a last resort when the virus is growing exponentially and there is a risk of overwhelming local health services.
Even the most ardent consensus-supporting experts are only in favour of lockdowns as a last resort, to allow test, trace, and isolate strategies to regain control. They are acutely aware that the more stringent measures have significant economic, societal, and physical and mental health costs, and are unsustainable long-term. They are, however, equally aware that there are intolerable costs to an unmitigated pandemic in which health and other public services become overwhelmed.
Consensus sceptics often fail to acknowledge that many of the apparent costs, including missed and postponed routine health appointments, are a direct consequence not of the restrictions themselves, but of limited resources and manpower having to be redirected towards dealing with acutely ill Coronavirus patients when infections spiral out of control.
Whilst some experts have argued for a strategy based on focused protection of the elderly and vulnerable, with a return to normal life for those deemed to not be at risk, authoritative bodies have yet to support this view. This is in part because of the absence of an evidence-based way to precisely identify and protect vulnerable groups without segregating them completely from society, or suppressing infection among their contacts.
Uncertainties regarding the IFR, immune responses, and long COVID are also important in this regard, with such a strategy posing a significant risk of overwhelming hospitals, intolerable numbers of deaths, and poor health outcomes. Though new evidence may emerge with time, the science simply isn’t there yet to sufficiently rule out those risks. Significant economic harm, societal disruption and civil unrest are potential costs both of a prolonged lockdown in the absence of support, and of a completely unmitigated pandemic that cripples public services.
Over-Confident Commentators and Armchair Experts
Does the weighing-in of overly-confident commentators and armchair experts with contrarian consensus-challenging views on all things COVID-19 meaningfully add to the debate? Or does it provide another source of potentially harmful misinformation that exacerbates an information environment in which there is already little signal and a lot of noise?
Given that armchair experts often base their opinions on faulty or unfounded assumptions, misunderstand basic epidemiological and public health principles, create information echo-chambers with cherry-picked experts and data, and even occasionally appear to pull numbers from thin air, I would argue the latter.
A simple scroll through Twitter or glance at the tabloids demonstrates widespread straw-manning of expert consensus, to the point where the debate has evolved into a toxic false dichotomy between saving the economy and preserving health, with commentators on all sides fancying themselves as professors of “truth”. Social media algorithms amplify the effect, as overly-confident controversial/surprising/emotive statements have a wider reach.
This approach is the very opposite of the scientific one: motivated to seek out data and experts that support rather than falsify their positions, commentators engage in confirmation bias rather than scientific falsification. This isn’t rigorously challenging the consensus in a meaningful or evidence-based manner. Rather, it is the unskilled and unaware spreading a false sense of certainty without even realising it. As such, it is the Dunning-Kruger effect in action.
With public trust and compliance both plummeting, overly confident commentators with large followings, expounding in fields far removed from their own areas of expertise, ought to urgently reflect on the uncertain assumptions underlying their convictions, and consider the impact of their contributions on their audiences. Their lack of humility risks worsening the pandemic, along with an unnecessarily long and painful road to recovery.
This is not to say that no debates should be had. Nor that no opinions should be expressed. On the contrary, debates are healthy. However, as a pre-requisite for healthy debates, commentators and experts alike must be willing to embrace and acknowledge the limits of their understanding, and they must also be willing to entertain the possibility that their deeply held convictions may turn out to be wrong.
Secret Scientist is a researcher who wishes to remain anonymous to provide Byline Times with more unfettered revelations and transparency about the practice, funding and politics of science