Will the Coronavirus Pandemic Obliterate the Last Frontier in Privacy – Our Biological Selves?
COVID-19 is accelerating the attempts of big tech companies to harvest our data, writes Tanya O’Carroll
In March, Prime Minister Boris Johnson held a meeting with healthcare start-ups, big tech firms, and major healthcare players to discuss how they could help tackle the pandemic. The Government then quietly granted access to millions of UK health data records to Amazon, Microsoft, and Google – plus controversial data-mining firm Palantir – to build a COVID-19 Datastore, aggregating data from multiple sources, including testing data.
On the other end of the spectrum, start-ups such as EverlyWell, Let's Get Checked and CircleDNA, which sell home testing kits for things like genetics and blood diagnostics, have rushed new COVID-19 testing kits to market.
COVID-19 presents an unprecedented opportunity for tech companies to get their hands on health data. The fact that health data is governed by strict data privacy regimes in Europe and the US has long been a frustration for those who want to cash in on the sector, as it takes gargantuan datasets to train the kinds of Artificial Intelligence (AI) models – essentially sophisticated computer systems and algorithms that can process the data in useful ways – that can be monetised.
One way private ventures can gain access to health data is by partnering with governments. Even if they cannot walk away with direct patient records, they can walk away with the lucrative AI models built from those records. This helps to explain why Palantir, whose contracts frequently run to millions of dollars, agreed to assist the Government's COVID-19 response for just £1.
Another way that companies can access data is by building up their own private vaults of health data directly from consumers. Google knew this when it moved to acquire Fitbit for $2.1 billion at the end of 2019, and Fitbit's CEO also knew it when he said, "ultimately Fitbit is going to be about the data."
The problem is that once health data is on the market, it can be used in all kinds of ways that could never have been understood or predicted when someone ticked a 'consent' box. Advertisers, including pharmaceutical companies, can use AI models based on genetics to target people flagged as higher risk for specific health conditions – despite the science behind such 'predictions' being shaky at best. Insurance companies are already using the insights generated by big data to determine who gets coverage and at what price. Meanwhile, privately-held genetic data has already been used by law enforcement without the awareness or consent of those whose data was shared.
Furthermore, research has shown time and again that the algorithms built from this data encode biases along racial, gender and socio-economic lines. For example, last year researchers found that a widely-used algorithm from leading US healthcare company Optum was systematically underestimating the needs of the sickest black patients, amplifying long-standing racial inequities in healthcare.
Once in private hands, health data can easily be linked to the vast troves of other data that exist about us: our social media data, our purchasing history, our search results. Even when they pledge not to share or link data, tech companies do not have a strong track record of keeping their promises. For example, Google's DeepMind partnered with the NHS, promising that "data will never be connected to Google accounts or services" – only to later turn the app they built using NHS data into a Google product.
These are the mechanics of surveillance capitalism at work, whereby digital services are designed to mine as much data as possible about us in order to predict future behaviours, influence people at scale and, ultimately, sell services.
Last year, Amnesty International warned this “surveillance-based business model” poses an unprecedented threat to human rights, forcing people to give up their intimate data – and their rights – in order to access the benefits of the modern world.
As companies scramble to create an even more intimate marketplace of our data – one that trades in insights about our biological selves – you don't need to strain hard to imagine the end game. Start-ups like China-based iCarbonX, dubbed "the next Google in BioTech", have painted that vision for us. iCarbonX reportedly wants to combine machine learning with more data about your body than has ever been possible before – genetic sequencing, data from frequent blood tests, microbiome insights and physical data from wearable fitness devices and other products.
Even their smart mirror aims to produce “an exact 3-D figure of you: the fat, the muscle—your entire body shape, plus facial recognition, and what’s going on with your skin”. The main product currently advertised on the company’s website? A COVID-19 testing kit.
People need to be able to trust that when they take a COVID-19 test – be it provided through the NHS, their employer or a private company – their data is protected and won't be bartered to the highest bidder as part of the COVID-19 gold rush. Data protection might not feel like a priority in a crisis, but letting big tech and start-ups into the hen house on health data could prove one of the pandemic's long-term symptoms, causing problems for years to come.
Tanya O’Carroll is the director of Amnesty Tech. A version of this article first appeared in Newsweek.