As Healthcare Data Breaches Mount, How Secure are Digital IDs?

Kiki Woods explores the potential flaws in plans for digital ID systems based on the supposed safe storage of our most sensitive data

The Identity Theft Resource Center (ITRC) recorded its highest-ever monthly tally of healthcare data breaches in May, with sensitive information relating to millions of patients compromised.

A total of 142 individual cyber-attacks were detected that month; the second-highest figure, 77, was recorded last September, and the third-highest, 71, in June.

In a report published last month, the ITRC revealed that identity theft had also more than tripled over the past four years, from around 400,000 cases in 2016 to 1.3 million in 2020.

A few months prior to this, the Government issued a call for feedback on its envisaged digital identity and attributes trust framework. The aim is to outsource the design of a new digital system that would provide credible identification in much the same way as passports and bank statements do.

Its call-out said: “Physical documents can be stolen, falsified or misplaced. They can be expensive to replace and their loss can lead to identity theft and fraud. This Government is committed to solving these problems digitally and without the need for a national identity card.”

Its definition of a digital identity is “a digital representation of who you are. It lets you prove who you are during interactions and transactions”.

Whenever data is collected, it opens up new vulnerabilities. The assumption underlying the trust framework is that digital records can be safer than their physical counterparts – but, as the rise in data breaches shows, they are just as susceptible to being compromised.

Digital documents can be stolen and falsified too, and have already resulted in identity theft and fraud. Nor are they protected by the limits of physical location: if you keep your passport safe in a drawer in your house, the only way for someone to obtain it is to physically break in, but digitally stored documents enjoy no such protection. Data breaches also magnify the number of people affected – it is not just one person's data being compromised, but potentially millions of people's.

Is the belief that a digital identities scheme will necessarily be safer and more secure not misplaced techno-solutionism? Where is the evidence that digitisation can offer solutions, when the evidence for the counter-claim is so plentiful?


Establishing Trust

The proposal is that this trust framework would be overseen by a governing body selected by the Government, which would then be responsible for ensuring that participating organisations follow the rules. But this overlooks the external risks posed by data breaches and cyber-attacks, which evidence shows are only increasing. The organisation holding the data may do everything in its power to keep it safe, but that is not the same as guaranteeing that it will be.

So can the framework promise that it will grant "the user confidence your digital identity and attributes will only be shared in a controlled and protected way", as it currently states? Indeed, the outline of the framework itself highlights situations in which users should be informed of the risk of identity theft – an admission that this digital solution will not be watertight.

The confidentiality of healthcare data is particularly contentious because of its sensitivity, and the track record of support available to identity theft victims is far from reassuring. As data privacy expert Latanya Sweeney, professor of the practice of government and technology at Harvard University, has shown, the combination of date of birth, postcode and gender alone is enough to uniquely identify 87% of the US population.
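
To illustrate why those three quasi-identifiers are so powerful, here is a minimal, purely hypothetical sketch in Python – the records below are invented for illustration – of how a 'pseudonymised' health record can be linked back to a named individual simply by matching date of birth, postcode and gender against a public source such as an electoral roll:

```python
# Purely illustrative: invented toy records demonstrating a linkage attack
# on "pseudonymised" data using only quasi-identifiers.
pseudonymised_health_records = [
    {"pseudonym": "P-4821", "dob": "1987-03-14",
     "postcode": "SW1A 1AA", "gender": "F", "diagnosis": "redacted"},
]

public_register = [  # e.g. an electoral roll or a purchased marketing list
    {"name": "Jane Example", "dob": "1987-03-14",
     "postcode": "SW1A 1AA", "gender": "F"},
]

QUASI_IDENTIFIERS = ("dob", "postcode", "gender")

def linkage_key(record):
    # Build a matching key from the three quasi-identifiers alone.
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

# Index the public register, then look up each 'anonymous' health record.
index = {linkage_key(person): person["name"] for person in public_register}
for record in pseudonymised_health_records:
    name = index.get(linkage_key(record))
    if name:
        print(f"{record['pseudonym']} re-identified as {name}")
```

No hacking or decryption is involved: the link is made entirely from attributes that each dataset holds legitimately, which is why pseudonymisation alone is not anonymity.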

In recent research published by the ITRC, 83% of victims reported being turned down for credit or loans or being unable to rent a flat or access housing, and 69% were denied unemployment benefits. Of those who experienced identity theft related to COVID-19 in 2020, 75% said that their issues were still unresolved as of April this year – an illustration of the inadequate support available to victims in the aftermath of data breaches.

The examples given of how the digital identities scheme might work include the exchange of biometric information for a QR code that then grants a young female nightclub-goer access to an event. The question this raises: is access to a nightclub a fair exchange for the new vulnerabilities that biometric authentication creates? Other examples involve weightier transactions such as accessing loans, getting a mortgage and opening a bank account.

Yet, with higher stakes come bigger opportunities for those behind cyber-attacks. As the current evidence shows, people whose identity data has been compromised have faced severe consequences for the economic, health-related and housing opportunities available to them – consequences that, in the majority of cases, remain long-term and unresolved.

Until the support available following data breaches properly protects our individual and collective wellbeing, is it wise to create more opportunities to undermine the safety of our most sensitive data? Do the temptations of techno-solutionism and claims to enhanced security stand up to scrutiny?

There are also concerns about the financial incentives behind creating digital ID schemes. Elizabeth M Renieris, founding director of the Notre Dame-IBM Technology Ethics Lab, points out that "one thing that many people don't realise is that many Digital ID schemes are 'pay-per-verification', meaning it's profitable to require/encourage use of ID credentials even where unnecessary". Here, the potential for profit could skew any considerations around user safety.

In a Taskforce on Artificial Intelligence hearing titled 'Verifying Identity while Preserving Privacy in the Digital Age', held before the US House of Representatives Committee on Financial Services last month, she said:

“Depending on the business model and the commercial incentives, this could create perverse incentives for the use of ID perhaps in contexts where it’s not necessary or it didn’t exist before… When we start to normalise things like biometrics, we start to normalise presenting it in contexts where perhaps it shouldn’t be appropriate or required…

“When we just think about the security and privacy of data, we lose sight of the security and privacy of people, and those are two very different things.

“The technologists designing and building these systems have a very narrow definition of privacy, which is really a technical, mathematical view of it. So we have to reset this to mean identity in the context of this socio-technical system, in the context of culture, law and economics and to think about what the true impact will be on people… Bad actors are always going to be able to outsmart the state-of-the-art technology and so the only way to get ahead of this is to think about how these technologies operate broadly in socio-technical systems.”

When it comes to keeping our most sensitive data secure and private, we have already seen companies back-track on promises.

Consider, for instance, the construction of the COVID-19 patient data store, which has consistently been framed as using only "anonymised" data. A Government blog published last March originally stated that "all the data in the data store is anonymous, subject to strict controls", promising that "individuals cannot be re-identified". However, after Freedom of Information requests by openDemocracy and Foxglove, these claims were quickly removed and the blog now states instead that "all the data held in the platform is subject to strict controls that meet the requirements of data protection legislation".

The promise of anonymity was hastily downgraded to pseudonymisation, and yet false assurances of anonymity have continued to be used elsewhere.

Appearing before MPs in May, the Prime Minister’s former chief advisor Dominic Cummings repeatedly argued that this data was anonymised and consistently praised the ethics of Faculty – the core AI company involved in the data store and one that the Government has since been positioning as an ‘online safety provider’. In February, Faculty was awarded a £2.4 million contract to find justifications for how increased data-sharing – antithetical to user privacy – can be used in the name of safety. The vulnerabilities that the public are exposed to from data-gathering and sharing are strategically overlooked.

Faculty still maintains on its website – as of July 2021 – that “assigning a pseudonym to each patient… [makes] it impossible to uncover the patient’s true identity”. This is a company that, prior to 2020, had no track record in the NHS other than a small-scale project managing employee time in a clinic in the East Midlands. Its past experience does, however, include working for Cummings’ Vote Leave group during the 2016 EU Referendum campaign.

The COVID-19 data store contracts published last June include no guarantees of data anonymity, revealing ‘anonymity’ to be a premise that is widely circulated in the PR around such projects but one that doesn’t have any legal, contractual basis.

All of this being said, perhaps there are alternative routes to secure digital IDs that reduce the risk of data breaches.


Decentralisation

The Surveillance Technology Oversight Project (STOP) in New York suggests that a digital ID could be implemented using cryptographic keys, enabling authentication without users having to enrol in surveillance databases. This would not require any biometric data to be submitted, but STOP adds the caveat that "any system that makes it possible to conclusively prove your identity at any time, such as a universal ID… is a mass surveillance tool".
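
As a rough sketch of this idea – not STOP's actual design – the following Python snippet, which assumes the widely used cryptography package is installed, shows challenge-response authentication with an Ed25519 key pair: the verifier stores only a public key, the secret never leaves the user's device, and no biometric or central enrolment database is involved:

```python
# Illustrative sketch only: challenge-response login with a device-held key.
# Assumes the 'cryptography' package (pip install cryptography).
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On the user's device: generate a key pair once; only the public key
# is ever shared with the verifying service.
device_private_key = Ed25519PrivateKey.generate()
registered_public_key = device_private_key.public_key()

# At login time, the verifier sends a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it locally; the private key never leaves the device...
signature = device_private_key.sign(challenge)

# ...and the verifier checks the signature against the stored public key.
try:
    registered_public_key.verify(signature, challenge)
    print("authenticated - no password, biometric or central database involved")
except InvalidSignature:
    print("authentication failed")
```

The trade-off is that the device and its private key become the things that must be protected: lose or compromise either and the identity goes with it.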

Decentralised identity systems, such as those trialled in Estonia and Singapore, explore how data can be stored on an individual's device as a 'self-sovereign identity' rather than on a centralised system. This does not remove the risk of hacking, but the fallout from any single compromise is far smaller than that of a breach of a centralised data store.

Systems based on 'zero knowledge proofs' – a cryptographic security protocol – can enable users to prove something without revealing the thing itself. Such systems still rely on decryption and a private key that needs protecting, which could be the weak link in the process. In a real-world application, the cryptocurrency Zcash, which uses zero knowledge proofs, destroyed the private keys and the computers that processed them to minimise the chances of the system being compromised. Here, the original data is not stored; instead, users need to trust that it is discarded properly after the initial set-up of the system – and that is where the vulnerability lies.
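
To give a flavour of how 'proving something without revealing it' works, here is a toy, textbook Schnorr identification protocol in Python – illustrative parameters only, and not how Zcash or any production system is actually built. The prover convinces the verifier that they know the secret x behind a public value y = g^x mod p, without ever transmitting x:

```python
# Toy Schnorr identification protocol: prove knowledge of x where y = g^x mod p
# without revealing x. Illustrative parameters only; real systems use
# standardised groups and non-interactive (hashed) challenges.
import secrets

p = 2**127 - 1                    # a Mersenne prime, fine for a toy demo
g = 3
x = secrets.randbelow(p - 1)      # the secret (e.g. a private credential)
y = pow(g, x, p)                  # the public value the verifier holds

# Prover: commit to a random nonce.
r = secrets.randbelow(p - 1)
t = pow(g, r, p)

# Verifier: issue a random challenge.
c = secrets.randbelow(p - 1)

# Prover: respond without revealing x.
s = (r + c * x) % (p - 1)

# Verifier: accept iff g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted: the verifier learns nothing about x beyond y = g^x")
```

The exchange itself leaks nothing about x; as noted above, the residual risk sits around the protocol – in how the secret and any set-up material are generated, stored and eventually destroyed.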

None of these digital options can promise complete security. And yet, the narrative remains – beneficial to tech companies – that digital systems can be used to solve problems of theft and fraud and increase user safety. The fact that they can open up new doors to theft, fraud and harm is conveniently downplayed.

What is clear is that the support and protection for those who have had their data breached are not in place and that, until they are, any system that opens people up to new vulnerabilities – exacerbating social inequalities and restricting access to essential services – flies in the face of civic interests.

The temptations of profit motives and commercial agendas are pushing concerns about privacy to the sidelines. In response, we need to be continually questioning the necessity of these systems – who they benefit, how they can be manipulated and abused, how they can intensify power inequalities, and how they undermine instead of enhance safety.

