The UK government’s AI Safety Summit wraps up after hosting global leaders and big tech representatives at WWII codebreaking site Bletchley Park.
Prime Minister Rishi Sunak set the stage for the summit in a speech last week as he explained the risks society faces. He briefly mentioned biases and misuse, then went on to describe how “AI could make it easier to build chemical or biological weapons,” how “terrorist groups could use AI to spread fear and destruction on an even greater scale,” and that “humanity could lose control of AI completely, through the kind of AI sometimes referred to as ‘superintelligence’.”
As the Government positions Britain as a global leader in AI safety, what are they doing to safeguard their citizens from the harms of artificial intelligence?
The Threats Right Now
A £100 million task force, composed of researchers and academics, has been assembled to address the existential threats highlighted in last week’s warning. In his speech, Rishi Sunak sought to reassure: “This is not a risk that people need to be losing sleep over right now.”
While the task force addresses these long-term threats, what about the risks that are here right now?
Back in August, the Department for Science, Innovation and Technology (DSIT) published a white paper detailing its “pro-innovation” approach to regulation. The paper outlines a cross-cutting framework guided by common non-statutory principles, with no new legislation to be introduced.
This approach is designed to leverage existing laws. ‘Cross-cutting’ refers to rules that apply across all sectors, such as the Equality Act, a horizontal piece of legislation. Sector-specific laws, like The Financial Services and Markets Act 2000, are considered vertical. The premise is that when both horizontal and vertical intersect or overlap, they should inherently safeguard citizens when adhering to the Government’s recommended principles:
- Safety, security, and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
This approach was welcomed by tech giant Google which, giving evidence to the Select Committee for Science, Innovation and Technology in response to an earlier draft of the paper, stated: “From our experience, we believe AI regulation should be principles-based and implemented on a non-statutory basis.”
How would these principles work in practice?
Using the Government’s recommendations, researchers at the Ada Lovelace Institute worked with law firm AWO to stress-test some hypothetical yet plausible scenarios that citizens could face in the near future. These included:
- The use of an AI system to manage shifts in a workplace.
- The use of an AI system to analyse biometric data as part of a mortgage application.
- The deployment of an AI chatbot, based on a foundation model, by the Department for Work and Pensions to provide advice to benefits applicants.
The legal analysis graded the effectiveness of the safeguards on their level of harm prevention, transparency, and redress in each scenario. The results fell short overall, with most scenarios earning below-medium scores. The Ada Lovelace Institute reports that even when protective measures are in place, active enforcement is not consistent.
AWO found that cross-cutting regulation would not be feasible, because regulators lack the means to enforce it: “They do not have sufficient powers, resources, or sources of information. They have not always made full use of the powers they have.”
Watering Down GDPR
One of the most robust regulations currently in place to prevent AI harm is GDPR, which the UK adopted during its EU membership. However, this is set to change with the introduction of the Data Protection and Digital Information Bill, which aims to ‘reform’ the rules. DSIT stated that the Bill aims to “reduce pointless paperwork for businesses” and “free British businesses from unnecessary red tape.”
Some relevant changes will include:
- A lower threshold for organisations to refuse a Subject Access Request (the request by an individual to see the data an organisation holds about them).
- Removing the independence of the Information Commissioner’s Office (ICO): the Government will be able to issue instructions and intervene in the regulatory functions of the ICO.
- The Bill removes individuals’ rights not to be subjected to automated decision-making.
- Data can be transferred to other countries that do not have particularly strong data protection laws.
- The definition of ‘scientific research’, a category that has special provisions under GDPR, is expanded to include ‘commercial activity’.
Open Rights Group (ORG) said in a statement that these changes “will weaken our data protection rights and give more power to the state and corporations.” ORG policy manager Abigail Burke told The Guardian that the Bill “greatly expanded the situations in which AI and other automated decision-making was permitted and made it even more difficult to challenge or understand when it was being used.”
In his pre-summit speech last week, Rishi Sunak highlighted the benefits of AI, “In the public sector, we’re clamping down on benefit fraudsters… and using AI as a co-pilot to help clear backlogs and radically speed up paperwork.” The technology he’s referencing is a £70 million Integrated Risk and Intelligence Service used in the Department for Work and Pensions (DWP) for Universal Credit.
The DWP is not alone in being an early adopter. Several other departments have embraced the technology:
- The Home Office made use of an algorithm it called ‘visa streaming’. It was scrapped after the Joint Council for the Welfare of Immigrants brought a legal challenge asking the Court to declare the streaming algorithm unlawful.
- The Metropolitan Police have adopted live facial recognition. Cameras take images of people in public places and compare them to a database of known persons of interest. The practice was ruled unlawful in South Wales when a Court found it violated people’s right to privacy under the European Convention on Human Rights.
- In education, Ofqual, England’s exam regulator, replaced A-level exams with an algorithm that adjusted teacher-predicted grades based on each school’s historical results. This left 40% of students with lower grades than anticipated, disproportionately affecting those from less-privileged schools while benefiting those from affluent ones. The algorithm was then ditched in favour of teachers’ original predictions.
The Government has reflected on its previous use of technology. Following a 2021 report from the National Audit Office (NAO), it admitted in its digital transformation roadmap that “previous attempts at digital transformation have had mixed success.”
Gareth Davies, the head of the NAO, stated: “There is a gap between what government intends to achieve and what it delivers to citizens,” adding that this “wastes taxpayers’ money and delays improvements in public services.” He concluded: “If government is to improve its track record in delivering digital business change, it must learn the hard-won lessons of experience.”
Given that there are lessons to be learned, a safety summit dedicated to the use of artificial intelligence is a notable gesture. However, as the date drew closer, there was little transparency about who and what would be involved. Details only emerged a day before the summit, after an open letter addressed to the Prime Minister revealed the abundance of Big Tech attendees, while advocates for vulnerable people and workers were shut out.
On the day of the summit, the media were stationed in a designated centre away from the main events, and journalists were only allowed to roam the grounds under supervision.
With this level of restriction on the average person, Britons can only speculate on exactly what lessons the international guests will take back to their home nations.