
What Can the UK Teach the World About AI Safety?

For all the PR of the AI Safety Summit, what is the UK Government actually doing to safeguard its citizens from the dangers of AI, data misuse and prejudicial algorithms?

Prime Minister Rishi Sunak and Elon Musk, CEO of Tesla, SpaceX and X.Com, in conversation at the conclusion of the second day of the AI Safety Summit on the safe use of artificial intelligence. Photo: PA Images/Alamy


The UK Government’s AI Safety Summit has wrapped up after hosting global leaders and Big Tech representatives at Bletchley Park, the WWII codebreaking site.

Prime Minister Rishi Sunak set the stage for the summit in a speech last week, explaining the risks society faces. He briefly mentioned biases and misuse before going on to describe how “AI could make it easier to build chemical or biological weapons,” how “terrorist groups could use AI to spread fear and destruction on an even greater scale,” and how “humanity could lose control of AI completely, through the kind of AI sometimes referred to as ‘superintelligence’.”

As the Government positions Britain as a global leader in AI safety, what is it actually doing to safeguard citizens from the harms of artificial intelligence?


The Threats Right Now

A £100 million task force, composed of researchers and academics, has been assembled to address the existential threats highlighted in last week’s speech. In that speech, however, Sunak was keen to clarify: “This is not a risk that people need to be losing sleep over right now.”

While the task force attends to these longer-term threats, what about the risks that are here right now?

Back in August, the Department for Science, Innovation and Technology (DSIT) published a white paper detailing its “pro-innovation” approach to regulation. The paper outlines a cross-cutting framework guided by common, non-statutory principles, meaning no new legislation will be introduced.

This approach is designed to leverage existing laws. ‘Cross-cutting’ refers to rules that apply across all sectors, such as the Equality Act, a horizontal piece of legislation. Sector-specific laws, like the Financial Services and Markets Act 2000, are considered vertical. The premise is that where horizontal and vertical laws intersect or overlap, they should inherently safeguard citizens, provided regulators adhere to the Government’s five recommended principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

This approach was welcomed by tech giant Google which, giving evidence to the Select Committee for Science, Innovation and Technology on an earlier draft of the paper, stated: “From our experience, we believe AI regulation should be principles-based and implemented on a non-statutory basis.”


How would these principles work in practice?

Using the Government’s recommendations, researchers at the Ada Lovelace Institute worked with law firm AWO to stress-test hypothetical yet probable scenarios that citizens will face in the near future.

The legal analysis graded the effectiveness of the safeguards in each scenario on harm prevention, transparency, and redress. Overall, the results fell short, with most scenarios scoring below medium. The Ada Lovelace Institute reports that even where protective measures are in place, enforcement is inconsistent.

AWO found that cross-cutting regulation would not be feasible, because regulators lack the means to enforce it: “They do not have sufficient powers, resources, or sources of information. They have not always made full use of the powers they have.”


Watering Down GDPR

One of the most robust regulations currently in place to prevent AI harms is the GDPR, which the UK adopted during its EU membership. This is set to change with the introduction of the Data Protection and Digital Information Bill, which aims to ‘reform’ the rules. According to DSIT, “the Bill aims to reduce pointless paperwork for businesses” and will “free British businesses from unnecessary red tape.”

The Bill makes several changes relevant to AI.

Open Rights Group (ORG) said in a statement that these changes “will weaken our data protection rights and give more power to the state and corporations.” ORG policy manager Abigail Burke told the Guardian that the Bill “greatly expanded the situations in which AI and other automated decision-making was permitted and made it even more difficult to challenge or understand when it was being used.”


In his pre-summit speech last week, Rishi Sunak highlighted the benefits of AI: “In the public sector, we’re clamping down on benefit fraudsters… and using AI as a co-pilot to help clear backlogs and radically speed up paperwork.” The technology he was referencing is a £70 million Integrated Risk and Intelligence Service used by the Department for Work and Pensions (DWP) for Universal Credit.

The DWP is not the only early adopter; several other Government departments have also embraced the technology.

The Government has reflected on its previous use of technology. Following a 2021 report from the National Audit Office (NAO), it admitted in its digital transformation roadmap that “previous attempts at digital transformation have had mixed success.”


Gareth Davies, the head of the NAO, stated: “There is a gap between what government intends to achieve and what it delivers to citizens,” adding that this gap “wastes taxpayers’ money and delays improvements in public services.” He concluded: “If government is to improve its track record in delivering digital business change, it must learn the hard-won lessons of experience.”

Given that there are lessons to be learned, a safety summit dedicated to the use of artificial intelligence is a notable gesture. However, as the date drew closer, there was little transparency about what, and who, would be involved. Details only emerged a day before the summit, after an open letter addressed to the Prime Minister revealed the abundance of Big Tech attendees, while advocates for workers and the vulnerable were shut out.

On the day of the summit, the media were stationed in a designated centre away from the main events, and journalists were only allowed to roam the grounds under supervision.

With this level of restriction on the public, Britons can only speculate on exactly what lessons the international guests will take back to their home nations.

