Rishi Sunak was positively beaming last week as world leaders and tech giants descended on Bletchley Park for the UK’s first Artificial Intelligence Safety Summit. But while the event may appear to have been a political success, there’s very little substance once you scratch the surface.
The culmination of the event was the signing of the Bletchley Declaration. Described as a “landmark achievement”, the agreement signals intent for global cooperation in addressing the risks posed by AI and has been signed by representatives from the European Union and 28 countries, including China.
Given the UK’s shrinking global influence, getting ink on paper for any UK-led agreement in an advancing field is a political feat that the Prime Minister will be pleased to tuck under his belt. But despite the fanfare, the UK is not, nor will it be, the global AI leader Sunak hopes for.
The summit declaration is a non-binding agreement between the signatories to cooperate on research to ensure the technology develops in a way that is “human-centric, trustworthy and responsible”, with France and Korea set to host further summits next year.
An agreement to cooperate is a far cry from delivering effective, binding, global commitments to safeguard against what the declaration describes as “catastrophic” risks to humanity. We may have a signal of intent for global cooperation, but there is little agreement over what shape a global set of regulations might take and, importantly, who would action them.
In tandem with the Bletchley Declaration, the UK announced a new AI Safety Institute set to risk-test new types of frontier AI before and after their release to assess harmful capabilities. But while the UK Government described this as a world first, the United States beat Sunak to the punch with the announcement of an executive order on AI two days earlier. President Joe Biden’s announcement did mention the UK – under the title of ‘advancing American leadership abroad’.
US Vice-President Kamala Harris’ statement announcing a US AI Safety Institute at the UK summit continued to make clear that the US won’t be playing second fiddle: “Let us be clear: when it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way that no other country can.”
The two institutes will cooperate on shared objectives. Both will test and examine AI risks in terms of bias, discrimination, and the potential impact on jobs.
However, neither the US nor the UK has yet implemented legislative safeguards equivalent to the European Union’s AI Act, which has been working its way through the EU’s legislative process since 2021. Having already passed successive parliamentary votes, the EU AI Act stands to be the first actionable legal guardrail to risk-assess AI technology, and it is expected to clear the final hurdle of trilogue negotiations in December.
At the summit, Rishi Sunak took an arguably unrealistic view of AI’s potential impact on the labour market, stating that “people should not be worried about the impact of AI on jobs because education reforms will boost skills”. Meanwhile, a Goldman Sachs report published last month estimates that AI could replace or diminish 300 million jobs.
Speaking at the summit, UN Secretary General António Guterres expressed his concerns over potential job losses, as well as other negative impacts of AI technology, adding that “this is not a risk, it’s a reality”.
Of AI’s innumerable applications and iterations, automation potentially poses the most pressing threat to job security, with bank tellers, checkout operatives, bookkeepers, and call centre staff at risk of being replaced or seeing their work opportunities reduced significantly. In July, India-based start-up Dukaan fired more than 90% of its support staff, replacing them with AI bots.
AI technology poses substantial danger to a plethora of creative fields as well – everything from journalism to architecture. As a result, many facets of the creative industry stand to be completely reassessed, placing artists and creatives at serious risk of seeing their work replaced or replicated by AI programmes. It is not, as Sunak suggested, nothing to worry about.
Elon Musk – who was interviewed by Sunak in a surprising move at the event – described the advancement of AI as “one of the biggest threats to humanity”.
Musk is one of several prominent tech figures behind popular AI tools who have been vocal about the wide-ranging threats posed by AI – from inflaming geopolitical tensions and deepening inequalities to wiping out sections of the labour market and swaying the outcome of democratic elections.
The threat that AI poses to democracy cannot be overstated and is already rearing its head, from the more innocuous viral trend of AI audio parodies of US presidents playing video games, to the Republican National Committee’s entirely AI-generated political ad featuring images of various imagined disasters taking place if Biden were re-elected.
Video deepfakes, AI-generated images, and gargantuan levels of disinformation across social media platforms could significantly impact the results on polling day.
With the UK, EU, Ireland, Canada, and the US all slated to hold elections in 2024, we are all liable to witness a slew of uniquely modern, dystopian phenomena capable of upending political landscapes without people necessarily even noticing.
In short, despite the headlines and handshakes, we are no better prepared for the surging tidal wave of AI technology than we were before the summit. Tackling the risks to humanity will require global cooperation and the implementation of legislative policies, but advancing from non-binding commitments to enforceable global agreements can take decades.
Take climate as an example. The first efforts to organise a global response can be traced back to the 1992 Earth Summit. It would be more than two decades before the Paris Agreement was adopted in 2015 and entered into force in 2016. The treaty has 195 signatories, yet, despite being legally binding, numerous countries – including all G20 nations – are under-performing on implementation. So how valuable is a non-binding agreement to cooperate?
If developing a global framework for AI takes anywhere near as long, we are not only at risk, but doomed to descend into any one of the myriad dystopian sci-fi tales of AI overlords we as a society have enjoyed from the safety of our living rooms and theatres. Fantasy may yet become reality.