The Government is hosting a high-profile, much advertised AI Safety Summit in November. Among those rumoured to be attending are King Charles, alongside Justin Trudeau, Ursula von der Leyen, Emmanuel Macron, and Kamala Harris.
But beyond media speculation and the bare-bones programme summary (released just last week), there is no clarity on the list of attendees, exact agenda, expected policy outcomes – and seemingly no way to find out.
The Government claims that the summit “will bring together key countries, as well as leading technology organisations, academia and civil society to inform rapid national and international action at the frontier of artificial intelligence”.
In reality, the summit seems to have been hijacked by Big Tech players, and there is no transparency around the tech industry's role and influence in the summit. Efforts by The Citizens – a non-profit working in the space of democracy, data rights and disinformation – to find out have borne no fruit.
“Following the organisers of the summit and their statements, it has been clear that it would be government officials and the top representatives from AI companies [at the summit],” said Abigail Burke, policy manager for data rights and privacy at Open Rights Group, a UK-based data rights organisation.
Profiteers Dominate
The Government has not made the list of 100 attendees it has chosen for the summit publicly available. And our efforts to extract the list from the Department of Science, Innovation and Technology (DSIT), for a supposedly landmark global event happening in two weeks, have led us nowhere.
"As is entirely normal for summits of this nature, we do not confirm attendees this far in advance" is all we have been told after several attempts to follow up with DSIT.
According to media reports, DeepMind and Anthropic have confirmed their participation in the summit. DeepMind is Google's AI lab, and Anthropic is an American AI company building Claude, a chatbot rival to ChatGPT. Along with them, OpenAI and even the very controversial Palantir seem to have a seat at the table.
Palantir is the US tech firm that is set to win the £480 million contract with the NHS to build a federated data platform, which has already caused much uproar as campaigners draw attention to the company's history of shady surveillance.
And if you are wondering which other Big Tech companies are involved besides Google: Amazon is about to invest billions in Anthropic, and Microsoft is already investing $10 billion in OpenAI.
So, while the Government has cosied up with these AI labs and associated Big Tech companies — placing tech executives at the table with world leaders at this summit, and inviting DeepMind, Anthropic, OpenAI to work with the Government’s newly created Frontier AI Taskforce — civil society presence is limited at best.
As we understand from the press releases and meagre guidance published online, while there is some planned civil society involvement on day one of the summit, alongside technology sector leaders, day two will be a meeting with the Prime Minister and only involve “a small group of governments, companies and experts”. This is a key debate with the potential to influence international policy and, as far as we can tell, civil society won’t be included.
But alongside the opacity surrounding the attendees, the objectives outlined for the summit are also vague. The Government has provided no explanation as to why existential risks in relation to "biosecurity" and "cybersecurity" have been made the focus.
Skewed Topics
In contrast, more immediate risks surrounding the information ecosystem (such as the threat of misinformation factories and deepfakes), the integrity of democratic systems, and ongoing concerns about privacy and safety of citizens in the context of AI-driven technology (for example, concerns regarding the blanket use of AI-powered facial recognition in policing) – all seem to take second billing, if they feature at all.
Elaborating on her concerns about the existential, long-termist tone of this summit, Burke said that tech companies "create and profit from [AI] systems". So with their overpowering involvement at the summit, "not only would the discussion be skewed, but they also have a really vested interest in punting the discussions about AI risks and harms to the theoretical future rather than looking at risks that are happening here and now".
“I think the inclusion of [Big Tech] companies and the lack of transparency about the full list, as well as the fact that we know that a lot of grassroots groups and activists have not been invited, speaks volumes about whose voices the government thinks matter and what harms they actually care about,” she added.
Besides the lack of consideration of immediate risks from AI systems, what is also absent from the Government agenda is discussions on the monopolistic nature of commercial AI.
As a report by the AI Now Institute highlights, computational power, including access to "state-of-the-art (SOTA) chips", is key to building large-scale AI. And only a handful of Big Tech companies can muster such compute. This means the same few Big Tech firms continue to hold the upper hand, and AI labs that wish to enter the race are getting sucked into this Big Tech matrix.
This is the trend we have seen with Anthropic, a "public benefit corporation" built by those who initially broke away from OpenAI in an anti-corporate move, which is now partnering with Amazon and accepting billions in investment from it.
The sheer amount of compute power required, and the very few (big) companies that can facilitate that — not just Big Tech, but also companies like NVIDIA that produce chips that power AI systems — mean we need proper antitrust laws and enforcement to ensure competition and prevent market consolidation in the AI sector before it is too late. But none of these important discussions actually figure in this global summit on AI safety.
"Whilst talking up distant global risks, the Government has not yet committed to bringing in the AI regulation needed to protect people in the UK right now, something Parliament's own cross-party Science, Innovation and Technology Committee are calling for," said Jonathan Smith, advocacy and campaigns director for Connected By Data, a campaign that is trying to bolster community voices in the data and AI debate.
Closed-Doors Taskforce
Outside of this summit, there is no transparency surrounding the workings of the Frontier AI Taskforce put together by Rishi Sunak. While the Government has published the list of people working in the taskforce, it is providing no meaningful information on who is setting the agenda for this group and what such an agenda is based on.
In April, the Government announced that the taskforce is getting £100 million in initial funding. But the announcement was rather vague on what this money will be used for, stating that funding will go towards “foundation model infrastructure and public service procurement, to create opportunities for domestic innovation”. So one could conclude the taskforce will focus on the deployment of AI in public services, and funding would be used for creating AI products.
But the more recent announcement about the Taskforce joining forces with OpenAI, DeepMind and Anthropic sits somewhat at odds with the previous funding announcement.
This is because Sunak now claims that the £100 million in funding is for “AI safety”, with an undefined collaboration with AI companies for “priority access to models for research and safety purposes.” So, on the whole, there is no clarity on what exactly this money is for. And this is on top of the £900 million investment they already announced in the budget for AI “compute technology.”
The funding saga does not stop there. The Government also announced another £100 million in funding for chips that is to go directly to "NVIDIA, AMD, and Intel", all of which are very large companies. And, very probably, much of it will go to NVIDIA alone, given its monopoly on supplying chips for AI systems.
It is also worth noting that, while the Government continues to advertise and put money behind a rather opaque Frontier AI Taskforce — bent on pursuing an ambiguous existential harms beat — last month it quietly dissolved the Centre for Data Ethics and Innovation's advisory board. This was a body of independent advisers who were to look into the more immediate and practical harms from the use of AI by public bodies.
It’s a Lockout
So while millions in taxpayer money are being poured into a few big companies through artificial intelligence deals, with no transparency on what kind of research or innovation this funding is supposed to fuel, the public and civil society have no say or space to question whether any of this is in the public interest.
When it comes to the summit itself, it is not open to the public. There appears to be very little civil society presence. And the press is expected to be left in a room on its own, with the press invite telling us that: “There will be some pooled media access… however talks will mostly be private.”
All this clearly points to how the UK's AI policy making is happening behind closed doors, with Big Tech holding not only the most seats but also the most important seats at the table. This has huge implications for the public, not just because taxpayer money is funding private interests, but also because it is this very public's data which is being used, without consent or compensation, by AI companies to power their models.
Responding to the claims in this piece, a Government spokesperson said: “These claims are nonsense. We have published a range of information concerning both the work of the AI Frontier Taskforce and the aims and programme for the AI Safety Summit.
“The AI Safety Summit will bring together a wide array of attendees including international governments, academia, industry, and civil society, as part of a collaborative approach to drive targeted, rapid international action on the safe and responsible development of AI.”
In the run-up to the UK's AI Safety Summit, The Citizens is putting together a campaign to highlight important conversations and people that are missing from the summit itself.