
Inside the UK’s Frontier Artificial Intelligence Taskforce: Conflicts of Interest and the Spectre of Effective Altruism 

The Citizens has been delving into the figures involved in the UK’s AI task force – can we trust them to keep us safe?

Prime Minister Rishi Sunak delivers a speech on AI prior to the first AI Safety Summit on 1 and 2 November, 2023. Photo: AP/Alamy


An elite group at the core of the UK’s artificial intelligence strategy risks being unduly influenced by private interests, an investigation by The Citizens can reveal. The Frontier AI taskforce is a government body within the Department for Science, Innovation and Technology (DSIT). However, analysis of the backgrounds of its members suggests it may make decisions that sideline public safety and prosperity in favour of a neoliberal digital future. 

The small but growing group also contains adherents to the ‘effective altruism’ philosophical movement. Effective altruism (EA) – which centres on the idea of ‘earning to give’ – has faced serious criticism, particularly in light of the collapse of crypto exchange FTX, founded by EA evangelist Sam Bankman-Fried. 

The Frontier AI taskforce and AI Safety Summit have been set up to focus on frontier AI: highly proficient and potentially dangerous foundation models. In prioritising this over the real and present risks posed by AI’s current capabilities, they have attracted criticism.

Taking a ‘doomist’ outlook has recently become en vogue amongst big tech heavyweights, who have a lot to gain from shifting attention away from their present transgressions and pulling up the drawbridge on other innovators now that their monopolies have been established. EA’s fingerprints and funding are also all over frontier AI and research on its risks. 


So, who do we have concerns about within the taskforce? Aside from debate about whether they’re asking the right question, can we trust them to keep us safe? The Citizens, a journalistic non-profit that works in the field of democracy, data and disinformation, has unearthed the following: 

■ Matt Clifford is the expert advisory board vice chair and the prime minister’s joint representative for the AI Safety Summit. He has taken a sabbatical from Entrepreneur First, a talent investor that helps people start technology companies, which he co-founded. Entrepreneur First raised £130M in a funding round last year from investors including Demis Hassabis and Mustafa Suleyman, co-founders of DeepMind, one of the world’s foremost AI research organisations, now owned by Google. Hassabis and Suleyman had also invested in Entrepreneur First in 2017.

When the AI Safety Summit was announced in June this year, Hassabis was quoted in the press release, saying, ‘The Global Summit on AI Safety will play a critical role in bringing together government, industry, academia and civil society, and we’re looking forward to working closely with the UK Government to help make these efforts a success.’ 

The financial relationship between Hassabis and Clifford’s company is noteworthy, as is Clifford’s extensive praise of Hassabis, given how controversial the latter’s prior work with the UK government on AI and healthcare has been. 

For example, at the end of 2021, a misuse of private information claim was brought on behalf of 1.6 million people whose data was used to help DeepMind train an app, after the ICO ruled that the Data Protection Act had been breached. (It has since been struck out because of the claimants’ differing circumstances.) 

There was also pushback when it emerged that Hassabis had attended a SAGE meeting during the COVID-19 pandemic, having been brought into the fold by Dominic Cummings. Clifford is also linked to Cummings as the chair of ARIA. Modelled on the US’s Defense Advanced Research Projects Agency (DARPA), ARIA was the brainchild of Boris Johnson’s former right-hand man and will fund ‘high risk, high reward’ research and development. 


■ One of the higher-profile names on the taskforce is Yoshua Bengio. Known as one of the three ‘godfathers’ of AI, he is a full professor at Université de Montréal, where he is the founder and scientific director of Mila, the world’s largest academic research centre for deep learning. Though it perhaps speaks to a wider issue of diminishing public research funding, Mila’s close relationship with Google can’t be overlooked. 

Mila received $4.5M from Google over a three-year period starting in 2016; $4M (also over a three-year period) from 2020; and an additional $1.5M in 2022. ‘Google Canada’s generous support, and our pledge toward Mila’s mission, further solidifies Mila and Google’s longstanding mutual commitment to continue to develop AI for the benefit of everyone,’ Bengio wrote on the corporation’s blog in 2020.  

David Krueger, one of only two ‘expert researchers’ listed on the Frontier AI taskforce, studied under Bengio and helped to source a $2.4M Open Philanthropy project grant for Mila. Open Philanthropy is an effective altruism fund, and this grant points to a much deeper affiliation between Krueger and the movement some have labelled ‘a textbook case of moral corruption’. As well as acting as a freelance career mentor for the EA-affiliated 80,000 Hours, Krueger has personally received approximately $2.3M from EA bodies. 

■ Also part of the taskforce, and also an EA evangelist, is Paul Christiano. He has been vocal in his support for the philosophy across the movement’s content platforms and his own blogs. Additionally, Christiano worked at OpenAI, which created ChatGPT, until 2021. DSIT did not respond when asked to clarify whether he retains any financial interest in his former employer. 

Christiano was a technical advisor for Open Philanthropy, and living with its executive director, at the time it gave $30M to OpenAI – his then employer – in 2017. This same executive director, Holden Karnofsky, now advises Christiano’s Alignment Research Centre (ARC), which is in turn one of the ‘leading technical organisations’ the UK government is partnering with on the AI summit. Further concerns about ARC and the impact of ‘Silicon Valley doomers’ on Sunak’s AI agenda are outlined in this Politico story.   

Christiano is also a director on the board of Ought, which in 2022 received $5M from the now-disgraced Sam Bankman-Fried’s FTX fund. ‘Ethically,’ argues Daniel Kuhn in CoinDesk, ‘things could not be more clear. Anyone who took funds from Bankman-Fried should feel compelled to return them. Upward of a million customers were affected by the collapse of FTX.’ 

Bankman-Fried stands accused of one of the largest episodes of financial fraud in history. It has since emerged that key figures in the EA movement were warned about him years before but dismissed concerns and took ‘tens of millions of dollars from Bankman-Fried’s charitable fund for effective altruist causes’. 

All in all, questions remain about the makeup of the Frontier AI taskforce, what exactly is motivating the UK’s strategy, and whether public money is being funnelled towards private benefit. Ian Hogarth, chair of the taskforce, has, the government states, ‘responsibility to identify and address any actual, potential or perceived personal or business interests which may conflict, or may be perceived to conflict, with [his] public duties.’ What about the rest of the group?

The taskforce’s first progress report states, ‘All appointments will comply with the normal DSIT conflicts of interest policy and will be subject to the standard business appointment rules when their term comes to an end.’ However, DSIT did not respond when asked to clarify what this means in practice, or to our request for comment in general. 
