
‘From Deepfakes to Assault in the Metaverse: The Urgency of Ethics in AI’

Deepfakes depicting Taylor Swift being assaulted in the stands at an NFL game demand a debate about regulating artificial intelligence, writes Patsy Stevenson

Taylor Swift at an AFC Championship NFL football game between the Baltimore Ravens and the Kansas City Chiefs on 28 January 2024. Photo: Julio Cortez/AP/Alamy


The recent controversy over hyper-realistic, AI-generated images depicting American singer-songwriter Taylor Swift being sexually assaulted has sparked another important conversation about the misuse of artificial intelligence, misogyny, and the ethical considerations of technological advancement.

It not only highlights the vulnerability of public figures when it comes to the misuse of deepfakes – images, videos or audio recordings created using an algorithm to replace the person in the original version with someone else – but also the escalating concern surrounding misogyny perpetuated by AI misuse.

The AI-generated images of Taylor Swift are vulgar and distressing, and were apparently created out of some fans’ annoyance at her appearances at NFL games, where she is shown on screen watching her partner, Travis Kelce, play. Fans have booed her and some have taken to X (formerly Twitter) to comment along the lines of ‘is there anything more annoying than Taylor Swift at a football game?’

This misogyny has now taken a darker turn with the AI images – some showing Swift being assaulted in the stands at the NFL game.

One picture was viewed 45 million times before it was removed. The misuse of AI in this manner not only violates the privacy of the victim but also perpetuates a culture of harm and objectification.


The incident calls for a deeper examination of the ethical considerations surrounding the use of AI technology, and whether legislation should be implemented to prevent this from happening again.

US law-makers have called for new legislation to criminalise the creation of deepfake images, but this is a controversial step: government officials often lack the technical knowledge to regulate such technology effectively without hindering the beneficial development of AI.

Tech experts recently created a tool to detect deepfakes, but researchers found it to have a racial bias. The datasets used to train the detection system under-represented people of colour. This meant the technology could misclassify deepfakes as legitimate when used on the face of a person of colour.

The dark side of AI misuse isn’t confined to works of fiction. As the technology advances, so does its potential to become a tool for exploitation and objectification, particularly targeting women and people of colour.

The creation of this type of explicit AI-generated content raises questions not only about the limits of technology but also about the ethical compass guiding its development. There is an urgent need for prevention tools and ethical boundaries in the AI landscape.

From deepfakes to assault in the Metaverse, the misuse of technology is becoming a troubling trend that demands collective action from policy-makers, the tech industry and society at large. The question, however, is how to regulate technology and AI, and what role government bodies should play.

While regulation is necessary to prevent misuse of the technology, there is a consensus among its creators that government intervention must be approached with caution. Governmental bodies often lack the knowledge needed to fully understand rapidly developing AI.

Finding the right balance between safeguarding against misuse and fostering innovation requires a nuanced approach to legislation. Over-regulation risks stifling the innovation that drives the technology’s development and progress, but leaving it unregulated leaves room for its worst abuses to go unchecked.

Even if legislation is implemented, the technology is evolving at a pace that could outstrip law-makers’ ability to keep up with it. There needs to be continuous, informed dialogue between tech experts and legal experts to ensure that regulation remains relevant and effective.

