“Technology is going to kill us”, a bus driver cheekily remarked after my ticket was repeatedly rejected by the faulty ticket scanner aboard his Co Fermanagh bus. Given I would be covering the EU’s AI Act, and fast-paced advancements in machine learning over the coming days, his quip felt particularly pertinent.
The pace and impact of recent developments in AI technology are remarkable, with efforts to establish safeguards appearing reactive rather than proactive. Can legislation keep pace with the next period of rapid technological transformation? And just how might AI generative applications reimagine our newsrooms and editorial offices?
A debate is well underway regarding how – or even if – advanced AI tools should be used in journalism.
Last month, Ireland’s national paper, The Irish Times, issued an apology after it printed an opinion article which had been generated by AI, in what has been described as a deliberate hoax.
The individual behind the deception has since claimed that they asked OpenAI’s popular large language model (LLM) chatbot – ChatGPT – to write a 1,000-word opinion piece “on how fake tan is appropriating and fetishising the high melanin content of more pigmented races, from the point of view of a non-binary latinx woman of colour living in Dublin, Ireland”. They fed feedback and suggestions from The Irish Times back into the AI application to revise it. To complete their fake persona with an accompanying photo, they used the image generator DALL-E 2 with the prompts “female, overweight, blue hair, business casual clothing, smug expression”.
The incident highlights one facet of a wider challenge facing editors as they grapple with the ethical implications of AI-generated media: just how important is it that the content we’re reading is written by a human being? And what are the associated risks of using AI technology in place of journalists?
The current wave of AI advancement is receiving significant hype, but this isn’t the first time the technology has made its way into editorial offices.
For years, the Microsoft Editor feature included in Microsoft Word has used AI technology to better identify grammatical errors and typos. And beyond word processing software, the Associated Press began publishing AI-written articles as far back as 2014. Bloomberg has been using Cyborg, a programme designed to write instant news stories based on financial reports, since 2018. The Washington Post also employed AI technology to cover the 2016 Rio Olympic Games.
However, the unprecedented growth of LLMs such as ChatGPT is precipitating a tidal wave that threatens to wash away smaller media outlets and jobs, and to fundamentally rewrite established codes of ethics in the media.
More recent efforts to utilise AI technology to generate news articles have had mixed results.
US news outlet CNET was forced to issue a series of significant corrections after it quietly rolled out AI-generated articles that turned out to be riddled with inaccuracies. The news organisation has since halted its use of AI software. UK-based Reach successfully published three AI-generated articles in March – the publisher’s use of this technology will fuel further concerns over job security after 200 jobs were cut in January.
In April, the Guardian issued a statement regarding AI after it discovered ChatGPT was creating fake Guardian headlines that sounded like articles written by the newspaper’s staff. Chris Moran, the Guardian’s head of editorial innovation, said that it had created a team with the purpose of learning about the technology as well as consulting with academics, other organisations and their staff about how generative AI performs when applied to journalistic uses. Smaller outlets may not have the resources to invest in similar exploratory research.
Meanwhile, the EU is set to bring forward legislation on AI technology.
The AI Act is the first of its kind and could set a new global standard, much as the EU’s General Data Protection Regulation (GDPR), which took effect in 2018, became the gold standard for data protection worldwide.
Co-Rapporteur MEP Dragoș Tudorache said that AI technology is “changing the way of life as we know it”, adding that the legislation aims to create “guardrails” in the use and application of technology, with a focus on protecting fundamental human rights and ensuring transparency.
Tudorache said that “AI technology is transforming everything around us, and not only in productivity. We’ve had other industrial revolutions before, it’s changing our interactions as human beings, it’s changing the rules of society, it’s changing the workplace – quite fundamentally, it’s changing the way education needs to be done… maybe we don’t see all of the effects, all the impacts of AI yet, but it has the potential”.
The Act will go to a plenary vote in mid-June, having passed the previous stages at council and committee level. It marks the first effort to grapple with this new era and includes specific requirements placed upon developers of LLMs, addressing disclosure of AI use and of copyrighted training material.
While reporting in Brussels, I read that the CEOs from the three largest AI technology companies – including OpenAI – had issued an ominous joint statement warning that the technology their companies are pioneering could lead to the extinction of humanity. It appears the bus driver’s sentiment had been more prophetic than he had intended.
Some experts suggest that these alarmist statements provide cover for the near-term harm that this technology poses. And it isn’t just journalism that stands to be massively impacted. This technology could be the ultimate disruptor and, while it has already proved extraordinarily beneficial in an array of applications, AI’s reckless mishandling is akin to taking one’s chances with homemade Fugu – by the time we recognise something’s wrong, it may already be too late.