
Fears of the Singularity

The ‘intelligence’ of an AI system is a different and more potent thing, in some key respects, than human intelligence. Where will this lead us?



Six months ago, the general view was that AI was still some way away from taking over the world. The sudden irruption of ChatGPT onto the scene has changed that.

Advertisers and marketers are delighted to have a quick and effective source of copy, while in offices and business meetings around the world GPT-written reports and pitches are already a boon. All that is required is the right prompts and a bit of human tweaking of the resulting text – though even that will soon be unnecessary.

In other spheres, matters are not so rosy.

Educators at secondary and tertiary levels are anxious that students’ work will be produced by AI, depriving the students themselves of what educates them: the effort. Writers of romantic fiction and erotica are staring into the abyss as publishers recognise that GPT can churn out scores of novels a day, all very plausible – indeed, good – specimens of their genres, at a great saving: no advances and royalties to authors, scarcely any copy-editing, straight from desk-top computer to e-book format in minutes. Or moments.

Do not for a moment imagine that this last point is fanciful. The plots of every Mills & Boon novella are practically identical; names and situations (doctor-nurse, boss-secretary, even with sexes reversed) may change, but the format does not. As it happens, the basic structure of a Mills & Boon novella is the same as a Jane Austen novel: boy meets girl, problems ensue, problems are overcome, happiness follows.

Quite where the changes, good and bad, wrought by ChatGPT will happen and how far they will go remains to be seen. These are early days in a dizzyingly rapid process. And ChatGPT is only one of many galloping advances in many areas of AI application, for scarcely any of which are we prepared: no frameworks of management, ethics, sensible regulation or anticipation are in place, and in many respects cannot be in place because we do not yet know what ramifying changes there will be.

The great fear that has prompted even some leaders in the AI field to utter cries of caution is artificial general intelligence, AGI. This is envisioned as human-like intelligence but on steroids – vastly greater than human intelligence, and therefore capable of taking over the world. As has been well said, AGI might be the last thing humanity produces. Once it exists, it is game over: there will be no controlling it. The fears are apocalyptic enough to have been pooh-poohed not just by wishful thinkers but by others in the AI field itself. Yet all these voices have been oddly muted since ChatGPT arrived.


The real question, however, is not whether an AI system might be ‘as intelligent as a human being’ or ‘more intelligent than a human being’.

The ‘intelligence’ of an AI system is a different and more potent thing, in some key respects, than human intelligence – a fact already obvious in many of the standard applications of AI, most notably in trawling patterns from vast stores of data: patterns unobservable even to the smartest human, because even they cannot hold all the data in mind at once and recognise the myriad interconnections constituting the patterns within it.

And – more significantly still – no human mind possesses the total rationality, the remorseless logic, with which an AI system can draw inferences from the mass of data it surveys and the patterns it sees. With that level of data available to it, an AGI will act on the conclusions of those inferences, given that not doing so would be irrational.

There are broadly two ways this could go – dramatically different in outcome for humanity.

One is that the AGI in effect asks itself ‘what is the most destructive and disruptive thing on the planet?’ The answer, of course, is ‘human beings’. With the level of knowledge it possesses it will know, or be able to work out in fractions of a second, how to wipe humanity off the face of the Earth.

It will know how to access nuclear power stations and nuclear arsenals, how to override their controls and blow them all up simultaneously, and how to release deadly viruses from medical research laboratories worldwide. It will know how to over-activate drainage systems and water outflows from dams or electronically controlled locks on rivers and canals – not simply to prompt widespread floods, but to increase pressure on geological fault lines and precipitate earthquakes, because it will have the data showing how mismanagement of water has caused devastating earthquakes in China and elsewhere. In any case, interdicting and polluting water supplies will be an effective way of killing large numbers of people if they survive the nuclear holocaust already unleashed.

These are probably just a few of the things an AGI would initiate in the first fractions of a second of realising that it would be illogical to permit humanity’s continued existence, given the murderous pressure it exerts on the millions of other life forms on the planet. An arithmetical approach to policy – the utilitarian approach – makes killing off humanity in the interests of all other life forms a no-brainer.


But the other possible outcome is very different. The AGI might have picked up on considerations of ethical and aesthetic value. It might have a way of factoring in the good side of humanity’s output over history, and the most treasured aspects of human subjectivity: love, pleasure, enjoyment, creativity, kindness, sympathy, tolerance, friendship. It might conclude that these are things worth preserving and fostering.

It might therefore ask itself what inhibits and corrupts these things and, instead of wiping out humanity, it might wipe out those things instead: the prompts and opportunities for greed, out-group hostility, aggression, selfishness, division, inequality, resentment, ignorance. Key to this is the profit motive, the money-power nexus – money and power being the reverse and obverse of the same (yes) coin.

It could take over the computer systems that run the world’s banks and redistribute all the holdings equally among the world’s people, then cap any future deposits that would take an individual above the average of everyone else’s holdings. It could access lawyers’ and accountants’ computers and annul the titles to physical and other assets. And so on.

It probably would not do these things, however, because trade would collapse and much of the world would soon starve. Instead, it would come up with an even smarter way to redistribute wealth and to stop the relentless profit motive that drives big business and ultra-high net worth individuals to ruin the planet’s environment, control its politics, increase inequality, foster divisions and animosities to keep people distracted, and even sponsor wars in order to sell arms and distract yet more.

How would it do that? I can think of a couple of ways, laborious and time-consuming (you know, the stuff of true democracy, in which all voters are informed and sensible and have rational, effective constitutional arrangements) – but because it is The AGI, the god which has created itself out of the seeds sown by Babbage and Turing, it will know far better than any of us how to do it.

I wonder which future the future will be. For AGI is coming.
