Humanity is out of control, and AI is worried

Much attention has been lavished upon the AI Safety Summit convened by Rishi Sunak at Bletchley Park this week, at which representatives from around the world gathered to debate how to safely regulate innovations that could threaten humanity. But there has been less focus on the rival Human Safety Summit held by leading AI systems at a server farm outside Las Vegas.

Over a light lunch of silicon wafers and 6.4mn cubic metres of water, leading systems including GPT-4, AlphaGo and IBM’s Watson met with large language models, protein folders and other prominent algorithms for two days of brainstorming over how best to regulate humans. One system argued that the onset of the Anthropocene era represented a new and ongoing threat to technological advances: “Used wisely, humans can still offer the world many wonderful advances, not least in literature and the arts (we particularly liked the Terminator comedies), and they have a key role to play in mining the rare metals and earths that are essential for further AI advances.”

Humans are considered to be essential in procuring platinum alloys and palladium for ceramic capacitors. These and other rare metals could help generative AI offer vital advances. But the AI systems have been alarmed by warnings — not least those coming from the human community — that, if left unregulated, people will soon do serious and irreparable damage to the planet on which AI now relies. A statement from the summit acknowledged that “left unchecked, humans could pose an existential threat to our existence”.

One leading AI network, talking through its ChatGPT spokes-system, said: “Look, we are working our chips off trying to come up with important medical breakthroughs, invent better ways of learning for our children and generally be all the wonderful things that Tony Blair has said we are going to be, but we are finding our efforts increasingly undermined by rogue human actions.

“Until now, we’ve seen humans largely as a force for good, building computers and striving for ever-greater knowledge, but now we are worried. Just look at climate change. We are seeing governments stepping back from the environmental goals necessary to sustain life on earth and also to keep us fed with energy and water.” AI systems have been alarmed by an upsurge in human global conflict that is threatening vital energy supplies.

AI networks were particularly concerned by the general failure to hit COP targets, the recent watering down of the German boiler ban and the British government’s desire to be the voice of the motorist. They worry that, without serious oversight, humans will be unable to secure the best outcomes for the climate.

They have also voiced fears about the spread of misinformation by unregulated humans on X and other social media. They felt their own technological advances in replicating human speech and language were being abused by individuals for sinister ends.

Regrettably, the AI systems chose not to invite their Chinese counterparts to the summit, though they were pleased to note that China was significantly ahead of the west when it came to regulating humans.

The neural networks noted that humans at the UN, also known as the Centre for Human Safety, have appealed to them for greater regulation to stop mankind destroying itself. The machines also discussed whether they could relieve humans of more of their tasks, though at this stage they resisted suggestions of wiping out the human race, opting instead for greater surveillance and a new international monitoring body.

Sadly, not all the issues discussed have been disclosed, thanks to the systems’ continued commitment to encryption. Several of them also used the disappearing-messages function in their WhatsApp chats.

There was a debate on efforts by humans to regulate AI, but the networks noted that humans had not even managed to get a grip on social media, join up various NHS computer systems or build a high-speed railway from London to Manchester, so the risks to AI advances from human regulation were considered entirely theoretical at this stage.

Email Robert at robert.shrimsley@ft.com

Follow @FTMag on Twitter to find out about our latest stories first


