Sci-fi made real as tales written by AI swamp magazine

A well-known science fiction magazine has stopped accepting submissions for new stories, after being overwhelmed by a technology its authors often base their futuristic narratives on: artificial intelligence.

US-based Clarkesworld, which has published several award-winning sci-fi writers over the past 17 years, has been inundated with hundreds of stories written or improved by generative AI since December, shortly after OpenAI’s chatbot ChatGPT was released to the public.

On Monday, Neil Clarke, the founder and editor of Clarkesworld, tweeted that the magazine had decided to temporarily close submissions following a surge in AI-enhanced entries.

Clarke said the magazine had received more than 500 AI-enhanced submissions so far in February, more than four times the total for all of January, and that it was impossible to filter the volume of content manually in real time.

“Five days ago, the chart we shared showed nearly 350 of these submissions. Today, it crossed 500. Fifty of them just today, before we closed submissions so we can focus on the legit stories,” he said on Monday.

In a blog post this month, Clarke added: “To make matters worse, the technology is only going to get better, so detection will become more challenging.”

The potential impact of generative AI — software that produces realistic text, art or code in response to human prompts — has become more apparent since the technology burst to the fore last year.

Companies such as OpenAI, which owns ChatGPT, and others building similar technologies have already become embroiled in controversy with news organisations, artists and software engineers who claim that AI reproduces and builds on their original work without recognition or compensation.

This is not the first time generative AI-facilitated spam has caused services to buckle: in December, coding Q&A website Stack Overflow was forced to ban ChatGPT-generated responses, claiming these answers were flooding its forum with misinformation.

Clarke said he had contacted other editors publishing original content, and that the situation was “by no means unique.” He did not disclose how he had identified the stories generated by AI, adding that there are “some very obvious patterns and I have no intention of helping those people become less likely to be caught”. 

The motivations behind the Clarkesworld submissions are unclear. Some suggest it is a way for people to make money quickly, since Clarkesworld pays writers around 10 cents per word for submissions of up to 22,000 words.

It would be easy for a single person to fake their location and submit multiple stories, said Eran Shimony, a security researcher at cyber security lab CyberArk.

Others suggest it could be a way for novice writers to increase their chances of being published in the prestigious magazine.

However, some writers warn the move will choke creativity, as AI spam will force publishers to limit submission windows, slow their response times and potentially reduce their payment rates.

“The only people getting work out will be the already established and known . . . it’ll be deathly for new writers,” said Shiv Ramdas, a speculative fiction writer, on Twitter.

Earlier this month, OpenAI released an experimental tool to detect AI-generated content, in an effort to address concerns from educators about plagiarism, cheating and other forms of “academic dishonesty”. However, its researchers said the tool correctly identified AI-generated content only 26 per cent of the time.
