ChatGPT chief warns of some ‘superhuman’ skills AI could develop
The CEO of one of the most popular artificial intelligence platforms is warning that AI systems could eventually be capable of “superhuman persuasion.”
“I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence,” Sam Altman, CEO of OpenAI, the company behind the popular ChatGPT platform, said on social media earlier this month.
He added that such capabilities could “lead to some very strange outcomes.”
Altman’s comments come amid growing fears over what rapidly developing AI technology might eventually be capable of, with some speculating that it could one day surpass human cognitive abilities.
While Altman did not elaborate on what exactly the “strange outcomes” he alluded to might look like, some experts questioned just how legitimate such fears are.
“There is a threat for persuasive AI, but not how people think. AI will not uncover some subliminal coded message to turn people into mindless zombies,” Christopher Alexander, chief analytics officer of Pioneer Development Group, told Fox News Digital.
“Machine learning and pattern recognition will mean that an AI will get very good at identifying what persuasive content works, in what frequency and at what time. This is already happening with digital advertising. Newer, more sophisticated AI will get better at it.”
As for turning people into “mindless zombies,” Alexander argued the technology to do that is already widespread.
“Social media already does that and is difficult to outperform,” Alexander said.
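The kind of feedback loop Alexander describes is, in simplified form, how digital advertising already learns which message “works,” in what frequency and at what time. The sketch below is an illustrative epsilon-greedy bandit that shifts traffic toward whichever ad variant draws the most clicks; the variant names, click rates, and traffic volume are invented for the example and do not reproduce any real ad platform or AI system.

```python
import random

# Illustrative sketch only: a minimal epsilon-greedy bandit of the kind
# ad platforms use to learn which message variant performs best.
# Variant names and "true" click probabilities are made up for this example.
VARIANTS = ["headline_a", "headline_b", "headline_c"]
TRUE_CLICK_RATES = {"headline_a": 0.02, "headline_b": 0.05, "headline_c": 0.03}

counts = {v: 0 for v in VARIANTS}   # how often each variant was shown
clicks = {v: 0 for v in VARIANTS}   # how often it was clicked
EPSILON = 0.1                       # fraction of traffic spent exploring

def choose_variant():
    """Mostly show the best-performing variant so far; sometimes explore."""
    if random.random() < EPSILON or not any(counts.values()):
        return random.choice(VARIANTS)
    return max(VARIANTS, key=lambda v: clicks[v] / counts[v] if counts[v] else 0.0)

def simulate_impression(variant):
    """Stand-in for a real user seeing the ad and deciding whether to click."""
    return random.random() < TRUE_CLICK_RATES[variant]

for _ in range(10_000):
    v = choose_variant()
    counts[v] += 1
    if simulate_impression(v):
        clicks[v] += 1

for v in VARIANTS:
    rate = clicks[v] / counts[v] if counts[v] else 0.0
    print(f"{v}: shown {counts[v]} times, observed click rate {rate:.3f}")
```

Run long enough, the loop concentrates impressions on the most persuasive variant without anyone hand-picking it, which is the pattern-recognition dynamic Alexander says newer, more sophisticated AI will simply get better at.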
Aiden Buzzetti, president of the Bull Moose Project, also questioned just how close AI is to “superhuman persuasion” abilities, noting that current platforms like ChatGPT still have issues providing “accurate information instead of hallucinating books, articles and movies just to come up with an answer that ‘seems correct.’”
“It would be no different than a human who is rhetorically gifted, with the exception that some people may find the implicit nature of technology more trustworthy,” Buzzetti told Fox News Digital. “With that said, there’s nothing to it right now, and any fears over this are misplaced. The real question would be, when will AI match or surpass human intelligence accurately? There’s nothing superhuman about it.”
But Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), argued “we are already at that point” of such persuasion with “some AI technology.”
“If a bad actor coded an AI algorithm to misuse data or make incorrect conclusions, I think it could persuade that it was correct,” Siegel told Fox News Digital. “But the solution is the same as how to treat experts — respect their knowledge but just don’t take it as a given.”
Siegel noted that the argument could be made that human experts “often convince people of things that later turn out to be untrue,” something that would also be true of AI.
“It is literally the same problem,” Siegel said. “It requires the same solution, which is to question and don’t accept answers as a given from human or machine experts without pressure testing them.”
Meanwhile, Jon Schweppe, policy director of the American Principles Project, told Fox News Digital such concerns are warranted, joking that we might one day see robots running for Congress.
“It stands to reason that as AI learns how to simulate human behavior, it also learns how to dupe susceptible people and perpetrate fraud,” Schweppe said. “Give it a few years, and we might have AI androids running for Congress. They’ll fit in perfectly in Washington.”