AI pause cedes power to China, harms development of ‘democratic’ AI, experts warn Senate
Halting the development of artificial intelligence in America would only give China more room to develop AI that favors its communist political system, and would increase the chances that China's AI becomes the global standard, technology experts warned senators this week.
A subcommittee of the Senate Armed Services Committee heard testimony from AI experts on Wednesday, nearly a month after Elon Musk, Steve Wozniak and dozens of other tech luminaries called for a “pause” in AI development until its “profound risks to society and humanity” are better understood.
But at the subcommittee hearing, experts warned of the dangers of such a pause, especially the risk that China might continue to develop AI and dominate the field while the U.S. delays.
Sen. Mike Rounds, R-S.D., said he opposes the idea of a development pause, and asked if the U.S. should “expect that other competitors around the world would consider taking a break.”
Dr. Jason Matheny, president and CEO of the RAND Corporation and a commissioner on the National Security Commission on Artificial Intelligence, said he didn't think so, especially if the U.S. wanted to be able to verify that other countries were keeping any promise to pause.
“I think it would be very difficult to broker an international agreement to hit ‘pause’ on AI development in a way that would actually be verifiable,” he said. “I think that would be close to impossible.”
Matheny also warned that it’s not clear how the U.S. would benefit from a pause and said a major goal should be to ensure that “democracies… lead the norms and standards around AI.”
Shyam Sankar, chief technology officer and executive vice president of Palantir, went further by saying a pause would only give China a greater advantage in building AI that could become the standard.
“China has already said that these generative models must display socialist characteristics,” Sankar said of China’s work on AI. “It must not enable the overthrow of the state. These sorts of constraints that are being baked in, to the extent that that becomes the standard AI for the world, is highly problematic.”
“A democratic AI is crucial,” Sankar added. “We will continue to build these guardrails around this, but I think ceding our nascent advantage here may not be wise.”
Dr. Josh Lospinoso, co-founder and CEO of Shift5, told senators he agreed, saying the ethical code built into Western AI systems can't be abandoned.
“I think it’s impracticable to try to implement some kind of pause,” he said. “I think if we did that, our adversaries would continue development, and we end up ceding or abdicating leadership on ethics and norms on these matters if we’re not continuing to develop.”
While the industry-led call for an AI pause made waves, it’s an idea Washington has largely ignored for many of the reasons the witnesses cited. Both senators and witnesses on Wednesday made it clear they are more interested in setting up regulations on AI to the extent possible rather than requiring a halt in research.
Sen. Joe Manchin, D-W.Va., who chairs the subcommittee, asked witnesses to provide more detailed ideas on what those regulations might look like.
Matheny said several times that ways to keep some control over AI include clamping down on openly available AI models that might be misused, requiring licensing for new models, and putting tight restrictions on companies' ability to export AI technology overseas.
At the same time, witnesses called on the government to move quickly to incorporate AI into U.S. defense systems. Lospinoso said the Department of Defense should do more to make its weapons systems capable of supporting AI.
“America’s weapons systems are simply not AI-ready,” he said.