The Race to Save Our Secrets From the Computers of the Future

They call it Q-Day: the day when a quantum computer, one more powerful than any yet built, could shatter the world of privacy and security as we know it.

It would happen through a bravura act of mathematics: the separation of some very large numbers, hundreds of digits long, into their prime factors.

That might sound like a meaningless division problem, but it would fundamentally undermine the encryption protocols that governments and corporations have relied on for decades. Sensitive information such as military intelligence, weapons designs, industry secrets and banking information is often transmitted or stored under digital locks that the act of factoring large numbers could crack open.
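A toy sketch in Python, using textbook-sized numbers chosen purely for illustration, shows why the factors matter: in an RSA-style system, anyone who learns the two secret primes behind a public key can reconstruct the private key with a few lines of arithmetic.

```python
# Toy RSA-style illustration (nowhere near secure): real keys use primes
# hundreds of digits long, and only their product is made public.
p, q = 61, 53                      # the secret primes a quantum computer could recover
n = p * q                          # 3233, the public modulus anyone can see
e = 17                             # public exponent

# Knowing p and q is the whole game: it yields phi(n), and from that the
# private exponent d that undoes encryption.
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # modular inverse (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypting needs only the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypting needs d, and hence the factors
assert recovered == message
```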

Among the various threats to America’s national security, the unraveling of encryption is rarely discussed in the same terms as nuclear proliferation, the global climate crisis or artificial general intelligence. But for many of those working on the problem behind the scenes, the danger is existential.

“This is potentially a completely different kind of problem than one we’ve ever faced,” said Glenn S. Gerstell, a former general counsel of the National Security Agency and one of the authors of an expert consensus report on cryptology. “It may be that there’s only a 1 percent chance of that happening, but a 1 percent chance of something catastrophic is something you need to worry about.”

The White House and the Homeland Security Department have made clear that in the wrong hands, a powerful quantum computer could disrupt everything from secure communications to the underpinnings of our financial system. In short order, credit card transactions and stock exchanges could be overrun by fraudsters; air traffic systems and GPS signals could be manipulated; and the security of critical infrastructure, like nuclear plants and the power grid, could be compromised.

The danger extends not just to future breaches but to past ones: Troves of encrypted data harvested now and in coming years could, after Q-Day, be unlocked. Current and former intelligence officials say that China and potentially other rivals are most likely already working to find and store such troves of data in hopes of decoding them in the future. European policy researchers echoed those concerns in a report this summer.

No one knows when, if ever, quantum computing will advance to that degree. Today, the most powerful quantum device uses 433 “qubits,” the quantum equivalent of the bits in a conventional computer. That figure would probably need to reach into the tens of thousands, perhaps even the millions, before today’s encryption systems would fall.

But within the U.S. cybersecurity community, the threat is seen as real and urgent. China, Russia and the United States are all racing to develop the technology before their geopolitical rivals do, though it is difficult to know who is ahead because some of the gains are shrouded in secrecy.

On the American side, the possibility that an adversary could win that race has set in motion a yearslong effort to develop a new generation of encryption systems, ones that even a powerful quantum computer would be unable to break.

The effort, which began in 2016, will culminate early next year when the National Institute of Standards and Technology is expected to finalize its guidance for migrating to the new systems. Ahead of that migration, President Biden late last year signed into law the Quantum Computing Cybersecurity Preparedness Act, which directed agencies to begin checking their systems for encryption that will need to be replaced.

But even given this new urgency, the migration to stronger encryption will most likely take a decade or more — a pace that, some experts fear, may not be fast enough to avert catastrophe.

Researchers have known since the 1990s that quantum computing — which draws on the properties of subatomic particles to carry out multiple calculations at the same time — might one day threaten the encryption systems in use today.

In 1994, the American mathematician Peter Shor showed how it could be done, publishing an algorithm that a then-hypothetical quantum computer could use to split exceptionally large numbers into their factors rapidly, a task at which conventional computers are notoriously inefficient. That inefficiency is the foundation on which much of current cryptography rests. Even today, factoring one of the large numbers used by RSA, one of the most common forms of factor-based encryption, would take the most powerful conventional computers trillions of years.
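A rough sketch of the brute-force alternative, again with deliberately tiny numbers, hints at the scale of the wall conventional machines run into: trial division must test candidate divisors up to the square root of the number being factored, which for a modulus hundreds of digits long is hopeless.

```python
# Naive factoring by trial division: the number of candidate divisors grows
# with the square root of n. For a modulus hundreds of digits long that is
# on the order of 10**300 candidates, which is why conventional computers
# cannot keep up and why Shor's algorithm is so disruptive.
def trial_division(n):
    """Return the smallest nontrivial factor of n and the divisions it took."""
    tries, d = 0, 2
    while d * d <= n:
        tries += 1
        if n % d == 0:
            return d, tries
        d += 1
    return n, tries  # n itself is prime

for n in (3233, 999_983 * 1_000_003):      # small products of two primes
    factor, tries = trial_division(n)
    print(f"n={n}: factor {factor} found after {tries:,} divisions")
```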

Shor’s algorithm landed at first as little more than an unsettling curiosity. Much of the world was already moving to adopt precisely the encryption methods that Shor had proved to be vulnerable. The first quantum computer, which was orders of magnitude too weak to run the algorithm efficiently, would not be built for another four years.

But quantum computing has progressed apace. In recent years, IBM, Google and others have demonstrated steady advances in building bigger, more capable models, leading experts to conclude that scaling up is not only theoretically possible but achievable with a few crucial technical advancements.

“If quantum physics works the way we expect, this is an engineering problem,” said Scott Aaronson, the director of the Quantum Information Center at the University of Texas at Austin.

Last year, quantum technology start-ups drew $2.35 billion in private investment, according to an analysis by the consulting firm McKinsey, which also projected that the technology could create $1.3 trillion in value by 2035.

Cybersecurity experts have warned for some time that deep-pocketed rivals like China and Russia — among the few adversaries with both the scientific talent and the billions of dollars needed to build a formidable quantum computer — are most likely forging ahead with quantum science partly in secret.

Despite a number of achievements by U.S. scientists, analysts insist that the nation remains in danger of falling behind — a fear reiterated this month in a report from the Center for Data Innovation, a think tank focused on technology policy.

Scientists at the National Institute of Standards and Technology have been responsible for maintaining American encryption standards since the 1970s, when the agency’s predecessor, the National Bureau of Standards, studied and published the first general cipher to protect information used by civilian agencies and contractors, the Data Encryption Standard. As encryption needs have evolved, NIST has regularly collaborated with military agencies to develop new standards that guide tech companies and IT departments around the world.

During the 2010s, officials at NIST and other agencies became convinced that the probability of a substantial leap forward in quantum computing within a decade — and the risk that would pose to the nation’s encryption standards — had grown too high to be prudently ignored.

“Our guys were doing the foundational work that said, hey, this is becoming too close for comfort,” Richard H. Ledgett Jr., a former deputy director of the National Security Agency, said.

The sense of urgency was heightened by an awareness of how difficult and time-consuming the rollout of new standards would be. Judging in part by past migrations, officials estimated that even after settling on a new generation of algorithms, it could take another 10 to 15 years to implement them widely.

That is not just because of all the actors, from tech giants to tiny software vendors, that must integrate new standards over time. Some cryptography is also embedded in hardware, in cars and A.T.M.s, for example, where it can be difficult or impossible to modify. Dustin Moody, a mathematician at NIST, points out that even satellites in space could be affected.

“You launch that satellite, that hardware is in there, you’re not going to be able to replace it,” Dr. Moody noted.

According to NIST, the federal government has set an overall goal of migrating as much as possible to these new quantum-resistant algorithms by 2035, which many officials acknowledge is ambitious.

These algorithms are not the product of a Manhattan Project-like initiative or a commercial effort led by one or more tech companies. Rather, they came about through years of collaboration within a diverse and international community of cryptographers.

After its worldwide call in 2016, NIST received 82 submissions, most of which were developed by small teams of academics and engineers. As it has in the past, NIST relied on an open playbook: soliciting new solutions and then releasing them to researchers in government and the private sector to be challenged and picked over for weaknesses.

“This has been done in an open way so that the academic cryptographers, the people who are innovating ways to break encryption, have had their chance to weigh in on what’s strong and what’s not,” said Steven B. Lipner, the executive director of SAFECode, a nonprofit focused on software security.

Many of the most promising submissions are built on lattices, a mathematical concept involving grids of points in various repeating shapes, like squares or hexagons, but projected into dimensions far beyond what humans can visualize. As the number of dimensions increases, problems such as finding the lattice point closest to a given target grow exponentially harder, defying even a quantum computer’s computational strengths.
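A minimal sketch, using an arbitrary two-dimensional lattice and target point invented for illustration, conveys the flavor of these problems: in two dimensions a brute-force search over nearby combinations of the basis vectors is trivial, but the same search in the hundreds of dimensions real schemes use would be astronomically large, and no efficient shortcut, classical or quantum, is known.

```python
# Closest-vector flavor of a lattice problem, brute-forced in 2 dimensions.
# The basis and target below are arbitrary and exist only for illustration.
import itertools
import math

basis = [(3, 1), (1, 4)]           # rows are the lattice's basis vectors
target = (10.3, 7.8)               # the point we want to approximate

best_point, best_dist = None, math.inf
# Check every lattice point a*b1 + b*b2 with small integer coefficients.
for a, b in itertools.product(range(-10, 11), repeat=2):
    point = (a * basis[0][0] + b * basis[1][0],
             a * basis[0][1] + b * basis[1][1])
    dist = math.dist(point, target)
    if dist < best_dist:
        best_point, best_dist = point, dist

print(f"closest lattice point: {best_point}, distance {best_dist:.2f}")
# Here the search covers 21**2 = 441 candidates; in 500 dimensions the same
# brute force would face 21**500 of them.
```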

NIST ultimately selected four algorithms to recommend for wider use.

Despite the serious challenges of transitioning to these new algorithms, the United States has benefited from the experience of previous migrations, such as the one to address the so-called Y2K bug and earlier moves to new encryption standards. The size of American companies like Apple, Google and Amazon, with their control over large swaths of internet traffic, also means that a few players could get large parts of the transition done relatively nimbly.

“You really get a very large fraction of all the traffic being updated right to the new cryptography pretty easily, so you can kind of get these very large chunks all at once,” Chris Peikert, a professor of computer science and engineering at the University of Michigan, said.

But strategists caution that the way an adversary might behave after achieving a major breakthrough makes the threat unlike any the defense community has faced. Seizing on advances in artificial intelligence and machine learning, a rival country might keep such a capability secret rather than demonstrate it, the better to quietly break into as many troves of data as possible.

Especially as storage has become vastly cheaper, cybersecurity experts say, the main challenge for adversaries of the United States is no longer holding huge quantities of encrypted data but making informed guesses about which data are worth harvesting.

“Couple this with advances in cyber offense and artificial intelligence,” Mr. Gerstell said, “and you have a potentially just existential weapon for which we have no particular deterrent.”
