Urmila Mahadev has a soft voice with slightly nasal inflections, rounding out what we in India would have no hesitation in describing as a “generic American accent”. With gadgets in each hand controlling her presentation and her microphone, she speaks for an hour and she speaks in English and yet I fully grasp only parts of what she says. But make no mistake: what she says, the result she is trying to explain on this June day earlier this year, is a remarkable advance in mathematics and theoretical computer science.

Put it like this. There’s a new approach to computing that’s looming over us all. It’s probably still years away from widespread use, though there’s actually a prototype out there right now. It promises miraculous advances, but it also promises to tear down fundamental assumptions about numbers, about computers, that form the framework of much of how the world works today. This can be a serious problem because, with our increasing use of technology, we are dependent on its strengths and especially its limitations to a degree we are perhaps not even truly aware of. And with this new approach I mentioned, that dependency could be seriously challenged. It’s not even clear that we can, with our current computational capabilities, actually verify what the new computers do.

If that all sounds vague, if it comes across like some overblown hype about the perils of Artificial Intelligence … well, fair enough.

But then there’s Urmila Mahadev, a graduate student in computer science at the University of California, Berkeley, who has shown that we can indeed do that verification I mentioned.

But perhaps I should explain what I’m talking about. In two words, it’s quantum computing. This is a qualitatively different way of operating from that of any computer we’ve known so far. No doubt you’ve heard the term “bit” (for “binary digit”), referring to the fundamental computing unit of every computer out there. A bit can take one of just two values, 0 or 1. On that simple binary choice, using millions and billions of bits, we have erected the entire edifice of modern computing—including WhatsApp, funny cat videos and Uber cabs. The remarkable advances in computing power we’ve seen over the last several decades were founded on making transistors—the electronic devices that are the physical embodiment of bits—smaller and smaller, so that we can pack ever more of them into the same space. But however small they get, they remain bits: devices that assume one of two states, which we call 0 and 1.
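That point about bits can be made concrete in a few lines of Python (an illustrative sketch, not something from the column): a register of n bits is in exactly one of 2^n possible states at any moment.

```python
# A classical register of 3 bits sits in exactly one of 2**3 = 8 states.
# Here we simply enumerate all of them.
from itertools import product

states = ["".join(bits) for bits in product("01", repeat=3)]
print(states)       # ['000', '001', '010', '011', '100', '101', '110', '111']
print(len(states))  # 8
```

Add a bit and the number of possible states doubles; the register itself, though, always occupies just one of them.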

Nobody has found a fundamentally different way for computers to store information and compute things. Not that there was any pressing need to do so.


But that may be changing. Quantum computers use quantum mechanical techniques to do their computing. What does that mean? Their fundamental unit, called a qubit (quantum bit), still yields one of two outcomes when it is measured: 0 or 1, like a bit. But according to quantum mechanics, the state we find a qubit in can also be a “superposition” of both those outcomes; and the act of measuring itself disturbs that superposition. If that makes you scratch your head, think of just this much: in effect, a qubit is not restricted to being in one of two states at any given time. A sequence of qubits can thus be thought of as being in many different states simultaneously, whereas a sequence of bits is in only one state at a time. This uncertainty over simultaneous states is the essence of quantum mechanics, and in this idea of many different states lies the power of quantum computing.
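A hedged sketch of what a superposition looks like on paper: an n-qubit state is a vector of 2^n complex amplitudes, and measuring yields each outcome with probability equal to the squared magnitude of its amplitude. This is a plain classical simulation for illustration, nothing more.

```python
import numpy as np

n = 2
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                       # start in the definite state |00>

# A Hadamard gate on each qubit spreads the register into an equal
# superposition of all four outcomes.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.kron(H, H) @ state

probs = np.abs(state) ** 2
print(probs)                         # [0.25 0.25 0.25 0.25]
```

Two bits hold one of four values; two qubits in this state carry amplitude on all four at once, until a measurement collapses them to a single outcome.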

Explaining the actual process of this kind of computing is beyond the scope of this column. But what we know is that various tasks that are hard to do on ordinary computers can, at least in theory, be tackled relatively easily on quantum computers. This is because, for certain problems, quantum algorithms need dramatically fewer steps than any classical algorithm we know of—making quantum computers, in effect, much faster than regular (“classical”) ones.

For example, take the idea of search, fundamental to computer science. You might want to search a huge database of exam results for a particular student’s score in Chemistry. How do you find it? Or you know you saved that fascinating article about the explorer Mungo Park somewhere on your hard disk, but you cannot remember where. So you pull up your laptop’s search function and type in “mungo” and before you know it, it’s found not just the article about him, but also the email message from your aunt Aruna in faraway Dakar in which she made a small typo: “I miss my yearly dose of mungo! Please buy a box of alphonso and send it to me!”

You found what you were searching for quickly, right? Thank computer scientists, who have long worked to come up with more efficient search algorithms. And yet there are some search problems in which quantum computing gives us a—well, a quantum leap in speed over conventional computing (if you’re interested, read about Grover’s algorithm).
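The flavour of Grover’s speedup can be captured in a toy simulation (my own sketch, not from the column): instead of checking entries one by one, each Grover iteration flips the amplitude of the marked item and then reflects every amplitude about the average, amplifying the answer in about the square root of N steps rather than N.

```python
import numpy as np

n = 2
N = 2 ** n
marked = 3  # index of the item we are searching for (hypothetical)

# Start in a uniform superposition over all N entries.
state = np.full(N, 1 / np.sqrt(N))

# One Grover iteration: the "oracle" flips the marked amplitude,
# then we reflect every amplitude about the mean.
state[marked] *= -1
state = 2 * state.mean() - state

probs = np.abs(state) ** 2
print(int(np.argmax(probs)))  # 3 -- found after a single iteration
```

For N = 4, a single iteration lands on the marked item with certainty; in general, roughly √N iterations suffice, against the N checks a blind classical search may need.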

If that doesn’t stir you overly, consider cryptography. In my column “Your Public Key, Please”, I explained how modern cryptography is based on the idea that “some operations are much easier when performed in one direction than the other.” In particular, we can easily multiply two numbers to produce a third. But if we are given only that third number and asked to find its two factors, that’s a harder problem. So if you multiply two 100-digit prime numbers to produce some 200-digit number, we can use that 200-digit number as part of a key to encrypt data, feeling entirely safe about it. For if you give that 200-digit number to even an accomplished computer scientist and ask her to find its factors, that task would take her, using even the most powerful computers today, so long to finish that it is effectively impossible. And this impossibility is the reason your ATM withdrawals and your online credit card transactions are called “secure”.
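You can feel this asymmetry yourself with small numbers (a sketch of mine, with two hypothetical seven-digit primes standing in for the 100-digit ones): multiplying takes one step, while naive trial division already has to grind through hundreds of thousands of candidates.

```python
p = 1000003
q = 1000033
n = p * q  # the easy direction: one multiplication

# The hard direction, naively: try odd divisors up to sqrt(n).
# (n is odd here, so we can skip even candidates.)
def factor(n):
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(factor(n))  # (1000003, 1000033)
```

With seven-digit primes this still finishes quickly; with 100-digit primes, the number of candidates to try outstrips the age of the universe, which is precisely what the encryption relies on.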

But quantum computing threatens to turn that impossibility inside-out. We already know of a quantum algorithm (Shor’s algorithm) that can, in theory, find the prime factors of a number in a reasonable time. This means that when quantum computers come into widespread use, they could break the cryptographic systems we depend on daily.

And when that happens, goodbye security. Make a new plan, Stan. Drop off the key, Lee, and get yourself free—but good luck finding a new technique for safe encryption (apologies to the great Paul Simon, of course, for paraphrasing his song).

But there is possibly a more immediate problem. With classical computers, there are known ways to simulate their operation, to check their results. Quantum computing, though, is such a leap in technology, in the very way we think about and use technology, that we can’t even be sure that when we set a quantum computer to perform a certain task, it has produced the right result. Sure, if it produces the factors of a gigantic number, we can simply multiply those factors—the easy direction, remember?—to check if it was right. But can we as easily verify the results of every task it might claim to have solved?
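That factoring case is the comfortable one precisely because checking is the easy direction: a sceptic verifies a claimed factorization with a single multiplication. A toy sketch of mine, with a hypothetical challenge number:

```python
challenge = 1000036000099          # the number we asked to be factored
claimed_factors = (1000003, 1000033)  # what the machine reports back

# Verification is one multiplication plus a sanity check on the factors.
def verify(n, factors):
    p, q = factors
    return p > 1 and q > 1 and p * q == n

print(verify(challenge, claimed_factors))  # True
```

Mahadev’s problem is exactly the tasks that come with no such cheap check: the quantum computer hands back an answer, and no classical multiplication-style shortcut exists to confirm it.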

That is what Urmila Mahadev has found a way to address. Here are the first few sentences of her paper (*Classical Verification Of Quantum Computations*, published on the online repository arXiv, 14 September 2018, arXiv:1804.01082v2): “We present the first protocol allowing a classical computer to interactively verify the result of an efficient quantum computation. We achieve this by constructing a measurement protocol, which enables a classical verifier to use a quantum prover as a trusted measurement device.”

She gave a much-discussed lecture about her findings at UC Berkeley in June, which is how I know something about her accent and the gadgets in her hands. Early in the lecture, she said this: “The verifier can play a game with the two provers, and if the provers pass with a certain probability, the verifier is assured that the prover shares certain states, and he can use this guarantee to force the provers to carry out a computation.”

I don’t know if she meant it, but I sense in those words a hint of what’s often so delicious about mathematics and computer science: the spirit inherent in that mention of a “game”.

In 2006, the University of Texas computer scientist Scott Aaronson announced a now famous prize for work in this field. It would be awarded, he said, “to anyone who could solve the problem of proving the results of an arbitrary quantum computation to a classical sceptic.”

For what she has come up with, Aaronson awarded his prize to Urmila Mahadev.

Source: Livemint