The Problem with Quantum Computers
By now, most people have heard that quantum computing is a revolutionary technology that leverages the bizarre characteristics of quantum mechanics to solve certain problems faster than regular computers can. Those problems range from mathematics and physics to retail business and finance. If we get quantum technology right, the benefits should lift the entire economy and enhance U.S. competitiveness.
The promise of quantum computing was first recognized in the 1980s yet remains unfulfilled. Quantum computers are exceedingly difficult to engineer, build, and program. As a result, they are crippled by errors in the form of noise, faults, and loss of quantum coherence, which is crucial to their operation and yet falls apart before any nontrivial program has a chance to run to completion.
This loss of coherence (called decoherence), caused by vibrations, temperature fluctuations, electromagnetic waves and other interactions with the outside environment, ultimately destroys the exotic quantum properties of the computer. Given the current pervasiveness of decoherence and other errors, contemporary quantum computers are unlikely to return correct answers for programs of even modest execution time.
While competing technologies and architectures are attacking these problems, no existing hardware platform can maintain coherence and provide the robust error correction required for large-scale computation. A breakthrough is probably several years away.
The billion-dollar question in the meantime is, how do we get useful results out of a computer that becomes unusably unreliable before completing a typical computation?
Answers are coming from intense investigation across a number of fronts, with researchers in industry, academia and the national laboratories pursuing a variety of methods for reducing errors. One approach is to guess what an error-free computation would look like based on the results of computations with various noise levels. A completely different approach, hybrid quantum-classical algorithms, runs only the most performance-critical sections of a program on a quantum computer, with the bulk of the program running on a more robust classical computer. These strategies and others are proving to be useful for dealing with the noisy environment of today’s quantum computers.
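To make the first of those ideas concrete, here is a minimal Python sketch of noise extrapolation. Everything in it is illustrative: run_noisy_circuit is a made-up stand-in for executing the same circuit with its noise deliberately amplified by a known factor, and the numbers are toy values. The real content is the fit and the read-off at zero noise.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_VALUE = 1.0  # the answer an error-free computation would return

def run_noisy_circuit(noise_scale):
    """Toy stand-in for running the circuit with its noise artificially
    amplified by `noise_scale`: the measured value drifts away from the
    true answer roughly in proportion to the noise, plus shot noise."""
    return TRUE_VALUE - 0.15 * noise_scale + rng.normal(scale=0.01)

# Run the same computation at several amplified noise levels...
noise_scales = np.array([1.0, 1.5, 2.0, 3.0])
results = [run_noisy_circuit(s) for s in noise_scales]

# ...then fit a simple model and read off its value at zero noise:
# the extrapolated guess at what the error-free computation would return.
slope, intercept = np.polyfit(noise_scales, results, deg=1)
print("zero-noise estimate:", intercept)  # should land close to TRUE_VALUE
```

On real hardware the noise amplification is done physically, for example by inserting extra gate pairs that cancel logically but not physically, and the extrapolation model is chosen to match how errors are expected to accumulate.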
While classical computers are also affected by various sources of errors, these errors can be corrected with a modest amount of extra storage and logic. Quantum error-correction schemes do exist but consume such a large number of qubits (quantum bits) that relatively few qubits remain for actual computation. That reduces the size of the computing task to a tiny fraction of what could run on defect-free hardware.
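A rough, back-of-the-envelope calculation shows why. The figures below assume a commonly cited surface-code estimate of roughly 2 x d^2 physical qubits per protected (logical) qubit at code distance d; the exact numbers vary by scheme, but the scaling is the point.

```python
# Rough illustration of why quantum error correction is so qubit-hungry.
# Assumption: a surface code of distance d uses about 2 * d**2 physical
# qubits to protect a single logical qubit.
def physical_per_logical(distance):
    return 2 * distance ** 2

for d in (3, 11, 25):
    print(f"distance {d:2d}: about {physical_per_logical(d):5d} "
          "physical qubits per logical qubit")

# With only ~50 physical qubits on today's devices, even the smallest
# useful code distance leaves almost nothing for actual computation.
```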
To put in perspective the importance of being stingy with qubit consumption, today’s state-of-the-art gate-based quantum computers, which use logic gates analogous to those forming the digital circuits found in the computer, smartphone, or tablet you’re reading this article on, boast a mere 50 qubits. That is just a tiny fraction of the number of classical bits your device has available to it, typically hundreds of billions.
TAMING DEFECTS TO GET SOMETHING DONE
The trouble is, quantum mechanics challenges our intuition. So we struggle to figure out the best algorithms for performing meaningful tasks. To help overcome these problems, our team at Los Alamos National Laboratory is developing a method to invent and optimize algorithms that perform useful tasks on noisy quantum computers.
Algorithms are the lists of operations that tell a computer to do something, analogous to a cooking recipe. Compared to classical algorithms, the quantum kind are best kept as short as possible and, we have found, best tailored to the particular defects and noise regime of a given hardware device. That enables the algorithm to execute more processing steps within the constrained time frame before decoherence reduces the likelihood of a correct result to nearly zero.
In our interdisciplinary work on quantum computing at Los Alamos, funded by the Laboratory Directed Research and Development program, we are pursuing a key step in getting algorithms to run effectively. The main idea is to reduce the number of gates in an attempt to finish execution before decoherence and other sources of errors have a chance to unacceptably reduce the likelihood of success.
We use machine learning to translate, or compile, a quantum circuit into an optimally short equivalent that is specific to a particular quantum computer. Until recently, we employed machine-learning methods on classical computers to search for shortened versions of quantum programs. Now, in a recent breakthrough, we have devised an approach that uses currently available quantum computers to compile their own quantum algorithms. That avoids the massive computational overhead required to simulate quantum dynamics on classical computers.
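The flavor of that compilation step can be sketched in a few lines of Python. The toy example below takes a longer single-qubit gate sequence as the target and searches for a three-gate equivalent by minimizing a fidelity-based cost. It is only an illustration of the general variational idea, not our actual method: in the quantum-assisted version, the cost would be estimated by the quantum computer itself rather than by multiplying matrices on a classical one.

```python
import numpy as np
from scipy.optimize import minimize

# Single-qubit rotation gates.
def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

# "Long" target circuit: an arbitrary five-gate sequence we want to shorten.
target = rz(0.3) @ ry(1.1) @ rz(0.7) @ ry(0.4) @ rz(1.9)

# Short candidate: any single-qubit unitary can be written with three angles.
def ansatz(params):
    a, b, c = params
    return rz(a) @ ry(b) @ rz(c)

# Cost = 1 - normalized gate fidelity; zero means the short circuit
# reproduces the long one exactly (up to an irrelevant global phase).
def cost(params):
    overlap = np.trace(target.conj().T @ ansatz(params))
    return 1.0 - (abs(overlap) / 2) ** 2

result = minimize(cost, x0=np.array([0.5, 0.5, 0.5]), method="Nelder-Mead")
print("remaining cost:", result.fun)  # should be close to 0
print("compiled angles:", result.x)   # three gates instead of five
```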
Because this approach yields shorter algorithms than the state of the art, it reduces the effects of noise. The machine-learning approach can also compensate for errors in a manner specific to the algorithm and hardware platform. It might find, for instance, that one qubit is less noisy than another, so the algorithm preferentially uses the better qubits. In that situation, the machine learning creates a general algorithm to compute the assigned task on that computer using the fewest computational resources and the fewest logic gates. Thus optimized, the algorithm can run longer.
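A small, hypothetical example of that hardware awareness is qubit selection: given per-qubit error rates from a device's calibration data, map the computation onto the least noisy qubits. The error rates below are invented for illustration.

```python
# Illustrative only: pick the least-noisy qubits for a circuit, given
# per-qubit error rates (for example, from the device's calibration data).
error_rates = {0: 0.021, 1: 0.008, 2: 0.054, 3: 0.011, 4: 0.016}

def choose_qubits(error_rates, n_needed):
    """Return the n_needed qubit indices with the lowest error rates."""
    ranked = sorted(error_rates, key=error_rates.get)
    return ranked[:n_needed]

print(choose_qubits(error_rates, 3))  # -> [1, 3, 4]
```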
This method, which has worked in a limited setting on quantum computers now available to the public on the cloud, also takes advantage of quantum computers' superior ability to scale up algorithms for large problems on the larger quantum computers envisioned for the future.
New work with quantum algorithms will give both experts and nonexperts the tools to perform calculations on a quantum computer. Application developers can begin to take advantage of quantum computing’s potential for accelerating execution speed beyond the limits of conventional computing. These advances may bring us all several steps closer to having robust, reliable large-scale quantum computers to solve complex real-world problems that bring even the fastest classical computers to their knees.
About the Article's Author:
Scott Pakin
Scott Pakin is a computer scientist in the Applied Computer Science group at Los Alamos National Laboratory. With co-principal investigator Wojciech Zurek, he leads the Taming Defects in Quantum Computers project at Los Alamos.