According to MIT professor Seth Lloyd, the answer is yes. We could be living in the kind of digital world depicted in *The Matrix*, and not even know it.

*“According to Lloyd, everything in the universe is made of chunks of information called bits.”*

A professor of Mechanical Engineering at MIT, Lloyd is one of the leaders in the field of quantum information. He has been with the field from its very conception through its skyrocketing rise to popularity. Decades ago, the feasibility of building quantum computing devices was widely questioned. Now that quantum computation is producing actual technologies, we are left to wonder—what kind of applications will it provide us with next?

But, first things first. In a round-table discussion with undergraduates, Lloyd speaks of his early days in the field with a touch of humor, irony, and, most surprisingly, pride. When he first started to research quantum information in graduate school, most scientists told him to look into other areas. In fact, of the postdoctoral programs he considered, few were much invested in the study of information in quantum mechanics. Most universities and institutes were reluctant to take up quantum computing, but Murray Gell-Mann accepted Lloyd for a position at the California Institute of Technology. This is where many of the ideas behind quantum computation were born, and Lloyd is “excited by the popularity of the field today.”

To begin understanding if the universe is a giant quantum computer—that is, a computer that operates using the principles of quantum mechanics—we must first understand the building blocks. What is *information*? According to Lloyd, everything in the universe is made of chunks of information called bits. These are the zeroes and ones you might have seen in *The Matrix*, or the ones that an engineer uses as the building blocks of computer software.

“But isn’t everything made of atoms?” you might ask. Lloyd has a clever answer to this, too: the atoms themselves are also bits of information. Information is everywhere, just like quantum mechanics. When Lloyd proposed the first physically feasible design for a quantum computer, the idea finally began to be taken seriously.

Why do we want computers to be *quantum* at all? Quantum mechanics describes tiny particles such as electrons, whose positions and velocities we cannot know for certain. We can only give an estimate as to where an electron might be, and how fast it is moving. Before we make the measurement, the electron could be in *any* position. This is a bit like pulling a card from a deck and knowing with some probability that the card will be a queen of spades. Before we flip the card to check, it is in a *superposition* of all the possible card types, just like the electron would have a *superposition* of all possible positions before we measure it.
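The card analogy can even be sketched in a few lines of Python. This is a toy model (the amplitudes and the coin-flip measurement are illustrative assumptions, not anything from Lloyd's work): an equal superposition gives each outcome a fifty-fifty chance, and only the act of measuring settles the value.

```python
import random

random.seed(0)  # fixed seed so the tally below is reproducible

# A toy "qubit": two amplitudes, one for outcome 0 and one for outcome 1.
# Measuring picks an outcome with probability equal to the squared
# magnitude of the amplitude (the Born rule).
def measure(amp0, amp1):
    return 0 if random.random() < abs(amp0) ** 2 else 1

# An equal superposition: like the face-down card, each outcome is
# equally likely until we look.
amp = 2 ** -0.5
counts = [0, 0]
for _ in range(10_000):
    counts[measure(amp, amp)] += 1

print(counts)  # roughly [5000, 5000]
```

Before a measurement, only the probabilities exist; each individual flip is as unpredictable as the card.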

What is the difference between regular and quantum computers? In a regular computer, information is encoded as bits, each read as either 0 or 1. In a quantum computer, information comes in a slightly different variety: quantum bits, or “qubits.” A qubit is physically allowed to be in one state, in the other, or somewhere in between. Qubits *can* encode a combination of 1 and 0, and so can store or process much more information than regular bits. And unlike a bit, which must pass its value through the rest of the system, a qubit collapses to a definite value the instant it is measured. How does this help us?

A small number of particles in a superposition of both 1 and 0 can give us an *enormous* amount of information—100 particles in superposition could represent every number from 1 to 2^100 (a very, very large number) at once (Aaronson, 2008). A classical computer reads one combination of, say, three bits at a time, while a quantum computer can hold all possible combinations at once. This means that quantum computers can process information in parallel: a system of *N* qubits can, in a sense, work through 2^N calculations at the same time, giving us a completely new and incredibly fast means of computing for tasks such as factoring large numbers or running the extremely complex algorithms used for data analysis in finance, science, and cryptography.
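To see why the bookkeeping explodes, here is a minimal sketch (plain Python, purely illustrative): writing down the state of *N* qubits in an equal superposition takes 2^N numbers, one amplitude per possible combination of 0s and 1s.

```python
def uniform_superposition(n):
    """All amplitudes for n qubits in an equal superposition.

    A classical description needs 2**n numbers, which is why simulating
    even a modest quantum computer on an ordinary machine blows up.
    """
    dim = 2 ** n
    return [dim ** -0.5] * dim  # each amplitude is 1/sqrt(2**n)

for n in (1, 3, 10):
    print(n, "qubits ->", len(uniform_superposition(n)), "amplitudes")
```

At 100 qubits the list would need 2^100 entries, far more than any hard drive can hold; the quantum computer carries that state natively.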

So, are quantum computers just really fast computers? Yes and no. A computer scientist is interested in how the time it takes to solve a problem grows as the problem gets bigger. This time is measured in the number of steps the algorithm takes to reach a solution. For instance, factoring a number takes far more steps than adding or multiplying. An algorithm is called efficient if the number of steps it takes grows slowly as the number of digits *N* increases (Aaronson, 2008).
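A quick way to feel this difference in step counts (a hypothetical toy, not Aaronson's analysis): multiplying two primes is a single operation for a computer, but undoing the multiplication by brute-force trial division takes a number of steps that balloons with the size of the number.

```python
def factor_steps(n):
    """Factor n by trial division, counting the divisions performed."""
    steps, d = 0, 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return d, steps
        d += 1
    return n, steps  # no divisor found: n itself is prime

# Multiplying two primes is one step for the machine, but brute-force
# factoring of the product takes thousands of divisions:
print(factor_steps(101 * 103))      # (101, 100)
print(factor_steps(10007 * 10009))  # (10007, 10006)
```

Each extra digit roughly multiplies the brute-force work, which is exactly the kind of growth that makes an algorithm inefficient.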

What we care about the most is the *time* it takes the computer to run an algorithm. For example, if you are given a map with hundreds of monuments in New York City, how long would it take you to find a tour that visits each monument only *once*? If someone hands you a tour, it is easy to *check* that it solves the problem; finding one in the first place is the hard part. Problems like this are called *NP*-complete, and even quantum computers are not believed to solve them efficiently (Aaronson, 2008). What Peter Shor of MIT showed in 1994 is that a quantum computer can factor large numbers in a number of steps that grows only modestly with the size of the number, while for the best known classical algorithms the time grows nearly *exponentially*. Similarly, physical processes (such as the neural synapses in your brain or photosynthesis in plants) tend to find efficient paths, giving us more parallels between reality and quantum computation.
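The “easy to check, hard to find” asymmetry is simple to demonstrate. In this sketch the monument names and roads are made up for illustration; verifying a proposed tour takes one quick pass, even though finding one among hundreds of monuments would mean searching astronomically many orderings.

```python
def is_valid_tour(tour, monuments, roads):
    """Check a proposed tour: every monument exactly once, and each
    consecutive pair joined by a road. Checking is fast; *finding*
    such a tour is the hard part."""
    if sorted(tour) != sorted(monuments):
        return False
    return all((a, b) in roads or (b, a) in roads
               for a, b in zip(tour, tour[1:]))

# A made-up toy map (names purely illustrative):
monuments = ["MET", "MoMA", "Guggenheim", "Whitney"]
roads = {("MET", "Guggenheim"), ("Guggenheim", "MoMA"),
         ("MoMA", "Whitney"), ("Whitney", "MET")}

print(is_valid_tour(["MET", "Guggenheim", "MoMA", "Whitney"],
                    monuments, roads))  # True
print(is_valid_tour(["MET", "MoMA", "MoMA", "Whitney"],
                    monuments, roads))  # False: MoMA visited twice
```

The check runs in time proportional to the tour length, while the number of candidate tours grows factorially with the number of monuments.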

Just like a quantum computer, physical processes involve the exchange and processing of information. Lloyd explains that when two electrons interact, their velocities and spins interact just like the patterns of firing neurons do in the brains of two businessmen talking on the phone. The amount of information lost during a process is related to how complex the encoding of information is. Lloyd compares this to long division: the results of the intermediate steps in long division are useless, or “junk” information (Lloyd, 2007). Physicists want to know the relevant information as well as this discarded information.

Think about your laptop: what is the limit to the amount of information it can process? There are two limitations. First, most of the energy is locked up in the mass of the computer itself; second, a computer uses many electrical signals to register just one bit. Perhaps the information of the universe is limited in the same way. Like any natural process, how fast a computer can process information must be limited by its energy and by the number of degrees of freedom it possesses.
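That energy limit can actually be put into numbers. The sketch below applies the Margolus-Levitin bound (a system with energy E can perform at most 2E/πħ elementary operations per second) to one kilogram of matter; this reproduces the “ultimate laptop” estimate Lloyd has derived in his own work on the physical limits of computation, which this article does not go into.

```python
import math

c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s

# All the energy locked up in 1 kg of computer, via E = m c^2:
E = 1.0 * c ** 2

# Margolus-Levitin bound on operations per second:
ops_per_sec = 2 * E / (math.pi * hbar)
print(f"{ops_per_sec:.1e}")  # about 5.4e50 operations per second
```

Real laptops fall short of this by dozens of orders of magnitude, precisely because almost all of their energy sits inertly in their mass.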

Ed Fredkin first proposed that the universe could be a computer in the 1960s; Konrad Zuse came up with the idea independently around the same time. In their view, the universe could be a type of computer called a cellular automaton: a dynamic system broken up into a grid of black and white cells, in which each cell gathers information from the surrounding cells to decide whether or not to change color (Lloyd, 2007). This is similar to the way a moving colony of ants might share information about their surroundings, signaling to one another whether or not to follow a food trail.
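A cellular automaton of the kind Fredkin and Zuse had in mind is easy to simulate. The sketch below implements an elementary one-dimensional automaton (rule number 110 is an arbitrary illustrative choice): each cell's next color depends only on its own color and those of its two neighbors.

```python
def step(cells, rule=110):
    """One tick of an elementary cellular automaton: each cell looks at
    itself and its two neighbors (wrapping at the edges), and the rule
    number's bits decide its next color (0 = white, 1 = black)."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Start from a single black cell; every generation is computed purely
# from local neighborhoods, yet a global pattern emerges.
row = [0] * 14 + [1] + [0] * 14
for _ in range(6):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Each cell consults only its immediate surroundings, just as each ant consults only its neighbors, yet the grid as a whole carries out a computation.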

However, this initial analogy to such sharing of information turned out to be not quite accurate. Regular computers are not so good at simulating quantum systems, which do not stick to “yes” or “no” signals; quantum systems can have *mixed* signals! These mixed signals are superpositions of states, and they can only be simulated efficiently by a *quantum* rather than a classical computer. Since the universe itself is best described by quantum mechanics, Lloyd nonchalantly suggests that “quantum computing allows us to understand the universe in its own language.”

However, there is a catch. Because qubits are so small and shift between states so easily, it is often hard to pinpoint their precise states. Although the first experiments with encoding information in qubits were not very encouraging, Thomas Harty of the University of Oxford and his colleagues have recently reported that trapped ions can be used to “read” qubit states with very high accuracy—an error rate of only 0.07% (Harty et al., 2014). This held both for qubit readout and for the preparation of logic gates, the building blocks of quantum circuits.

Despite such a low error rate, there is still much to do. The average error rate per logic gate, which characterizes the quantum circuit as a whole, was reported by Harty’s group to be considerably larger. Such errors are amplified in exactly the kinds of systems most in need of quantum computation—such as very complicated atomic systems.

*“Physicists are not the only ones keen to reap the benefits of quantum computing; agencies like the CIA and NSA might be on its tail too.”*

Physicists are not the only ones keen to reap the benefits of quantum computing; agencies like the CIA and NSA might be on its tail too. Beyond fundamental physics and data processing, quantum computing is fair game for encryption and code-breaking alike. Steven Rich, database editor at the Washington Post, suggests that the NSA’s investment in quantum computation looks much like the sort of “arms race” that played out during the Cold War (Ashbrook, 2014). If this is true, would quantum computation really give an advantage in defensive encryption, or would it open the doors to large-scale domestic and international privacy invasion?

“If anything that can happen ultimately does, I think it will at some point be done,” says Seth Lloyd, emphasizing that the laws of physics do not forbid large-scale encryption or data mining via quantum computing (Ashbrook, 2014). However, he notes that this would be very difficult, since all of the information must be stored on a very “delicate” atomic scale. Existing quantum computers consist of at most a few dozen quantum bits—far from being able to crack a code.

Companies like IBM and the Canadian firm D-Wave are also investing heavily in quantum computation research. In fact, D-Wave offers the first commercially available quantum computer, based on a 128-qubit chip (BBC News, 2014). At $15 million, it might be a bit pricey for a household, but just right for a national budget.

The universe, however, might have already invested in a quantum computer. After all, information is processed in a very quantum mechanical way on both tiny and large scales. The efficiency of these processes may very well hint at the universe’s true nature—a quantum one.

#### Works Cited

Aaronson, S. (2008). The Limits of Quantum Computers. *Scientific American*, 62-69. Web. 5 Dec 2014.

Ashbrook, Tom. (2014). Quantum Computing, The NSA And The Future Of Cryptography [Radio series episode]. *On Point with Tom Ashbrook*. Web.

Lloyd, Seth. (1988). “Black Holes, Demons, and the Loss of Coherence”. PhD Thesis, The Rockefeller University.

Lloyd, Seth. (2007). *Programming the Universe*. New York, NY: Random House, Inc.

Palmer, Jason. (2011). Quantum Computing Device Hints at a Powerful Future. *BBC News: Science and Environment*. Web.

Quantum phenomenon shown in $15m D-Wave computer. (2014). *BBC News: Science and Environment*. Web.

Srivastava, S. (2003). Fear Not Traveling Salesman, DNA Computing is Here to Save the Day. *Journal of Young Investigators*. Web.

Harty, T. P., Allcock, D. T. C., Ballance, C. J., Guidoni, L., Janacek, H. A., Linke, N. M., Stacey, D. N., & Lucas, D. M. (2014). High-Fidelity Preparation, Gates, Memory, and Readout of a Trapped-Ion Quantum Bit. *Physical Review Letters*, *114*(220501), 1-5.