
Jeffrey Scholz
@Jeyffre
I read Google's paper about their quantum computer so you don't have to. They claim to have run a quantum computation in 5 minutes that would take a normal computer 10^25 years. But what was that computation? Does it live up to the hype? I will break it down.đź§µ
First we need a crash course in quantum computing. A classical computer keeps information in 64-bit groups called "words." Each of these groups of bits is run through logic gates such as AND, OR, and NOT. These gates take one or two bits as input and return a single bit as output. AND returns 1 if both inputs are 1, OR returns 1 if at least one input is 1, and NOT returns 1 if the input is 0.
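For the code-minded, here are those three gates as a few lines of Python (a toy sketch of the logic, not how the hardware actually implements it):

```python
# Classical logic gates: one or two bits in, a single bit out.
def AND(a, b): return a & b   # 1 only if both inputs are 1
def OR(a, b):  return a | b   # 1 if at least one input is 1
def NOT(a):    return 1 - a   # 1 only if the input is 0

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 0) == 0 and OR(0, 1) == 1
assert NOT(0) == 1 and NOT(1) == 0
```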
A quantum computer, instead of using bits, uses qubits. A qubit is (in this simplified picture) a two-dimensional vector of real numbers -- technically the entries are complex numbers, but real numbers are enough for this thread. This means one qubit can hold "infinitely" more information than a single regular bit. A bit can only hold 0 or 1, but a qubit can hold any 2d vector, so it is much more expressive.
You can think of an individual qubit as a "probability distribution" -- it leans toward certain values but doesn't have a concrete value. However, when the qubit is "observed" it is forced into either [1, 0], representing a binary 0, or [0, 1], representing a binary 1. The probability of each outcome is the square of the corresponding entry. We call this transformation from a "probability distribution" to a concrete binary value "collapsing" the qubit.
For example, a non-collapsed qubit might look like [0.8366, 0.5477] (note the squares of the entries sum to 1). A quantum gate is effectively matrix multiplication. If you multiply a qubit (a 2-dim vector) by a 2x2 matrix, you get another qubit (a 2-dim vector) back. For example, the quantum NOT gate is the 2x2 matrix [[0, 1], [1, 0]]. If that is multiplied by the qubit [1, 0], the qubit is flipped to [0, 1]. This still works as expected for non-collapsed qubits.
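Here is that picture in numpy (a toy sketch; real qubit amplitudes are complex numbers, but real numbers are enough to see the mechanics):

```python
import numpy as np

zero = np.array([1.0, 0.0])          # collapsed qubit: binary 0
one  = np.array([0.0, 1.0])          # collapsed qubit: binary 1

# The quantum NOT gate (usually called X) as a 2x2 matrix.
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(X @ zero)                      # [0. 1.]  -> flipped to binary 1

# It also works on a non-collapsed qubit: the amplitudes just swap.
psi = np.array([0.8366, 0.5477])
print(X @ psi)                       # [0.5477 0.8366]
```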
Like a regular computer, a quantum computer keeps bits in groups. So a 64-qubit quantum computer would have 64 of these 2d vectors serving as its "word." (Strictly speaking, 64 entangled qubits need a single 2^64-dimensional vector to describe jointly -- which is exactly why classical computers struggle to simulate them -- but the simplified picture is fine here.) Here is where the speedup happens: in a regular computer, each of the 64 bits doesn't know anything about the value of any of the other 64 bits. If we want one bit to affect another bit, we have to explicitly combine them with a logic gate. However, in a quantum computer, each of the 64 qubits can "talk to each other" via "quantum entanglement."
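A minimal sketch of that "talking to each other," using two qubits instead of 64: the joint state is one 4-dim vector, and a standard two-qubit gate (CNOT) entangles them. (The Hadamard and CNOT matrices below are textbook gates, not anything specific to Google's paper.)

```python
import numpy as np

zero = np.array([1.0, 0.0])

# Hadamard gate: puts a qubit into an even "probability distribution."
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# CNOT gate: flips the second qubit only when the first one is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Joint state of two qubits = Kronecker product of the individual states.
state = np.kron(H @ zero, zero)   # a 4-dim vector
print(CNOT @ state)               # [0.7071 0. 0. 0.7071]
# The qubits now collapse together: reading one fixes the other.
```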
Running a quantum circuit means you plug in a qubit vector, run it through a bunch of matrix multiplications, then collapse the output. The final vector will be the correct answer. Technically, quantum computers can give wrong answers, but if you run the computation multiple times, you will get the correct answer on average.
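Here is a toy model of "collapsing" and running the computation multiple times, using the simplified rule from earlier that each outcome's probability is the squared amplitude:

```python
import numpy as np

def collapse(qubit, shots=10_000):
    probs = qubit ** 2
    probs = probs / probs.sum()     # normalize away rounding error
    return np.random.choice([0, 1], size=shots, p=probs)

psi = np.array([0.8366, 0.5477])
print(collapse(psi).mean())         # ~0.30: collapses to 1 about 30% of the time
```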
So basically, quantum computing is "matrix multiplication of real numbers on crack."
The current problem with quantum computers is that as the circuit gets bigger, they become less correct on average. All of that "talking to each other" creates so much noise that the system stops working. Once your probability of being correct drops below a certain threshold, your quantum computer becomes useless. This is a major blocker for quantum computing today.
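A back-of-the-envelope illustration of why (the 99.9% per-gate fidelity below is a number I made up for illustration, not one from the paper): errors compound exponentially with circuit size.

```python
# If every gate is 99.9% faithful, the whole circuit's fidelity
# decays exponentially with the number of gates.
per_gate = 0.999
for n_gates in (10, 100, 1_000, 10_000):
    print(f"{n_gates:>6} gates -> fidelity {per_gate ** n_gates:.4f}")
#     10 gates -> fidelity 0.9900
#    100 gates -> fidelity 0.9048
#  1_000 gates -> fidelity 0.3677
# 10_000 gates -> fidelity 0.0000  (hopeless)
```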
Let's look at a specific (oversimplified but helpful) example. Suppose you shine a laser beam into an ice cube. Actually simulating what the laser will do when it exits the ice cube is very hard, because quantum phenomena are involved. To actually compute what the laser will do, you have to explicitly compute the quantum entanglement, which is slow on a classical computer but "built in" to a quantum computer.
However, you can *estimate* the distribution of how the laser will scatter without a quantum computer, so you have at least a rough idea of whether your answer might be correct. Theoretically, a quantum computer can predict what the laser will do when it enters and exits the ice cube, assuming it knows the distribution of the particle structure of the ice cube. You could model the laser traveling through the ice as a long chain of matrix multiplications -- effectively multiplication by a lot of seemingly random matrices, with some entanglement involved.
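A toy sketch of that "lots of seemingly random matrix multiplications" idea, using random rotation matrices as a real-number stand-in for the layers of a quantum circuit (this is my illustration, not Google's actual circuit):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_rotation(n):
    # QR decomposition of a random Gaussian matrix gives a random
    # orthogonal matrix -- our stand-in for one "layer" of scattering.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

state = np.zeros(2 ** 3)
state[0] = 1.0                      # 3-qubit word, starting at binary 000
for _ in range(20):                 # the "laser" bounces 20 times
    state = random_rotation(len(state)) @ state

print(np.round(state ** 2, 3))      # a spiky, random-looking distribution
```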
By analogy, this is what Google was doing. The computation Google ran was a "pseudo-random quantum circuit" (think pseudorandom ice cube), and we know a quantum circuit is just matrix multiplications (on crack). So it is a bunch of random matrix multiplications with an output that looks right. Google's actual breakthrough is that the output of the circuit "looks correct" -- which sounds underwhelming, and compared to the headlines, it definitely is. The academic breakthrough is that Google was able to use a larger circuit and notice an apparent *increase* in accuracy -- in our analogy, while modeling how a laser shines through an ice cube. That is noteworthy. You can definitely tell when a computation has failed, and it seemed to fail less as the circuit got bigger. Normally, a quantum computer simulating a laser going through an ice cube would produce a garbled mess, because quantum computers lose too much accuracy as the circuit grows and the output no longer collapses to the right answer on average. But that was not the case in the Google experiment.
However, note that the problem is "rigged" in favor of quantum computers. The benchmark explicitly models a quantum phenomenon, so *of course* we get a speedup. In other words, Google created a random distribution on the output that "seems correct." Why does it only "seem" correct? Because, by design, the computation cannot be run on a classical computer. But if we can't run it on a classical computer, how do we know the quantum computer is actually giving the right answer? The answer is: we don't, and this is a serious gap.
But how big a step is this toward quantum computers breaking cryptography? What Google did was build circuits with 9 gates, 25 gates, and 49 gates (very roughly speaking) and notice that the accuracy of the output seemed to improve — or rather, the computation failed less. Therefore, by extrapolation, something with a lot more gates should be even more accurate. Right? Skeptics would be right to point out that extrapolating from 49 to 49,000 is a very big leap.
Quantum computing right now is kind of at the stage where some smart teenager wired a few logic gates together in a random fashion and said "hey look, my circuit made a random output and didn't explode!" Compared to previous attempts, that is an improvement. But he is still a long way from training an LLM. Going from random circuits to circuits that do something useful, especially ones that require orders of magnitude more gates, is a whole different problem. In my opinion, no practical "quantum supremacy" has been demonstrated. The problem at hand is to explicitly model a quantum phenomenon, so it should be no surprise that a quantum computer does it better.
In my opinion, this whole thing is a regular exercise in technological marketing:
1. Write about some seemingly powerful tech that creates fear (AI, nuclear, quantum, etc). Fear creates engagement.
2. Do a PR campaign where journalists write clickbaity headlines on a subject they are incapable of explaining, let alone understanding.
3. Try to enhance your brand by showing you are achieving some success on a hard problem.
Google desperately needed to do this because they are losing the AI narrative, so they have a strong incentive to run a coordinated PR stunt.
Ok, this thread took off. For those who are new followers: I’m the founder of the company @RareSkills_io. We focus on making difficult subjects in computer science, particularly blockchain, approachable and useful to you. Our most famous resource is the “RareSkills ZK Book” (do an internet search for it; X doesn’t like links). It is 100% free and requires no email or login. It teaches you how the field of cryptography called “Zero Knowledge Proofs” works and how to code the proofs up yourself.