Sunday, January 5, 2025

Quantum computing’s Achilles’ heel: quantum error correction methods that ensure data integrity and scalability are crucial for widespread adoption.

Developing quantum computing systems poses formidable technical challenges. Qubits, the fundamental components of quantum computers, are built from superconducting circuits operating at temperatures mere thousandths of a degree above absolute zero, as well as from other physical systems such as individual atoms and photons. Beyond the hurdles of engineering at such extremes lies a more fundamental problem: the universe’s relentless tendency to pull these carefully isolated quantum systems back into equilibrium with their surroundings. However well a quantum computer confines quasiparticles within superconducting qubits or atoms within laser tweezers, the environment continues to exert subtle but persistent influences through vibration, radiation, and stray heat. All of these intrusions threaten the delicate computational state, which risks dissolving into the unstructured entropy of its surroundings.

For many people, quantum computing burst into awareness with Google’s 2019 announcement of “quantum supremacy,” a term that suddenly commanded widespread attention. The blog post and accompanying press coverage described a contrived task run on 53 superconducting qubits in their lab at UCSB, one the team claimed could not be replicated on classical hardware in any reasonable amount of time. Lost in the heated debate over this rather obscure claim was a point obvious to experts but not to newcomers: Google’s machine had no ability to detect and correct its own mistakes. The team had carefully tuned Sycamore’s gate operations to minimize errors, yet noise, rather than the relatively small number of qubits, remained the primary obstacle to what the experiment could achieve.

As Google’s quantum machines and those from IBM, Rigetti, and IonQ sparked widespread interest, a lingering issue, noise, remained easy for newcomers to the field to overlook. Physicist John Preskill had already put the problem front and center in his talk at the 2017 Q2B conference, describing the newly built machines as “noisy, intermediate-scale quantum” (NISQ) computers, a term that succinctly captures both their limitations and their capabilities. Preskill argued that NISQ computers would nevertheless be valuable for three reasons: to map out their own limitations and thereby set the stage for the machines that follow; to serve as unique laboratory instruments capable of yielding new scientific results; and on the slim chance that someone might stumble upon something practically useful to run on them.

Unlocking practical value from noisy intermediate-scale quantum (NISQ) computers was always a long shot. The field had long assumed that dealing with the errors caused by noise would require clever error correction before any meaningful progress could be made. When Peter Shor discovered the quantum factoring algorithm in 1994, it caused widespread astonishment, but its practical application seemed remote because of the precision the calculation demands, which in turn requires robust error correction. There was nevertheless faith that clever engineering would keep producing higher-quality qubits, and the following quarter century put that faith to the test. By 2019, Google’s team had pushed error rates down to as low as 0.16%, fewer than 2 mistakes per 1,000 quantum operations on a single qubit.1
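
To see why even error rates that low are limiting, a rough back-of-the-envelope model helps. The sketch below is an illustrative simplification, assuming independent errors and no correction rather than reproducing any figure from Google's paper: the chance that a circuit of n gates runs without a single error is about (1 - p)^n.

```python
# Rough illustration: probability that a circuit of n_gates operations
# completes with no error, assuming an independent per-gate error rate p
# and no error correction.
def success_probability(p: float, n_gates: int) -> float:
    return (1.0 - p) ** n_gates

for n in (100, 1_000, 10_000):
    print(f"{n:>6} gates at p = 0.16%: "
          f"{success_probability(0.0016, n):.1%} chance of zero errors")
```

At a thousand gates the odds of a clean run are already down to about one in five, which is why deeper circuits demand error correction.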

Beyond the engineering hurdles of building qubits, these tiny units of quantum information are vulnerable to a type of error specific to quantum computing itself. Like classical bits, they are susceptible to bit flips, in which a “0” is inadvertently flipped to a “1,” or conversely. But qubits can also suffer “phase-flip errors,” in which the bit value remains unchanged while the phase of the qubit’s quantum state flips from positive to negative.

In wave terms, this is an error in which the amplitude of the wave stays constant while its peaks become troughs and vice versa, an effect with no counterpart in classical computing.
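
In standard notation, where a qubit's state is written α|0⟩ + β|1⟩, the two error types are the Pauli X and Z operators acting on that state. This is a textbook identity rather than anything specific to a particular machine:

```latex
X(\alpha|0\rangle + \beta|1\rangle) = \alpha|1\rangle + \beta|0\rangle \quad \text{(bit flip)}, \qquad
Z(\alpha|0\rangle + \beta|1\rangle) = \alpha|0\rangle - \beta|1\rangle \quad \text{(phase flip)}.
```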

The very properties of quantum data that give quantum computing its potential power also make it fragile. Qubits must operate in a coherent state, harnessing superposition and entanglement to create a vast, multidimensional computational space. Measuring a qubit’s state to check whether a bit flip or phase flip has occurred collapses that state, irreversibly destroying the quantum data. Not only does this limitation make it impossible to detect errors directly, it also means that once an error has occurred, there is no way to reconstruct the correct quantum state from the measurement alone.

Despite prevailing doubts, Peter Shor, less than a year after his 1994 factoring algorithm breakthrough, devised the first error-correcting code for quantum computing in 1995. Classical error correction had been pioneered by Richard Hamming, an American mathematician who worked alongside Claude Shannon at Bell Labs after contributing to the Manhattan Project. Hamming codes rely on redundancy, repeating information, to detect and correct errors. That technique couldn’t simply be copied into the quantum realm, for the reasons explained above. Shor’s solution was to design a circuit that spreads a single quantum state across nine physical qubits, which together form one logical qubit. The logical qubit combines a three-qubit bit-flip code with a three-qubit phase-flip code, making it resilient to both types of error, as illustrated in Figure 1. The circuit shown is only the first step toward a fault-tolerant quantum algorithm, which requires repeated cycles of measuring designated qubits, identifying errors from those measurements, and applying corrections throughout the circuit’s operation. The corrections are applied with additional gates before the qubit’s state is finally measured.
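
As a concrete illustration, here is a minimal sketch of just the three-qubit bit-flip half of that construction, written with Qiskit (assumed to be available); the full nine-qubit code and the repeated correction cycles are omitted:

```python
from qiskit import QuantumCircuit

# Three-qubit bit-flip code: qubits 0-2 carry the data, qubits 3-4 are
# ancillas used to measure parity checks without disturbing the encoded state.
qc = QuantumCircuit(5, 2)

# Encode: spread the state of qubit 0 across qubits 0, 1, and 2.
qc.cx(0, 1)
qc.cx(0, 2)

qc.barrier()  # a bit-flip error on any one data qubit could occur here

# Syndrome extraction: ancilla 3 records the parity of qubits 0 and 1,
# ancilla 4 the parity of qubits 1 and 2.
qc.cx(0, 3)
qc.cx(1, 3)
qc.cx(1, 4)
qc.cx(2, 4)
qc.measure(3, 0)
qc.measure(4, 1)

# Syndrome (1, 0) -> flip qubit 0; (1, 1) -> flip qubit 1; (0, 1) -> flip qubit 2.
```

The crucial point is that the ancillas learn only the parities of the data qubits, never their values, so the encoded superposition survives the measurement.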

While Shor’s result confirmed that errors in quantum data could in principle be corrected, his approach had limits: it could handle only a single-qubit error, which is not enough for reliable long-term computation. Fortunately, when hard problems arise, multiple people and groups often attack them at once. In 1997, Alexei Kitaev, then a physicist at the Landau Institute for Theoretical Physics in Russia, pioneered a different approach that maps qubit states onto a lattice, as depicted in Figure 2, whose edges wrap around to form a torus.

Every vertex on the lattice is a point of intersection (one is labeled in Figure 2), and each square unit of the lattice, known as a plaquette (labeled P1), is the basic building block of the code. The logical qubit is encoded by requiring that each plaquette have an even number of 1s among its four qubits; the qubits surrounding each vertex must likewise have even parity. Mid-circuit measurements detect violations of these parity constraints, known as “syndromes,” which indicate a bit flip or a phase flip. Because any single flip is detected by the checks adjacent to it, the code’s robustness grows with the size of the torus, as illustrated in Figure 3. The toric code can encode two logical qubits in a minimum of 21 physical qubits while tolerating small numbers of simultaneous errors, making it what is called a “distance-3” code.
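
In Kitaev's formulation these parity requirements become "stabilizer" operators, one X-type check for every vertex v and one Z-type check for every plaquette p, with the qubits sitting on the lattice edges. This is the standard textbook statement of the toric code rather than a description of any particular figure:

```latex
A_v = \prod_{e \in \mathrm{star}(v)} X_e, \qquad
B_p = \prod_{e \in \partial p} Z_e, \qquad
A_v|\psi\rangle = B_p|\psi\rangle = |\psi\rangle \ \text{for all valid code states } |\psi\rangle.
```

A measurement outcome of -1 on any of these checks is precisely the "syndrome" described above: it flags an error without revealing, or collapsing, the encoded logical information.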

In the late 1990s, Shor’s and Kitaev’s pioneering work established the two broad families into which most quantum error correction schemes still fall. Shor’s method, an “additive” approach, adapted classical error correction techniques to quantum data, while Kitaev’s leveraged topological properties of the quantum system itself. Codes in Shor’s lineage, known in the theory literature as Calderbank-Shor-Steane (CSS) codes, tend to need fewer physical qubits per logical qubit but are less resilient and harder to scale. Topological codes, including Kitaev’s toric code along with surface codes, color codes, and other variants, offer greater resilience, scalability, and durability in practice. This two-way split oversimplifies the diverse landscape of quantum error correction; even the careful classifications compiled by experts only hint at its complexity.

Despite small-scale demonstrations of Shor’s and Kitaev’s codes and their many variants and successors, most of the attention and funding during the NISQ era went to scaling up qubit counts and improving physical qubit quality. Recently, though, signs of a shift from NISQ toward logical qubits have emerged. Microsoft and Quantinuum collaborated on a demonstration that created logical qubits by running a “tesseract” code, a classically derived member of the CSS family. They encoded four logical qubits into 16 physical qubits on Quantinuum’s trapped-ion quantum computer, ran five rounds of operations with error correction, and scaled the demonstration up to 12 logical qubits; the measured logical error rate of 0.11% was more than 20 times lower than that of the underlying physical qubits.

Meanwhile, Google demonstrated significant progress in topological error correction with the surface code, described in an outstanding paper posted to the arXiv in August. The team implemented a full surface code on a 105-qubit device, reaching a code distance of 7 with an error rate of just 0.143% per error correction cycle. Figure 5 shows how the code’s performance improved as the distance was increased from 3 to 5 and then to 7. As more qubits were added and the logical qubit grew more robust, the logical error rate fell below that of the physical qubits, demonstrating a real degree of scalability.
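
The "below threshold" behavior behind these numbers is usually summarized with an error-suppression factor Λ: each increase of the code distance by two divides the logical error rate per cycle by roughly Λ, so that, in the standard approximation used in the surface code literature,

```latex
\varepsilon_d \approx \frac{A}{\Lambda^{(d+1)/2}}, \qquad \Lambda > 1 \iff \text{adding qubits suppresses errors}.
```

Google's reported improvement from distance 3 to 5 to 7, with the error rate roughly halving at each step, corresponds to a Λ of about 2, which is what makes building larger codes worthwhile.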

Each of these experiments also highlights the drawbacks and challenges of its respective path. The Quantinuum experiment leaned on its high-quality trapped-ion qubits, which boast two-qubit gate fidelities of 99.87%, effectively unlimited coherence times, and all-to-all connectivity that lets any qubit interact with any other, a kind of connectivity superconducting devices lack. Ion traps had previously been limited to roughly 30 qubits, a boundary Quantinuum pushed past by designing a “racetrack”: an oval trap around which ions can be physically shuttled, effectively extending the capacity of a one-dimensional trap. Impressive as that engineering is, it does not translate into a dramatic leap in the number of usable qubits. And even with more elaborate architectures, ions make slow qubits, held back by long gate times and by the need to physically transport ions into proximity for two-qubit gates. Superconducting devices, measured in wall-clock time, perform operations orders of magnitude faster.

Nonetheless, speed isn’t everything. Google’s results show that widening the surface code’s distance steadily reduces the logical error rate. Reaching distance 7 took 105 physical qubits for a single logical qubit. A logical qubit with an error rate of around 10⁻⁶, roughly one error per million operations, would require a distance of about 27, or 1,457 physical qubits, though the exact distance needed depends on the precision the computation demands. For comparison, IBM’s largest superconducting quantum processing unit (QPU) is the 1,121-qubit Condor chip, which has limited interconnectivity and has not been made available on IBM’s public cloud, presumably because of its gate fidelities. Scaling further will demand an innovative modular approach, networking something on the order of 1,500 such chips into a single machine. Estimates for running Shor’s algorithm to factor a 1,024-bit number into its primes call for at least 2,000 logical qubits, which with Google’s surface code at that distance translates into roughly 3,000,000 physical qubits. The computation would also require a few billion gate operations, which even at a 10⁻⁶ logical error rate implies thousands of errors over the course of the run.
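
The arithmetic behind these estimates is easy to reproduce. The sketch below simply re-derives the figures quoted above from the standard 2d² - 1 surface code patch size and the 2,000-logical-qubit assumption; real resource estimates vary widely:

```python
# Back-of-the-envelope surface code resource estimate (illustrative only).
def physical_per_logical(d: int) -> int:
    """Data plus measurement qubits for one distance-d surface code patch."""
    return 2 * d * d - 1

print(physical_per_logical(7))   # 97: a single d=7 patch fits on Google's 105-qubit device
print(physical_per_logical(27))  # 1457: the d=27 patch needed for ~1e-6 logical error rates

logical_qubits = 2_000           # rough estimate for factoring a 1,024-bit number
physical_qubits = logical_qubits * physical_per_logical(27)
print(f"{physical_qubits:,}")    # 2,914,000: the ~3 million figure quoted above

gate_ops = 3e9                   # "a few billion" logical operations
logical_error_rate = 1e-6
print(f"expected errors: {gate_ops * logical_error_rate:,.0f}")  # ~3,000
```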

Quantum computing enthusiasts may be dismayed by that arithmetic, but it’s important to recognize that each of these experiments implements relatively naive codes on hardware that was not designed with any particular code in mind. In August 2023, IBM posted a paper to the arXiv proposing that new chip designs could achieve far better ratios of physical to logical qubits. The proposal drew on another venerable error correction technique, low-density parity-check (LDPC) codes, which date to the early 1960s and became widespread in communication systems once enough computational power was available to exploit their efficiency. IBM’s team described a biplanar chip with 144 physical qubits per layer, wired so that quantum LDPC codes could encode 12 logical qubits with a distance of 12.
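
In the standard [[n, k, d]] notation, n physical data qubits encoding k logical qubits at distance d, IBM's proposal is a [[144, 12, 12]] code. Comparing data-qubit counts with a surface code of similar distance shows why qLDPC codes are attractive; this rough sketch ignores the ancilla qubits both schemes also need for syndrome measurement:

```python
# Data-qubit overhead: IBM's "gross" qLDPC code vs. one surface code patch
# per logical qubit, at comparable code distance.
n, k, d = 144, 12, 12                 # the [[144, 12, 12]] gross code
gross_per_logical = n / k             # 12 data qubits per logical qubit

surface_d = 12
surface_per_logical = surface_d ** 2  # a d=12 surface code patch holds one logical qubit

print(f"gross code:   {gross_per_logical:.0f} data qubits per logical qubit")
print(f"surface code: {surface_per_logical} data qubits per logical qubit")
# Roughly an order-of-magnitude saving, which is the appeal of qLDPC codes.
```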

IBM’s “gross code,” named for the gross (a dozen dozen) of physical qubits on each chip layer, so far exists only on paper: it was laid out in an arXiv preprint and subsequently published in 2024. At Harvard and MIT, Mikhail Lukin and Vladan Vuletic, reportedly spurred by IBM’s progress on LDPC codes, developed their own LDPC approach and implemented it on a neutral atom machine. Their paper, published in December 2023, exploited the flexibility with which optical lattices confine atoms and optical tweezers move them around, arranging the vacuum chamber into a von Neumann-style architecture with distinct zones for storage, entanglement, readout, and error correction, as depicted in Figure 7. Using LDPC codes, the team turned 280 physical qubits into 48 logical qubits, with codes reaching distance 7. The neutral atom implementation went a step further than IBM’s paper design: it not only encoded the 48 logical qubits but also executed some 200 transversal gate operations on them. Still, the results fell short of full fault tolerance, because the system did not run complete cycles of gate operation, syndrome detection, and correction, handling corrections only after the fact.

Neutral atoms, unlike ions, don’t face the same trapping and scaling problems: current hardware from QuEra and Pasqal arranges hundreds of qubits in two-dimensional optical lattices. And as Lukin and Vuletic’s experiment showed, researchers can reconfigure a neutral atom processor for error correction on the fly, without the design-fabrication-characterization life cycle a superconducting chip requires. Neutral atom systems do share the trapped ions’ biggest limitation, though: they are slow. QuEra’s current machine, Aquila, an analog quantum simulator with no gate operations, processes roughly three jobs per second. Adding gates and error correction will not make it dramatically faster. IBM, by contrast, benchmarks its systems in thousands of circuit layer operations per second (CLOPS).
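
To make the speed gap concrete, here is a rough sketch of how long a billion sequential logical operations would take at different operation rates. The rates are assumed round numbers for illustration, not measured vendor figures:

```python
# Illustrative wall-clock times for one billion sequential logical operations.
# The rates below are assumptions chosen as round numbers, not vendor specs.
SECONDS_PER_DAY = 86_400

assumed_rates_hz = {
    "ion/atom-like (1 kHz)": 1e3,
    "superconducting-like (1 MHz)": 1e6,
}

n_ops = 1e9
for name, rate in assumed_rates_hz.items():
    seconds = n_ops / rate
    print(f"{name}: {seconds:,.0f} s (~{seconds / SECONDS_PER_DAY:.2f} days)")
```

At kilohertz rates the run stretches to nearly two weeks; at megahertz rates it takes minutes, which is why clock speed matters so much once circuits grow to fault-tolerant sizes.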

Even when IBM fabricates its gross-code chip, it won’t immediately usher in the era of scalable, fault-tolerant logical qubits. The LDPC codes developed by IBM and by QuEra’s founders primarily protect Clifford gates, which can be simulated efficiently on classical computers and do not form a universal gate set. Toffoli gates are typically added to the Clifford set to achieve universality, but if they sit outside the LDPC code’s protection they remain as error-prone as gates on today’s noisy systems. Both companies are developing workarounds to reach universality, IBM relying on z-rotations and QuEra on transversal gates, and both approaches appear to rely on “magic states,” high-fidelity resource states distilled from noisier physical ones, to implement the missing operations. Implemented correctly, these techniques would let logical qubits deliver reliable results despite small residual error rates.
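
For reference, and as a textbook summary rather than either company's specific scheme: the Clifford group is generated by the Hadamard, phase, and CNOT gates, and universality requires one extra ingredient, usually the T gate (or the Toffoli gate), which fault-tolerant schemes implement by consuming a distilled "magic state":

```latex
\text{Clifford} = \langle H,\ S,\ \mathrm{CNOT} \rangle, \qquad
T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}, \qquad
|m\rangle = \frac{|0\rangle + e^{i\pi/4}|1\rangle}{\sqrt{2}}.
```

Magic state distillation turns many noisy copies of |m⟩ into a few high-fidelity ones, which are then injected into the computation via gate teleportation to realize the missing non-Clifford gate.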

Newer superconducting qubit technologies take a different, hardware-assisted approach to fault tolerance, implementing bosonic codes such as “cat qubits” and “dual-rail qubits” directly in hardware. Vendors such as Alice & Bob, Nord Quantique, and Quantum Circuits Inc. plan to ship devices in 2025 that offer early versions of hardware-assisted logical qubits. Meanwhile, Google Quantum AI has unveiled AlphaQubit, an “AI decoder” built with DeepMind’s machine learning expertise, which identifies errors in quantum states with roughly 6% fewer mistakes than the best existing decoding methods. Machine learning seems likely to play a growing role not just in decoding but in programming logical qubits, since logical gate operations are considerably more complex than their physical counterparts.
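
What a "decoder" actually does can be seen in a toy example: the lookup table below decodes the three-qubit code sketched earlier, a vastly simpler task than the matching, tensor-network, or learned decoders used for real surface codes:

```python
# Toy decoder: maps a measured syndrome to a correction for the three-qubit
# bit-flip code sketched earlier. Real surface code decoders (minimum-weight
# matching, tensor networks, or learned models such as AlphaQubit) solve a far
# larger version of this lookup problem, and must do it quickly enough to keep
# up with the hardware.
SYNDROME_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # parity(q0, q1) violated only -> flip qubit 0
    (1, 1): 1,     # both parities violated       -> flip qubit 1
    (0, 1): 2,     # parity(q1, q2) violated only -> flip qubit 2
}

def decode(syndrome: tuple[int, int]) -> int | None:
    """Return the index of the qubit to flip, or None if no correction is needed."""
    return SYNDROME_TABLE[syndrome]

print(decode((1, 1)))  # -> 1
```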

Where does a year of progress in quantum error correction leave us? The path to fault tolerance remains unclear, but it now seems safe to say that predictions that NISQ devices would never deliver business value were accurate. In a striking reversal, executives at software companies once enthusiastic about hybrid algorithms pairing noisy qubits with classical computing have grown openly skeptical. Tennin Yan, CEO of QunaSys, delivered the bluntest verdict on stage at Q2B Paris in 2023, declaring the approach “dead.”2 Over the next year we are likely to see devices with diverse error correction mechanisms and differing definitions of the logical qubit, the beginning of a new era in the technology’s development. It can be hard to stay optimistic about the industry’s pace of progress, headline breakthroughs notwithstanding. Yet progress undeniably continues, and the threshold for tangible quantum gains keeps drawing closer. Simulating the entangled states of 50 or more high-quality qubits already exceeds the capabilities of the most powerful classical computing infrastructure in the world. When IBM ships a cluster of five of its 12-logical-qubit chips, or QuEra delivers a device with 300 neutral atoms encoding logical qubits, or other providers make comparable advances along these trajectories, we may find ourselves at the threshold of a genuinely new era in quantum computing.


Footnotes

  1. Arute, F., Arya, K., Babbush, R., et al. “Quantum supremacy using a programmable superconducting processor.” Nature 574 (2019): 505-510.
  2. Tennin Yan, “Past the Vanguard of Evolution: Enhancing Quantum Computing’s Practical Applications” (Presentation at Q2B, Paris, May 4, 2023).
