NEWS

IBM unveils its future Quantum Starling, 20,000 times more powerful than current ones


IBM lays out its roadmap to compete in the quantum computing market with a fault-tolerant machine 20,000 times more powerful than current ones, aiming to deliver it by 2029. Other companies, such as Google, IonQ, and PsiQuantum, have also presented projects to surpass the fault-tolerance threshold before 2030.

Rendering of the future IBM Quantum Starling computer, expected by the company in 2029. IBM

IBM has officially presented its roadmap to build what promises to be one of the first large-scale fault-tolerant quantum computers in the world, IBM Quantum Starling. The competition is fierce, as other companies such as Google, IonQ, and PsiQuantum have also introduced projects to overcome fault-tolerance barriers before 2030.

Announced from its Quantum Data Center in Poughkeepsie, New York, this ambitious plan foresees the delivery of Starling in 2029. According to the company, the system will be 20,000 times more powerful than current quantum computers and will overcome the technical obstacles that have so far hindered scalability and error correction, both essential for applying quantum algorithms to real-world problems.

If realized, this would be a milestone for quantum computing: representing the complete quantum state of such a machine would require more than a quindecillion (10^48) classical supercomputers working in parallel. Although other systems, such as D-Wave's Advantage2, Atom Computing's machine developed with Microsoft, and IBM's own Condor, surpass it in raw physical qubits, Starling will serve as the foundation for IBM Quantum Blue Jay, expected to execute 1 billion quantum operations with over 2,000 logical qubits by 2033.
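A back-of-the-envelope sketch shows where numbers of that scale come from: the memory needed to store a full quantum state doubles with every qubit. The figures below (200 logical qubits, a count IBM has cited for Starling, 16 bytes per amplitude, and roughly 10 petabytes of RAM per supercomputer) are illustrative assumptions, not IBM's exact accounting:

```python
# Sketch: why no classical machine can hold Starling's full quantum state.
# Assumes 200 logical qubits and 16 bytes per complex amplitude; both are
# illustrative choices rather than IBM's published figures.

N_QUBITS = 200                      # assumed logical qubit count
BYTES_PER_AMPLITUDE = 16            # one complex double-precision number

amplitudes = 2 ** N_QUBITS          # the state vector doubles with every qubit
memory_bytes = amplitudes * BYTES_PER_AMPLITUDE

# Compare against a hypothetical supercomputer with ~10 PB of RAM,
# roughly the scale of today's largest HPC systems.
SUPERCOMPUTER_BYTES = 10 * 10**15

print(f"Amplitudes to track  : {amplitudes:.3e}")
print(f"Memory required      : {memory_bytes:.3e} bytes")
print(f"Supercomputers needed: {memory_bytes / SUPERCOMPUTER_BYTES:.3e}")
```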

The significance lies in the concept of 'fault tolerance.' Starling aims to have not only physical qubits but fault-tolerant logical qubits, that is, groups of physical qubits that jointly detect and correct their own errors, enabling reliable calculations. It would therefore be one of the first systems with many logical qubits and real, scalable, modular error correction.
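The principle can be illustrated with the simplest possible toy: a 3-bit repetition code, where one logical bit is spread across several "physical" bits and majority voting undoes occasional flips. Real stabilizer and qLDPC codes are far more sophisticated; this sketch only shows why redundancy pushes the logical error rate below the physical one:

```python
# Toy illustration of the logical-qubit idea via a 3-bit repetition code.
import random

def encode(logical_bit, n_physical=3):
    """Spread one logical bit across n physical bits."""
    return [logical_bit] * n_physical

def apply_noise(physical_bits, flip_prob=0.05):
    """Each physical bit flips independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in physical_bits]

def decode(physical_bits):
    """Majority vote recovers the logical bit if fewer than half flipped."""
    return int(sum(physical_bits) > len(physical_bits) / 2)

# The logical error rate lands well below the 5% physical flip rate
# (about 0.7% for three bits), since two simultaneous flips are needed.
trials = 100_000
errors = sum(decode(apply_noise(encode(0))) != 0 for _ in range(trials))
print(f"physical flip rate: 5.0%, logical error rate: {errors / trials:.3%}")
```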

According to IBM's CEO, Arvind Krishna, the company's combination of mathematics, physics, and engineering is poised to address real-world challenges in areas such as drug development, chemistry, and process optimization, supporting a revolution that until recently seemed like science fiction.

Jay Gambetta, IBM's Vice President of Quantum and one of the brains behind this project, explained that the announcement is supported by two new scientific articles. "These papers demonstrate how we can effectively process instructions and execute operations, and how to decode information in real-time using conventional computing resources," he detailed. This represents, according to Gambetta, a crucial step: transitioning from laboratory experiments to a practical system capable of executing millions of useful operations.

"We are publishing these works to demonstrate how the modular and scalable system retains the quantum advantage, even as we expand logical circuits," stated Matthias Steffen, IBM's Quantum Processor Technology Chief. As he explained, real-time error identification and correction has been achieved, which is an essential requirement for a truly fault-tolerant quantum computer.

These advancements are based on so-called qLDPC (quantum low-density parity-check) codes, an error-correction approach that cuts the number of physical qubits required by up to 90% compared with other codes. This efficiency was highlighted in an article published in 'Nature', which presented it as a more realistic route to reliable logical qubits while reducing the infrastructure and control electronics that were previously prohibitive.
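The arithmetic behind that figure can be sketched as follows. The qLDPC numbers follow the [[144,12,12]] "gross" code described in IBM's Nature paper (144 data plus 144 check qubits encoding 12 logical qubits at distance 12); the surface-code side uses the common rule of thumb of roughly 2d^2 qubits per logical qubit at distance d, an estimate rather than IBM's own comparison:

```python
# Illustrative overhead comparison behind the "up to 90% fewer qubits" claim.
DISTANCE = 12                       # code distance of both codes compared
N_LOGICAL = 12                      # logical qubits encoded

qldpc_physical = 288                # [[144,12,12]] code: 144 data + 144 check

surface_per_logical = 2 * DISTANCE**2 - 1     # d^2 data + (d^2 - 1) check
surface_physical = N_LOGICAL * surface_per_logical

saving = 1 - qldpc_physical / surface_physical
print(f"surface code : {surface_physical} physical qubits")   # 3444
print(f"qLDPC code   : {qldpc_physical} physical qubits")     # 288
print(f"reduction    : {saving:.0%}")                          # ~92%
```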

The path to large-scale fault tolerance involves more than theoretical advances. Steffen pointed out that the team has designed and validated coupler architectures to connect qubits over longer distances, essential for building a modular system without resorting to chips of impractical size. "We have demonstrated six-connection devices, the ability to reset qubits in nanoseconds, and the basic functionality of these couplers, all with coherence times and gate errors comparable to our current processors," he specified.

The commitment to a modular approach also translates into advantages over rival architectures like neutral atoms or trapped ions. "Superconducting qubits remain much faster, and the final runtime is what matters to the user," Gambetta affirmed. The scientific community is already exploring adaptations of qLDPC codes on different platforms, but IBM argues that its superconducting architecture offers a more realistic engineering margin to achieve scalability without sacrificing error correction.

Furthermore, the company already has intermediate processors on its roadmap that will pave the way to Starling: Loon (2025) will test these components, Kookaburra (2026) will serve as the first modular unit with quantum memory, and Cockatoo (2027) will entangle modules to scale up to hundreds or thousands of logical qubits.

These milestones are part of an approach that, according to Steffen, moves beyond the limits of physics into purely engineering challenges. Unlike older codes such as the surface code, which required millions of physical qubits to correct errors, IBM's approach better aligns with current manufacturing yields. "We started with the surface code in 2011, but we found that, although it has high theoretical thresholds, manufacturing realities made it almost unattainable," Gambetta confessed.

Thus, the shift to qLDPC codes and a modular design is a direct response to the limitations of semiconductor production and superconducting-qubit physics; in other words, it is the key to doing more operations with fewer qubits. In fact, IBM has extended the average coherence of its Heron devices from 150 to 250 microseconds, approaching the millisecond goal needed to reduce logical-gate errors to viable levels.
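Why coherence matters for gate errors can be sketched with a standard simplification: for a gate of duration t on a qubit with relaxation time T1, the decoherence-limited error scales roughly as 1 - exp(-t/T1). The ~100 ns gate time and the scaling itself are illustrative assumptions, not IBM's error model:

```python
# Rough estimate of decoherence-limited gate error vs. coherence time.
import math

GATE_TIME_US = 0.1                  # assumed ~100 ns two-qubit gate

for t1_us in (150, 250, 1000):      # Heron before, Heron now, millisecond goal
    error = 1 - math.exp(-GATE_TIME_US / t1_us)
    print(f"T1 = {t1_us:>4} µs -> decoherence-limited gate error ≈ {error:.1e}")
```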

Another crucial point is the combination of algorithm research and hardware development. Gambetta highlighted that "quantum advantage" (the point where quantum computers unambiguously outperform classical ones) depends not only on hardware but also on finding suitable algorithms. Examples like the collaboration with RIKEN and the Fugaku supercomputer show that integrating quantum processors with HPC is already producing comparable results in chemistry and optimization, bringing that quantum-advantage threshold closer even before Starling arrives.

Key to the Future of Cybersecurity

As Gambetta and Steffen explained, the transition to cryptography that can withstand quantum computers is no longer optional but an inevitable necessity, because quantum computers pose a real threat to the security of current algorithms such as RSA (used for encryption and digital signatures). And while progress toward this new, secure cryptography largely depends on better algorithms, a recent study by Google showed that the number of physical qubits needed to run Shor's algorithm, the one that would jeopardize RSA security, can be reduced significantly.
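The threat to RSA rests on a well-known reduction: factoring a modulus N comes down to finding the period r of a^x mod N, and period finding is exactly the step a quantum computer running Shor's algorithm speeds up exponentially. The toy below finds the period by brute force, purely to show the classical pre- and post-processing around that quantum step:

```python
# Toy factoring of N = 15 via period finding, the core of Shor's algorithm.
from math import gcd

def find_period(a, n):
    """Brute-force the order r of a modulo n (the quantum speedup in Shor)."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

N, a = 15, 7                        # tiny RSA-style modulus and a coprime base
r = find_period(a, N)               # r = 4 for a = 7, N = 15
assert r % 2 == 0                   # an even period lets us split N

p = gcd(a ** (r // 2) - 1, N)       # gcd(48, 15) = 3
q = gcd(a ** (r // 2) + 1, N)       # gcd(50, 15) = 5
print(f"period r = {r}; {N} = {p} x {q}")
```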

With the goal of deploying over 80 quantum computers in the cloud starting in 2026 and with a global network of 650,000 users, IBM Quantum sees Starling not just as a scientific challenge but as an epic engineering feat. "It's no longer about whether it's possible," Gambetta concludes, "but about how we will make it a reality by 2029."