Quantum computing has run red-hot this year as it marches toward real-world utility.
In 2025, “someday” even started to sound like “soon.”
Analysts forecast the global quantum market will surge past $7 billion by 2030, growing at 34.6% annually. Businesses are already testing algorithms to tackle optimization, chemistry, and other problems that classical systems find virtually intractable.
Google added fuel to that momentum.
Its Quantum AI division recently claimed a mind-boggling 13,000× speedup on a physics simulation compared with the world’s fastest supercomputer. The bigger picture is that, paired with AI, quantum isn’t just a rival to classical computing; it’s a powerful extension of it.
With hybrid pipelines, we could see AI models effectively learning from quantum outputs, unlocking new efficiencies across data science and design.
That’s why this week’s major development in quantum has the industry buzzing.
A quiet collaboration between Advanced Micro Devices (AMD) and International Business Machines Corporation (IBM) could shift the timeline for when quantum evolves into a full-fledged enterprise tool.
AMD and IBM quietly unlock a new quantum milestone
A major new breakthrough has AMD and IBM tackling quantum’s toughest hurdle: fixing its mistakes quickly enough to make the technology useful.
Quantum error correction is arguably the most challenging part of the field. It might be compared to catching dozens of ping-pong balls in a dark room without letting any drop. Each quantum bit is fragile, and errors can accumulate rapidly.
IBM’s new real-time decoder is essentially the dependable glove in that analogy, detecting and correcting those slip-ups before they spread. The big news is that it runs on off-the-shelf AMD FPGAs (from its Xilinx line) and still hits its latency targets with nearly 10× headroom.
That means the decoder can effectively work on accessible, commodity chips, which makes fault-tolerant quantum computing seem like much less of a moonshot.
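To make the idea concrete, here is a minimal, purely illustrative Python sketch of what a decoder does, built around a classical three-bit repetition code. It is not IBM’s algorithm, and it skips everything that makes the real problem hard (quantum measurements, far larger codes, tight real-time latency budgets), but the loop is the same in spirit: read parity checks, infer the most likely error, and undo it before it spreads.

```python
# Purely illustrative: a toy three-bit repetition code, not IBM's decoder.
# A decoder's job is to read parity checks (the "syndrome") and infer which
# bit most likely flipped, so the error can be undone before it spreads.

def measure_syndrome(bits):
    """Parity checks between neighboring bits; 1 means a mismatch."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(syndrome):
    """Map each syndrome to the single most likely bit flip."""
    return {
        (0, 0): None,  # no error detected
        (1, 0): 0,     # first bit flipped
        (1, 1): 1,     # middle bit flipped
        (0, 1): 2,     # last bit flipped
    }[syndrome]

# Encode logical 0 as three copies, then simulate a single bit-flip error.
state = [0, 0, 0]
state[1] ^= 1                       # noise flips the middle bit

flip = decode(measure_syndrome(state))
if flip is not None:
    state[flip] ^= 1                # apply the correction

print(state)                        # back to [0, 0, 0]
```

IBM’s decoder runs that kind of loop continuously on real quantum hardware, which is why hitting its latency targets on commodity FPGAs, with room to spare, is the headline result.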
IBM says the result feeds directly into its long-term plan to build Starling, its large-scale fault-tolerant quantum machine, by 2029. Better still, the milestone landed a year ahead of schedule.
For AMD, this is less about incremental near-term sales and more about validation.
The design win bolsters the credibility of AMD’s FPGA stack as quantum’s control layer. If FPGAs become a mainstay of quantum control planes, AMD’s robust embedded and data-center businesses stand to benefit over time.
Investors reacted quickly. The headline sent AMD stock up nearly 7% as Wall Street leaned into the quantum angle, while IBM logged its best session since January.
Quick takeaways:
- IBM’s real-time quantum decoder now runs on AMD’s off-the-shelf FPGAs, showing that error correction can run comfortably on standard, scalable hardware.
- The critical development validates AMD’s role in quantum control systems while boosting its long-term data center narrative.
- Investors rewarded the move, signaling market appetite for credible design wins in quantum.
AI demand turns AMD from a challenger to a credible contender
Nvidia has hogged the AI spotlight over the past couple of years, but of late, AMD’s numbers are starting to match its ambition.
The chipmaker’s Q2 2025 sales jumped 32% year over year to $7.69 billion, anchored by its Data Center segment, which grew 14% to $3.2 billion.
Even with an $800 million MI308 inventory write-down, AMD still posted its largest Q2 in history and guided for another monster quarter of roughly $8.7 billion in revenue.
The driving force behind the shift is AMD’s Instinct accelerator lineup.
The flagship Instinct MI350 chips, built on the CDNA 4 architecture, aim to deliver up to four times the AI compute and as much as 35 times faster inference than their predecessors.
Paired with ROCm 7.0, which AMD says delivers roughly 4× inference and 3× training gains over its prior software stack, the company is assembling one of the more robust ecosystems in AI hardware.
Further, the evidence is mounting.
For instance, Microsoft Azure’s MI300X v5 instances are now live, marking the first major large-scale cloud integration of AMD AI GPUs. Additionally, El Capitan, currently ranked the world’s fastest supercomputer, runs on AMD CPUs and GPUs.
Then comes the real kicker: a bombshell multiyear deal with OpenAI for up to 6 gigawatts of AMD GPUs.
Nvidia still commands the lion’s share of AI compute, but with hyperscaler capex ballooning toward $315 billion this year, there looks to be enough room for two winners.


