Marvell in Talks With Google on AI Chip Co‑Development as Hyperscaler Wins Reach 18
Google is in talks with Marvell to design AI inference chips as the chipmaker logs 18 hyperscaler design wins and a 42% revenue surge to $8.2 billion.
*TL;DR – Google is in talks with Marvell to co‑develop a memory processing unit and a next‑generation Tensor Processing Unit for AI inference, a deal that would build on the 18 recent hyperscaler design wins that helped lift Marvell’s fiscal 2026 revenue 42% to $8.2 billion.*
**Context**

Nvidia currently commands roughly 90% of the AI accelerator market, a dominance that pushes cloud giants to diversify their silicon supply. Alphabet, Microsoft, Amazon and Meta are all building custom chips to reduce reliance on any single vendor. Marvell Technology, a specialist in custom ASICs and high‑speed interconnects, sits at the heart of this shift.
**Key Facts**

- Google has opened negotiations with Marvell to create a memory processing unit and a next‑generation TPU aimed at inference workloads, the phase where trained models generate predictions.
- Marvell recently announced 18 design wins with major hyperscalers, including Microsoft and Amazon, confirming its chips are already embedded in leading AI data centers.
- Fiscal 2026 revenue rose 42% year‑over‑year to $8.2 billion, reflecting rapid growth in its custom ASIC segment, which expanded from near zero to $1.5 billion in a single year.
- The company’s stock jumped more than 13% on the Google news, while analysts have lifted price targets amid expectations of $15 billion in sales by fiscal 2028.
- Marvell’s recent $3.25 billion acquisition of Celestial AI adds photonic interconnect technology, which moves data between chips as light rather than electrical signals, easing the “memory wall” that limits multi‑chip AI systems.
**What It Means**

The Google partnership signals that hyperscalers view Marvell as a co‑design partner rather than a simple component supplier. Marvell’s expertise in PCIe, Ethernet and photonic networking bridges the gap between custom inference silicon and Nvidia’s CUDA software ecosystem, allowing cloud providers to build faster, more efficient AI pipelines without full dependence on Nvidia GPUs.
While Nvidia is likely to retain its lead in AI training, Marvell’s expanding footprint in inference and interconnects could make it indispensable for large‑scale AI deployments. The company’s projected 30%‑plus annual growth and a roadmap targeting $15 billion in revenue suggest a trajectory that may reshape the competitive landscape.
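For readers who want to sanity‑check that trajectory, here is a minimal back‑of‑the‑envelope sketch. It assumes the $15 billion target applies to fiscal 2028, two fiscal years after the $8.2 billion result, and uses only the figures cited above; the variable names are illustrative.

```python
# Back-of-the-envelope check of the growth figures cited in this article.
# Assumption: the $15B analyst target refers to fiscal 2028, i.e. two
# fiscal years after the reported $8.2B fiscal 2026 revenue.

fy2026_revenue_b = 8.2    # reported fiscal 2026 revenue, in $ billions
fy2028_target_b = 15.0    # analyst sales target for fiscal 2028, in $ billions
years = 2                 # FY2026 -> FY2028

# Compound annual growth rate implied by the fiscal 2028 target.
implied_cagr = (fy2028_target_b / fy2026_revenue_b) ** (1 / years) - 1
print(f"Implied CAGR, FY2026 -> FY2028: {implied_cagr:.1%}")  # ~35%

# Prior-year revenue implied by the reported 42% year-over-year increase.
fy2025_implied_b = fy2026_revenue_b / 1.42
print(f"Implied FY2025 revenue: ${fy2025_implied_b:.2f}B")    # ~$5.77B
```

On those assumptions, the $15 billion target implies roughly 35% annual growth, consistent with the “30%‑plus” projection above.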
**Looking Ahead**

Watch for formal announcements from Google on the co‑development timeline and any subsequent design wins that could further embed Marvell’s technology across hyperscaler data centers.