
Google’s TPU 8t Pods Cut AI Training Time to Weeks and Scale to Million‑Chip Clusters

Google’s TPU 8t pods reduce AI model training from months to weeks, deliver 9,600 chips and two petabytes of memory per pod, and can scale to as many as one million chips in a single logical cluster.

Alex Mercer, Senior Tech Correspondent


TL;DR: Google’s TPU 8t chip cuts training time for leading AI models from months to weeks. Each pod holds 9,600 chips with two petabytes of shared memory, and the system can scale to one million chips in a single logical cluster.

Context

Google’s AI infrastructure relies on its own Tensor Processing Units (TPUs) rather than third‑party accelerators. After the seventh‑generation Ironwood TPU launched in 2025, the company introduced an eighth‑generation family split into TPU 8t for training and TPU 8i for inference. Google says the chips are built for the emerging “agent era,” in which AI systems act autonomously and place different demands on hardware.

Key Facts

- The TPU 8t reduces training time for leading models from months to weeks.
- A single TPU 8t pod contains 9,600 chips and two petabytes of high‑bandwidth memory shared across the pod.
- Google states the TPU 8t can scale linearly to as many as one million chips in a single logical cluster.

What It Means

With 9,600 chips, each pod delivers about 121 FP4 exaFLOPS of compute, nearly triple Ironwood’s training ceiling. The two‑petabyte memory pool lets larger models stay resident in fast memory, reducing data‑movement bottlenecks. Scaling to a million chips could enable training runs far beyond today’s largest models, though it may also strain memory markets as demand for high‑bandwidth memory grows. Google Cloud customers can now train large models in weeks instead of months, accelerating experimentation.
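A quick back‑of‑envelope division of the pod‑level figures puts those numbers in perspective (a sketch using only the figures above; the per‑chip values are derived estimates in decimal units, not specifications Google has published):

```python
# Back-of-envelope per-chip estimates derived from the pod-level numbers above.
# These are rough decimal-unit calculations, not Google-published per-chip specs.
chips_per_pod = 9_600
pod_compute_fp4_eflops = 121   # ~121 FP4 exaFLOPS per pod (reported)
pod_memory_pb = 2              # 2 PB of shared high-bandwidth memory per pod

per_chip_pflops = pod_compute_fp4_eflops * 1_000 / chips_per_pod
per_chip_memory_gb = pod_memory_pb * 1_000_000 / chips_per_pod

print(f"~{per_chip_pflops:.1f} FP4 PFLOPS per chip")  # ~12.6
print(f"~{per_chip_memory_gb:.0f} GB HBM per chip")   # ~208
```

Roughly 208 GB of high‑bandwidth memory per chip would explain why the pod‑wide pool matters: it leaves room for large model shards to stay resident in fast memory rather than shuttling through slower tiers.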

Watch for how quickly Google’s cloud customers adopt TPU 8t pods, the first published benchmarks on million‑chip clusters, and whether competitors respond with comparable scale‑out offerings.
