Google Launches Eighth-Gen TPUs for AI Training

Global AI Watch · 5 min read · Datacenter Dynamics

Key Takeaways

  • Google unveils TPU 8t and TPU 8i with advanced architectures.
  • New chips boost AI model training capabilities and efficiency.
  • Enhances US tech autonomy in AI processing infrastructure.

Google has introduced its eighth-generation Tensor Processing Units (TPUs), designed to optimize AI training and inference workloads. The TPU 8t targets training: it scales to 9,600 chips, delivering 121 exaflops of FP4 compute with 19.2 Tbps of bandwidth. Its newly architected Virgo network doubles interconnect bandwidth for large data operations and can link more than a million TPU chips. The TPU 8i, meanwhile, is tuned for inference, with specifications geared toward low-latency processing.