Meta Unveils Four MTIA Chips for Accelerated AI Inference

Key Points
- Meta launches four MTIA chips for AI inference over two years.
- Significant increase in HBM bandwidth and compute capabilities announced.
- Potentially reduces dependency on Nvidia's AI hardware solutions.
Meta has announced its Meta Training and Inference Accelerator (MTIA) chips, developed in collaboration with Broadcom. The four chips (MTIA 300, 400, 450, and 500) will be rolled out over the next two years, with the first already in production for recommendation-model training. Across the line, Meta cites a 4.5x increase in HBM bandwidth and a 25x increase in compute performance from MTIA 300 to MTIA 500, supporting demanding AI inference workloads.
Strategically, the MTIA chips aim to modularize AI deployment across Meta's data centers, significantly reducing changeover time when new hardware arrives. By prioritizing HBM bandwidth improvements over raw compute FLOPs, Meta's chips are positioned to challenge established parts such as Nvidia's H100; large-scale inference is typically bound by memory bandwidth rather than arithmetic throughput, so bandwidth is often the more valuable lever. The move aligns with Meta's inference-first strategy built on industry standards, and it suggests a shift in the competitive landscape of AI hardware toward greater in-house capability and reduced reliance on external suppliers.
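To see why inference accelerators are often sized around memory bandwidth rather than peak FLOPs, the sketch below does a roofline-style comparison of the time to stream a model's weights from HBM against the time to do the arithmetic for one decoded token. All figures in it (model size, 400 TFLOPs, 2 TB/s) are illustrative assumptions, not published MTIA or H100 specifications.

```python
# Roofline-style back-of-the-envelope: is single-stream LLM decoding
# compute-bound or memory-bound? All figures below are illustrative
# assumptions, not published MTIA or H100 specifications.

def decode_bound(params_billion: float,
                 bytes_per_param: float,
                 peak_tflops: float,
                 hbm_bandwidth_tbps: float) -> None:
    """Compare weight-streaming time vs. compute time for one token."""
    weight_bytes = params_billion * 1e9 * bytes_per_param
    # Decoding one token touches every weight once: ~2 FLOPs per parameter.
    flops_per_token = 2 * params_billion * 1e9

    t_memory = weight_bytes / (hbm_bandwidth_tbps * 1e12)   # seconds
    t_compute = flops_per_token / (peak_tflops * 1e12)      # seconds

    bound = "memory-bound" if t_memory > t_compute else "compute-bound"
    print(f"{params_billion:.0f}B params: "
          f"memory {t_memory * 1e3:.2f} ms vs "
          f"compute {t_compute * 1e3:.3f} ms -> {bound}")

# Hypothetical accelerator: 400 TFLOPs peak, 2 TB/s of HBM bandwidth,
# serving a 70B-parameter model with 16-bit weights (2 bytes each).
decode_bound(params_billion=70, bytes_per_param=2,
             peak_tflops=400, hbm_bandwidth_tbps=2)

# The same chip with 4.5x the HBM bandwidth (the kind of uplift the
# article cites across the MTIA line) shrinks the memory wall in
# proportion, without touching peak FLOPs.
decode_bound(params_billion=70, bytes_per_param=2,
             peak_tflops=400, hbm_bandwidth_tbps=9)
```

Under these assumptions the chip spends roughly 200x longer streaming weights than computing, which is why a 4.5x bandwidth uplift can improve token latency far more than extra FLOPs would.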
