Announcement below, with a comment posted by Sundar Pichai, CEO of Google and Alphabet:
Key details:
- TPU 8t, optimized for training, scales up to 9,600 TPUs and 2 petabytes of shared, high-bandwidth memory in a single superpod. It achieves 3x the processing power of Ironwood and delivers up to 2x better performance per watt.
- TPU 8i, optimized for inference, connects 1,152 TPUs in a single pod and carries 3x more on-chip SRAM, dramatically reducing latency. It delivers the massive throughput needed to run millions of agents concurrently and cost-effectively.
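For a sense of scale, a back-of-envelope sketch of what the training superpod figures imply per chip, assuming the 2 PB memory pool is divided evenly across all 9,600 TPUs (an assumption for illustration; the announcement does not state a per-chip figure):

```python
# Derived per-TPU share of the superpod's shared HBM pool.
# Assumption: the 2 PB pool is split evenly across all 9,600 TPUs;
# decimal units are used (1 PB = 1,000,000 GB).
POD_TPUS = 9_600
POD_HBM_PB = 2

hbm_per_tpu_gb = POD_HBM_PB * 1_000_000 / POD_TPUS
print(f"{hbm_per_tpu_gb:.1f} GB of shared HBM per TPU")  # ≈ 208.3 GB
```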
#AI #innovationcommunity
------------------------------
Todor Kostov
Director
------------------------------