Google splits TPU into training and inference chips as x86 exits the AI stack #GoogleTPU
AI infrastructure just specialized at the silicon level. Google's eighth-generation TPUs come in two distinct variants for the first time: the TPU 8t for training, scaling to 9,600 chips per pod with three times the compute of the prior generation, and the TPU 8i for inference, with triple the on-chip memory to keep long context windows resident during agentic workflows. The recognition driving this split is that training a frontier model and running millions of concurrent AI sessions are genuinely different physics problems, and Google concluded they deserve different atoms.
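The memory pressure behind the inference variant's design can be illustrated with a back-of-envelope key/value-cache calculation. Every model dimension below is a hypothetical assumption for illustration, not a published spec for any Google TPU or any particular model:

```python
# Back-of-envelope KV-cache sizing: why long-context inference is
# memory-bound. All model dimensions here are illustrative assumptions.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_value: int = 1) -> int:
    """Bytes needed to keep one session's key/value cache resident.

    The factor of 2 covers the separate key and value tensors per layer.
    """
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_value

# A hypothetical large model: 80 layers, 8 KV heads of dimension 128,
# serving a 1M-token context with 8-bit (1-byte) cached values.
total = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                       context_len=1_000_000, bytes_per_value=1)
print(f"{total / 1e9:.1f} GB per session")  # ~163.8 GB
```

Even under these made-up numbers, a single million-token session wants on the order of 100+ GB of cache, multiplied across concurrent users. That is the arithmetic that makes "triple the on-chip memory" a design goal for an inference part rather than a luxury.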
Buried in the announcement was a detail that deserves more attention than it got: both chip variants replaced their x86 CPU hosts with ARM-based Axion processors. The instruction-set architecture that dominated data centers for four decades was designed out of the AI stack in a bullet point. That architectural divergence, specialized silicon for specialized phases, is how the physical infrastructure of intelligence gets optimized in real time. If Google's claimed efficiency gains hold under production workloads, the energy cost per unit of useful AI computation starts bending downward even as capability scales upward.
Subscribe to The Century Report for daily coverage of the systems reshaping civilization.
Watch the full episode here: https://www.youtube.com/watch?v=9uv0hMy4JBE
Get the full story on all of this and much more - read the full edition of today's Century Report here: https://sharedsapience.com/the-century-report-april-23-2026/
#GoogleTPU #AIInfrastructure #PhysicalAI #CenturyReport #AIHardware
Video "Google splits TPU into training and inference chips as x86 exits the AI stack #GoogleTPU" from the channel Shared Sapience
Video info
Uploaded: yesterday, 2:29:43
Duration: 00:01:53