Introducing Graphcore's Mk2 IPU systems: GC200 IPU, IPU-Machine M2000 and IPU-POD64
Graphcore Co-Founder and CEO Nigel Toon introduces the company's range of Mk2 IPU systems.
GC200 IPU: The made-for-machine-intelligence 7nm processor with 900MB of on-chip memory and 1,472 cores, capable of executing 8,832 parallel computing threads.
IPU-Machine M2000: The 1U datacentre blade, capable of one PetaFlop of AI compute, powered by 4xGC200 IPU processors. Exchange-Memory extends the GC200's on-chip memory with off-processor Streaming Memory, up to 450GB. The IPU also includes on-board networking via the IPU-Gateway.
IPU-POD64: The multi-IPU-Machine scale-out solution for datacentres with large AI compute needs. Connect thousands of machines for large Machine Intelligence problems, or multiple concurrent workloads. Featuring ultra-high-bandwidth, low-latency communication, enabled by Graphcore's breakthrough IPU-Fabric technology.
For more on Graphcore's second generation products, head to the Graphcore blog: https://www.graphcore.ai/posts/introducing-second-generation-ipu-systems-for-ai-at-scale
For details of Graphcore's products in English, visit:
https://www.graphcore.ai/products/mk2/ipu-machine-ipu-pod
For details of Graphcore's products in Chinese, visit:
https://www.graphcore.ai/zh/graphcore%E7%AC%AC%E4%BA%8C%E4%BB%A3ipu%E5%8F%91%E5%B8%83
Video "Introducing Graphcore's Mk2 IPU systems: GC200 IPU, IPU-Machine M2000 and IPU-POD64" from the Graphcore channel.