Authors: Alex Norton, Michael Feldman
Publication Date: May 2020
Length: 3 pages
The recent announcement of a novel neural network processor by Silicon Valley startup Groq reflects the diversification taking place among hardware accelerators being developed for the artificial intelligence (AI) market. Groq's Tensor Streaming Processor (TSP) uses a simplified architecture to enforce deterministic behavior, an attribute that distinguishes it for machine learning inference in both datacenter and edge applications.
The US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) recently announced plans for the development of a 1.5 exaflops system called Frontier, to be delivered in 2021. US HPC maker Cray and chip maker AMD are the two key US commercial partners in this effort. Although numerous press articles have centered on Frontier's 1.5 exaflops peak performance, ORNL's original RFP, released in April 2018, clearly called out the diverse workload requirements the system must handle, spanning traditional modeling and simulation, big data analysis, and AI applications, while demonstrating a 50X improvement in solving key DOE science problems that today run at the 20 petaflops level. Meeting those ambitious goals will depend on strong support from DOE's companion $1.7 billion Exascale Computing Project (ECP).
May 2019 | Quick Take
Countries around the world are developing plans for the next generation of large supercomputers, with investments that in many cases exceed $300 million per system. This Quick Take provides Hyperion Research's estimates of the installation schedules and prices of accepted near-exascale and exascale supercomputers around the world.
June 2019 | Quick Take