Nvidia’s First Ampere GPU Targeted for Datacenters
Authors: Michael Feldman, Earl Joseph
Publication Date: July 2020
Length: 3 pages
Nvidia’s recently announced Ampere GPU for datacenters (A100) arrives at a time of increased competition in HPC and AI silicon. However, rather than offering specialized datacenter products aimed at these two application categories, the new Ampere offering carries forward the dual HPC/AI approach previously introduced in Nvidia’s Volta architecture. To realize this, the company has introduced a number of innovations that significantly boost performance in each area.
ORNL Announces Newest Leadership HPC: It’s More Than Just Exaflops
The Department of Energy's Oak Ridge National Laboratory (ORNL) recently announced plans for the development of a 1.5 exaflops system called Frontier, to be delivered in 2021. US HPC maker Cray and chip maker AMD are the two key US commercial partners in this effort. Despite numerous press articles centered on Frontier's 1.5 exaflops peak performance, ORNL's original RFP, released in April of 2018, clearly called out a diverse set of workload requirements that Frontier would have to handle successfully, spanning the traditional modeling and simulation sector, big data analysis, and AI applications. The RFP also required a 50X improvement in solving key DOE science problems that today run at the 20 petaflops level. To meet those ambitious goals, strong support from DOE's companion $1.7 billion Exascale Computing Project (ECP) will be critical.
May 2019 | Quick Take
The AI Hardware Summit: A Recap
Alex Norton, Bob Sorensen, Steve Conway, and Earl Joseph
This inaugural, two-day AI Hardware Summit, held at the Computer History Museum in Mountain View, California, brought together researchers, vendors, and users to explore the development of the AI ecosystem from a hardware perspective. Large companies, startups, and analysts gathered for more than 30 talks and roundtables. Although the summit's overall theme encompassed all AI hardware, many of the presentations focused on AI processors and the work being done to design hardware for the evolving and growing field of AI, machine learning, and deep learning.
November 2018 | Quick Take