
Cerebras Announces Capability to Train Largest Models Easily
$1,500.00
Authors: Alex Norton and Thomas Sorensen
Publication Date: September 2022
Length: 1 page
In mid-June 2022, Cerebras Systems announced a new capability that lets users train some of the largest AI models in the world on a single CS-2 machine using a simplified software support scheme. The announcement highlights several capabilities that Cerebras views as its competitive advantages over other companies. Notable examples cited include the ability to hold an entire training model in memory, through Cerebras' Weight Streaming software on the Wafer Scale Engine (WSE), rather than splitting it across processors, as well as the ability for users to adjust a few inputs in the software scheme and GUI to select the scale of model desired for training (e.g., GPT-3 13B, GPT-3 XL 1.3B). Cerebras claims this advancement can cut the setup of large model training runs from months to minutes, with Cerebras software handling much of the initial configuration.
Related Products
A Look at the EU’s Four Pillar Strategy for Leadership in HPC
Mark Nossokoff, Steve Conway, Earl Joseph
High performance computers (HPCs) are recognized globally as fundamental tools for conducting R&D, driving innovation, and advancing the economic competitiveness of nations. More countries than ever are increasing their spending on HPC infrastructure and on critical associated areas such as AI, cloud, quantum computing, applications development and optimization, and workforce development and retention. The European Commission (EC), through the EuroHPC Joint Undertaking (JU), the Digital Europe Programme (DEP), and the Horizon 2020/Horizon Europe funding programs, fully recognizes how critical it is for the European Union (EU) to fortify its global HPC leadership position. Further refining and extending its initial long-term mission, the JU has instituted a Four Pillar Strategy, focusing on Infrastructure, Technologies, Applications, and Take-Up and Skill Sets, to define an approach to help realize its HPC ambitions. Continued execution of this strategy should position the EU well to achieve its goal of deploying an integrated, world-class supercomputing and data infrastructure.
September 20
World’s First Data Center APU Stood Up in AMD Laboratory
Tom Sorensen and Alex Norton
During the Wells Fargo 2022 TMT Summit, Mark Papermaster, CTO of AMD, reported that the Instinct MI300 accelerated processing unit (APU) was, as of early December 2022, up and running. Currently confined to AMD's in-house lab, the Instinct MI300 will be used in El Capitan, the exascale supercomputer currently in development at Lawrence Livermore National Laboratory and scheduled for delivery in 2024. Described by Papermaster as a "true datacenter APU," the processor is expected by AMD to reach general availability in 2023. This multi-chiplet processor, designed specifically with exascale computing in mind, combines AMD's Zen 4 (x86) CPU architecture with CDNA 3, AMD's GPU architecture, and will be produced by Taiwan's TSMC at the 5nm process node. Papermaster sees this development as an important way to continue delivering greater density and optimization in components now that the sector is no longer in the era of the "old Moore's Law."
December 2022