Cerebras Announces Capability to Train Largest Models Easily
Price: $1,500.00
Authors: Alex Norton and Thomas Sorensen
Publication Date: September 2022
Length: 1 page
In mid-June 2022, Cerebras Systems announced a new feature that allows users to train some of the largest AI models in the world on a single CS-2 machine using a simplified software workflow. The announcement highlights several capabilities that Cerebras sees as its competitive advantages over other companies. Notable examples cited include the ability to keep an entire training model in memory via Cerebras' Weight Streaming software on the Wafer Scale Engine (WSE), rather than splitting it across processors, and the ability for users to adjust a few inputs within the software and GUI to choose the scale of model to train (e.g., GPT-3 13B, GPT-3 XL 1.3B). Cerebras claims that this advancement can cut the setup of large model training runs from months to minutes, with the Cerebras software managing much of the initial setup.
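To illustrate the "few inputs" claim, the following is a minimal, hypothetical sketch (not Cerebras' actual configuration schema or API) in which switching from the 1.3B-parameter GPT-3 XL to the 13B-parameter GPT-3 variant amounts to swapping a small architecture preset, while the rest of the training setup is left untouched; the layer and hidden-size figures follow the published GPT-3 family, and the optimizer values are illustrative placeholders.

```python
# Hypothetical sketch of scale selection via a few inputs -- not
# Cerebras' actual configuration format.

# Architecture knobs that distinguish one GPT-3 scale from another
# (sizes as described in the public GPT-3 paper).
MODEL_CONFIGS = {
    "gpt3-xl-1.3b": {"num_layers": 24, "hidden_size": 2048},
    "gpt3-13b":     {"num_layers": 40, "hidden_size": 5140},
}

def build_training_config(model_name: str, batch_size: int = 512) -> dict:
    """Assemble a full training configuration from a model-scale preset.

    Changing scale means changing the preset name; the optimizer,
    batch size, and data pipeline settings stay the same, which is
    the simplified workflow the announcement describes.
    """
    arch = MODEL_CONFIGS[model_name]
    return {
        "model": {"name": model_name, **arch},
        "optimizer": {"name": "adamw", "learning_rate": 2e-4},  # placeholder values
        "batch_size": batch_size,
    }

if __name__ == "__main__":
    # Moving from a 1.3B-parameter run to a 13B-parameter run changes
    # only the preset passed in here.
    print(build_training_config("gpt3-xl-1.3b"))
    print(build_training_config("gpt3-13b"))
```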
Related Products
HPC Users Express Mixed Optimism Towards Adopting Edge Computing
Melissa Riddle and Mark Nossokoff
According to recent study results, over a quarter of HPC users (28.2%) either currently employ edge computing or expect to within two years. The top motivators driving edge computing growth include improving real-time data collection and processing, accelerating HPC applications, access to IoT devices for data collection, and a wider range of sensor data. The top deterrents dampening edge computing growth include complex and varied IoT formats, inadequate edge vendor support, lack of in-house expertise for integration, and the cost of integrating into existing infrastructure. The data come from the eighth edition of Hyperion Research's annual end-user-based study of the HPC marketplace, which covered 181 HPC end-user sites with 3,830 HPC systems.