HPC Users Willing to Pay 10-15% Premium for Faster, Higher Performance Processors and Larger, Faster Memory
Authors: Melissa Riddle and Mark Nossokoff
Publication Date: June 2023
Length: 3 pages
When respondents were asked to identify specific system attributes for which they would be willing to pay a 10-15% premium on top of system price, the most desirable attributes were faster or higher performance processors (48%) and larger or faster memory (39%). More than one third of respondents were willing to pay a premium for higher performance external I/O and storage interconnects between nodes (35%) and better density, power, or cooling attributes (32%). In contrast, only a small percentage (9%) expressed a willingness to pay a premium for a specific vendor, and about one in ten indicated that they would not pay a 10-15% premium for any specific attribute or feature. This data comes from the eighth annual edition of Hyperion Research’s HPC end-user-based tracking of the HPC marketplace, which covered 181 HPC end-user sites with 3,830 HPC systems.
GPUs Stand Out as Planned Processor Element at a Rate of 74%
Authors: Tom Sorensen and Earl Joseph
When asked which processing elements they expect to incorporate into their HPC/AI/HPDA computing resources within the next 12-18 months, survey respondents across all sectors most often cited GPUs (74.0%), followed by TPUs (24.3%). Government and academia respondents reported the highest expectation for GPUs, at a rate of 84%. This data comes from the eighth annual edition of Hyperion Research's high-performance computing (HPC) end-user-based tracking of the HPC marketplace, which covered 181 HPC end-user sites with 3,830 HPC systems.
Publication Date: June 20
Cerebras Announces Capability to Train Largest Models Easily
Authors: Alex Norton and Thomas Sorensen
In mid-June of 2022, Cerebras Systems announced a new feature that allows users to train some of the largest AI models in the world on a single CS-2 machine using a simplified software support scheme. The announcement highlights multiple capabilities that Cerebras sees as competitive advantages over other companies. Notable examples cited include the ability to hold an entire training model in memory on the Wafer Scale Engine (WSE) via Cerebras' Weight Streaming software, rather than splitting it across processors, as well as the ability for users to adjust a few inputs within the software scheme and GUI to choose the scale of model desired for training (e.g., GPT-3 13B, GPT-3XL 1.3B). Cerebras claims that this advancement can cut the setup of large model training runs from months to minutes, with the Cerebras software managing much of the initial setup.
Publication Date: September 2022