
Slurm Remains Top Resource Manager
Price: $2,000.00
Authors: Melissa Riddle and Mark Nossokoff
Publication Date: June 2023
Length: 3 pages
Slurm continues to be the most popular job queuing, resource management, and scheduling software at HPC sites around the world. In a recent study, Slurm maintained its lead, with half of all respondents (50.0%) reporting that they use Slurm at least some of the time. After Slurm, the most popular resource managers and schedulers were OpenPBS (18.9%), PBS Pro (13.9%), Torque (13.3%), NQS (12.2%), and LSF (10.6%).
This data is from an annual study that is part of the eighth edition of Hyperion Research’s HPC end-user-based tracking of the HPC marketplace. It included 181 HPC end-user sites with 3,830 HPC systems.
Related Products
Programming Languages Becoming More Ubiquitous: Most HPC Sites Use C/C++ and Python
Melissa Riddle and Jaclyn Ludema
The use of multiple programming languages is becoming ubiquitous at HPC sites as the number of languages per site continues to rise. C/C++ and Python are particularly popular, with the majority of HPC sites using both languages at least some of the time. This data is from an annual study that is part of the eighth edition of Hyperion Research's HPC end-user-based tracking of the HPC marketplace. It included 181 HPC end-user sites with 3,830 HPC systems.
June 20
Cerebras Announces Capability to Train Largest Models Easily
Alex Norton and Thomas Sorensen
In mid-June 2022, Cerebras Systems announced a new capability that allows users to train some of the largest AI models in the world on a single CS-2 machine using a simplified software workflow. The announcement highlights multiple capabilities that Cerebras sees as competitive advantages over other vendors. Notable examples cited include the ability to hold an entire training model in memory via Cerebras' Weight Streaming software on the Wafer Scale Engine (WSE), rather than splitting it across processors, as well as the ability for users to adjust a few inputs in the software and GUI to choose the scale of model to train (e.g., GPT-3 13B, GPT-3XL 1.3B). Cerebras claims that this advancement can cut the setup of large model training runs from months to minutes, with the Cerebras software managing much of the initial setup.
September 2022

