
HPC Users Willing to Pay 10-15% Premium for Faster, Higher Performance Processors and Larger, Faster Memory

$1,500.00

Authors: Melissa Riddle and Mark Nossokoff

Publication Date: June 2023

Length: 3 pages

Category: Uncategorized
Description

When respondents were asked to identify specific system attributes for which they would be willing to pay a 10-15% premium on top of system price, the most desirable attributes were faster or higher performance processors (48%) and larger or faster memory (39%). More than one third of respondents were willing to pay a premium for higher performance external I/O and storage interconnects between nodes (35%), and nearly as many for better density, power, or cooling attributes (32%). In contrast, only a small percentage (9%) expressed a willingness to pay a premium for a specific vendor, and about one in ten indicated that they would not pay a 10-15% premium for any specific attribute or feature. This data comes from the eighth annual edition of Hyperion Research's HPC end-user-based tracking study of the HPC marketplace, which included 181 HPC end-user sites with 3,830 HPC systems.

Related Products

    GPUs Stand Out as Planned Processor Element at a Rate of 74%

    Tom Sorensen and Earl Joseph

    When asked which processing elements they expect to incorporate into their HPC/AI/HPDA computing resources within the next 12-18 months, respondents across all sectors most frequently cited GPUs (74.0%), followed by TPUs (24.3%) as the next most anticipated. Government and academic respondents reported the highest expectation for GPUs, at 84%. This data comes from the eighth annual edition of Hyperion Research's HPC end-user-based tracking study of the HPC marketplace, which included 181 HPC end-user sites with 3,830 HPC systems.

    June 20 | Uncategorized

    Cerebras Announces Capability to Train Largest Models Easily

    Alex Norton and Thomas Sorensen

    In mid-June 2022, Cerebras Systems announced a new feature that allows users to train some of the largest AI models in the world within a single CS-2 machine using a simplified software support scheme. The announcement highlights multiple capabilities that Cerebras sees as its competitive advantages over other companies. Notable examples cited include the ability to hold an entire training model in memory, via Cerebras' Weight Streaming software on the Wafer Scale Engine (WSE), instead of splitting it across processors, as well as the ability for users to adjust a few inputs within the software scheme and GUI to choose the scale of model desired for training (e.g., GPT-3 13B, GPT-3XL 1.3B). Cerebras claims that this advancement can cut the setup of large model training runs from months to minutes, with the Cerebras software managing much of the initial configuration.

    September 2022 | Uncategorized

Have any questions?

365 Summit Ave.
St. Paul MN 55102, USA.

info@hyperionres.com

© 2021 Hyperion Research. All Rights Reserved | Privacy Policy | Website Terms of Use