System Interconnect Architectures are Expected to Shift with Future HPC Procurements
$1,500.00
Authors: Mark Nossokoff and Jaclyn Ludema
Publication Date: July 2023
Length: 3 pages
Much attention is paid to the compute elements of HPC architectures, and rightly so. However, if those elements are not connected to other servers and to storage in a balanced and performant manner, the optimizations provided by the server technologies will go unrealized as new bottlenecks appear. While there are multiple approaches to compute and storage interconnect architectures, recent survey data suggests the prevailing architecture is expected to shift from a preference for independent node-to-node and node-to-storage networks to a preference for converged networks in respondents’ next HPC procurement. This data is from the eighth annual study of Hyperion Research’s high-performance computing (HPC) end-user-based tracking of the HPC marketplace, which included 181 HPC end-user sites with 3,830 HPC systems.
Related Products
49% of HPC Sites Indicate That AI Expertise Is the Number One Barrier to Increased AI Adoption and Usage
Tom Sorensen and Earl Joseph
Expertise is now a major concern for both HPC and AI centers, outranked only by budget concerns. When asked about barriers to furthering AI capabilities, respondents cited AI-specific expertise as a significant concern. Popular responses included access to AI expertise (49%), skills in AI model development (47%), and skills in AI programming (36%). This development comes at a time when a third of HPC sites (33%) report a lack of knowledge or skilled staff and identify it as one of their top three barriers to expanding on-premises HPC. This data is from the eighth annual study of Hyperion Research's high-performance computing (HPC) end-user-based tracking of the HPC marketplace, which included 181 HPC end-user sites with 3,830 HPC systems.
GPUs Stand Out as Planned Processor Element at a Rate of 74%
Tom Sorensen and Earl Joseph
Survey respondents cited GPUs as the processing element they most anticipate adopting within the next 12-18 months, at a rate of 74%. When asked which processing elements they expect to incorporate into their HPC/AI/HPDA computing resources, a majority of respondents across all sectors named GPUs first (74.0%), with TPUs (24.3%) the next most anticipated. Government and academia respondents reported the highest expectation, at a rate of 84%. This data is from the eighth annual study of Hyperion Research's high-performance computing (HPC) end-user-based tracking of the HPC marketplace, which included 181 HPC end-user sites with 3,830 HPC systems.