NOAA and Microsoft Announce Cloud Computing Collaboration to Advance Climate-Ready Nation Mission

$1,500.00

Authors: Jaclyn Ludema and Mark Nossokoff

Publication Date: December 2022

Length: 1 page

Category: Uncategorized
Description

The US National Oceanic and Atmospheric Administration (NOAA) and Microsoft have entered into a Cooperative Research and Development Agreement (CRADA), formalizing NOAA’s commitment to using Microsoft Azure cloud computing resources in the pursuit of NOAA’s mission to build a Climate-Ready Nation by 2030. Several initiatives are envisioned whereby NOAA scientists and engineers will work with Microsoft experts to leverage Azure’s machine learning and HPC capabilities:
▪ Fast-tracking innovative contributions to NOAA Earth Prediction Innovation Center (EPIC) earth systems modeling and research
▪ Applying machine learning capabilities to improve models supporting air quality, smoke, and particulate pollution forecasts, as well as relevant NOAA climate models
▪ Accelerating NOAA Fisheries’ survey and observations data collection and management
▪ Creating new ocean observations cataloging efforts
▪ Designing resilient and accessible weather modeling and forecasting that can incorporate external data sources with NOAA enterprise data

Related Products

    World’s First Data Center APU Stood Up in AMD Laboratory

    Tom Sorensen and Alex Norton

    During the recent Wells Fargo 2022 TMT Summit, Mark Papermaster, CTO of AMD, reported that the Instinct MI300 accelerated processing unit (APU) is, as of early December 2022, up and running. Currently confined to AMD's in-house lab, the Instinct MI300 will be used in El Capitan, the exascale supercomputer in development at Lawrence Livermore National Laboratory and scheduled for delivery in 2024. Papermaster described the MI300 as a "true datacenter APU," and AMD expects general availability of the processor in 2023. The multi-chiplet processor combines AMD's Zen 4 (x86) CPU architecture with CDNA 3, AMD's GPU architecture designed specifically with exascale computing in mind, and will be produced by Taiwan's TSMC at the 5nm process node. Papermaster sees this development as an important way to continue introducing greater density and optimization into components now that the sector is no longer in the era of the "old Moore's Law."

    December 2022 | Uncategorized

    Cerebras Announces Capability to Train Largest Models Easily

    Alex Norton and Thomas Sorensen

    In mid-June of 2022, Cerebras Systems announced a new feature that allows users to train some of the largest AI models in the world within a single CS-2 machine using a simplified software support scheme. The announcement highlights multiple capabilities that Cerebras sees as its competitive advantages over other companies. Notable examples cited include the ability to accommodate an entire training model in memory, through Cerebras' Weight Streaming software on the Wafer Scale Engine (WSE), instead of splitting it across processors, as well as the ability for users to manipulate a few inputs within the software scheme and GUI to choose the scale of model desired for training (e.g., GPT-3 13B, GPT-3XL 1.3B). Cerebras claims that this advancement can cut the setup of large model training runs from months to minutes, with the Cerebras software managing much of the initial setup.

    September 2022 | Uncategorized
