
NOAA and Microsoft Announce Cloud Computing Collaboration to Advance Climate-Ready Nation Mission
$1,500.00
Authors: Jaclyn Ludema and Mark Nossokoff
Publication Date: December 2022
Length: 1 page
The US National Oceanic and Atmospheric Administration (NOAA) and Microsoft have entered into a Cooperative Research and Development Agreement (CRADA), formalizing NOAA’s commitment to using Microsoft Azure cloud computing resources in the pursuit of NOAA’s mission to build a Climate-Ready Nation by 2030. Several initiatives are envisioned whereby NOAA scientists and engineers will work with Microsoft experts to leverage Azure’s machine learning and HPC capabilities:
▪ Fast-tracking innovative contributions to NOAA Earth Prediction Innovation Center (EPIC) earth systems modeling and research
▪ Applying machine learning capabilities to improve models supporting air quality, smoke, and particulate pollution forecasts, as well as relevant NOAA climate models
▪ Accelerating NOAA Fisheries’ survey and observations data collection and management
▪ Creating new ocean observations cataloging efforts
▪ Designing resilient and accessible weather modeling and forecasting that can incorporate external data sources with NOAA enterprise data
Related Products
World’s First Data Center APU Stood Up in AMD Laboratory
Tom Sorensen and Alex Norton
During the recent Wells Fargo 2022 TMT Summit, Mark Papermaster, CTO of AMD, reported that the Instinct MI300 accelerated processing unit (APU) is, as of early December 2022, up and running. Currently confined to AMD's in-house lab, the Instinct MI300 will be used in El Capitan, the exascale supercomputer currently in development at Lawrence Livermore National Laboratory and scheduled to be delivered in 2024. Described by Papermaster as a "true datacenter APU," AMD expects general availability of the processor in 2023. This multi-chiplet processor makes use of both AMD's Zen 4 (x86) CPU architecture and CDNA 3, AMD's GPU architecture designed specifically with exascale computing in mind, and will be produced by Taiwan's TSMC at the 5nm process node. Papermaster sees this development as an important way to continue introducing greater density and optimization into components now that the sector is no longer in the era of the "old Moore's Law."
December 2022
Cerebras Announces Capability to Train Largest Models Easily
Alex Norton and Thomas Sorensen
In mid-June of 2022, Cerebras Systems announced a new feature that allows users to train some of the largest AI models in the world within a single CS-2 machine using a simplified software support scheme. The announcement highlights multiple capabilities that Cerebras sees as its competitive advantages over other companies. Notable examples cited include the ability to accommodate an entire training model in memory, through Cerebras' Weight Streaming software on the Wafer Scale Engine (WSE), instead of splitting it across processors, as well as the ability for users to manipulate a few inputs within the software scheme and GUI to choose the scale of model desired for training (e.g., GPT-3 13B, GPT-3XL 1.3B). Cerebras claims that this advancement can cut down the setup of large model training runs from months to minutes, with the Cerebras software managing much of the initial setup.
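The weight-streaming idea described above can be sketched generically: rather than sharding a model's weights across many processors, the weights live in a large external store and are streamed to the compute device one layer at a time, so on-device memory never holds more than a single layer. The sketch below is a minimal, hypothetical illustration of that pattern in NumPy, not Cerebras' actual software or API; all names and sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "external" weight store: one weight matrix per layer.
# In a real weight-streaming system this would sit in large off-device
# memory rather than in a Python list.
weight_store = [rng.standard_normal((64, 64)) * 0.01 for _ in range(8)]

def stream_forward(x, store):
    """Forward pass that fetches one layer's weights at a time.

    Each iteration streams a single layer's weights to the "device",
    applies the layer (matmul + ReLU here), and discards the weights
    before the next layer is fetched.
    """
    for layer_weights in store:
        x = np.maximum(x @ layer_weights, 0.0)
    return x

activations = stream_forward(rng.standard_normal((4, 64)), weight_store)
print(activations.shape)  # (4, 64)
```

The point of the pattern is that peak weight memory is bounded by the largest single layer, not by total model size, which is the property that lets very large models be trained without splitting them across processors.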
September 2022