
Anthropic to Train/Deploy Foundation Models on AWS AI Chips
$1,500.00
Authors: Bob Sorensen and Tom Sorensen
Publication Date: November 2024
Length: 1 page
US-based large language model (LLM) developer Anthropic and major cloud service provider (CSP) Amazon Web Services (AWS) recently announced that they are deepening their existing collaboration to support both firms’ competitive prospects in the rapidly growing and increasingly competitive generative AI sector. Anthropic named AWS as its primary LLM training partner and will work with AWS to further develop AWS’s AI-centric Trainium and Inferentia chips. Plans also call for Anthropic to use the AWS chips to train and deploy its future foundation models. In addition, AWS will invest $4 billion in Anthropic, adding to its earlier $4 billion investment last year, although AWS will remain a minority investor in Anthropic.
Related Products
Consortium Aims to Standardize Chiplet Interconnect
Mark Nossokoff, Bob Sorensen
Seeking to establish a die-to-die interconnect standard and foster an open chiplet ecosystem, a strong collection of major chip makers and users recently announced the formation of the UCIe (Universal Chiplet Interconnect Express) industry consortium. The consortium has published version 1.0 of the UCIe specification, covering the die-to-die I/O physical layer, die-to-die protocols, and software stack. Promoter members of the consortium are Advanced Semiconductor Engineering, Inc. (ASE), AMD, Arm, Google Cloud, Intel Corporation, Meta, Microsoft Corporation, Qualcomm Incorporated, Samsung, and Taiwan Semiconductor Manufacturing Company (TSMC).
March 2022
MLCommons Adds Edge/Embedded AI Inference Benchmark
Alex Norton and Bob Sorensen
MLCommons, an international artificial intelligence (AI) standards body formed in 2018, launched MLPerf Tiny, its first benchmark targeted at the inference capabilities of edge and embedded devices, or what it calls "intelligence in everyday devices". The new benchmark is now part of the overall MLPerf benchmark suite, which measures AI training and inference performance on a wide variety of workloads, including natural language processing and image recognition. The benchmark covers four machine learning (ML) tasks focused on camera and microphone sensors as inputs: keyword spotting, visual wake words, tiny image classification, and anomaly detection. Some important use cases include smart home security, virtual assistants, and predictive maintenance.
June 2021