Authors: Alex Norton, Bob Sorensen
Publication Date: July 2021
Length: 1 pages
Two weeks ago, the US Department of Defense (DoD) officially canceled its Joint Enterprise Defense Infrastructure (JEDI) cloud solicitation and contract, ending a long period of uncertainty and controversy. The contract, which designated $10 billion to support cloud computing capabilities for a variety of workloads and departments across the DoD, was originally awarded to a single vendor, Microsoft Azure, in 2019. After appeals from other vendors, the process was reevaluated, and Microsoft was ultimately awarded the contract a second time. Nearly two years into the JEDI solicitation and award process, however, the DoD stated that its needs had evolved and that the original contract no longer aligned with the department's requirements. A new solicitation was issued, the Joint Warfighter Cloud Capability (JWCC) contract, which indicated a plan to use multiple vendors to fulfill the contract's needs. Currently, the DoD is seeking proposals from Microsoft and Amazon Web Services but will likely evaluate other qualified U.S.-based cloud service providers (CSPs).
Several years ago, anecdotal evidence led Hyperion Research to compile a list of applications that promised to be the most economically important HPC-enabled AI use cases. Rather than simply drawing attention as interesting one-off examples, these applications had emerged as recurring AI workloads that vendors could begin to pursue as emerging market segments. Hyperion Research's recently completed multi-client study of the worldwide HPC market presented a direct opportunity to ask HPC user organizations whether they use, or plan to use, any of these economically important HPC-enabled AI applications.
July 2021
MLCommons, an international artificial intelligence (AI) standards body formed in 2018, launched MLPerf Tiny, its first benchmark targeting the inference capabilities of edge and embedded devices, or what it calls "intelligence in everyday devices". The new benchmark is now part of the overall MLPerf benchmark suite, which measures AI training and inference performance across a wide variety of workloads, including natural language processing and image recognition. The benchmark covers four machine learning (ML) tasks that use camera and microphone sensors as inputs: keyword spotting, visual wake words, tiny image classification, and anomaly detection. Important use cases include smart home security, virtual assistants, and predictive maintenance.
June 2021