
US Department of Defense Considers AI’s Role in Future Decision Making Process
Authors: Tom Sorensen, Alex Norton
Publication Date: 8 2021
Length: 1 page
Late last month, US Department of Defense (DOD) leadership explored the potential to incorporate artificial intelligence (AI) processes into its overall military operations, signaling a fundamental change in how information and data are used to expand the decision space for leaders in both military and civilian domains. Delivered during the third and most recent iteration of the Global Information Dominance Experiment (GIDE 3), which included representatives from all 11 combatant commands, NORTHCOM Commander Gen. Glen D. VanHerck's remarks on AI were aimed at advancing the ability to maintain domain awareness, achieve information dominance, and provide decision superiority in both competition and crisis.
Related Products
MLCommons Adds Edge/Embedded AI Inference Benchmark
Alex Norton and Bob Sorensen
MLCommons, an international artificial intelligence (AI) standards body formed in 2018, launched MLPerf Tiny, its first benchmark targeting the inference capabilities of edge and embedded devices, or what it calls "intelligence in everyday devices." The new benchmark joins the overall MLPerf benchmark suite, which measures AI training and inference performance across a wide variety of workloads, including natural language processing and image recognition. MLPerf Tiny covers four machine learning (ML) tasks that use camera and microphone sensors as inputs: keyword spotting, visual wake words, tiny image classification, and anomaly detection. Important use cases include smart home security, virtual assistants, and predictive maintenance.
6 2021 | HYP_Link
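The style of measurement such inference benchmarks report can be sketched in a few lines. The following is an illustrative example only, not the official MLPerf Tiny harness: it lists the four benchmark tasks named above and measures median inference latency for a stand-in model call (`run_inference` is a hypothetical placeholder for a real on-device invocation).

```python
import statistics
import time

# The four MLPerf Tiny tasks described in the abstract above.
TINY_TASKS = [
    "keyword_spotting",
    "visual_wake_words",
    "image_classification",
    "anomaly_detection",
]

def run_inference(sample):
    # Hypothetical placeholder for an actual edge-device model call.
    return sum(sample) % 2

def median_latency_ms(samples, warmup=3, runs=30):
    # Warm up first so one-time costs don't skew the measurement,
    # then report the median of repeated timed runs (robust to outliers).
    for s in samples[:warmup]:
        run_inference(s)
    timings = []
    for i in range(runs):
        s = samples[i % len(samples)]
        t0 = time.perf_counter()
        run_inference(s)
        timings.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(timings)

if __name__ == "__main__":
    fake_samples = [[1, 2, 3], [4, 5, 6]]
    for task in TINY_TASKS:
        print(task, f"{median_latency_ms(fake_samples):.4f} ms")
```

Median (rather than mean) latency is a common choice for this kind of benchmark because a single slow run, e.g. from background activity on the device, would otherwise distort the result.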
New Error Correction Scheme Seeks to Advance Quantum Computing Capabilities
Bob Sorensen, Tom Sorensen
Researchers at the US-based Lawrence Berkeley National Lab (LBNL) recently reported a new approach to error mitigation in a quantum computer (QC) that targets error-producing noise, a ubiquitous problem that can severely limit the performance and utility of existing and near-future quantum computers. The method developed at LBNL consists of taking an initial noisy target circuit and constructing an analogous estimation circuit that is configured specifically for accurate noise characterization. The information gathered from running the estimation circuit is then applied to correct the noise in the original target circuit.
3 2022 | HYP_Link
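The general idea behind estimation-circuit-based mitigation can be illustrated numerically. This is a minimal sketch of the concept under simplifying assumptions, not LBNL's actual algorithm: a companion "estimation" run with a known ideal answer reveals how strongly noise attenuates results, and that factor is then inverted on the target result. The names and the simple attenuation-plus-shot-noise model are assumptions for illustration.

```python
import random

# Hidden "hardware" noise strength used only by this toy simulation.
NOISE_ATTENUATION = 0.8

def run_noisy(ideal_value, shots=10000):
    # Toy hardware model: noise attenuates the ideal expectation value
    # toward zero, and finite sampling ("shots") adds statistical noise.
    noisy_mean = NOISE_ATTENUATION * ideal_value
    return noisy_mean + random.gauss(0.0, 1.0 / shots**0.5)

def mitigate(target_noisy, estimation_noisy, estimation_ideal):
    # Infer the attenuation factor from the estimation run, whose ideal
    # value is known in advance, then invert it on the target result.
    attenuation = estimation_noisy / estimation_ideal
    return target_noisy / attenuation

if __name__ == "__main__":
    random.seed(0)
    estimation = run_noisy(1.0)   # estimation circuit: ideal value known (1.0)
    target = run_noisy(0.5)       # target circuit: ideal value unknown to us
    mitigated = mitigate(target, estimation, 1.0)
    # The mitigated value lands close to the true ideal 0.5, while the raw
    # noisy target sits near 0.4.
    print(f"raw={target:.3f}  mitigated={mitigated:.3f}")
```

The key assumption in this toy version is that the estimation run experiences the same noise as the target run; the LBNL work is aimed precisely at constructing estimation circuits for which that characterization is accurate.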

