
QC Benchmarks: A Critical Element to QC Progress
$1,500.00
Authors: Bob Sorensen and Tom Sorensen
Publication Date: March 2025
Length: 1 page
Researchers at the University of Science and Technology of China (USTC) recently published a paper outlining their efforts to execute a random circuit sampling (RCS) test on their 105-qubit Zuchongzhi 3.0 quantum processing unit (QPU). The paper, appearing in the APS journal Physical Review Letters, reported that the QPU gathered 10⁶ samples in just a few hundred seconds, a task cited as infeasible on the most powerful classical supercomputers today and one that USTC researchers claimed would require the US DOE's Frontier HPC system approximately 5.9×10⁹ years to replicate. This follows a similar announcement made by Google late last year that its 105-qubit Willow QPU performed a similar RCS test in under five minutes, a task Google researchers said would have taken one of the fastest supercomputers 10²⁵ years, a number that greatly exceeds the age of the universe.
Related Products
International Collaborators Create Guide for Understanding AI in Healthcare
Tom Sorensen, Alex Norton
During the recent conference of the Special Interest Group on Knowledge Discovery and Data Mining, held in Singapore, three international public science policy advocacy groups presented a guide, Using Artificial Intelligence to Support Healthcare Decisions, aimed at empowering and educating the public on the growing use of AI platforms in the healthcare decision-making process. The guide explains common applications of artificial intelligence platforms in healthcare and, more importantly, outlines specific questions one can pose to cut to the core of the efficacy and reliability of an AI platform in those applications.
9/2021
MLCommons Adds Edge/Embedded AI Inference Benchmark
Alex Norton and Bob Sorensen
MLCommons, an international artificial intelligence (AI) standards body formed in 2018, launched MLPerf Tiny, its first benchmark targeted at the inference capabilities of edge and embedded devices, or what it calls "intelligence in everyday devices". The new benchmark is now part of the overall MLPerf benchmark suite, which measures AI training and inference performance on a wide variety of workloads, including natural language processing and image recognition. The benchmark covers four machine learning (ML) tasks that use camera and microphone sensors as inputs: keyword spotting, visual wake words, tiny image classification, and anomaly detection. Important use cases include smart home security, virtual assistants, and predictive maintenance.
6/2021