Beta: This is a preview and may contain discrepancies; data and visuals are still stabilizing. Please check back soon for the 1.0 release.

Quantum computing benchmarks

Metriq is a collaborative platform for community-driven quantum benchmarking, maintained by the Unitary Foundation, that integrates benchmark execution (metriq-gym), data collection (metriq-data), and public presentation into a unified workflow.

The scoreboard assigns each device a composite Metriq Score based on results from the metriq-gym benchmark suite. For individual benchmark runs, explore the "Results" tab; for an in-depth description of each benchmark, see the "Documentation" tab.

FAQ

What if a result looks wrong?
Please open an issue in the metriq-data GitHub repository.
Can I add new benchmark results?
Yes; everyone is encouraged to contribute. Please use the upload command from the metriq-gym CLI.
How was the baseline device chosen?
The device ibm_torino from IBM Quantum Cloud serves as a consistent reference point for the derived "score" scale, so a score of 100 means "on par with the baseline." It was chosen because it is a widely used, stable platform with good coverage across benchmarks. This choice does not bias the data: raw benchmark metrics are unchanged, and the baseline is only an anchor for normalization and visual reference (it can be changed in the dataset configuration). Picking a different baseline would primarily rescale the scores; the relative comparisons between platforms would remain the same.
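The baseline-anchoring idea can be illustrated with a minimal sketch. This is a simplified, hypothetical normalization (the actual Metriq Score aggregates multiple benchmarks); the function name, the single-metric assumption, and all metric values below are made up for illustration:

```python
def normalized_score(raw: float, baseline_raw: float) -> float:
    """Scale a raw benchmark metric so the baseline device scores 100.

    Assumes a single raw metric per device where higher is better;
    this is a simplification of how a composite score is computed.
    """
    return 100.0 * raw / baseline_raw


# Made-up raw metrics for three devices on one benchmark.
raw = {"ibm_torino": 0.82, "device_a": 0.91, "device_b": 0.41}

# Anchor on ibm_torino: the baseline lands exactly at 100.
scores = {name: normalized_score(v, raw["ibm_torino"]) for name, v in raw.items()}
assert scores["ibm_torino"] == 100.0

# Re-anchoring on a different baseline rescales every score,
# but the ratio between any two devices is preserved.
rescored = {name: normalized_score(v, raw["device_a"]) for name, v in raw.items()}
assert abs(
    scores["device_b"] / scores["device_a"]
    - rescored["device_b"] / rescored["device_a"]
) < 1e-12
```

The two assertions capture the claim in the answer above: the baseline only fixes where "100" sits on the scale, while relative comparisons are unaffected by the choice of anchor.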
What are Metriq's "classical" inspirations?
Metriq draws inspiration from established “classical” benchmarking efforts like Geekbench for CPU/GPU, and MLCommons and Epoch AI for AI/ML models, adapted to the needs and constraints of quantum hardware and workflows.
What happened to the old Metriq website?
Metriq began in 2022 as a wiki-like website for crowd-sourcing and sharing benchmark results ingested from the literature. However, we found that this approach did not fully address the challenges of fragmentation and reproducibility in quantum benchmarking. As a result, we evolved Metriq into a more integrated platform that combines benchmark execution, data collection, and public presentation into a unified workflow. In the future, we may explore ways to incorporate literature results that complement the core focus on executable benchmarks and reproducibility, but for now we are prioritizing building out the integrated workflow and encouraging direct contributions of benchmark results through the metriq-gym CLI.

Citation & license

If you use Metriq data in a publication, please cite our article (link coming soon).

Data license

Except where otherwise noted, benchmark data is released under the Creative Commons Attribution 4.0 International license (CC BY 4.0). You are free to share and adapt the data, provided you give appropriate credit and indicate if changes were made.