Submit Your Quantum Advantage Claim

Follow these steps to contribute new benchmark instances or submissions to the Advantage Trackers.

Preparation Checklist

1. Choose a Pathway

Decide which tracker aligns with your claim (a sketch after this list maps each tracker to a directory ID):

  • Observable Estimations: expectation values with rigorous error bars.
  • Variational Problems: energy (or cost) estimates bounded by the variational principle.
  • Classically Verifiable Problems: outputs validated against known or efficiently checkable solutions.
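
The tracker you pick determines the <path-id> directory used throughout this guide. As a sketch, a hypothetical mapping; these slugs are placeholders, not the repository's real IDs (those are the directory names under data/paths/):

```python
# Hypothetical mapping from tracker to <path-id> slug; these slugs are
# placeholders, not the repository's actual directory names.
PATH_IDS = {
    "Observable Estimations": "observable-estimations",
    "Variational Problems": "variational-problems",
    "Classically Verifiable Problems": "classically-verifiable-problems",
}
```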

2. Gather Artifacts

Each submission must link to a reproducible method and specify the following (sketched as a record after this list):

  • Quantum and classical runtimes.
  • Quantum and classical hardware details.
  • Validation evidence that meets the pathway criteria.
  • A QASM circuit and metadata for the problem instance (if introducing a new benchmark).
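
As a rough sketch, those artifacts might be gathered into a record like the one below. Every field name is illustrative only; the pull request template defines the authoritative fields.

```python
# Hypothetical claim record; field names are illustrative, not the real schema.
claim = {
    "method": "https://doi.org/10.0000/example",  # link to the reproducible method
    "quantum_runtime_seconds": 3.6e3,
    "classical_runtime_seconds": 8.6e4,
    "quantum_hardware": "example 127-qubit superconducting processor",
    "classical_hardware": "example 64-core x86 node, 1 TB RAM",
    "validation": "evidence meeting the pathway criteria (links, logs, proofs)",
}
```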

Repository Layout

The tracker data is split into per-path directories under data/paths/.
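
Combining the conventions in the two subsections below, the layout looks roughly like this (placeholders kept verbatim):

```
data/
└── paths/
    └── <path-id>/
        ├── problems/
        │   └── <problem-id>.json
        └── submissions/
            └── yyyy-mm-dd_problem_institution.json
problems/
└── <path-id>/
    ├── <problem-id>.json
    └── <problem-id>.qasm
```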

Problems

Each problem instance lives in its own JSON file (a registration sketch follows this list).

  • data/paths/<path-id>/problems/<problem-id>.json
  • Mirror the metadata in problems/<path-id>/<problem-id>.json
  • Provide a matching QASM file at problems/<path-id>/<problem-id>.qasm
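
A minimal sketch of registering a new problem, assuming a hypothetical path ID, problem ID, and metadata fields (check existing problem files for the real schema):

```python
import json
from pathlib import Path

# Hypothetical IDs; use the real <path-id> and a descriptive <problem-id>.
path_id, problem_id = "observable-estimations", "ising-32q"

metadata = {"id": problem_id, "description": "illustrative placeholder"}

# Canonical copy under data/paths/ plus the mirrored copy under problems/.
for root in (Path("data/paths") / path_id / "problems",
             Path("problems") / path_id):
    root.mkdir(parents=True, exist_ok=True)
    (root / f"{problem_id}.json").write_text(json.dumps(metadata, indent=2) + "\n")

# The matching QASM circuit sits next to the mirrored metadata.
(Path("problems") / path_id / f"{problem_id}.qasm").write_text(
    'OPENQASM 2.0;\ninclude "qelib1.inc";\n'  # placeholder circuit header
)
```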

Submissions

One JSON file per claim (see the sketch after this list):

  • data/paths/<path-id>/submissions/yyyy-mm-dd_problem_institution.json
  • Populate the fields listed in the pull request template.
  • Do not remove earlier claims; update or supersede them explicitly.
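
A sketch of creating the submission file with the dated filename, reusing the hypothetical IDs from above; the record itself should carry the fields from the pull request template:

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical slugs; the filename encodes date, problem, and institution.
path_id, problem, institution = "observable-estimations", "ising-32q", "example-lab"

submission = {"problem": problem, "institution": institution}  # plus template fields

out = (Path("data/paths") / path_id / "submissions"
       / f"{date.today():%Y-%m-%d}_{problem}_{institution}.json")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(json.dumps(submission, indent=2) + "\n")
```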

Submit a Pull Request

1. Update Data Files

  1. Edit or add problem and submission JSON files.
  2. Drop new QASM files into problems/<path-id>/.
  3. Run python3 scripts/build_trackers.py to refresh the aggregated tables.

2. Open the PR

  1. Use the repository pull request template to summarize your claim.
  2. Attach links to papers, code, and validation evidence.
  3. Ensure git status reports a clean working tree after formatting, so CI passes (see the sketch below).
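
As a convenience, the build-and-check steps can be scripted. This is a sketch of a typical workflow; only the scripts/build_trackers.py path comes from this guide, the rest is an assumption:

```python
import subprocess

# Regenerate the aggregated tables (script path taken from this guide).
subprocess.run(["python3", "scripts/build_trackers.py"], check=True)

# CI expects a clean tree: commit any regenerated files before pushing.
status = subprocess.run(["git", "status", "--porcelain"],
                        capture_output=True, text=True, check=True)
if status.stdout.strip():
    raise SystemExit("Uncommitted changes detected; commit regenerated files first.")
```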

Review Expectations

Claims are merged once the pathway maintainers confirm reproducibility, compliance with the pathway's validation criteria, and data fidelity.

Evidence

Provide logs, notebooks, or data products that allow reviewers to recompute key metrics.

Traceability

Reference commit hashes, tagged releases, or archived datasets so results remain accessible.

Transparency

Document assumptions, calibration routines, and sources of uncertainty in your method description.