Selective Scenario Re-runs for Voice AI Testing

Sumanyu Sharma
Founder & CEO, Voice AI QA Pioneer

Has stress-tested 1M+ voice agent calls to find where they break.

December 11, 2024 · 2 min read

A customer called us frustrated. They'd fixed a bug in their appointment booking flow—one specific edge case—and now they had to wait 45 minutes for their entire test suite to run. 800 scenarios, most of which had nothing to do with the fix.

"I just changed one flow," they said. "Why do I have to rerun everything?"

Fair point. Now you don't have to.

Quick filter: If you only changed one flow, you shouldn't have to rerun everything.

What's New

  • Selective scenario re-runs: Choose specific test cases to validate without running the entire test suite
  • Re-scoring functionality: Apply new scoring metrics to previously completed tests
| Feature | Question it answers | Why it matters |
| --- | --- | --- |
| Selective re-runs | Which specific cases should I retest? | Saves time versus full suites |
| Re-scoring | How do new metrics change results? | Validates updated eval criteria |

Why It Matters

Efficient testing is essential for rapid development and deployment of voice AI agents. Running complete test suites repeatedly can be time-consuming and resource-intensive. If you have ever waited on a full suite just to validate one fix, this is for you. With selective re-runs, you can:

  • Accelerate Development: Test specific changes quickly and efficiently
  • Optimize Resources: Minimize processing overhead by running only necessary tests
  • Enhance Accuracy: Focus on specific scenarios for precise validation
  • Improve Workflow: Integrate targeted testing into your development process

This targeted approach enables teams to iterate faster while maintaining high quality standards, making issue identification and resolution more efficient.

How to Use

To use selective scenario re-runs:

  1. Navigate to Voice Agents
  2. Select a voice agent
  3. Press "Run Scenarios"
  4. Select the scenarios you want to re-run
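The selection step above can be sketched in code. This is a hypothetical illustration, not Hamming's actual API: the `select_scenarios` helper and the scenario schema (an `id` plus a set of flow `tags`) are assumptions made for the example.

```python
# Hypothetical sketch: pick only the scenarios that touch a changed flow,
# instead of re-running the whole suite. Schema and helper are illustrative.

def select_scenarios(all_scenarios, changed_flows):
    """Return scenarios whose tags overlap the set of changed flows."""
    return [s for s in all_scenarios if changed_flows & set(s["tags"])]

scenarios = [
    {"id": "book-appointment-edge", "tags": {"booking"}},
    {"id": "cancel-appointment", "tags": {"booking"}},
    {"id": "billing-dispute", "tags": {"billing"}},
]

# Only the booking flow changed, so only booking scenarios are re-run.
to_rerun = select_scenarios(scenarios, {"booking"})
print([s["id"] for s in to_rerun])
```

Tagging scenarios by flow keeps the selection mechanical: when a fix lands, the changed flow names are all you need to pick the re-run set.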

Looking Ahead

The selective re-run feature demonstrates our commitment to making voice AI testing more efficient and accessible.

Frequently Asked Questions

What is a selective scenario re-run?

A selective scenario re-run lets you re-run just a subset of scenarios from an existing dataset instead of the entire suite. It is built for those "one flow changed" moments.

Why does it matter?

It lets teams validate fixes quickly without waiting on full suites. That speeds up iteration and focuses testing on the exact failure cases or edge conditions you just touched.

How does it work in Hamming?

Instead of rerunning everything, Hamming lets you pick the exact scenarios you care about—like the ones related to a bug fix or a changed flow—and rerun them immediately. Teams also use re-scoring to apply new evaluation criteria to past runs so comparisons stay apples-to-apples.

When should I use selective re-runs versus a full suite?

Use selective re-runs during development when you know which flows changed (faster feedback, lower cost). Run the full suite before releases and after major vendor or model updates to catch unexpected side effects, then promote new failures into your regression set.
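The "promote new failures into your regression set" step can be sketched as a small helper. This is an illustrative sketch, not part of Hamming's product: `promote_failures`, the results schema, and the set-based regression store are all assumptions.

```python
# Hypothetical sketch: after a full-suite run, add any newly failing
# scenario ids to the regression set used for future selective re-runs.
# Names and schema are illustrative, not Hamming's API.

def promote_failures(results, regression_set):
    """Return the regression set extended with newly failing scenario ids."""
    failing = {r["id"] for r in results if not r["passed"]}
    return regression_set | failing

full_suite_results = [
    {"id": "book-appointment-edge", "passed": True},
    {"id": "billing-dispute", "passed": False},
]

regression = {"cancel-appointment"}
regression = promote_failures(full_suite_results, regression)
# The billing failure is now tracked alongside the existing regression case.
```

Keeping the regression set as plain scenario ids means the next selective re-run can consume it directly as its selection list.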

Sumanyu Sharma

Founder & CEO

Previously Head of Data at Citizen, where he helped quadruple the user base. As Senior Staff Data Scientist at Tesla, grew AI-powered sales program to 100s of millions in revenue per year.

Researched AI-powered medical image search at the University of Waterloo, where he graduated with Engineering honors on dean's list.

“At Hamming, we're taking all of our learnings from Tesla and Citizen to build the future of trustworthy, safe and reliable voice AI agents.”