Frequently Asked Questions

Find answers to common questions about Hamming AI's voice agent testing, monitoring, and analytics platform.

Voice AI Terminology

ASR (Automatic Speech Recognition) is the technology that converts spoken audio into text. It's the first step in voice agent processing, transcribing what users say so the AI can understand and respond. ASR accuracy directly impacts agent performance—errors in transcription lead to misunderstandings and poor responses.

TTS (Text-to-Speech) is the technology that converts written text into spoken audio. It's the final step in voice agent processing, turning the AI's text responses into natural-sounding speech. Modern TTS systems use neural networks to produce human-like voices with appropriate intonation, pacing, and emotion.

A voice agent hallucination occurs when an AI generates false, fabricated, or nonsensical information that sounds plausible but isn't true. Hallucinations are a significant risk in production voice agents, potentially leading to customer misinformation, compliance violations, and brand damage. Testing and monitoring help detect and prevent hallucinations.

STT (Speech-to-Text) is another term for Automatic Speech Recognition (ASR)—the technology that transcribes spoken audio into written text. STT systems analyze audio signals, identify words and phrases, and output text for further processing by the AI. STT accuracy is critical for voice agent understanding.

VAD (Voice Activity Detection) is technology that determines when someone is speaking versus when there's silence or background noise. VAD is essential for voice agents to know when to listen and when to respond. Poor VAD can cause agents to interrupt users or miss speech entirely.
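
To make the concept concrete, here is a minimal sketch of the simplest possible VAD: a fixed energy threshold over short audio frames. This is illustrative only; production systems use trained models that are far more robust to background noise:

```python
import numpy as np

def is_speech(frame: np.ndarray, threshold: float = 0.01) -> bool:
    """Crude energy-based VAD: flag a 20-30 ms frame of float audio
    (values in [-1, 1]) as speech when its RMS energy exceeds a fixed
    threshold. Illustrative only; real VADs use trained models."""
    rms = np.sqrt(np.mean(frame.astype(np.float64) ** 2))
    return rms > threshold
```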

Prompt adherence measures how well a voice agent follows its defined instructions, scripts, and guardrails. High prompt adherence means the agent stays on-topic, follows required disclosures, and avoids prohibited behaviors. Low prompt adherence indicates the agent is going off-script, which can cause compliance and quality issues.

Turn-taking is the natural alternation between speakers in a conversation—knowing when one person stops talking and another should begin. Voice agents must accurately detect turn boundaries to avoid interrupting users or creating awkward silences. Poor turn-taking creates unnatural conversations that frustrate users.

Barge-in is when a user speaks over the voice agent, interrupting its response. Well-designed agents detect barge-in quickly, stop speaking, and listen to the user. Barge-in handling is critical for natural conversations—users shouldn't have to wait for the agent to finish before correcting it or providing information.

End-of-turn detection is the ability to recognize when a speaker has finished their turn in a conversation. It involves analyzing speech patterns, pauses, and linguistic cues to determine if someone is done speaking. Accurate end-of-turn detection prevents agents from interrupting too early or waiting too long to respond.

A voice agent persona is the consistent character, tone, and style that defines how an agent communicates. It includes voice characteristics, language patterns, formality level, and personality traits. Well-defined personas create consistent customer experiences and help agents maintain brand voice across all interactions.

LLM inference latency is the time it takes for a Large Language Model to process input and generate a response. In voice agents, LLM inference is often the largest contributor to overall response latency. Optimizing inference latency through model selection, caching, and infrastructure tuning is critical for responsive agents.

Call transcription accuracy measures how correctly the ASR system converts spoken words into text, typically expressed as a percentage or Word Error Rate (WER). High transcription accuracy is essential for voice agents to understand user intent correctly. Accuracy can vary based on accents, background noise, and audio quality.
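
WER is conventionally computed as the word-level edit distance between a reference transcript and the ASR output, divided by the number of reference words. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference words,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# "please pay my bill" vs. "please play my bill":
# 1 substitution / 4 reference words = 25% WER
```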

About Hamming

Teams use Hamming to test voice agents end-to-end: generate realistic scenarios, validate pre-launch behavior, monitor production calls, and improve reliability, compliance, and customer experience in one platform. Unlike manual testing or basic unit tests, Hamming simulates real conversations with diverse accents, interruptions, and edge cases to catch issues before customers do.

Hamming is designed for anyone building and scaling voice agents—from hardcore engineers optimizing latency to product managers focused on customer experience. Whether you're at a startup launching your first agent or an enterprise managing dozens of voice workflows, Hamming provides the tools to make your voice agents production-ready and reliable.

You can use Hamming to measure AI voice agent performance, simulate real-world conversations with diverse personas, stress-test agents with edge-case failures, monitor latency and compliance 24/7 in production, and catch regressions before they impact customers. Teams use Hamming across the entire voice agent lifecycle from development to production monitoring.

Hamming does both. It provides comprehensive pre-production testing to validate agent behavior before launch, and 24/7 production monitoring to track performance, detect issues, and alert teams when metrics breach thresholds. This end-to-end approach helps you catch issues early in development and maintain reliability in live environments.

No coding is required. Hamming is designed for both technical and non-technical users. Product managers, QA teams, and customer experience leaders can create test scenarios, review call recordings, and analyze dashboards without writing code. Engineers can also leverage APIs and advanced features for deeper integrations and automated testing pipelines.

Hamming fits across the entire voice AI lifecycle. During development, use it to validate agent behavior and catch edge cases. Before launch, stress-test with hundreds of simulated conversations. Post-launch, monitor production calls 24/7 for latency spikes, compliance issues, and quality regressions. This continuous approach ensures agents stay reliable at every stage.

Hamming is used across industries including finance, healthcare, insurance, retail, travel, and customer support. Banks use it for compliance testing. Healthcare organizations validate HIPAA-sensitive interactions. Insurance companies test claims workflows. Any industry deploying voice agents benefits from Hamming's testing and monitoring capabilities to ensure reliability and regulatory compliance.

Yes. Hamming is a cloud-based SaaS platform, requiring no infrastructure setup or maintenance. Your team can start testing in minutes through our web dashboard or API. We maintain SOC 2 Type II certification and offer HIPAA BAAs for healthcare customers, with data residency options in the US, EU, and UK.

Yes. Hamming supports multilingual testing and monitoring across 10+ languages, including regional variations like Castilian Spanish vs. Mexican Spanish, Brazilian Portuguese vs. European Portuguese, and various English accents. Test how your agent handles language switching mid-conversation and ensure consistent quality across all supported languages.

Setup takes just minutes, not weeks. Connect your agent via SIP phone numbers or WebRTC (LiveKit, Pipecat). Upload your prompt or connect via API. Hamming auto-generates test cases from your prompt. Most teams run their first test call in under 10 minutes and have comprehensive test suites running within an hour.

Yes. You can add users to your workspace once onboarded, with role-based access controls to manage permissions. Grant different access levels to engineers, product managers, and executives. Team members can collaborate on test scenarios, review call recordings, and share dashboards with appropriate visibility based on their roles.

Getting Started

You can sign up for Hamming at app.hamming.ai. Create your account in minutes, connect your first agent, and Hamming will auto-generate test cases from your prompt. Most teams run their first test within 10 minutes of signing up. Book a demo for personalized onboarding assistance.

You can book a demo directly with our team. During the demo, we'll walk through your specific use case, show relevant features for your industry, discuss pricing options, and answer any questions. Demos typically run 30 minutes and include time for Q&A.

Yes. You can add and test multiple agents from your workspace without additional setup. Compare performance across different agents, A/B test prompt variations, and monitor all your voice agents from a single dashboard. This is essential for teams managing multiple use cases or iterating on agent designs.

Yes. Hamming supports testing in staging environments to validate changes before production deployment, then seamlessly transitions to 24/7 production monitoring. Test rigorously in staging, deploy with confidence, and monitor continuously in production—all from the same platform with consistent metrics and workflows.

Once onboarded, invite team members through workspace settings. Assign roles with appropriate access permissions—engineers get API access, product managers access dashboards, executives see summary reports. Role-based access ensures everyone has the visibility they need while protecting sensitive configurations and data.

Yes. Comprehensive documentation is available to customers through your Hamming workspace after onboarding. Documentation covers API reference, integration guides, best practices, and troubleshooting. Our team also provides direct support via Slack for questions not covered in documentation or specific to your implementation.

Yes. We provide personalized onboarding support including initial setup assistance, integration guidance, and best practices for your use case. Every customer receives a dedicated Slack channel for direct access to our team. Enterprise customers receive additional support including custom integration development and regular check-ins.

Testing Voice Agents with Hamming

Voice agent testing is the process of simulating realistic conversations to evaluate an AI voice agent's performance, accuracy, and reliability before deployment. It involves testing how agents handle diverse scenarios including accents, background noise, interruptions, edge cases, and compliance requirements. Effective testing catches issues before they impact real customers.

The number of prompts and test cases depends on your plan. Hamming scales from startup teams running hundreds of tests to enterprises running thousands daily. We provide flexible limits based on your usage patterns. Full details about your specific usage allocation are shared during onboarding based on your team's needs.

Yes, you can test error boundaries to ensure fallback prompts and recovery flows work as expected. Simulate scenarios like API timeouts, low ASR confidence, and context loss to verify your agent handles errors gracefully. This helps ensure customers experience smooth conversations even when unexpected issues occur.

Yes, Hamming fully supports multi-turn conversation testing. Create complex dialogue flows that span multiple exchanges, test context retention across turns, and verify your agent maintains coherence throughout extended interactions. Multi-turn testing is essential for validating real-world conversation quality beyond simple single-response tests.

Yes, you can red-team your voice agent with Hamming to uncover vulnerabilities and edge cases. We recently red-teamed Ani, Grok's AI companion. Simulate adversarial scenarios, prompt injection attempts, and off-topic requests to ensure your agent handles them safely and appropriately.

Yes, you can test and measure both customer and agent interruptions. Evaluate how your agent handles being cut off mid-sentence, whether it gracefully yields or continues speaking over the user. Proper interruption handling is critical for natural conversation flow and positive customer experience.

Monthly test case limits depend on your plan, ranging from hundreds for startup teams to thousands for enterprise customers. Hamming supports burst testing with 1,000+ concurrent calls for stress testing scenarios. Your specific allocation and usage details are shared during onboarding based on your team's requirements.

Yes, test call recordings are stored securely and accessible from your dashboard. Review recordings to analyze agent behavior, share with teammates for debugging, and use as evidence for compliance audits. Recordings are encrypted at rest and in transit. Retention policies can be configured based on your requirements.

Yes, you can define fully custom test scenarios. Customize the conversation context, success criteria, and user personas to reflect real-world conditions. Add background noise, specify accents, simulate interruptions, and set up multi-turn dialogues. Hamming also auto-generates scenarios from your prompt to cover edge cases automatically.
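
The exact scenario schema is documented in your Hamming workspace; purely as an illustration of the kinds of fields such a definition carries, a hypothetical scenario might look like the sketch below (all field names are invented, not Hamming's actual format):

```python
# Hypothetical scenario definition; field names are illustrative,
# not Hamming's actual schema. See the in-product docs for the real format.
scenario = {
    "name": "billing-dispute-with-interruptions",
    "persona": {"accent": "Southern US", "temperament": "frustrated"},
    "environment": {"background_noise": "call_center", "snr_db": 15},
    "conversation": [
        {"user": "I was double charged last month."},
        {"user": "No, the OTHER charge.", "barge_in": True},
    ],
    "success_criteria": [
        "agent identifies the disputed charge",
        "agent offers a refund or escalation",
    ],
}
```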

Yes, Hamming can test authentication and verification flows including PIN entry, account verification, and identity confirmation. Monitor whether your agent handles sensitive authentication steps securely and complies with PCI DSS and other regulatory requirements. Test edge cases like incorrect entries and retry limits.

Yes, you can replay any test call from your dashboard. Listen to the full conversation, review the transcript, examine turn-by-turn metrics, and analyze where issues occurred. Replay functionality is essential for debugging agent behavior, training team members, and documenting compliance verification processes.

Yes. Hamming was designed for stress testing voice agents at scale. Run 1,000+ concurrent test calls to simulate peak load conditions, identify breaking points, and measure performance degradation under stress. Stress testing reveals issues that only appear at scale, such as latency spikes and resource exhaustion.

Yes, you can export test results via PDF reports emailed directly to you. Reports include call recordings, transcripts, metrics, and detailed analysis. Use exports for stakeholder reviews, compliance documentation, and team collaboration. API access also enables programmatic export for integration with your existing workflows.

Yes, schedule automated tests to run at specific intervals from your dashboard. Set up hourly health checks, nightly regression suites, or weekly comprehensive test runs. Scheduled testing ensures continuous monitoring of agent quality and catches regressions quickly without manual intervention from your team.

Yes, Hamming fully supports voice agent regression testing. Re-run your test suites after prompt updates, model changes, or infrastructure modifications to verify existing functionality remains intact. Compare results across versions to identify unintended side effects and ensure updates improve rather than degrade performance.

Monitoring & Observability

Voice agent monitoring is the continuous, real-time tracking of your AI voice agent's performance during live production calls. It includes measuring latency, transcription accuracy, compliance adherence, and customer satisfaction. Effective monitoring detects issues as they happen, enabling teams to respond quickly before problems impact customer experience.

Testing happens before deployment, using simulated conversations to validate agent behavior in controlled conditions. Monitoring tracks real customer calls after launch, measuring actual production performance. Both are essential—testing catches issues pre-launch while monitoring ensures ongoing reliability, detects regressions, and provides insights from real-world usage patterns.

Yes, Hamming monitors live production calls in real-time. Stream call data to Hamming's platform for immediate analysis of latency, compliance, sentiment, and quality metrics. Real-time monitoring enables rapid detection and response to issues affecting customer experience, with alerts sent when thresholds are breached.

Yes, Hamming sends real-time alerts via Slack, email, or webhooks when monitored metrics breach your defined thresholds. Configure alerts for latency spikes, compliance violations, quality degradation, or any custom metric. Alerts enable your team to respond immediately to production issues before they escalate.

Yes, Hamming detects and monitors authentication failures in real-time. Track failed PIN attempts, identity verification issues, and security-related events. Monitor patterns that might indicate fraud attempts or system issues. Authentication monitoring is critical for compliance with PCI DSS and security best practices.

Yes, you can fully customize what Hamming monitors. Choose which metrics to track (latency, quality scores, compliance), create custom dashboards for different stakeholders, and configure alert thresholds for your specific needs. You can also set guardrails and compliance rules tailored to your industry—define custom detection patterns, required disclosures, and prohibited responses.

Yes. Hamming maintains SOC 2 Type II compliance, demonstrating our commitment to security, availability, and confidentiality. Our compliance was achieved in December 2025 through rigorous third-party auditing. We also offer HIPAA BAAs for healthcare customers and provide data residency options in the US, EU, and UK.

Yes. Hamming can simulate red-team compliance attacks to safely uncover vulnerabilities before bad actors do. Test how agents respond to social engineering attempts, prompt injection attacks, and attempts to extract sensitive information. Red-teaming helps identify weaknesses in your compliance controls and improve agent security.

Yes. Hamming can simulate background noise conditions to test how your agents perform in realistic production environments. Test with office noise, street sounds, crowd chatter, and other common interference patterns. This helps identify transcription accuracy issues and ensure your agent handles noisy conditions gracefully.

Yes, Hamming detects when agents deviate from expected behavior and measures prompt adherence. Track how well agents stick to your defined script, identify hallucinations or unauthorized responses, and monitor for compliance-critical deviations. Off-script detection is essential for maintaining brand voice and regulatory compliance.

Yes, you can monitor multi-language agents with Hamming across 10+ supported languages. Track performance when agents switch languages mid-conversation, monitor quality metrics per language, and ensure consistent experience regardless of language. Compare metrics across regions to identify localization-specific issues.

Yes. Hamming's monitoring metrics are benchmarked against industry standards so you can compare your agent's performance to typical expectations. Additionally, set your own custom benchmarks from the dashboard to track progress toward your specific goals. Benchmarking helps contextualize performance and prioritize improvements.

Monitoring data streams in real-time, with results available immediately after each call ends. Latency metrics, transcripts, and quality scores are processed within seconds. Real-time availability enables rapid response to issues, same-day trend analysis, and immediate visibility into production performance without waiting for batch processing.

Compliance & Security

Voice AI risks include exposing personally identifiable information (PII), PCI DSS violations when handling payment data, HIPAA violations when processing protected health information, and unauthorized data retention. Agents can also inadvertently collect sensitive data, fail to disclose their AI identity, or mishandle requirements specific to regulated industries.

Hamming supports compliance through pre-production testing and production monitoring. Simulate compliance edge cases before launch to identify risks. Monitor live calls for violations in real-time. Detect PII exposure, verify authentication flows, and ensure agents follow required scripts. Comprehensive audit trails support regulatory reporting requirements.

Yes, Hamming tests agents against PCI DSS compliance use cases to identify risks before they reach production. Verify that payment card information is handled securely, sensitive data isn't logged inappropriately, and agents follow required security protocols. Continuous monitoring detects compliance violations during live calls for immediate remediation.

Yes. Hamming tests agents against HIPAA-related use cases to verify protected health information (PHI) is handled securely. Test authentication flows, data handling procedures, and disclosure requirements. Monitor production calls for potential violations. We offer Business Associate Agreements (BAA) for healthcare customers requiring HIPAA compliance.

Yes. Hamming monitors and detects when agents risk exposing personally identifiable information (PII). Test scenarios where agents might inadvertently repeat sensitive data, store information inappropriately, or disclose customer details. Real-time monitoring catches PII handling issues in production before they become compliance violations or security incidents.

Yes, you can set specific guardrails and compliance rules tailored to your industry and regulatory requirements. Define custom detection patterns, required disclosures, prohibited responses, and authentication requirements. Configure alerts when compliance rules are violated. Rules can be industry-specific for healthcare, finance, insurance, and other regulated sectors.

Latency & Performance

Latency in voice agents is the time delay between when a user finishes speaking and when the agent begins responding. It includes speech recognition processing time, LLM inference time, text-to-speech generation, and network transmission. Low latency is critical for natural-feeling conversations that don't frustrate users with awkward silence gaps.

Latency directly affects customer experience and conversation quality. High latency creates awkward silence gaps that make customers feel the agent has abandoned the call, leading to frustration and call abandonment. Industry benchmarks suggest responses should begin within 1.5 seconds. Delays beyond that can significantly reduce conversion rates and customer satisfaction.

Time to First Word (TTFW) is the duration between when a user finishes speaking and when the agent begins audibly responding. It measures the complete processing pipeline including voice activity detection, speech recognition, LLM processing, and text-to-speech synthesis. TTFW is the primary latency metric for evaluating voice agent responsiveness.
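
To see how the pipeline stages add up, here is an illustrative TTFW budget. The numbers are order-of-magnitude assumptions for the sake of example, not measurements of any particular stack:

```python
# Illustrative latency budget; values are assumptions, not measurements.
budget_ms = {
    "end-of-turn detection (VAD)": 200,
    "ASR finalization":            150,
    "LLM time-to-first-token":     500,
    "TTS time-to-first-audio":     200,
    "network round trips":         150,
}
ttfw = sum(budget_ms.values())
print(f"TTFW = {ttfw} ms")  # 1200 ms, inside the ~1.5 s benchmark
```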

Industry benchmarks suggest an acceptable Time to First Word (TTFW) is around 1.5 seconds or less. Users begin perceiving delays above 1.5 seconds, and latency over 2 seconds significantly degrades experience. Best-in-class voice agents target sub-1-second TTFW. Hamming helps you measure and optimize toward these benchmarks.

Yes. Hamming measures latency percentiles including p50 (median), p90, and p99. Percentile metrics reveal the distribution of response times, helping you understand typical performance versus tail latency affecting your worst experiences. Tracking percentiles is essential for ensuring consistent quality across all customer interactions, not just average cases.
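
If you want to sanity-check percentile figures against your own raw data, they are straightforward to compute from per-call latencies; a p99 far above the p50 signals a long tail of slow calls:

```python
import numpy as np

# Sample per-call TTFW values in milliseconds (illustrative data)
latencies_ms = [820, 910, 1050, 980, 890, 2400, 1010, 940, 3100, 870]
p50, p90, p99 = np.percentile(latencies_ms, [50, 90, 99])
# A p50 near 1 s with a p99 around 3 s means most calls feel fine,
# but the slowest tail of callers waits roughly three times longer.
```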

Yes, Hamming detects latency spikes in real-time during production monitoring. Configure alerts to receive immediate notifications when latency exceeds your defined thresholds. Real-time detection enables rapid response to infrastructure issues, model degradation, or traffic spikes before they significantly impact customer experience at scale.

Yes. High latency directly impacts revenue through call drop-off, reduced conversion rates, lower customer satisfaction scores, and increased customer churn. Research shows each additional second of latency can decrease conversions by 7-10%. Hamming's latency analytics help quantify the business impact of performance improvements.

Yes, you can export latency reports via PDF, which are emailed directly to you. Reports include p50, p90, and p99 latency breakdowns, trend analysis, and comparison to benchmarks. Use exports for executive reviews, performance optimization discussions, and tracking improvement over time against your baseline metrics.

Yes, Hamming monitors live latency metrics in production calls continuously. Track TTFW, total response time, and component-level latency breakdowns in real-time. Identify which parts of your pipeline (ASR, LLM, TTS) contribute most to delays. Live monitoring enables proactive optimization before latency issues impact customer experience.

Error Handling & Recovery

Error handling is how a voice agent detects, responds to, and recovers from failures during conversations. It includes gracefully managing ASR misrecognitions, API timeouts, context loss, and unexpected user inputs. Effective error handling maintains conversation flow and user trust even when issues occur, preventing frustrating dead ends.

Poor error handling destroys customer experience, causing frustration, confusion, and call abandonment. When agents fail silently, repeat errors, or crash conversations, customers lose trust and may never return. Robust error handling ensures agents recover gracefully, maintain context, and provide helpful fallbacks that keep conversations productive despite issues.

Voice agents commonly face ASR misrecognitions from background noise or unclear speech, API timeouts from slow external services, context loss when conversation state isn't properly maintained, LLM hallucinations, network failures, and unexpected user inputs such as profanity, gibberish, or out-of-scope requests. Each of these requires graceful handling to keep the conversation on track.

When ASR confidence is low, agents should politely ask for clarification with specific, contextual rephrasing. For example: 'I didn't quite catch that. Could you repeat your account number?' If repeated attempts fail, offer alternative input methods or escalate to a human agent rather than proceeding with potentially incorrect information.
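
One common way to implement this pattern, assuming your stack exposes a per-utterance confidence score, is a confidence gate with an attempt counter. The threshold and limits below are illustrative:

```python
class NeedsClarification(Exception):
    """Raised to trigger a reprompt with the given message."""

class EscalateToHuman(Exception):
    """Raised to trigger a handoff with the given message."""

def handle_transcript(text: str, confidence: float, attempts: int) -> str:
    # Threshold is illustrative; tune it against your ASR's score distribution.
    if confidence >= 0.85:
        return text  # confident enough: proceed with normal intent handling
    if attempts < 2:
        raise NeedsClarification(
            "I didn't quite catch that. Could you repeat your account number?")
    raise EscalateToHuman("Let me connect you with a teammate who can help.")
```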

Agents should acknowledge the delay naturally with responses like 'Let me look that up for you' followed by fallback messaging such as 'I'm having trouble accessing that right now. Can I try a different approach?' Implement retry logic with exponential backoff, and escalate to human agents when retries are exhausted.
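
Sketched generically, retry with exponential backoff looks like this (delays and retry counts are illustrative):

```python
import time

def call_with_backoff(fn, max_retries: int = 3, base_delay: float = 0.5):
    """Retry a flaky external call, doubling the wait between attempts.
    Re-raises after the final attempt so the agent can deliver its
    fallback message or escalate to a human."""
    for attempt in range(max_retries):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_retries - 1:
                raise  # retries exhausted: trigger fallback/escalation
            time.sleep(base_delay * 2 ** attempt)  # 0.5 s, 1 s, 2 s, ...
```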

When context is lost, agents should summarize the last known conversation state and confirm with the user before proceeding. For example: 'I want to make sure I have this right—we were discussing your account balance. Is that correct?' If confirmation fails, politely restart the relevant flow rather than proceeding with assumptions.
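
A minimal sketch of this confirm-or-restart pattern, with invented state fields for illustration:

```python
def recovery_prompt(state: dict) -> str:
    """Summarize the last confirmed conversation state and ask the user
    to verify, or restart the flow if nothing reliable survives.
    The state field name is invented for illustration."""
    topic = state.get("last_confirmed_topic")
    if topic:
        return (f"I want to make sure I have this right: we were discussing "
                f"{topic}. Is that correct?")
    return "Sorry about that. Let's start over. What can I help you with today?"
```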

Yes, you can thoroughly test fallback prompts with Hamming. Simulate error conditions like API timeouts, low ASR confidence, and context loss to verify fallback responses activate correctly. Test that fallbacks feel natural, provide helpful alternatives, and maintain user trust rather than creating dead ends or frustrating loops.

Escalation paths are predefined rules that hand off conversations to human agents when the voice agent cannot resolve an issue. They include triggers like repeated failures, customer frustration detection, compliance-sensitive situations, or explicit transfer requests. Well-designed escalation paths preserve context and provide seamless handoffs to human support.

Yes, Hamming simulates escalation events to test when and how handovers to human agents are triggered. Verify escalation triggers activate correctly, context is preserved during handoff, and the transition feels seamless to users. Test edge cases like premature escalation, failed escalation, and escalation path routing logic.

Yes. Hamming can test how your agent responds to profanity, abusive language, and inappropriate content. Verify agents respond appropriately—acknowledging frustration while maintaining professionalism—and test whether escalation or termination triggers activate when needed. Profanity handling is essential for protecting agent operators and maintaining brand safety.

Integrations & APIs

Hamming is platform-agnostic, integrating with any voice agent infrastructure. We support telephony via SIP for inbound and outbound calls, WebRTC via LiveKit, Daily, and Pipecat for web-based agents, and direct integration with platforms like VAPI, Retell, and ElevenLabs. Custom integrations can be built using our API.

Yes. Hamming supports custom-built voice agents regardless of your underlying technology stack. Connect via SIP for telephony-based agents or WebRTC for web-based implementations. Our platform-agnostic approach means you can test and monitor agents built with any LLM, ASR, or TTS provider combination you've chosen.

Yes, Hamming pushes real-time alerts directly to Slack channels. Configure which metrics trigger notifications and route alerts to appropriate channels—engineering for latency issues, compliance team for violations, product for quality regressions. Slack integration keeps your team informed without requiring constant dashboard monitoring.

Yes, Hamming provides a comprehensive REST API for programmatic access to all platform capabilities. Schedule test runs, fetch results, configure agents, manage test cases, and retrieve monitoring data. The API enables integration with CI/CD pipelines, custom dashboards, and automated workflows for seamless voice agent DevOps.
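
As a purely hypothetical sketch of what programmatic access can look like: the endpoint paths and field names below are invented for illustration and are not Hamming's documented API; consult the API reference in your workspace for the real ones.

```python
import requests

API_BASE = "https://app.hamming.ai/api"  # illustrative base URL
HEADERS = {"Authorization": "Bearer <YOUR_API_KEY>"}

# Hypothetical endpoints and fields: trigger a test run, then fetch results.
run = requests.post(f"{API_BASE}/test-runs", headers=HEADERS,
                    json={"suite": "nightly-regression"}).json()
results = requests.get(f"{API_BASE}/test-runs/{run['id']}",
                       headers=HEADERS).json()
print(results["status"], results.get("pass_rate"))
```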

Yes. You can trigger individual tests or full test suites programmatically via the API. Integrate testing into your CI/CD pipeline to automatically validate agent changes before deployment. Schedule recurring tests, trigger regression suites after updates, and build custom testing workflows tailored to your development process.

Yes, Hamming provides webhooks for real-time event notifications. Receive alerts when tests complete, metrics breach thresholds, or compliance issues are detected. Webhooks enable integration with your existing monitoring infrastructure, ticketing systems, and custom automation workflows for seamless voice agent operations management.
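
A generic webhook receiver for such events might look like the sketch below; the event types and payload fields are assumptions, not Hamming's documented schema:

```python
from flask import Flask, request

app = Flask(__name__)

def page_on_call(event): ...           # placeholder: forward to PagerDuty, etc.
def post_summary_to_slack(event): ...  # placeholder: your Slack integration

@app.post("/hamming-webhook")
def handle_event():
    # Event types and fields here are hypothetical; check the webhook
    # documentation in your workspace for the actual schema.
    event = request.get_json()
    if event.get("type") == "threshold_breached":
        page_on_call(event)
    elif event.get("type") == "test_run_completed":
        post_summary_to_slack(event)
    return "", 204
```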

Dashboards & Analytics

Hamming provides a comprehensive voice agent analytics dashboard designed to measure performance and reliability across all your voice agents. View latency metrics, quality scores, compliance status, call volume trends, and detailed per-call analysis—all from a single, customizable interface.

Yes, fully customize your dashboard to focus on metrics that matter most to your team. Create custom views for different stakeholders, add specific metrics and KPIs, apply filters by agent, time period, or outcome. Save dashboard configurations and share them with team members for consistent visibility.

Yes, dashboards can be shared with role-based access controls. Share specific views with engineering, product, and executive stakeholders—each seeing metrics relevant to their needs. Control who can view, edit, or export dashboard data. Shared dashboards ensure team alignment on voice agent performance.

Yes. Click from summary metrics to detailed breakdowns and individual call analysis. Drill from aggregate latency trends to specific slow calls, from quality score averages to calls with issues. Drill-down capability enables root cause investigation when you spot anomalies in your high-level dashboards.

Yes, compare performance across all your voice agents side-by-side. Identify which agents perform best, which need optimization, and how changes impact relative performance. Comparison views help A/B test prompt variations, evaluate different LLM providers, and benchmark new agents against established baselines.

Yes, export dashboard data via PDF reports emailed to you. Reports include visualizations, metrics, and detailed analysis suitable for stakeholder presentations. API access enables programmatic export for integration with business intelligence tools, custom reporting systems, and automated executive summaries.

Yes, dashboards display p50, p90, and p99 latency breakdowns with clear visualizations. Track percentile trends over time, compare across agents, and identify tail latency issues affecting your worst-performing calls. Percentile visualization helps you understand the full distribution, not just averages.

Yes, view comprehensive call volume analytics including total calls, calls per agent, calls by outcome, and volume trends over time. Identify peak usage periods, track growth, and correlate volume with performance metrics. Volume analytics help with capacity planning and resource allocation decisions.

Pricing & Plans

Hamming offers tailored pricing based on your team's needs and usage patterns. Plans scale from startup teams running hundreds of tests to enterprises with thousands of production calls. Pricing is primarily based on test volume rather than per-seat. Detailed pricing and plan options are shared during your demo call.

We don't offer a traditional free trial with self-serve access. Instead, we provide personalized demos and onboarding where you can experience Hamming with your actual agents and use cases. This approach ensures you see value relevant to your specific needs rather than exploring features you won't use.

Plans are primarily based on test volume and usage rather than per-seat pricing. Add your entire team without additional user fees. Volume-based pricing scales with your actual usage, making it predictable and cost-effective whether you have a small team running many tests or a large team with lighter usage.

Yes, we offer enterprise pricing with custom terms, volume discounts, dedicated support, and SLAs. Enterprise plans include priority support with response time guarantees, custom integrations, dedicated success management, and flexible billing arrangements. Contact us to discuss enterprise requirements and pricing.

Troubleshooting & Support

All customers receive email and live chat support. During onboarding, you get a dedicated Slack channel for direct access to our engineering team. Enterprise customers receive SLA-backed support with guaranteed response times ranging from 10 minutes to 4 hours depending on severity. We're committed to rapid resolution of issues.

Hamming provides detailed evidence explaining why each test fails—whether from prompt issues, knowledge base gaps, latency problems, or other causes. Review the call recording, transcript, and metrics breakdown. Our team has helped debug thousands of agents and we're happy to assist with complex failures via Slack.

Use the 'Forgot Password' option on the login page at app.hamming.ai. Enter your email address to receive a secure password reset link. Links expire after 24 hours for security. If you don't receive the email, check spam folders or contact support for assistance with account access issues.

Yes. We actively incorporate customer feedback into our product roadmap. Submit feature requests through your Slack channel or dashboard. Our team evaluates requests based on customer needs and roadmap alignment. We release product updates weekly, so many requested features ship quickly. Major platform improvements are informed by customer input.

Response times depend on your support tier and issue severity. Standard support responds within 2 hours during business hours. Enterprise SLAs guarantee responses as fast as 10 minutes for critical production issues. We prioritize production-impacting issues to minimize customer disruption regardless of support tier.

Yes. We provide onboarding support for new team members joining your workspace. Training covers platform features, best practices for your use case, dashboard navigation, and API usage. Enterprise customers receive dedicated training sessions. All customers have access to documentation and our team via Slack for questions.

Yes, we provide hands-on integration support. Our team assists with SIP configuration, WebRTC setup (LiveKit, Daily, Pipecat), platform integrations (VAPI, Retell, ElevenLabs), and API integration into your CI/CD pipeline. Enterprise customers receive dedicated integration engineering support for complex custom integrations.

Contact us to discuss custom integration requirements. Our platform-agnostic architecture supports most voice agent infrastructures. We regularly add new integrations based on customer needs. For unique requirements, our engineering team can scope custom integration development. Many customer-requested integrations ship within weeks.