Resonance

Hear What's Really Being Said

About Resonance

Resonance is an adaptive AI conversational agent that detects emotional cues in real time and self-corrects its behavior to maintain productive dialogue.

How It Works

1. Listen

You speak into the microphone (or capture a live call from Teams, Meet, Zoom, or any browser-based platform). Hume EVI transcribes the audio and returns prosody scores across 48 emotions.

2. Analyze

The browser extracts confusion, doubt, and frustration metrics from each turn and sends them to the server-side Policy Engine.
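The extraction step can be sketched as a small pure function. The payload shape below (a nested models.prosody.scores object keyed by emotion name) and the specific emotion keys are assumptions for illustration; the actual EVI message format and the mapping from the 48 dimensions to the three metrics may differ.

```javascript
// Sketch of per-turn metric extraction. The message shape
// (models.prosody.scores keyed by capitalized emotion names) is an
// assumption about the EVI payload, not a confirmed schema.
function extractMetrics(message) {
  const scores = message?.models?.prosody?.scores ?? {};
  return {
    confusion: scores.Confusion ?? 0,
    doubt: scores.Doubt ?? 0,
    frustration: scores.Frustration ?? 0, // hypothetical key
  };
}
```

The defaults of 0 keep the Policy Engine well-behaved when a turn arrives without prosody data.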

3. Evaluate

The Policy Engine compares current scores against calibrated thresholds and tracks momentum across a rolling window of the last 10 turns.
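A minimal sketch of the rolling-window tracking described above. The 10-turn window comes from the text; the momentum definition used here (newer-half average minus older-half average) is an illustrative assumption, not the shipped algorithm, and the real engine runs server-side in .NET.

```javascript
// Rolling window of the last N turn scores, with a simple momentum
// estimate: difference between the averages of the newer and older
// halves of the window (illustrative definition).
class MomentumTracker {
  constructor(windowSize = 10) {
    this.windowSize = windowSize;
    this.turns = [];
  }
  add(score) {
    this.turns.push(score);
    if (this.turns.length > this.windowSize) this.turns.shift();
  }
  momentum() {
    const n = this.turns.length;
    if (n < 2) return 0; // not enough history to estimate a trend
    const half = Math.floor(n / 2);
    const avg = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
    return avg(this.turns.slice(half)) - avg(this.turns.slice(0, half));
  }
}
```

A positive momentum means distress is rising across recent turns even if no single turn has crossed a threshold yet.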

4. Adapt

If distress is detected, a new conversational strategy and system prompt are injected into the AI in real time. The next response reflects the adaptation.
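The injection step might look like the sketch below: a settings update carrying the new strategy's system prompt, sent over the existing WebSocket. The message shape (a "session_settings" type with a system_prompt field) is an assumption about the EVI protocol, and PROMPTS is a hypothetical lookup table, not the production prompt set.

```javascript
// Hypothetical strategy-to-prompt table (illustrative wording only).
const PROMPTS = {
  simplification: 'Use short sentences and plain, step-by-step language.',
  'de-escalation': 'Acknowledge the caller, keep a calm pace, focus on solutions.',
};

// Build a mid-session settings update for the assumed EVI message shape.
function buildAdaptationMessage(strategy) {
  return JSON.stringify({
    type: 'session_settings',
    system_prompt: PROMPTS[strategy],
  });
}

// Usage over an open WebSocket:
// socket.send(buildAdaptationMessage('de-escalation'));
```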

Strategy Catalog

Baseline

Standard empathetic assistant. No distress detected.

Simplification

Shorter sentences, step-by-step guidance, plain language. Triggered by confusion.

Authority

Confident tone, evidence-backed, decisive responses. Triggered by doubt.

De-escalation

Acknowledgment, calm pacing, solution-focused. Triggered by frustration.

Engagement

Warm check-ins and open questions. Triggered by a blended composite of distress signals.

Proactive

Pre-emptive clarification before distress peaks. Triggered by rising trend.
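The catalog above can be read as a decision rule from metrics to strategy. The sketch below shows one way to express it; the threshold values, precedence order, and composite formula are illustrative assumptions, since the real Policy Engine uses calibrated, server-side thresholds.

```javascript
// Map per-turn metrics (0..1) plus a trend estimate to a strategy name.
// Thresholds and precedence here are assumptions for illustration.
function selectStrategy({ confusion, doubt, frustration, trend }) {
  const THRESHOLD = 0.6;
  if (frustration > THRESHOLD) return 'de-escalation'; // frustration
  if (confusion > THRESHOLD) return 'simplification';  // confusion
  if (doubt > THRESHOLD) return 'authority';           // doubt
  if (trend > 0.15) return 'proactive';                // rising trend
  const composite = (confusion + doubt + frustration) / 3;
  if (composite > 0.4) return 'engagement';            // blended distress
  return 'baseline';                                   // no distress
}
```

Checking the hard per-emotion thresholds before the trend and composite rules keeps the most specific strategy in front, so a clear frustration spike is never masked by a mild upward trend.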

Tech Stack

Backend

ASP.NET Core MVC on .NET 10. Server-side Policy Engine with rolling momentum tracking.

AI Voice

Hume AI EVI v3 via WebSocket. Real-time prosody analysis across 48 emotion dimensions.

Frontend

Vanilla JavaScript (ES modules). Web Audio API for microphone and tab capture. CSS Grid card-based layout.