Debug And Analyze Your Vapi.ai Voice AI Agent Calls With Caller Journeys and Metrics
Most Voice AI agent developers improve their agents by manually listening to call recordings. It’s tedious and slow, and it’s impossible to listen to every call -- it’s like drinking from a firehose!
Canonical AI is a Voice AI analytics platform. It’s Mixpanel for Voice AI Agents. We map the journeys that your callers have with your Voice AI agent and show you where and why your calls are not succeeding. We also provide audio metrics (e.g., latency) and conversational metrics (e.g., outcomes and anything else you define).
In this blog post, we show you how to send your Vapi Voice AI agent calls to Canonical AI’s Voice AI analytics platform.
Voice AI Call Analytics
But wait, what is a call map? A call map gives you visibility into how your Voice AI agent is performing. To build one, we determine the stages of the conversation (you can also specify the call stages yourself), then assign each turn in the conversation to one of those stages.
Here’s an example of a call map. The Voice AI agent answers calls for car dealerships and should either schedule an appointment or arrange a callback from a technician. There are six calls in total, and five led to a successful outcome (i.e., an appointment scheduled or a callback arranged). In four of the calls, the caller wanted to schedule an appointment. The call count drops at the appointment-scheduling stage because one of those callers hung up. This is how a call map helps you understand your Voice AI agent and figure out what needs to be improved.
Voice AI Analytics API Key
First, you will need to generate a Canonical AI Voice Agent Analytics API key. Go to our website, click Login in the top right, and authenticate as a first-time user. Once you’re logged in, you’ll see a screen with your API key. Copy it and put it somewhere safe. You can always access your API key again by clicking on your profile in the top right corner of our dashboard, then clicking on ‘Setup’.
Voice AI Analytics and Monitoring for Vapi Calls
We love Vapi! It’s the best end-to-end Voice AI orchestration platform out there! The Vapi team is phenomenal. And the developers in the Vapi community are building the most interesting and fastest-growing Voice AI agents in the Voice AI space. For our first integration with a third-party service, it was a no-brainer to choose Vapi!
Vapi Voice AI Analytics Integration Directly With Server URL
The simplest way to send call recordings and transcripts to Canonical AI is directly through the Vapi Server URL parameter. With this method, there’s no need to set up your own server or use low-code platforms like Make.
In Vapi, select the Voice AI agent whose calls you want to send to us. Then click on 'Advanced' and scroll down to Messaging. Paste https://voiceapp.canonical.chat/api/v1/webhooks/vapi into the Server URL field. Then paste your Canonical AI API key into the Server URL Secret field. Lastly, make sure end-of-call-report is checked in the Server Messages field.
That’s it, you’re all set! Our Vapi-specific webhook parses the Vapi end-of-call-report so you don’t have to. Your call recordings and transcripts will be sent to our platform. You can log into our dashboard to see the call maps and metrics.
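If you manage your assistants with the Vapi API rather than the dashboard, you can apply the same configuration programmatically. Here’s a minimal Python sketch of that idea. It assumes Vapi’s PATCH /assistant/{id} endpoint and the serverUrl, serverUrlSecret, and serverMessages fields, so double-check the field names against the current Vapi API reference before relying on it.

```python
# Minimal sketch: point a Vapi assistant at the Canonical AI webhook via the Vapi API.
# Assumes PATCH https://api.vapi.ai/assistant/{id} and the serverUrl / serverUrlSecret /
# serverMessages fields -- confirm these against the current Vapi API reference.
import os

import requests

VAPI_API_KEY = os.environ["VAPI_API_KEY"]            # your Vapi private key
CANONICAL_API_KEY = os.environ["CANONICAL_API_KEY"]  # your Canonical AI API key
ASSISTANT_ID = "your-assistant-id"                   # the assistant whose calls you want analyzed

response = requests.patch(
    f"https://api.vapi.ai/assistant/{ASSISTANT_ID}",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json={
        "serverUrl": "https://voiceapp.canonical.chat/api/v1/webhooks/vapi",
        "serverUrlSecret": CANONICAL_API_KEY,
        "serverMessages": ["end-of-call-report"],
    },
    timeout=30,
)
response.raise_for_status()
print("Updated assistant:", response.json().get("id"))
```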
The downside of this method is that you cannot do anything else with the Vapi Server URL parameter. For this reason, we’ve also made it easy to integrate Canonical AI using Make or code.
Vapi Voice AI Analytics Integration With Make
Here we’ll show you how to get Vapi to send the end-of-call-report to Make. In Make, the stereo call recording URL and the transcripts from within the end-of-call-report are passed along to the Canonical AI platform.
First, add a custom webhook module to the Make scenario and copy the webhook URL from the Make webhook module. In Vapi, navigate to your assistant, click on 'Advanced', scroll down to Messaging, then paste the webhook URL into the Server URL field. Make sure end-of-call-report is included in the Server Messages.
Back in Make, add a router module to your workflow. You’ll need this if you want to trigger other actions at the end of the call. After the router, we need to transform the JSON into a string so the Canonical AI servers can parse it. You'll want to pass in the entire end-of-call-report by pasting in {{1.message}}.
After the JSON transformer, add an HTTP request module. Choose the ‘Make a request’ option among the different types of HTTP requests. In the URL field, paste https://voiceapp.canonical.chat/api/v1/call. Set the method to POST. Add a header, using X-Canonical-Api-Key as the name and your Canonical AI API key as the value. For the body type, select Raw. For the content type, select JSON (application/json). In the request content, paste in {{15.json}} or click on the lavender JSON string object in the helper window.
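For reference, the HTTP request module is doing the equivalent of the small Python sketch below: posting the stringified end-of-call-report message to our call endpoint with your API key in the X-Canonical-Api-Key header. The end_of_call_report placeholder stands in for the {{1.message}} object from the Vapi webhook.

```python
# Sketch of what the Make HTTP request module sends to Canonical AI:
# the Vapi end-of-call-report message, serialized to a JSON string, posted with your API key.
import json
import os

import requests

CANONICAL_API_KEY = os.environ["CANONICAL_API_KEY"]

# Placeholder for the {{1.message}} object from the Vapi webhook payload.
end_of_call_report = {
    "type": "end-of-call-report",
    # ... recording URL, transcript, and the other fields Vapi includes ...
}

response = requests.post(
    "https://voiceapp.canonical.chat/api/v1/call",
    headers={
        "X-Canonical-Api-Key": CANONICAL_API_KEY,
        "Content-Type": "application/json",
    },
    data=json.dumps(end_of_call_report),
    timeout=30,
)
response.raise_for_status()
```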
Be sure to turn on the scenario, and confirm it’s also switched on in your list of scenarios.
You can download the Make blueprint for this scenario here.
Voice AI Analytics Programmatic Integration
If you would prefer to integrate with code, then our developer documentation here can get you started. You may also find the sample scripts for uploading calls in our GitHub repo helpful.
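As a rough illustration of the flow, here is a minimal Flask sketch that receives Vapi’s end-of-call-report on your own server and forwards the message to our call endpoint. It assumes the Vapi payload carries the report under a message key; treat it as a starting point, and see the developer docs and GitHub samples above for the supported upload formats.

```python
# Minimal sketch: receive Vapi's end-of-call-report and forward it to Canonical AI.
# Assumes the Vapi webhook body carries the report under the "message" key;
# see the developer docs and GitHub samples for the supported upload formats.
import os

import requests
from flask import Flask, request

app = Flask(__name__)
CANONICAL_API_KEY = os.environ["CANONICAL_API_KEY"]


@app.route("/vapi/webhook", methods=["POST"])
def vapi_webhook():
    payload = request.get_json(force=True)
    message = payload.get("message", {})

    # Only forward end-of-call reports; ignore other Vapi server messages.
    if message.get("type") == "end-of-call-report":
        resp = requests.post(
            "https://voiceapp.canonical.chat/api/v1/call",
            headers={"X-Canonical-Api-Key": CANONICAL_API_KEY},
            json=message,
            timeout=30,
        )
        resp.raise_for_status()

    return {"ok": True}, 200


if __name__ == "__main__":
    app.run(port=8000)
```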
Voice AI Agent Analytics YouTube Tutorial
If you prefer to learn by watching, here is a YouTube tutorial.
Next Steps
Once your calls are flowing into our pipeline, you’ll be able to see the call maps and metrics for your Voice AI agents.
Adrian and I love the Voice AI space! We love meeting Voice AI developers and learning about the neat things they’re building! If you have any questions about integrating Canonical AI into your workflow, or just want to meet others working in Voice AI, please reach out to us!
Tom and Adrian
October 2024