How To Get Started With Voice AI Evaluation and Analytics Using Canonical AI
The biggest problem in Voice AI right now is that Voice AI buyers think the pool is too cold. They’re just dipping their toes in the water. They’ve signed contracts and bought credits from Voice AI builders like you, but they’re slow to use them.
It makes sense from the perspective of the Voice AI buyer. They have their brands and sales funnels on the line. They can’t afford for your Voice AI to mess up.
Again and again, we’ve seen what it takes to get the Voice AI buyer to jump in. Just show them what’s happening in the Voice AI calls. It’s that simple.
We’re building analytics and data visualizations for Voice AI developers. We make it easy for Voice AI builders to analyze and improve their product -- and to show their clients what is happening in their calls. This blog post is a guide to getting started with Canonical AI.
Voice AI Analysis Examples
Here are a few examples of our Voice AI analyses and data visualizations.
Here is an example of our Call Flows. These give you a visual summary of what your Voice AI agent did yesterday.
Here is an example of our Failed Outcome Analysis. No matter how much you test your agent, no matter how many iterations you’ve made on your prompts and function calls and third-party integrations, some human somewhere is going to break your Voice AI. The above analysis makes it easy to find those rough edges.
Here is an example of our audio metric visualization. Voice AI buyers are concerned about latency. We make it easy to show your customer important audio descriptors of your agent’s performance, like call duration, percent silence, number of interruptions, latency, and more.
You can find more about our analytics and data visualizations here.
How To Upload Calls To Canonical AI With Our GUI
First, you will need to generate a Canonical AI Voice Agent Analytics API key. Go to our website, click login in the top right, and authenticate as a first-time user. Once you’re logged in, you’ll see a screen with your API key. Copy it and put it somewhere safe. You can always access your API key again by clicking on your profile in the top right corner of our dashboard, then clicking on ‘Setup’.
Next, click on ‘Sign Up’, then click on ‘Upload Calls’. We need a minimum of 15 calls to determine the Voice AI agent’s call stages and outcomes. You can edit the stages and outcomes later. Determining the stages can take a few minutes. Once it’s done, you should be able to see your data on our platform.
Integrating Call Analytics Into Voice AI Pipeline
Server Code
Once you’ve uploaded a batch of calls with our GUI and you like what you see, here is how to programmatically upload calls to our platform. In short, you make an asynchronous call to our API with the recording URL and the transcript.
curl -X POST 'https://voiceapp.canonical.chat/api/v1/call' \
  -H 'Content-Type: application/json' \
  -H 'X-Canonical-Api-Key: YOUR_API_KEY_HERE' \
  -d '{
    "assistant": {
      "id": "UNIQUE_ID_FOR_THE_ASSISTANT",
      "speaksFirst": true,
      "description": "Outbound Sales Agent"
    },
    "location": "ACCESSIBLE_URL_TO_AUDIO_FILE",
    "callId": "YOUR_CALL_ID",
    "transcript": []
  }'
A few notes on the fields:
- assistant.id: you will use this to select between different AI agents on our dashboard (e.g., Voicebot v0.1).
- assistant.speaksFirst: set to true if the AI agent speaks first in the call.
- assistant.description: an optional friendly name or description for the assistant.
- location: an accessible URL to the call’s audio file.
- callId: use this to map the call back to your system.
- transcript: an array of objects containing the conversation transcript.
We have examples for calling our REST API in Python, JavaScript, and TypeScript in our docs here.
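For reference, here is a minimal Python sketch of the same request using the requests library. The endpoint and payload mirror the curl example above (the placeholder values are yours to replace); in a real pipeline you would likely send this from a background task so it doesn’t block your call handling.

import requests  # pip install requests

CANONICAL_API_KEY = "YOUR_API_KEY_HERE"

payload = {
    "assistant": {
        "id": "UNIQUE_ID_FOR_THE_ASSISTANT",   # selects the agent on our dashboard
        "speaksFirst": True,                   # True if the AI agent speaks first
        "description": "Outbound Sales Agent"  # optional friendly name
    },
    "location": "ACCESSIBLE_URL_TO_AUDIO_FILE",
    "callId": "YOUR_CALL_ID",                  # maps the call back to your system
    "transcript": []                           # conversation transcript objects
}

response = requests.post(
    "https://voiceapp.canonical.chat/api/v1/call",
    headers={"X-Canonical-Api-Key": CANONICAL_API_KEY},
    json=payload,
    timeout=30,
)
response.raise_for_status()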
Pipecat
If you’re building Voice AI with Pipecat, here is how to integrate using our native Pipecat integration.
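To give a sense of the shape of that integration, here is a rough Python sketch. It assumes Pipecat ships an AudioBufferProcessor and a CanonicalMetricsService processor; the module paths, constructor arguments, and pipeline ordering shown here are illustrative assumptions, so follow the integration guide above for the authoritative version.

# Rough sketch only -- module paths and constructor arguments are assumptions.
import os
import uuid

import aiohttp
from pipecat.pipeline.pipeline import Pipeline
from pipecat.processors.audio.audio_buffer_processor import AudioBufferProcessor
from pipecat.services.canonical import CanonicalMetricsService

def build_pipeline(transport, llm, tts, session: aiohttp.ClientSession) -> Pipeline:
    # Buffers the call audio so it can be uploaded for analysis
    audio_buffer = AudioBufferProcessor()

    # Sends the buffered audio and call metadata to Canonical AI
    canonical = CanonicalMetricsService(
        aiohttp_session=session,
        audio_buffer_processor=audio_buffer,
        api_key=os.environ["CANONICAL_API_KEY"],
        call_id=str(uuid.uuid4()),            # your own call ID
        assistant="my-pipecat-assistant",     # assistant ID shown on our dashboard
        assistant_speaks_first=True,
    )

    # Place the buffer and metrics service after the transport output so they
    # observe the full conversation audio.
    return Pipeline([
        transport.input(),
        llm,
        tts,
        transport.output(),
        audio_buffer,
        canonical,
    ])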
Low Code
If you’re building with a low-code or no-code platform, like Make or GoHighLevel, the integration steps are different for each Voice AI orchestration platform.
- Synthflow AI: Here are the instructions for integrating with Make.com and Synthflow AI.
- Vapi: Here are the instructions for integrating with Make.com and Vapi.
- Retell AI: Here are the instructions for integrating with Make.com and Retell AI.
Embed The Data Visualizations In Your Customer-Facing Dashboard
So you’re uploading each Voice AI call to our platform and using our analytics to understand and improve your agent. The next step is to share the Voice AI call analytics and data visualizations with your Voice AI customers.
Embeddable React Components
By embedding our analytics and data visualizations into your customer-facing dashboards, your customers can see the Canonical AI visualizations in your branded dashboard.
It’s easy to set up. You just put your Canonical AI API key and the assistant id in our React component.
import { CallFlowChart, CanonicalProviders } from "@canonicalai/voice";

// Read the API key from your environment, e.g., a .env file
// (the exact mechanism depends on your bundler); don't hard-code it.
const apiKey = process.env.CANONICAL_API_KEY;

function App() {
  return (
    <CanonicalProviders apiKey={apiKey}>
      <CallFlowChart
        assistantId="YOUR_ASSISTANT_ID"
        width={1000}
        height={400}
      />
    </CanonicalProviders>
  );
}

export default App;
You can learn more about embedding components here.
Creating A Login For Your Customers
Many of our Voice AI developer customers prefer to simply create a login for their customers so they can see their agents. Reach out to us if you’d like to take this approach. We will create accounts for each of your customers and map each of their Voice AI assistants to their Canonical AI accounts. You will be able to see all of your clients’ Voice AI assistants on our dashboard, but each customer will only see their own Voice AI assistant.
Next Steps
Adrian and I love the Voice AI space! We love meeting Voice AI developers and learning about the neat things they’re building! If you have any questions about how to get started with Canonical AI, or just want to meet others working in Voice AI, please reach out to us!
Tom and Adrian
January 2025