Integrations Archives | Symbl.ai — LLM for Conversation Data

Extract Insights using Symbl.ai’s Generative AI for Recall.ai Meetings https://symbl.ai/developers/blog/extract-insights-symbl-ai-generative-ai-recall-ai-meetings/ Wed, 26 Jul 2023 14:48:31 +0000


We are thrilled to announce an exciting partnership between Symbl.ai, the leader in Understanding and Generative AI for Conversations, and Recall.ai, the universal API for meeting bots. This collaboration marks a significant milestone in empowering developers and organizations to extract actionable insights from their meeting data like never before. By combining the advanced capabilities of Recall.ai with the cutting-edge technology of Symbl.ai and Nebula, our recently announced conversation LLM, we are revolutionizing how businesses understand and leverage their conversational data.

On the heels of that announcement, we will walk you through the step-by-step process of integrating Symbl.ai with Recall.ai. By following these instructions, you can leverage the combined capabilities of Recall.ai and Symbl.ai to unlock actionable insights from your meetings, enabling you to generate new content and identify actions and insights.

Integration Steps:

Step 1: Obtain the Symbl.ai API authorization creds

  • If you haven’t signed up for a free Symbl.ai account, navigate to the Symbl.ai Platform Sign Up page and create your free account. No credit card or payment information is needed!
  • Sign in to your Symbl.ai account and retrieve your Symbl.ai API credentials (App ID and App Secret) from the platform home page.
  • If you aren’t familiar with the Symbl Platform capabilities, be sure to take a look at the API Playground to take a number of our conversation insight capabilities for a spin, such as sentiment analysis, topic extraction, trackers, and more.
  • If you plan on following along to invoke the API calls via the cURL examples in a terminal window, set the environment variables for each of these as follows:
export SYMBLAI_APP_ID="<YOUR_SYMBLAI_APP_ID>"
export SYMBLAI_APP_SECRET="<YOUR_SYMBLAI_APP_SECRET>"

Step 2: Set up your Recall.ai account

  • Log in to your Recall.ai account and navigate to the API Keys page.
  • Click Generate API Key, provide a name for your key, and click Create.
  • Familiarize yourself with Recall.ai’s API endpoints and documentation, which will be used to process audio or video files and retrieve transcriptions.
  • Again, if you plan on exercising the cURL commands, set the environment variable for Recall.ai’s API Key as follows:
export RECALLAI_API_KEY="<YOUR_RECALLAI_API_KEY>"
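Note that the two platforms use different Authorization schemes: Recall.ai expects Token, while Symbl.ai (used from Step 4 onward) expects Bearer. If you move beyond cURL, a small Python helper, shown here only as a sketch, keeps the two straight:

```python
import os

def recall_headers() -> dict:
    """Headers for Recall.ai calls, which use the "Token <key>" scheme."""
    return {
        "Authorization": f"Token {os.environ['RECALLAI_API_KEY']}",
        "Content-Type": "application/json",
    }

def symbl_headers(access_token: str) -> dict:
    """Headers for Symbl.ai calls, which use the standard "Bearer <token>" scheme."""
    return {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }

# Demo (uses a placeholder key if the environment variable is not set):
os.environ.setdefault("RECALLAI_API_KEY", "demo-key")
print(recall_headers()["Authorization"])
```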

Step 3: Start a Meeting and have a notetaking bot join

  • The Recall.ai platform supports numerous CPaaS platforms, but we will use Zoom for this example. Start a Zoom meeting and take note of the meeting URL. If you need help finding this, click the little green shield with the checkmark.
  • Instruct a Recall.ai bot to join the Zoom meeting by calling the following API:
curl -X POST https://api.recall.ai/api/v1/bot \
    -H 'Authorization: Token '$RECALLAI_API_KEY'' \
    -H 'Content-Type: application/json' \
    -d '{
          "meeting_url": "https://symbl-ai.zoom.us/j/86101209066?pwd=Z2V1QzJLbGRUZEJrVlk5Zy9PbE1udz09",
          "bot_name": "Bot",
          "transcription_options": {
            "provider": "symbl"
          }
        }'

When you create your Recall.ai bot, you will get back JSON describing that bot. Take note of the Bot ID (the id field on the second line of the response below).

{
    "id": "657365b8-04b7-41ba-bad6-3de0da346bb8",
    "video_url": null,
    "recording": "482d3192-6b4b-49ca-aadb-c0cf6957e187",
    "status_changes": [
        {
            "code": "ready",
            "message": null,
            "created_at": "2023-06-20T23:27:28.926697Z",
            "sub_code": null
        }
    ],
    "meeting_metadata": null,
    "meeting_participants": [],
    "meeting_url": null,
    "join_at": null,
    "calendar_meetings": []
}
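Rather than reading the Bot ID off the screen, you can extract it programmatically. A minimal Python sketch using only the standard library, run against the sample response above:

```python
import json

# Sample bot-creation response from the call above (abridged).
response_body = """
{
    "id": "657365b8-04b7-41ba-bad6-3de0da346bb8",
    "video_url": null,
    "recording": "482d3192-6b4b-49ca-aadb-c0cf6957e187",
    "status_changes": [
        {
            "code": "ready",
            "message": null,
            "created_at": "2023-06-20T23:27:28.926697Z",
            "sub_code": null
        }
    ]
}
"""

bot = json.loads(response_body)
bot_id = bot["id"]
# The last entry in status_changes reflects the bot's most recent state.
latest_status = bot["status_changes"][-1]["code"]
print(bot_id, latest_status)
```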

And then export that ID to an environment variable called RECALLAI_BOT_ID in your terminal like this:

export RECALLAI_BOT_ID="<YOUR_RECALLAI_BOT_ID>"

 

  • Next, say a few sentences. Perhaps give a brief introduction of yourself, where you live, and what your hobbies are. Then instruct the Recall.ai bot to leave by invoking the API call below, and end the meeting.
curl --request POST \
     --url 'https://api.recall.ai/api/v1/bot/'$RECALLAI_BOT_ID'/leave_call/' \
     --header 'Authorization: Token '$RECALLAI_API_KEY'' \
     --header 'accept: application/json'

Note: Instructing the bot to leave isn’t required. We include this step for completeness to exercise all the Recall.ai APIs.

Step 4: Transcribing the Meeting

There are two options for transcribing this meeting: you can use either the Recall.ai API or the Symbl.ai API.

Option 1: Obtaining the transcription using the Recall.ai API:

  • Give the platform a few moments to finish the transcription. Take the Bot ID from the previous API call and execute the following to obtain the transcription for this conversation.
curl --request GET \
     --url 'https://api.recall.ai/api/v1/bot/'$RECALLAI_BOT_ID'/transcript/' \
     --header 'Authorization: Token '$RECALLAI_API_KEY'' \
     --header 'accept: application/json'

Option 2: Obtaining the transcription using Symbl.ai:

  • To obtain the transcription from the Symbl Platform, we will need the Recall.ai Recording ID first. You can get the ID by invoking this API:
curl --request GET \
     --url 'https://api.recall.ai/api/v1/bot/'$RECALLAI_BOT_ID'' \
     --header 'Authorization: Token '$RECALLAI_API_KEY'' \
     --header 'accept: application/json'

You should get back JSON that looks similar to this:

{
    "id": "657365b8-04b7-41ba-bad6-3de0da346bb8",
    "video_url": "...some URL…",
    "recording": "482d3192-6b4b-49ca-aadb-c0cf6957e187",
    "media_retention_end": "2023-06-28T00:20:26.904067Z",
    "status_changes": [
        {
            "code": "ready",
            "message": null,
            "created_at": "2023-06-20T23:27:28.926697Z",
            "sub_code": null
        },
       …
        {
            "code": "done",
            "message": null,
            "created_at": "2023-06-21T00:20:26.904067Z",
            "sub_code": null
        }
    ],
    "meeting_metadata": {
        "title": "David vonThenen's Zoom Meeting"
    },
    …
}

Take note of the Recording ID (the recording field on line 4) and export that value to an environment variable like this:

export RECALLAI_RECORDING_ID="<YOUR_RECALLAI_RECORDING_ID>"
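If you script this step, it helps to wait until the bot reports a done status before grabbing the Recording ID. A small Python sketch of that check (a sketch only; the dict mirrors the abridged response shown above):

```python
def recording_id_when_done(bot: dict):
    """Return the bot's recording ID once it has finished, else None."""
    codes = [change["code"] for change in bot.get("status_changes", [])]
    if "done" in codes:
        return bot.get("recording")
    return None

# Abridged response, as returned by GET /api/v1/bot/<bot id>:
bot = {
    "recording": "482d3192-6b4b-49ca-aadb-c0cf6957e187",
    "status_changes": [{"code": "ready"}, {"code": "done"}],
}
print(recording_id_when_done(bot))  # 482d3192-6b4b-49ca-aadb-c0cf6957e187
```

In a real script you would poll the endpoint every few seconds until this returns a value.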

  • Then to retrieve the transcription, we will need the Symbl.ai conversation ID. You can retrieve that using the following Recall.ai API:
curl --request GET \
     --url 'https://api.recall.ai/api/v2/recordings/'$RECALLAI_RECORDING_ID'' \
     --header 'Authorization: Token '$RECALLAI_API_KEY'' \
     --header 'accept: application/json'

You should get some JSON back that looks something like this:

{
    "id": "482d3192-6b4b-49ca-aadb-c0cf6957e187",
    "outputs": [
        {
            "id": "d297b8c8-6990-4ab2-8030-0fc797a21d2d",
            "type": "active_speaker_diarized_transcription_symbl",
            "metadata": {
                "connection_id": "c5d6b9c4-3194-4126-b30c-b3f4ddcd40d4",
                "conversation_id": "6207479207952384"
            },
            "endpoints": []
        }
    ],
    "created_at": "2023-06-21T00:11:34.372026Z",
    "expires_at": null
}

 

Take note of the Conversation ID (the conversation_id field on line 9) and export that to an environment variable like this:

export SYMBLAI_CONVERSATION_ID="<YOUR_SYMBLAI_CONVERSATION_ID>"
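The same extraction can be scripted: walk the recording's outputs for the Symbl transcription entry and pull out its conversation_id. A minimal Python sketch, keyed on the active_speaker_diarized_transcription_symbl type shown in the response above:

```python
def symbl_conversation_id(recording: dict):
    """Find the Symbl.ai conversation ID in a Recall.ai recording's outputs."""
    for output in recording.get("outputs", []):
        if output.get("type") == "active_speaker_diarized_transcription_symbl":
            return output["metadata"]["conversation_id"]
    return None

# Abridged recording response from the call above:
recording = {
    "outputs": [{
        "type": "active_speaker_diarized_transcription_symbl",
        "metadata": {
            "connection_id": "c5d6b9c4-3194-4126-b30c-b3f4ddcd40d4",
            "conversation_id": "6207479207952384",
        },
    }]
}
print(symbl_conversation_id(recording))  # 6207479207952384
```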

  • Log in to the Symbl Platform by running the following command:
curl --url https://api.symbl.ai/oauth2/token:generate \
  --header 'Content-Type: application/json' \
  --data '{
    "type": "application",
    "appId": "'"$SYMBLAI_APP_ID"'",
    "appSecret": "'"$SYMBLAI_APP_SECRET"'"
}'

The resulting JSON should look like this:

{
    "accessToken": "...YOUR_ACCESS_TOKEN…",
    "expiresIn": 86400
}

Take note of the accessToken and export it to an environment variable like this:

export SYMBLAI_ACCESS_TOKEN="<YOUR_SYMBLAI_ACCESS_TOKEN>"
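If you prefer to script the login, the same token exchange looks like this in Python. The HTTP call is passed in as a parameter so the sketch stays self-contained and testable; with the requests library you would pass `lambda url, body: requests.post(url, json=body).json()`:

```python
import os

def fetch_symbl_token(post) -> str:
    """Exchange a Symbl.ai App ID and App Secret for an access token.

    post: callable(url, json_body) -> parsed JSON response dict.
    """
    body = {
        "type": "application",
        "appId": os.environ["SYMBLAI_APP_ID"],
        "appSecret": os.environ["SYMBLAI_APP_SECRET"],
    }
    resp = post("https://api.symbl.ai/oauth2/token:generate", body)
    # The token expires after resp["expiresIn"] seconds (86400 = 24 hours).
    return resp["accessToken"]

# Demo with a stubbed HTTP call and placeholder credentials:
os.environ.setdefault("SYMBLAI_APP_ID", "demo-id")
os.environ.setdefault("SYMBLAI_APP_SECRET", "demo-secret")
print(fetch_symbl_token(lambda url, body: {"accessToken": "demo-token", "expiresIn": 86400}))
```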

 

  • Take that Access Token and Conversation ID and execute the following curl command to get the Symbl Platform transcription of the same meeting.
curl --request GET --url 'https://api.symbl.ai/v1/conversations/'$SYMBLAI_CONVERSATION_ID'/messages' \
     --header 'accept: application/json' \
     --header 'authorization: Bearer '$SYMBLAI_ACCESS_TOKEN''
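The /messages response returns the transcript as a list of individual messages. For Step 6 you will want them as one block of text; assuming the response follows the usual {"messages": [{"text": ...}, ...]} shape, a small sketch to flatten it:

```python
def transcript_text(messages_response: dict) -> str:
    """Join individual transcript messages into one block of text."""
    return " ".join(m["text"] for m in messages_response.get("messages", []))

# Illustrative sample only (not real API output):
sample = {"messages": [{"text": "Hi, I'm David."}, {"text": "I live in Seattle."}]}
print(transcript_text(sample))  # Hi, I'm David. I live in Seattle.
```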

Step 5 (BONUS): Get Conversation Intelligence with Symbl.ai

Now that you are logged into the Symbl Platform and have the Conversation ID, you can get other conversation insights.

  • If you want to obtain the Trackers discussed in the conversation, make the following API call:
curl --request GET --url 'https://api.symbl.ai/v1/conversations/'$SYMBLAI_CONVERSATION_ID'/trackers' \
     --header 'accept: application/json' \
     --header 'authorization: Bearer '$SYMBLAI_ACCESS_TOKEN''

  • If you want to obtain a Summary of the conversation using Symbl.ai’s summary AI models, execute the following API call:
curl --request GET --url 'https://api.symbl.ai/v1/conversations/'$SYMBLAI_CONVERSATION_ID'/summary' \
     --header 'accept: application/json' \
     --header 'authorization: Bearer '$SYMBLAI_ACCESS_TOKEN''

Step 6 (MEGA BONUS): Using Nebula LLM to Extract More Conversation Data

If you missed the announcement for Nebula’s Private Beta, you can obtain even more conversation data and insights by leveraging Nebula to make queries against Symbl.ai’s Generative AI. If you haven’t already requested beta access, you can do so via the landing page for the Nebula Playground.

  • Once you have your Nebula API key, export it as SYMBLAI_NEBULA_TOKEN (as you did with the earlier credentials) and you can make API calls to Nebula. For example, if you used a prompt such as “Identify the main objectives or goals mentioned in this context concisely in less points. Emphasize on key intents.”, the cURL command would look like this:
curl --location 'https://api-nebula.symbl.ai/v1/model/generate' \
--header 'Content-Type: application/json' \
--header 'ApiKey: '$SYMBLAI_NEBULA_TOKEN'' \
--data '{
    "prompt": {
        "instruction": "Identify the main objectives or goals mentioned in this context concisely in less points. Emphasize on key intents.",
        "conversation": {
            "text": "DROP IN CONVERSATION TEXT HERE"
        }
    },
    "return_scores": false,
    "max_new_tokens": 2048,
    "top_k": 2,
    "penalty_alpha": 0.6
}'
  • The potential of Nebula is endless. Some examples of other such prompts could be:

What could be the customer’s pain points based on the conversation?

What sales opportunities can be identified from this conversation?

What best practices can be derived from this conversation for future customer interactions?

Those are just a few of the prompts you could use. In my opinion, Nebula really shines at summarization, at surfacing critical pieces of data from extremely large conversations, and at acting as a consultant you can use to “interrogate” a conversation for even more insights.
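If you are scripting Step 6 end to end, the Nebula request body can be assembled from any prompt plus your transcript text. A minimal sketch (parameter values copied from the cURL example above; the placeholder string stands in for your flattened transcript):

```python
import json

def nebula_request(instruction: str, conversation_text: str) -> dict:
    """Build the JSON body for POST /v1/model/generate."""
    return {
        "prompt": {
            "instruction": instruction,
            "conversation": {"text": conversation_text},
        },
        "return_scores": False,
        "max_new_tokens": 2048,
        "top_k": 2,
        "penalty_alpha": 0.6,
    }

body = nebula_request(
    "What could be the customer's pain points based on the conversation?",
    "DROP IN CONVERSATION TEXT HERE",
)
print(json.dumps(body, indent=2))
```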

Success! You have just integrated Recall.ai with Symbl.ai! Partaaaayy!

Remember to refer to the API documentation provided by both Symbl.ai and Recall.ai for additional features, parameters, and examples to further extend this integration. I highly recommend taking advantage of Symbl.ai’s conversation APIs and, to get the most out of your conversations, leveraging Nebula to elevate your conversation understanding.

Stay tuned for future updates and enhancements as both platforms continue to evolve, helping you stay ahead in the conversation analytics space. If you have any questions or need additional help, don’t hesitate to reach out to the Recall.ai and Symbl.ai support teams. Happy conversing!

Symbl.ai Nebula On-Prem Summary Deployment https://symbl.ai/developers/blog/symbl-ai-nebula-on-prem-summary-deployment/ Thu, 20 Jul 2023 03:46:43 +0000

Enhanced Conversation Intelligence, Data Control, and Optimized Performance with Proprietary Data Protection Peace of Mind

Overview

At Symbl.ai, we offer an on-prem deployment option for our Nebula large language model (LLM), enabling organizations to deploy and utilize our capabilities within their own infrastructure. This deployment includes the Summary feature, which provides four types of conversation summaries: short, long, list, and topic-based. By deploying Symbl.ai on-premise, organizations can have greater control over their data protection, ensure compliance with regulatory requirements, and customize the solution to fit their specific needs.

Key Advantages

Symbl.ai’s on-prem summary model stands out due to its unique strengths and advantages:

Optimized Performance

Our language model is based on the state-of-the-art transformer architecture, designed specifically for summarizing long, multi-party conversations across various domains. We have optimized our models to deliver LLM-level quality while utilizing 115 times fewer parameters than GPT-3. This optimization significantly enhances efficiency and processing speed, resulting in average latencies that are 5 times lower than comparable models.

Average Latency in seconds for each model on the same data and hardware configurations (for Symbl, Falcon, and MPT)

Supports long conversations

Our custom transformer-based model architecture addresses the high variance problem inherent in human conversations. This design ensures consistent performance even at longer sequence lengths, allowing you to process multi-party conversation transcripts up to 3 hours in length whereas mainstream comparable models can only process up to 30 mins of conversations.

Model      Max Conversation Length (approx.)
Symbl      180 mins (3 hours)
GPT-3      15 mins
GPT-3.5    30 mins
Falcon     15 mins
MPT        15 mins

Max conversation length supported by each model. Conversation length is calculated from the token length supported by the model, averaged over the dataset.

Cost-effective

Symbl.ai on-prem summary deployment enables cost-effective hardware utilization. You can deploy our summarization model on single instances of cheaper GPUs, costing as low as a few dollars per day, while still processing up to thousands of conversations daily. This results in an effective hardware cost of a fraction of a cent per conversation, delivering both efficiency and cost savings.

Data Protection

Our on-prem deployment provides enhanced control over data protection. By deploying Symbl.ai on-premises, you manage sensitive information within your own infrastructure. This level of control helps safeguard against unintended data access, data leaks, and data breaches.

Secure, Resilient, and Scalable

Our solution follows security best practices, ensuring that it does not require root user access. We provide secure containerization, while you are responsible for securing your infrastructure. The Symbl.ai container includes built-in health checks and logging mechanisms, enabling you to monitor the solution’s health and resilience. This built-in resilience ensures that the system remains robust and stable even during high-demand scenarios. Additionally, our container-based deployment architecture enables horizontal scaling, allowing you to handle increased workloads efficiently while maintaining optimal performance.

Conversation Intelligence

The on-prem deployment of Symbl.ai provides organizations with four types of conversation summaries:

  • Short Summary: Provides a concise overview of a conversation, capturing the key points and highlights. It is useful for quick reference and provides a snapshot of the conversation’s main themes.
  • Long Summary: Offers a detailed and comprehensive overview of the conversation, including deeper analysis and nuanced insights. It enables a thorough understanding of the conversation.
  • List Summary: Presents the conversation’s key points in a bullet-point format, making it easy to scan and extract important information. It provides a structured summary that allows for quick reference and review.
  • Topic-Based Summary: Organizes conversation insights based on the main topics or subjects discussed. It helps identify the primary themes covered in the conversation, making it easier to navigate and focus on specific areas of interest.

Getting Started

To get started with the on-prem deployment of Symbl.ai, organizations should ensure the following:

  • Host Environment: Set up a host environment with reliable internet access to facilitate communication with the Symbl server. This communication is necessary for access token generation, usage reporting, and downloading container updates.
  • Container Orchestration Platform: Choose a container orchestration platform that aligns with your organization’s infrastructure and requirements. Popular choices include Kubernetes or Docker Swarm, which provide the necessary tools for managing and deploying containers.
  • Hardware Requirements: Ensure that your infrastructure meets the hardware requirements for deploying and running Symbl.ai containers. This includes having sufficient CPU, memory, and storage resources to support the containerized deployment.

Next Steps

To explore the on-prem deployment options available with Symbl.ai, contact Sales.

Using Symbl’s Generative AI for RevOps – Observability & Conversation Data Fidelity in your CRM https://symbl.ai/developers/blog/using-symbls-generative-ai-for-revops-observability-conversation-data-fidelity-in-your-crm/ Thu, 20 Jul 2023 03:45:26 +0000

Symbl.ai’s RevOps solution enhances productivity for businesses that use CRM platforms like Salesforce, HubSpot, and Pipedrive. Powered by Symbl’s specialized language model Nebula, organizations can address call observability and CRM data fidelity gaps to strengthen their revenue operations.

One of the significant challenges faced by GTM teams is the lack of accurate information in CRM and the manual effort and time involved with effectively monitoring and capturing CRM data, especially in relation to sales calls.

Low CRM data fidelity frequently leads to:

  • Missed sales opportunities
  • Obscured operational KPIs
  • Inaccurate business forecasts
  • Inability to make accurate, data-driven decisions

Traditional methods of tracking customer conversations often rely on manual entry or subjective feedback from the GTM teams involved. Such methods are vulnerable to data inaccuracies, human biases, and information gaps. Organizations that rely on human effort to improve CRM data integrity often end up with a low benefit-cost ratio.

The Symbl RevOps solution, integrated into your CRM, can significantly increase the benefit-cost ratio for organizations looking to address CRM data gaps. Our generative AI processing auto-populates data into CRM or similar martech and RevOps data platforms by extracting critical information from sales call records. The advantages of using Symbl’s generative AI to enrich CRM data include consistency, unbiased data capture, hyperscale, and high cost efficiency.

Organizations that use Symbl quickly realize the following observability benefits:

  • Increased visibility and accuracy with sales reports and revenue forecasts
  • Quality measurement of revenue, marketing and sales KPIs
  • Decisions driven by improved data fidelity
  • Enhanced sales process optimization through data-driven insights

From an executive lens, this means effectively connecting the dots between customer calls, CRM data, and revenue forecasts, and correctly forecasting revenue growth. Stitching insights from calls into your forecasts as another source accelerates revenue growth. Capturing these insights into CRM in near real time also empowers organizations to reduce blind spots from customer interactions and to cross-functionally inform strategic decisions with market signals and competitive intelligence, in addition to surfacing immediate coaching opportunities.

Extracting call insights using Symbl’s Generative AI can be done easily with any CRM or RevOps platform. Symbl’s RESTful API architecture embodies our philosophies of programmability and speed to value. A simple integration takes just a few API calls. Symbl’s solution workflow is represented in the high-level diagram below:

When integrated within a RevOps workflow, Symbl’s generative AI augments the GTM stack by enabling the following use cases:

  1. Fix data gaps critical to the operation of RevOps platform processes
  2. Generate insights and platform data optimization recommendations
  3. Real-time forecast Q&A ‘prompts’
  4. Automated new data record updates

To learn more about building generative AI powered revenue intelligence please contact Sales.

Symbl.ai Introduces Programmable Sales Coaching for Sales Teams  https://symbl.ai/developers/blog/symbl-ai-introduces-programmable-sales-coaching-for-sales-teams/ Thu, 29 Jun 2023 12:58:00 +0000 https://symbl.ai/?p=28910 New Sales Intelligence APIs and UIs deliver auto-generated, context-aware call scores, summaries and conversation insights to immediately improve sales results

New Sales Intelligence APIs and UIs deliver auto-generated, context-aware call scores, summaries and conversation insights for sales and revenue operations leaders to immediately improve results in their CRM

Seattle, WA, June 29, 2023 – Today, Symbl.ai announced Sales Intelligence, which provides a fully programmable, context-aware sales coaching experience for sales and revenue operations leaders to immediately improve sales results. Symbl.ai Sales Intelligence supports sales reps with auto-generated call scores and summaries to increase data fidelity in CRM, and real-time sales guidance during every call. Symbl.ai Sales Intelligence features easy-to-implement low-code APIs and flexible UIs that remove the limitations and rigidity of today’s revenue intelligence tools by empowering organizations to deliver company-specific coaching experiences for every sales rep – embedded into your existing tools and apps.

With Symbl.ai Sales Intelligence, sales organizations can improve overall results by elevating the performance of sales reps in every conversation at each stage of the buying cycle – continuously adapting and fine-tuning performance based on the company’s specific sales methodologies, playbooks and processes.

“Our company has grown rapidly since the pandemic, leaving us with a mishmash of data and old processes in our Salesforce instance. Symbl.ai’s new deal insights and conversational intelligence changes all of that,” said M.H. Lines, CEO at Stack Moxie. “In new opportunities and in old ones, conversational intelligence grabs data we were missing so deals can close faster, and return old opportunities back to the pipeline. The best part is that everything is simple and integrated natively so I can reap the benefits without allocating limited sales ops resources.”

Symbl.ai Sales Intelligence helps sales organizations elevate sales rep performance during sales calls and multi-party conversations, while simplifying their sales environment to provide intelligence before, during and after every call, including:

  • Immediate, Unbiased Call Scores in CRM: Delivers immediate, unbiased coaching on sales rep performance during customer and prospect conversations with call scores instantly populated into the CRM system, enabling sales reps to view their performance after every call to take the right actions to improve. Symbl.ai call scores measure performance across key aspects of the conversation, including communications and engagement, questions answered, forward motion of the call, and sales process adherence.
  • Contextual Sales Conversation Insights: Auto-generates executive summaries, key questions, objections and next steps to ensure CRM data is accurate and complete, freeing sales reps to focus on effectively engaging with customers and prospects. 
  • Real-Time Sales Guidance: Enables sales organizations to configure and prescribe guidance for each sales rep based on evolving business environments, product roadmaps and sales processes, empowering sales reps with proven tactics during sales calls to overcome objections, effectively answer questions, deepen engagement, and emotionally connect with prospects and customers.

The example below showcases the immediate conversation insights that Symbl.ai provides, including call scores, next steps, and summaries, which can seamlessly integrate with existing CRM systems or coaching experiences using cutting-edge, applied APIs for sales intelligence:

Symbl.ai Sales Intelligence
Symbl.ai Insights API and UI

“Creating customized sales intelligence experiences with existing revenue intelligence tools, machine learning models and communication APIs is a tall order for businesses without extensive AI resources. It’s like trying to build your own revenue intelligence experience or augment an existing Gong-like platform. Consequently, sales organizations are being forced to settle for the general capabilities offered by existing point solutions,” said Scott Heimes, growth executive at PSG Equity and former revenue leader at ZipWhip and SendGrid. “As someone who has experienced the process of creating unique customer experiences first-hand, I find it truly inspiring to see how easy Symbl.ai has made it to get started with your existing CRM and meeting platforms, while providing programmability that can revolutionize the use of context and introduce fluidity into the user experience, giving sales teams superpowers to re-define coaching and performance based on their needs.”

Adaptive Intelligence Purpose-Built for Conversations

At the core of Sales Intelligence is the Symbl.ai Conversation Intelligence Platform, which manages millions of internal and external conversations monthly for enterprises, ISVs and communications platform providers. The Symbl.ai platform learns, adapts and improves as it encounters changes in the environment, and with business-specific feedback, it continues to evolve and support the changing needs of your product or organization. The platform is real-time and fully programmable, enabling enterprise developers and IT teams to rapidly add and extend AI into communication experiences and existing tools.   

Symbl.ai Sales Intelligence features can be programmed in a low-code experience with a simple configuration of the current revenue operations stack. The newly applied Symbl.ai APIs deliver a complete call scoring and sales conversation insights experience that continually adapts to your business needs and provides the foundation for continual coaching and sales performance improvement.  

“At Symbl.ai, we have relentlessly pursued our vision of transforming conversation intelligence with programmability and extensibility at the core of the platform. Our specialization in building AI infrastructure designed specifically for multi-party conversations gives us a unique advantage in delivering this to businesses as applied intelligence,” said Surbhi Rathore, CEO and co-founder at Symbl.ai. “With three years at the forefront of enabling distinct customer intelligence experiences, we are excited to introduce programmability for sales conversations, one of the most critical business processes. We look forward to empowering sales teams by supporting their ever-changing needs by reducing blindspots at every stage of deal execution, verifying revenue forecasting and ensuring high data fidelity in CRM systems.”

For more information, visit Symbl.ai Sales Intelligence or contact us for a demo.

About Symbl.ai

Symbl.ai provides conversation understanding and generative AI technology focused on making humans smarter – by unlocking the full potential of conversations. Symbl.ai’s Conversation Intelligence Platform empowers developers and enterprise builders to use AI to optimize a broad range of business conversations using purpose-built APIs and UIs. Together with its customers, ISV partners and cloud communications providers, Symbl.ai manages millions of minutes of conversation data for sales, customer service and HR every month.

Symbl.ai Training Series Expands with Videos on Redaction, Transcription https://symbl.ai/developers/blog/symbl-ai-training-series-expands-with-videos-on-redaction-transcription/ Thu, 06 Apr 2023 18:00:09 +0000 https://symbl.ai/?p=28195 Check out two new training series videos centered on Symbl.ai's Transcription and Redaction features.

We’re excited to announce that we have added two new training videos to our Symbl.ai video training series. These videos cover the topics of transcription and redaction, and are designed to help you better understand these features and how to use them in your applications.

The first video covers Symbl.ai’s Transcription feature; transcription is simply the process of converting speech into text. In the video, we discuss what transcription is, its common use cases, and how it can be applied in real-world scenarios. We also take a deep dive into the feature’s API and demonstrate how to use it in code via our SDKs.

The second video digs into Redaction, which is the process of removing sensitive information from a document or other piece of content. This feature is particularly useful for applications that manage personal data, such as healthcare- or finance-related information. In the video, we cover the basics of redaction, including its use cases and how to apply it to specific examples. We also show how to use the feature in code via our SDKs.

We believe that these videos will be extremely useful for anyone who wants to use transcription or redaction in their applications. Whether you’re building a healthcare app that needs to redact sensitive patient information or a customer service tool that requires transcription, these videos will give you the knowledge and skills you need to get started!

The upcoming chapters in the video training series are Topics, Questions, Follow-Ups, and Action Items. We’re committed to providing you with the best possible resources to help you succeed, so please don’t hesitate to let us know how we can support you.

We hope you enjoy these new videos. Cheers!

Introducing the Symbl.ai Video Training Series: Learn How to Unlock the Conversation Intelligence Platform’s Capabilities https://symbl.ai/developers/blog/introducing-the-symbl-ai-video-training-series-learn-how-to-unlock-the-conversation-intelligence-platforms-capabilities/ Thu, 23 Mar 2023 16:39:03 +0000 https://symbl.ai/?p=27997 Learn how to make the most out of the Symbl.ai platform with our new video training series.

The post Introducing the Symbl.ai Video Training Series: Learn How to Unlock the Conversation Intelligence Platform’s Capabilities appeared first on Symbl.ai.

]]>
Welcome to the first installment of our video training series about the Symbl.ai Platform!

Symbl.ai is a conversation analytics platform that helps you understand and improve customer conversations. Our platform can be used to analyze conversations ranging from online meetings to video conferencing to support calls. You will then gain access to insights that will help you make better decisions, improve the customer experience, and anticipate customer needs.

Conversation analytics is a powerful tool for businesses of all sizes. By understanding the ways that customers interact with your business, you can not only better understand the customer experience, but underlying customer needs as well. This can help you identify underperforming areas and improve upon your customer service strategies.

In this video training series, we will go over the basics of the platform, how it works, and how you can use it to enhance your customer conversations. We’ll also look at some best practices and strategies to optimize your use of the platform.

I plan to have a new training video released each week covering a wide variety of topics. We hope you find this series useful and look forward to helping you make the most of Symbl.ai!

The post Introducing the Symbl.ai Video Training Series: Learn How to Unlock the Conversation Intelligence Platform’s Capabilities appeared first on Symbl.ai.

]]>
Symbl.ai Enables Real-Time Sales Intelligence For Salesroom’s Video Platform https://symbl.ai/developers/blog/symbl-ai-enables-real-time-sales-intelligence-for-salesrooms-video-platform/ Thu, 16 Mar 2023 17:49:22 +0000 https://symbl.ai/?p=27913 Learn how Salesroom integrates Symbl.ai's conversation intelligence features to accelerate the sales cycle and increase conversion rates.

The post Symbl.ai Enables Real-Time Sales Intelligence For Salesroom’s Video Platform appeared first on Symbl.ai.

]]>
Salesroom is a leading video conferencing platform that provides advanced in-meeting AI coaching for salespeople, featuring airtime analysis, topic detection, objection handling, question detection, and next step detection. Salesroom powered by Symbl.ai gives salespeople and managers real-time signals, context, and insights during remote sales meetings—creating better, more human conversations that accelerate the sales cycle and increase conversion rates. The company’s platform coupled with Symbl.ai’s real-time conversational intelligence represents the next wave of sales intelligence solutions powered by AI.

CHALLENGE

Salesroom Co-Founder and CEO Roy Solomon noticed a glaring market opportunity for Meeting Intelligence software that gives sellers the ability to course correct in real time as they’re talking to customers during video sales meetings. With salespeople typically having a mere 5% of a customer’s time during the entire sales cycle, making the most intelligent and human connections on digital sales calls is more critical than ever.

When salespeople are made smarter in real time by receiving guidance on what to say and how buyers may be feeling—e.g., is the customer expressing disappointment? Are they mentioning competitors? Is the decision maker asking questions and seemingly engaged? Is the salesperson coming across negatively?—they can make deeper human connections with buyers and develop more meaningful relationships.

SOLUTION

Salesroom uses Symbl.ai’s AI-powered transcription, summarization, topic identification, and trackers features to provide salespeople with real-time signals, context, and conversation insights during meetings. According to Solomon, it’s the real-time, AI-driven component of the solution that makes it powerful.

“I think that transcription is getting commoditized,” Solomon explains. “It’s what you do with the transcription that matters now, and the combination of AI and real-time actions enables companies to overcome the loss of in-person sales engagements by helping buyers and sellers stay more engaged during video meetings.”

The Salesroom platform brings sales playbooks directly into meetings, with conversational and sales intelligence delivered in-app. It calculates buyer engagement scores and next best actions, presents coaching cards in real time, and generates sales meeting summaries and video clips of key moments to reduce after-call work. All of this empowers salespeople to stay present and make more human connections during sales meetings.

Specifically, the key conversational intelligence capabilities of the solution include:

  • Real-time coaching and playbooks delivered in-app to improve engagement in sales meetings, e.g. alerting a salesperson, in real time, that they are talking too fast and should slow down or stop using the word “like.”
  • Identification of key moments in sales conversations and actions to take, e.g. if the decision maker goes silent, the platform automatically provides the sales rep with suggested questions to re-engage and keep the conversation going.
  • Conversation and sentiment scoring for both buyers and sellers, with question detection; if scores fall below customer-set thresholds, automated actions can be triggered to save the conversation.

Solomon continues, “We’re using Symbl.ai to detect in a matter of milliseconds what the key moments are in sales conversations in order for sellers to drive better engagement. The platform’s overall engagement score for the call is then updated in real time.” 

Powered by Symbl.ai, the Salesroom platform is also able to flag who’s asking questions so that salespeople can determine if certain decision makers are engaged on the call. Symbl.ai also supports the detection of next steps and customer sentiment. “If we see that the customer sentiment is going down and the engagement score is going down as well, then you have to save the conversation one way or the other. We also will tell the seller in real time if he or she is coming across as negative,” says Solomon. 

RESULT

By teaming up with Symbl.ai, Salesroom is managing thousands of conversations per month, with AI-powered sales intelligence and actions provided to sellers in real time. Benefits to Salesroom customers include:

  • Boosted conversion rates by 15%
  • Accelerated sales cycles by 20%
  • Accelerated onboarding time by 25%

If you’re interested in learning more about Symbl.ai’s capabilities for sales intelligence solutions, or learning more about Salesroom, contact us here.

The post Symbl.ai Enables Real-Time Sales Intelligence For Salesroom’s Video Platform appeared first on Symbl.ai.

]]>
Symbl.ai Office Hours Session Recap: Trackers https://symbl.ai/developers/blog/symbl-ai-office-hours-session-recap-trackers/ Mon, 27 Feb 2023 19:15:55 +0000 https://symbl.ai/?p=27832 Watch the video recording of February's Symbl.ai Office Hours session focused on one of the platforms key features, Trackers.

The post Symbl.ai Office Hours Session Recap: Trackers appeared first on Symbl.ai.

]]>
Were you unable to attend the February edition of Symbl.ai’s Community Office Hours? Have no fear! The full video of the session is available for you to watch at the end of this article.

Check out the recording for an in-depth discussion about Trackers, a Symbl.ai platform feature that identifies specified vocabulary—as well as the meaning or intention of that vocabulary—in both real time and asynchronous applications.

Top use cases for Trackers include:

  • Sales enablement: Trackers can flag when a customer mentions a competitor, identify objections to proposals, and support pricing negotiations.
  • Coaching: Trackers can support empathy detection by identifying phrases that denote customer dissatisfaction and boost a call center employee’s perceived friendliness over the phone or on a video call.
  • Regulatory compliance: Confidential information such as personal details that are protected under HIPAA and PCI regulations can be identified to help businesses avoid legal trouble.
  • Matching human intention to your business: Trackers can notify employees when there is an opportunity to upsell to customers.

Find out how to access and use Trackers on the Symbl.ai platform in the video below:

The post Symbl.ai Office Hours Session Recap: Trackers appeared first on Symbl.ai.

]]>
Everything to Know About Enterprise Reference Implementation for Conversation Aggregation https://symbl.ai/developers/blog/everything-to-know-about-enterprise-reference-implementation-for-conversation-aggregation/ Thu, 19 Jan 2023 20:44:30 +0000 https://symbl.ai/?p=27590 This blog post offers comparisons between a simple design for a conversation application and what an enterprise conversation-based application architecture would look like.

The post Everything to Know About Enterprise Reference Implementation for Conversation Aggregation appeared first on Symbl.ai.

]]>
In case you missed my previous blog post entitled Understanding Enterprise Architecture for Conversation Aggregation, this blog builds on top of the eye-opening capabilities of an enterprise-type conversation application. It’s clear that large companies with deep pockets are heavily investing in technologies related to AI/ML and conversation analytics, such as the Symbl.ai Platform, OpenAI’s GPT-3, Amazon SageMaker, Kubeflow, etc.

Many of these enterprise companies are looking to solve problems such as predicting human decision-making, observing patterns related to behavior, and even strategically guiding people toward desired outcomes. My previous blog explained why we care about this. Today, we will delve into the “how” through examples using this Enterprise Reference Implementation repo on GitHub.

Breaking Down the Problem

If we are interested in segmenting conversations, a minimal set of requirements is necessary to achieve a predictive or associative form of conversation analytics.

The minimal set of requirements for higher-order conversation perception are listed below:

  • Conversation ingress (i.e. the input)
  • Extracting insights from conversations
  • Historical conversation and associated insights
  • Conversation aggregation and analytics
  • Action (i.e. the output or what we hope to achieve)

These topics are all important, and we can (and will, in subsequent blog posts) do a deep dive into the nitty-gritty details of each one. For now, it’s essential to look at the ten-thousand-foot view of these requirements to level set on what we are talking about. We’re doing this because some of these topics, at face value, seem trivial but in fact have a great deal of nuance to them.

First, let’s take a look at a block-style architectural diagram.

We Love Architecture Diagrams

Personally, I love architecture diagrams. I like diagrams or pictures of any kind because I’m a visual learner. When discussing software systems and complex platforms, these diagrams are a great way to see all the components involved and to visualize the control and data paths for these systems. Let’s look at the architecture of a traditional or simple system for deriving conversation insights, and then compare it with an architecture attempting complex conversation aggregation.

Before we begin, this blog will focus on real-time conversation analytics in a streaming-type capacity, even though asynchronous conversation would look reasonably similar in terms of how the conversation data is ingested by the system.

Simplistic Architecture

The architecture diagram below represents how conversation insights are derived today across many applications. The design is simplistic and, more often than not, meets the needs of your typical real-time conversation use cases. Some goals are to coach, prompt individuals, or perform actions based on what is being said in a given conversation.

Simple Conversation Architecture Diagram

That last item, analytics for this specific conversation, is frequently overlooked (and sometimes intentionally so). Analytics across multiple conversations is typically out of scope. Why is that?

First, the system that does the conversation aggregation over time is complex to build. Second, to offset that design and implementation complexity we fall back on using humans to accomplish complex aggregation. As you might realize, relying on humans to perform this task can lead to inconsistent and error-prone results despite the best efforts to mitigate this using reporting hierarchy, generating reports, providing searchability, spreadsheets, etc. However, it generally gets the job done.

It turns out that these limitations are perfectly OK, because this model is able to address a good portion of in-demand use cases. These include systems such as chatbots, (simple) call center enablement, and applications creating simple triggers—to name just a few. Because these applications are simpler, most of them run client-side and, in turn, that’s the biggest reason one sees so many React, Angular, and Vue SDKs for these CPaaS platforms.

In summation:

  • It’s simple to build this type of application
  • They are transactional type applications (input mapped to output)
  • They offer Point-in-Time conversation analysis
  • The conversation insights are usually processed client-side
  • The conversations are isolated from other conversations

Conversation Aggregation with Enterprise Scale

If you examine the use cases in the previous architecture, you are looking at extracting conversation insights from a finite point in time. Those conversations are typically more transactional in nature. Let’s look at the call center enablement use case for an internet service provider; these conversations are framed in a very predictable and finite way.

In almost all cases, a customer initiates this transaction via phone, chat, etc., bringing a particular problem to the support technician. Let’s use “my internet connection is no longer working” as an example. The customer provides some details about what they are experiencing, and the technical support staff might ask additional questions to refine the remediation steps. Upon completing the conversation, the customer receives the desired output: being reconnected to the internet.

Enterprise Architecture Diagram

For more complex use cases where we are looking to make connections, establish patterns, and aggregate insights over many conversations throughout time, we need to address a critical difference between these two architectures: persisting conversation data. It is a straightforward, logical, and natural step if you want to make connections between what is said in real time and what was said in prior conversations.

This very simple realization has very significant implications. How do we store this conversation data? What kind of data storage do we use? What’s more important, access speed for our insights or association via putting the dots close enough together to aggregate insights intelligently? What mechanisms do we want to use to detect patterns in conversations?

To understand the complexities in this architecture captured via the Enterprise Reference Implementation repo, we must dive into and understand the minimal set of requirements we have been dancing around.

In summation:

  • More effort is required to build these types of applications
  • Conversations can be aggregated
  • One can build applications with a historical conversation context
  • One can have more control over the conversation data
  • They are better for building scalable conversation applications
  • The company’s business rules/logic are pushed into backend server microservices

If you’d like to get more detailed about both of these architectures, watch the informational video below:

Ingress Data for Conversations

This topic is the most trivial aspect of any conversation processing design, but it also happens to be the most incomplete or overlooked element because it seems simple enough. 

These days, we often think of our conversation data coming from an audio stream in a communication platform such as Zoom, Twilio, Vonage, etc. Still, in reality, there are many more forms of real-time conversation sources that we should remember. These include Telephony via Session Initiation Protocol (SIP) and Public Switched Telephone Network (PSTN), team collaboration applications such as Slack, the unending sea of video/chat platforms including Discord, the one friend who uses email like it’s an instant messenger application, and many more.

These data streams feed into the Analyzer component, which takes the conversation embedded in these streams, extracts the contextual insights, and performs some pre-processing of the discovered insights. It so happens that the Symbl.ai platform plays a huge role in this component, and, because of that role, there are several Symbl.ai SDKs that can assist with ingesting these data streams in the form of WebSocket SDKs for Streaming APIs, Telephony SDKs, and even Asynchronous APIs for handling text and other inputs.

Mining Conversation Data

Many complex details are associated with extracting conversation insights from these various real-time forms of communication. This Analyzer component aims to ingest the data, create any additional metadata to associate with these insights, and then save the context to recall later.

There happens to be an excellent platform that does all of the heavy lifting for us. We can extract these conversation insights without having to train models, have expertise in artificial intelligence or machine learning, or require a team of data scientists. Of course, I’m talking about using the real-time streaming capabilities on the Symbl.ai platform.

Some capabilities that would be invaluable to leverage within your application would be:

  • Trackers for honing in on topics specific to your business 
  • Custom Entity Detection to see how your products, capabilities, and perhaps even feature gates are discussed and utilized
  • Conversation Groups, a new feature on the Symbl.ai platform that could be used to process lower-priority, batch-style analytics
  • Summarization for distilling larger conversations and creating tiered topics, which filters out less relevant subjects

The second feature we alluded to with regard to this Analyzer component is being able to save all these insights and metadata. Because this is a vast topic, we will cover it in the module below.

Preserving Conversation Insights

Preserving insights represents the first of two significant pieces of work in this design. In order to aggregate conversation insights from external conversation sources and through historical data, we need to have a method for persisting this data to recall and make associations to conversations happening now.

This requirement naturally lends itself to using some form of database, but what kind? If we are talking about the aggregation of millions or even billions of conversations, we need scalable backend storage. This means that more storage or database nodes can be brought online to expand capacity. Because we are talking about enterprise applications, this form of expansion needs to be completed without interrupting the application’s availability (or performance), without (much) human intervention, and, as always, in as simple a manner as possible.

We want to note that, in terms of the application’s performance, storage is just one aspect of any data storage system. Another equally crucial dimension is access performance. There are different flavors of databases out there offering their unique take or capability for persisting and recalling data. Still, we need to break down the requirements even further to make better storage choices.

Processing millions of conversations at scale necessitates the ability to quickly store (i.e., write) contextual insights in order to keep pace with each conversation. Reads of these insights occur at roughly the same frequency as writes: each new insight triggers a write operation to record what has been discovered, along with a read operation to check for prior occurrences of the newly discovered insight.

Another data access requirement unique to this application and the conversation intelligence domain we are working with is that the data storage platform needs to be able to quickly and efficiently query for relationships between data points. It would make sense to either select a data storage platform that provides users with this capability natively, or build something that farms out the work to reduce the complexity of the search.

Below are several key takeaways from a storage perspective:

  • One needs extensible and scalable backend storage that doesn’t impact availability
  • Performant data access requires read and write throughput at a ratio of roughly 50/50
  • The ability to query for relationships between data points is a must

Performing Real-time Analytics

The previous section discussed the need to archive conversation insights that can be recalled by real-time conversations happening in the present moment. This section expands on the final, but extremely significant, feature required in this Enterprise Architecture for Conversation Analysis: making associations or defining the relationships between contextual insights. That functionality happens in this Middleware component in the Enterprise Architecture diagram above.

The best way to visualize this Middleware component is via specific use cases. If we go back to our Internet Service Provider scenario, let’s say in this particular conversation a Tracker insight relating to a “flashing light on cable modem” is recognized and triggered by the Symbl.ai Platform. This information, a “flashing light”, isn’t surprising and is probably even expected in this technical support call involving a customer without internet access.

In an application based on our Simple Architecture definition, we could display a popup to the support technician advising the customer to reboot the cable modem by unplugging and plugging the modem back in. However, in an application based on our Enterprise Architecture that considers historical data, we could query to see which other conversations are associated with Tracker insight “flashing light”.

It could be that a large number of conversations from users in Long Beach, California, have triggered this Tracker insight in the past 30 minutes. A situation like this could indicate a local outage, and our application could dispatch a higher-tier support technician to look into the problem while notifying the technician speaking with the customer, within the application’s user interface, that there is a possibility of a general outage in the area.

In the above example, the Tracker insight “flashing light,” or the data itself, wasn’t significant. However, the relationship or association with that particular Tracker to other conversations that had taken place recently is the noteworthy piece of information. That’s the value proposition for this type of application architecture.

As you can see, this Middleware component is deeply tied to what your business cares about. This component, either in code or interfacing with another external system, captures your company’s specific business rules. These business rules can then be used to notify others within the company to take action, create events that you might want to pass along to other software systems, or trigger actions you want to perform directly in this component.

Although a generic implementation is provided in this Middleware component, the intent of this Enterprise Reference Implementation is only to be just that: a reference. The Middleware component contained in the repo should, at minimum, be modified to capture your business rules or, in practice, be re-implemented to fit your specific business needs.

Should you choose to use this Reference Implementation as a starting point, the interfaces into and out of this Middleware component use an industry-standard system, which means this Middleware component can be implemented in any language your organization has the most expertise with.

Next Up: A Deep Dive into Data Storage for Conversations

This blog post has drawn solid comparisons between a simple design for a conversation application and what an enterprise conversation-based application architecture would look like. The Enterprise Reference Implementation cited in this blog post is open source and free to use: it can serve as a template, spark ideas for your own implementation, or even be used as-is with no strings attached.

The next topic in this series will be a deep dive into the storage or archival aspects of the Analyzer component. Although this blog post has been an incredible start to describing the requirements and functionality necessary for this component, there are far more details that are beneficial to discuss. Those learnings will enable others to make highly informed decisions in terms of designing and selecting an intelligent storage platform to meet their needs.

I hope this has been an enlightening discussion. The big takeaway of this article is to understand the purpose of each component within this higher-level block diagram, and be able to extrapolate your own implementation to put the dots of knowledge close enough together to predict and create desired outcomes for your business. Cheers!

The post Everything to Know About Enterprise Reference Implementation for Conversation Aggregation appeared first on Symbl.ai.

]]>
Symbl.ai Go SDK Part 2: Real-Time Processing Via WebSockets https://symbl.ai/developers/blog/symbl-ai-go-sdk-part-2-real-time-processing-via-websockets/ Tue, 08 Nov 2022 17:01:17 +0000 https://symbl.ai/?p=27171 A new release of the Go SDK makes it easier to consume the SDK from other projects. Learn how to process conversation insights in real-time using WebSockets.

The post Symbl.ai Go SDK Part 2: Real-Time Processing Via WebSockets appeared first on Symbl.ai.

]]>
In our ongoing series on the Symbl.ai Go SDK, this post focuses on real-time processing of conversation insights using WebSockets. Part 1 discussed Async APIs, introducing methods to derive conversation insights asynchronously using the Symbl.ai Go SDK. The first thing I should mention is that we recently pushed out a new v0.1.1 release of the SDK.

This release includes a number of usability enhancements that make it easier to consume the SDK from other projects, specifically around creating named structs. We also fixed an issue where Trackers weren’t exposed in the Streaming configuration; the feature was previously only available via managed Trackers through the Management API.

Why Streaming API?

The Streaming API is ideal for situations where conversation is happening in real time and insights are needed with low latency. By leveraging the WebSocket protocol, there is no need to poll the server for updates; events are streamed directly to your client as Symbl.ai processes the real-time conversation. Implement the Streaming API in your web app to provide your users with active support from Symbl.ai:

  • Entity detection for custom and managed entities such as PHI and PII
  • Trackers to automatically recognize intents, phrases and their meaning in conversations
  • Sentiment 
  • Speaker analytics (pace, silence time, and talk time)
  • Real time interim transcripts 

The Streaming API, alongside our other products such as Nebula LLM & Embeddings, can enable real-time generative AI use cases such as Real Time Assist for sales, customer service, and other frontline operations: for instance, detecting objections raised by a customer and helping the agent respond to them, or spotting moments of customer frustration and suggesting script changes.
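As a minimal sketch of that kind of real-time assist, a detected tracker can be mapped to an agent prompt. The tracker names and prompts below are invented for illustration; in practice they would come from your Trackers configuration and playbook.

```go
package main

import "fmt"

// Hypothetical mapping from a detected tracker to a real-time agent prompt;
// the tracker names and prompts are made up for illustration.
var assists = map[string]string{
	"pricing_objection":    "Offer the annual discount and share the ROI one-pager.",
	"customer_frustration": "Acknowledge the issue and slow the pace of the call.",
}

// onTracker would be invoked whenever the Streaming API reports a tracker match.
func onTracker(name string) (prompt string, ok bool) {
	prompt, ok = assists[name]
	return
}

func main() {
	if prompt, ok := onTracker("pricing_objection"); ok {
		fmt.Println("assist:", prompt)
	}
}
```

A real implementation would hang this lookup off the tracker events delivered over the WebSocket connection described below.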

A Little More About WebSockets

For those who might not be familiar with WebSockets, it’s an internet protocol that allows for the bi-directional exchange of information between a client and server. Typically, a small amount of information is exchanged upfront to set up what this bi-directional exchange will look like. After that, data is exchanged asynchronously. If I drew a diagram of this process, it would look something like the picture below.

There are two types of exchanges included in the protocol: a configuration exchange or update and the raw “data” being exchanged between the client and server. The input to the server (i.e., what the client is sending) differs depending on the server type. In the case of the Symbl.ai Platform, we are talking about an audio data stream. In return we get back conversational insights in the form of transcription, results for topics, trackers, etc.

How to Use?

  1. Explore the examples in the repository.
  2. Install the required libraries:
    • The examples utilize a microphone package that depends on the PortAudio library, a cross-platform open source audio library.
    • Linux users: install PortAudio using your system’s package manager (yum, apt, etc.).
    • macOS users: install PortAudio using Homebrew.
  3. Sign up for Symbl.ai:
    • If you don’t already have a Symbl.ai account, sign up here for free; no credit card required.
  4. Obtain API keys:
    • After signing up, obtain the API keys from your Symbl.ai account.
  5. Configure environment variables:
    • Add your API keys to your environment:

export APP_ID=YOUR-APP-ID-HERE
export APP_SECRET=YOUR-APP-SECRET-HERE

We use environment variables because they are easy to configure, support PaaS-style deployments, and work very well in containerized environments such as Docker and Kubernetes.

Let’s Start Streaming Using WebSockets

As I mentioned in the previous section, we need to first log into the Symbl.ai platform (this is taken care of for you under the covers of the SDK), and the second step is to pass a configuration to set up the WebSocket protocol. You can do that by building a StreamingConfig object.

import (
   cfginterfaces "github.com/symblai/symbl-go-sdk/pkg/client/interfaces"
)
 
config := &cfginterfaces.StreamingConfig{
   InsightTypes: []string{"topic", "question", "action_item", "follow_up"},
   Config: cfginterfaces.Config{
       MeetingTitle:        "my-meeting",
       ConfidenceThreshold: 0.7,
       SpeechRecognition: cfginterfaces.SpeechRecognition{
           Encoding:        "LINEAR16",
           SampleRateHertz: 16000,
       },
   },
   Speaker: cfginterfaces.Speaker{
       Name:   "Jane Doe",
       UserID: "user@email.com",
   },
}

The next thing you need to do is define a struct that implements the InsightCallback interface.

type InsightCallback interface {
   RecognitionResultMessage(rr *RecognitionResult) error
   MessageResponseMessage(mr *MessageResponse) error
   InsightResponseMessage(ir *InsightResponse) error
   TopicResponseMessage(tr *TopicResponse) error
   TrackerResponseMessage(tr *TrackerResponse) error
   UnhandledMessage(byMsg []byte) error
}

This struct will receive all of the conversational insights that the Streaming API defines. If you just want to see which messages you are receiving from the Symbl.ai platform, you can pass in the DefaultMessageRouter, which prints the conversation insight structs to the console.
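To see what an implementation of this interface looks like, here is a minimal logging callback. The empty placeholder structs are stand-ins so the sketch compiles on its own; in a real application you would implement the methods against the SDK’s actual message types from the interfaces package.

```go
package main

import "fmt"

// Placeholder message types so this sketch is self-contained; in a real
// application, use the structs from the symbl-go-sdk interfaces package.
type (
	RecognitionResult struct{}
	MessageResponse   struct{}
	InsightResponse   struct{}
	TopicResponse     struct{}
	TrackerResponse   struct{}
)

// InsightCallback mirrors the SDK interface shown above.
type InsightCallback interface {
	RecognitionResultMessage(rr *RecognitionResult) error
	MessageResponseMessage(mr *MessageResponse) error
	InsightResponseMessage(ir *InsightResponse) error
	TopicResponseMessage(tr *TopicResponse) error
	TrackerResponseMessage(tr *TrackerResponse) error
	UnhandledMessage(byMsg []byte) error
}

// LoggingCallback announces the message types it cares about and ignores the rest.
type LoggingCallback struct{}

func (c *LoggingCallback) RecognitionResultMessage(rr *RecognitionResult) error {
	fmt.Println("recognition result received")
	return nil
}
func (c *LoggingCallback) MessageResponseMessage(mr *MessageResponse) error { return nil }
func (c *LoggingCallback) InsightResponseMessage(ir *InsightResponse) error { return nil }
func (c *LoggingCallback) TopicResponseMessage(tr *TopicResponse) error     { return nil }
func (c *LoggingCallback) TrackerResponseMessage(tr *TrackerResponse) error {
	fmt.Println("tracker match received")
	return nil
}
func (c *LoggingCallback) UnhandledMessage(byMsg []byte) error {
	fmt.Printf("unhandled message: %d bytes\n", len(byMsg))
	return nil
}

// Compile-time check that LoggingCallback satisfies the interface.
var _ InsightCallback = (*LoggingCallback)(nil)

func main() {
	var cb InsightCallback = &LoggingCallback{}
	cb.RecognitionResultMessage(&RecognitionResult{})
	cb.TrackerResponseMessage(&TrackerResponse{})
}
```

The compile-time assertion `var _ InsightCallback = (*LoggingCallback)(nil)` is a cheap way to catch a missing or misspelled method before runtime.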

Input Source What?!

The next thing we need to do is provide an input source for our audio or conversation. In this case, we’re going to use that PortAudio-based microphone library. To initialize and start the microphone, provide an AudioConfig and do the following:

// mic stuff
sig := make(chan os.Signal, 1)
signal.Notify(sig, os.Interrupt, os.Kill)
 
mic, err := microphone.Initialize(microphone.AudioConfig{
   InputChannels: 1,
   SamplingRate:  16000,
})
if err != nil {
   fmt.Printf("Initialize failed. Err: %v\n", err)
   os.Exit(1)
}
 
// start the mic
err = mic.Start()
if err != nil {
   fmt.Printf("mic.Start failed. Err: %v\n", err)
   os.Exit(1)
}

Microphone Meet WebSocket

Then, we need to pass the microphone to the Streaming interface, which is beyond simple because the Microphone library implements a Go streaming interface.

// this is a blocking call
mic.Stream(client)

That’s really it! When you connect the two, simply start talking into your microphone and you should begin seeing conversational insights being passed back to you.

If you would like to run the Streaming example in the repo, you can install PortAudio, add the Symbl.ai API environment variables, and then run the following commands:

$ cd examples/streaming/
$ go run streaming.go

If you want to see this process in action, watch the quick video below:

What’s Next?

To prove out the Symbl.ai Go SDK, I created a project that consumes it. This project happens to be the demo used in my API World presentation on Nov. 2, 2022, and will be the subject of the next Symbl.ai Go SDK blog post.

In the meantime, please give the Symbl.ai Go SDK a try and provide feedback via issues, whether they are feature requests, enhancements, or bugs. The SDK is only as good as the feedback and ideas we receive from the people consuming it—so, please give it a try!

The post Symbl.ai Go SDK Part 2: Real-Time Processing Via WebSockets appeared first on Symbl.ai.

]]>