Zynab Ali, Author at Symbl.ai
https://symbl.ai/developers/blog/author/zynab-ali/

Implementing Retrieval-Augmented Generation (RAG) with Nebula: A Comprehensive Guide
https://symbl.ai/developers/blog/implementing-retrieval-augmented-generation-rag-with-nebula-a-comprehensive-guide/
Mon, 13 Nov 2023 18:09:50 +0000

Overview

Retrieval-Augmented Generation (RAG)1 is changing how we think about machine learning models for natural language processing. By combining retrieval-based and generative models, RAG offers highly contextual, domain-specific responses. This guide walks you through implementing RAG using Nebula LLM, the Nebula Embedding API, and a vector database.

Why RAG? Comparison to Traditional Models

Generative Models: Generative models, like GPT, are trained on a large corpus of data but cannot pull in real-time external information. They rely solely on the data they were trained on, which can be a limitation for tasks that require up-to-date or specialized information.

Retrieval Models: These models are good at pulling specific pieces of information from a large corpus but may not be adept at synthesizing or generating new text based on that information.

RAG: Combines the best of both worlds. It can retrieve real-time or specialized information from an external corpus and then use a generative model to create coherent and contextually relevant text.

Use-cases of RAG

Question Answering

  • Open-Domain QA: When you have a corpus of data that spans multiple subjects or domains, RAG can be used to answer questions that may require piecing together information from multiple documents
  • FAQ Systems: For businesses that have frequently asked questions, RAG can automatically pull the most up-to-date answers from a corpus of documents
  • Research Assistance: In academic or professional research, RAG can help find and compile information from various sources

Document Summarization

  • Executive Summaries: RAG can compile executive summaries of long reports by pulling key insights from the latest data
  • News Aggregation: It can pull from multiple news sources to generate a comprehensive summary of a current event
  • Legal Document Summaries: In law, summarizing lengthy contracts or case histories can be efficiently performed

Chatbots

  • Customer Service: A RAG-based chatbot can pull from a knowledge base to answer customer queries, reducing the load on human agents
  • Technical Support: For software or hardware troubleshooting, RAG can pull from a database of common issues and solutions
  • Personal Assistants: RAG can make virtual personal assistants more versatile by enabling them to pull from an extensive database of information

Data Analysis

  • Market Research: RAG can analyze a large corpus of customer reviews, social media mentions, etc., to provide insights
  • Financial Analysis: It can pull historical data and analyst reports to generate insights into stock trends or company performance
  • Healthcare Analytics: In healthcare, RAG can analyze medical records, research papers, and clinical trial data for analytics

Content Generation

  • Article Writing: Journalists or content creators can use RAG to automatically draft articles that incorporate the latest data or references
  • Report Generation: In corporate settings, RAG can generate quarterly or annual reports by pulling data from various internal databases
  • Educational Content: For educational platforms, RAG can generate quizzes, summaries, or study guides based on a corpus of educational material

RAG’s Application in Specific Industries

  • Customer Support: In a customer support setting, RAG can assist agents by pulling from a knowledge base to provide more informed and precise answers to customer queries
  • Healthcare: In healthcare, RAG can assist in pulling patient histories, medical journals, or drug interactions to assist medical professionals
  • Finance: RAG can be used to pull real-time financial data or historical trends to assist in decision-making processes
  • Legal: RAG can assist in document discovery processes by retrieving relevant case law or statutes
  • Retail and E-commerce: RAG can be used to generate personalized product descriptions or recommendations based on user behavior and other external data points

Data Safety in Enterprises with RAG

  • Data Isolation: In an enterprise setting, the corpus of data from which the retriever pulls information can be isolated and secured within the company’s own infrastructure
  • Access Control: Fine-grained access controls can be applied to the data sources to ensure that only authorized models or users can access the sensitive information
  • Data Encryption: Data can be encrypted at rest and in transit to ensure confidentiality and integrity
  • Audit Trails: All data retrieval and generation events can be logged to provide an audit trail for compliance purposes
  • Data Masking: Sensitive information can be masked or redacted in the retrieved documents before they are used for generating responses

Implementation

Prerequisites

  • Check access to Nebula. If you do not have access, you can sign up here
  • Generate an access token

Building a Vector Database of Transcripts

  • Build the Database of Transcripts: Transcribe calls using the Symbl.ai Async API and store the transcriptions in a database, e.g., S3.
  • Chunk Conversations: Break each transcript into meaningful parts, either length-based or context-based. Here is a resource on chunking strategies.
  • Create Embeddings: Use the Nebula Embedding API to create embeddings for each chunk, as in the code below.
curl --location 'https://api-nebula.symbl.ai/v1/model/embed' \
--header 'ApiKey: <api_key>' \
--header 'Content-Type: application/json' \
--data '{
    "text": "Dan: Definitely, John. The first feature we'\''re introducing ...."
}'
  • Store the Embeddings: Once the embeddings are created, store them in a vector database for retrieval based on queries to the LLMs, along with the transcript chunks and any metadata useful for adding additional criteria while querying (e.g., customer id, date/time, etc.).
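
The steps above can be sketched end to end in a few lines of Python. Here the `embed` function is a local stand-in for the Nebula Embedding API call shown in the curl example, and the list of records stands in for a real vector database; all names and the sample transcript are illustrative:

```python
import hashlib

def chunk_by_turns(transcript: str, max_turns: int = 4) -> list[str]:
    """Split a 'Speaker: text' transcript into chunks of a few turns each."""
    turns = [line for line in transcript.splitlines() if line.strip()]
    return [" ".join(turns[i:i + max_turns]) for i in range(0, len(turns), max_turns)]

def embed(text: str) -> list[float]:
    """Stand-in for the Nebula Embedding API call (POST /v1/model/embed);
    returns a deterministic toy vector instead of a real embedding."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:8]]

def index_transcript(transcript: str, metadata: dict) -> list[dict]:
    """Embed each chunk and keep the vector, raw text, and metadata together,
    as a vector database record would."""
    return [
        {"vector": embed(chunk), "text": chunk, **metadata}
        for chunk in chunk_by_turns(transcript)
    ]

records = index_transcript(
    "Dan: The first feature we're introducing is X.\n"
    "John: How does X handle scale?\n"
    "Dan: It shards by customer id.\n"
    "John: Great, let's review pricing next.",
    {"customer_id": "acme-42", "call_date": "2023-11-01"},
)
```

Each record keeps the vector, the original chunk text, and the query-time metadata together, which is exactly what a vector database row would hold.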

Note: See the Appendix for more details on vector databases2

For more information about using embeddings, see the Embedding API guide.

Generate Response from Nebula LLM

  • Query Processing: The user’s query is sent from the UI to the Nebula Embedding API, where Nebula’s embedding model converts it into a query vector.
  • Vector Matching: This query vector is matched against the stored vectors in the vector database to find similar conversation data relevant to the query.
  • Context Construction: The matched vectors and their corresponding transcript chunks stored in the vector database are used to build the prompt context for Nebula LLM. The user’s server then passes the prompt with this context to Nebula LLM.
  • Response Generation: Nebula LLM analyzes the query and the newly constructed context to generate a relevant response, which is displayed in the user’s UI.
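
A minimal sketch of the vector-matching and context-construction steps, with toy two-dimensional vectors standing in for real Nebula embeddings (in production the query vector comes from the Embedding API and the nearest-neighbor search is done by the vector database itself):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], records: list[dict], k: int = 2) -> list[dict]:
    """Rank stored records by similarity to the query vector."""
    return sorted(records, key=lambda r: cosine(query_vec, r["vector"]), reverse=True)[:k]

def build_prompt(query: str, matches: list[dict]) -> str:
    """Assemble the context window sent to the LLM alongside the query."""
    context = "\n---\n".join(m["text"] for m in matches)
    return f"Use the following call excerpts to answer.\n\n{context}\n\nQuestion: {query}"

# Toy vectors stand in for Nebula embeddings of transcript chunks.
records = [
    {"vector": [1.0, 0.0], "text": "Dan: Pricing starts at $99 per seat."},
    {"vector": [0.0, 1.0], "text": "John: The demo is scheduled for Friday."},
]
matches = top_k([0.9, 0.1], records, k=1)
prompt = build_prompt("What does pricing start at?", matches)
```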

Appendix:

  1. Learn more about RAG: https://research.ibm.com/blog/retrieval-augmented-generation-RAG
  2. What kind of vector database?: Options include specialized vector databases like Weaviate, Milvus, Pinecone, Vespa.ai, Chroma, Nomic Atlas, and Faiss. General-purpose databases such as Elasticsearch and Redis, which also support vector embeddings and offer additional metadata support, scale, and speed, can be considered as well.
  3. Additional Pricing: Open-source databases like Faiss and Milvus are generally free but require manual setup and maintenance. Managed services like Pinecone and cloud-based solutions like Weaviate and Vespa.ai have a cost based on usage.
  • Here is how you can get started with Milvus 
  • Here is how you can get started with Pinecone

The post Implementing Retrieval-Augmented Generation (RAG) with Nebula: A Comprehensive Guide appeared first on Symbl.ai.

Real-Time Assist with Generative AI: Powered by Nebula LLM
https://symbl.ai/developers/blog/real-time-assist-with-generative-ai-powered-by-nebula-llm/
Wed, 01 Nov 2023 23:30:42 +0000

Unleashing the Power of Conversations and Generative AI for Instant Support for Sales and Customer Support teams

We are in a world where immediate, personalized support is not just desired but expected. Fast, personalized assistance has always mattered across industries, and it has become critical in sales and customer support, where the stakes are high and every interaction counts.

Sales representatives are often in high-stakes situations where they need to make quick decisions, access product details, or respond to customer queries on the fly. Any delay or inaccuracy can cost them credibility with a prospect and make or break a deal. Meanwhile, in customer support, long wait times and impersonal interactions can lead to customer dissatisfaction and lost opportunities for upselling or retention.

What if you could eliminate these bottlenecks and elevate your sales and support experiences to new heights?

Enter Symbl.ai’s Real-Time Assist! This isn’t just another customer service tool; it’s an automated assistant designed to understand your specific needs—such as immediate answers to queries, real-time guidance during tasks, and efficient customer service—and provide tailored assistance in real time.

The Technology Behind Real-Time Assist

Symbl.ai’s Real-Time Assist is powered by Generative AI with Nebula LLM, along with the Web SDK and Trackers. From streaming conversations to indexing your knowledge base, Real-Time Assist is built to provide the most accurate and timely assistance.

How Does It Work?

Index the Knowledge Base

Break Down Documents: Your knowledge base may contain a wide range of topics. Break these down into smaller, meaningful chunks.

Vectorize Text: Use Symbl.ai’s Nebula Embedding API to convert these text chunks into vectors via the Nebula embedding model.

Data Storage: Store these indexed vectors and their associated content in a datastore for retrieval based on triggers.
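
For the break-down step, one common strategy is fixed-size chunks with overlap, so that an answer spanning a chunk boundary still lands fully inside at least one chunk. A minimal sketch (the chunk size and overlap values here are arbitrary):

```python
def chunk_with_overlap(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split a knowledge-base article into fixed-size windows that overlap,
    so content falling on a boundary is still retrievable from one chunk."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

article = "A" * 500  # stand-in for a real knowledge-base article
chunks = chunk_with_overlap(article)
```

In practice you would tune the size to your embedding model's context window and chunk on sentence or paragraph boundaries rather than raw characters.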

For more on embeddings, check out the Embedding API guide.

Configure the Triggers

Automatic Detection: Symbl.ai detects questions and trackers during an ongoing conversation between the Customer Service Agent (CSA) and the customer.

Customization: Trackers can be configured by customer success managers to identify phrases like ‘competitor mentions’ or ‘overcharge’.

Out-of-the-Box Trackers: Symbl.ai provides 40 default trackers, both general and specific to contact centers.

For more on trackers, see the Trackers guide.
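
As a concrete illustration, a custom tracker for the ‘competitor mentions’ example above pairs a name with a vocabulary of trigger phrases. The schema below is an assumption based on the tracker concepts described here; check the Trackers guide for the authoritative field names:

```json
{
  "name": "Competitor Mentions",
  "vocabulary": [
    "we are also evaluating",
    "looking at other vendors",
    "your competitor offers"
  ]
}
```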

Stream the Conversation to Symbl.ai

Bi-Directional Stream: Use Symbl.ai’s Web SDK to stream the conversation and display knowledge base results to the CSA.

SDK Installation: Install the Web SDK with a simple npm command and import the latest version.

Event Identification: During the support conversation, when a question or tracker is identified, events are triggered along with the callback response object.

For more on Web SDK implementation, see the Web SDK reference.

Core Features:

Instant Feedback: Get immediate responses to your queries.

Contextual Assistance: Receive support that understands the context of your needs.

Problems Solved by Real-Time Assist:

User Friction: No more searching for help. Real-Time Assist is there when you need it.

Support Efficiency: Complete tasks faster with real-time guidance.

User Experience: Feel understood and supported, enhancing overall satisfaction.

Support Costs: Reduce the need for human intervention, saving on support costs.

Real-Time Assist in Action: Use Cases

For Sales Teams:

Instant Information Access: Get product details, pricing, and competitor information at your fingertips.

Reduced Response Time: Let the AI handle initial queries, freeing you to focus on closing deals.

For Customer Support Teams:

Knowledge Base Access: Instantly pull up articles or solutions, improving first-call resolution rates.

Scripting Assistance: Get AI-suggested scripts based on customer queries.

Compliance Monitoring: Ensure all conversations adhere to industry regulations.

Why Choose Real-Time Assist?

User Retention: Keep your customers coming back with an unmatched user experience.

Increased Revenue: Convert more leads and prospects with streamlined processes and instant access to the right data.

Data-Driven Insights: Make informed decisions with valuable user data.

Scalability: Easily scale to accommodate a growing user base.

How Does It Work?

As interactions between customers and representatives unfold, Real-Time Assist identifies questions, topics, and pre-set markers—such as “payment issues” or “technical support”—that serve as triggers during the conversation.

Utilizing the Nebula Embedding API, these triggers are transformed into vectors, which are then matched against a pre-existing vector database from your knowledge base to find contextual similarities. Once a match is found, the associated content is sent to Nebula LLM. Nebula then synthesizes this information to generate the most relevant and accurate response based on the identified trigger.

This real-time, AI-generated guidance is then sent to your backend server via the Web SDK and displayed directly on the representative’s dashboard, ensuring they have the best possible information at their fingertips, exactly when they need it.

Interested in implementing Real-Time Assist for your teams? Here is the step-by-step “How-To” guide for you.

The post Real-Time Assist with Generative AI: Powered by Nebula LLM  appeared first on Symbl.ai.

Generative AI for Business Intelligence
https://symbl.ai/developers/blog/generative-ai-for-business-intelligence/
Mon, 18 Sep 2023 17:26:32 +0000

In today’s fast-paced business landscape, harnessing the power of data is crucial for making informed decisions and driving growth. One of the cornerstones of this endeavor is the strategic use of Symbl.ai’s Generative AI powered by Nebula LLM—a treasure trove of customer sentiments and interactions. Symbl.ai’s Generative AI goes beyond traditional analytics such as NPS/CSAT/CES scores by meticulously parsing customer dialogues across a multitude of channels—calls, emails, chatbots, social media—to derive insights such as customer sentiment, preferences, and areas of concern. By analyzing the sentiment, tone, and content of these interactions, businesses can understand customer needs, preferences, and pain points more effectively. This blog covers how Symbl.ai propels business decision-making with Generative AI.

The challenge 

Lack of Understanding of Reasoning: Modern businesses understand the importance of Generative AI, as it provides the ‘What’, ‘Why’, and ‘How’ behind customer interactions. However, managing insights from Generative AI can be a challenge due to their structure, volume, and evolving capabilities.

Complexities in Data Sharing: In today’s business landscape, understanding the ‘What’, ‘Why’, and ‘How’ is key, but sharing that knowledge and delivering accurate data in a timely manner is just as important for informed decision-making. However, challenges such as outdated data architectures, isolated data repositories, and the growing variety of data formats across enterprise teams hinder the sharing of insights within an organization. Simplifying data pipelines and infrastructure is essential for businesses to efficiently unlock the value of their data.

Understanding Reasoning with Nebula LLM 

Nebula is a specialized large language model developed by Symbl.ai, designed to handle generative tasks in human conversations. It excels at capturing the subtle details of human conversations, providing the reasoning behind the ‘What’, ‘Why’, and ‘How’ of customer interactions. Click here to learn more about Nebula LLM.

Generating Insights from Nebula LLM

Nebula LLM is available via an API. It accepts input as a prompt containing the human conversation in textual form and an instruction describing the desired output. Given a transcript of the conversation between the customer and the representative (or other personas), along with an instruction, Nebula understands the conversation and returns a response based on the instruction. To learn more about designing a prompt, click here.

If you do not have transcripts or conversations in textual form, not to worry. You can use Symbl.ai’s Async API and Streaming API to process audio and video conversations; they return an API response in JSON format along with a transcript that you can pass to Nebula LLM.
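
Putting this together, a generation request to Nebula pairs the instruction with the conversation text. The endpoint path and field names below are assumptions modeled on the Embedding API example elsewhere on this blog; verify them against the Nebula API reference:

POST https://api-nebula.symbl.ai/v1/model/generate

```json
{
    "prompt": {
        "instruction": "Summarize the customer's main complaint and suggest a next step.",
        "conversation": {
            "text": "Customer: I was charged twice this month.\nAgent: I can see the duplicate charge on your account."
        }
    }
}
```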

Managing data pipelines

Businesses that want to analyze conversations start by processing them with Symbl.ai’s Nebula LLM and retrieving the insights via APIs. But parsing the data from those APIs, building data pipelines (batch or real-time) to carry the insights into data stores, and sharing the data is extra effort that demands technical expertise and time. In an enterprise setup, as noted in the challenge above, there may be multiple teams, each with its own data and its own set of tools. Sharing insights across teams involves ETL into a single data warehouse, provisioning and tracking access for batch processing, and, for real-time data, streaming services such as Kafka, Pub/Sub, or Kinesis. Similarly, enterprises need to build and manage data pipelines to share insights with customers. Handling these operational tasks requires data engineering teams and a significant amount of time and money.
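
As an illustration of what that parsing effort looks like, the sketch below flattens one conversation's insights payload into warehouse-ready rows. The payload shape here is hypothetical, for illustration only, and is not Symbl.ai's actual API schema:

```python
def to_rows(payload: dict) -> list[dict]:
    """Flatten one conversation's insights into one row per insight,
    carrying the conversation id and timestamp onto each row."""
    base = {"conversation_id": payload["id"], "captured_at": payload["capturedAt"]}
    return [
        {**base, "insight_type": item["type"], "insight_text": item["text"]}
        for item in payload["insights"]
    ]

# Hypothetical payload shape, for illustration only.
payload = {
    "id": "conv-123",
    "capturedAt": "2023-09-18T17:00:00Z",
    "insights": [
        {"type": "question", "text": "Can I get a refund?"},
        {"type": "action_item", "text": "Agent to email the refund policy."},
    ],
}
rows = to_rows(payload)
```

A batch pipeline would run this over each API response and bulk-load the rows into a warehouse such as Snowflake, BigQuery, or Redshift.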

Conversation Business Intelligence

Symbl.ai’s conversation business intelligence eliminates the need for businesses to spend time and effort building and maintaining data infrastructure. It offers a no-code solution in which all the insights generated by Nebula LLM are made available to businesses in a datastore. Symbl.ai does this by managing the entire integration process: building pipelines, cleaning the data, and storing it in datastores such as the Snowflake data cloud, Google BigQuery, and AWS Redshift. Businesses save the time and money otherwise spent on POCs, developer effort, and infrastructure maintenance, and can instead focus on data analysis to understand the root causes of user problems and deliver a better user experience.


Embrace the Data-Driven Excellence with Generative AI Insights. With Symbl.ai:

Eliminate Manual Data Management: Automate the entire process of conversation data collection, cleaning, and storage.

Gain Real-Time Insights: Leverage real-time and batch data pipelines to make informed decisions on the fly.

With all the insights readily available in the data stores, businesses can connect them to their preferred analytics tools—such as Tableau, Looker, Power BI, Qlik, Sisense, or ThoughtSpot—using the tools’ built-in connectors, and query all the Generative AI insights associated with their account. Beyond analytics tools, enterprises can also feed these insights into their machine learning models for training, for example to identify churn more accurately.

Symbl.ai’s transformative approach shapes an ecosystem known as Conversation Business Intelligence. This ecosystem revolutionizes how businesses leverage Generative AI insights, seamlessly integrating them into existing data stores and analytics platforms. It helps businesses identify the root cause of customer attrition, analyze the customer experience throughout the journey, and formulate and deliver strategies for exceptional customer satisfaction.

With Symbl.ai as the orchestrator of data integration, businesses can focus on what matters most—interpreting insights, optimizing strategies to retain customers, and enhancing customer experiences. Thinking about getting started? Here is the step-by-step guide on how to build a Business Intelligence solution with Nebula.

The post Generative AI for Business Intelligence appeared first on Symbl.ai.

Introducing a New Gen AI Powered Pre-Built Experience for Call Insights
https://symbl.ai/developers/blog/introducing-a-new-gen-ai-powered-pre-built-experience-for-call-insights/
Tue, 18 Jul 2023 17:44:03 +0000


The Symbl.ai Conversation Intelligence Platform empowers developers and enterprise builders to use AI to optimize a broad range of business conversations using purpose-built APIs and flexible UIs. Our technology enables businesses to leverage AI-augmented experiences to improve enterprise productivity. Today we’re excited to announce a new programmable API addition, Insights UI, to the Symbl.ai REST API portfolio, aimed at helping builders and developers innovate quickly with Symbl-powered, customizable solutions for enterprise organizations.

To help developers achieve speed to value, we’re launching a new low-code option, “Insights UI”, designed to work in conjunction with the recently released Call Score API. Our new API embodies Symbl’s programmability and customizability API design philosophies—significantly reducing developer effort while offering solutions that can be adapted to a broad range of business scenarios.

Individual APIs to Low Code APIs

Insights UI

The Insights UI API invokes a customizable pre-built UI presentation layer that can be easily integrated into a front-end application. Insights UI can be invoked with or without the Call Score component. It incorporates other data elements, including Sentiment Analysis, Summary, and Questions, which together with Call Score provide deeper insight, transparency, and engagement analysis of the conversation.

Insights UI supports multiple out-of-the-box UX customizations, giving developers control over the look and feel of the UI. Besides choosing to include or exclude Call Score, developers can also choose between a record-list page view and a concise single-record details view.

Insights UI List Page with Call Score

The list view of Insights UI serves as a repository for users’ call records, displaying summarized information in easy-to-navigate cards. The Insights UI API’s ‘list-page’ command returns a list of engagement records under the App ID account.

GET

https://api.symbl.ai/v1/conversations/experiences/insights/list?includeCallScore=true

Insights List with Call Score

Insights UI Details Page with Call Score

Insights UI API’s ‘details-page’ command returns a single specific conversation engagement record with detailed analysis information.

 

GET

https://api.symbl.ai/v1/conversations/experiences/insights/details/{:conversationId}?includeCallScore=true

Symbl.ai Insights Details with Call Score

Insights UI without Call Score

GET

https://api.symbl.ai/v1/conversations/experiences/insights/list?includeCallScore=false

Insight UI without Call Score

Analysis Components Included with Insights UI

Summary

Summary generates an accurate record of the key moments in a conversation. The feature saves significant time in capturing critical information from conversation records.

Symbl Summary

Sentiment

Sentiment measures and tracks speakers’ emotional engagement over the course of a conversation. This feature enables customer-facing organizations to better observe and respond to customers’ subtle signs of concern that may not surface as a direct, capturable verbal response.

Symbl Sentiment

Next Steps

Next Steps highlights the specific follow-up actions captured, such as scheduling the next meeting, commitments to send information materials or similar actions detected within an engagement record.

Symbl.ai Insights Next Steps

Objections

Objections is a unique component available only if the developer selects “conversationType”: “sales” as part of the API call. It highlights statements within a customer’s conversation that the AI model deems to be forward-motion blockers.

Symbl Insights Objections

Questions and Answers

Q&A highlights general questions and answers within a conversation record.

Symbl Insights Questions and Answers

We have more exciting news and information to share in our Introducing Call Score API blog; please check it out.

To learn more about Insights UI, please read our technical documentation.

Follow our API Reference and try out the APIs on Postman.

The post Introducing a New Gen AI Powered Pre-Built Experience for Call Insights appeared first on Symbl.ai.

Introducing Call Score API
https://symbl.ai/developers/blog/introducing-call-score-api/
Tue, 18 Jul 2023 17:43:08 +0000


The Symbl.ai Conversation Intelligence Platform empowers developers and enterprise builders to use AI to optimize a broad range of business conversations using purpose-built APIs and flexible UIs. Our technology enables businesses to leverage AI-augmented experiences to improve enterprise productivity. Today we’re excited to announce a new programmable API addition, Call Score, to our REST API portfolio, aimed at helping developers innovate quickly with Symbl’s platform.

Previously, if a developer wanted to create a call-analysis solution using Symbl’s API library, they had to start with Symbl’s individual APIs. While flexible and powerful, starting with individual APIs has a downside: the amount of coding and development time required. Not anymore. Today Symbl.ai makes available a new low-code API option, Call Score, designed to expedite speed-to-value for developers. Our new API embodies Symbl.ai’s programmability and customizability API design philosophies—significantly reducing developer effort while offering adaptive solutions for a broad range of business scenarios.

Symbl.ai APIs

Call Score 

The Call Score API provides a numerical assessment, along with explanations, of conversation quality and participant performance at scale. It produces a single numerical score for each conversation, making it easier for users to identify and compare similar conversations.

Key Benefits of Call Score:

  • Automation at Scale: The Call Score API makes it easy for developers to automate call assessments at scale, significantly reducing the labor, cost, and time required compared to human-led reviews.
  • Adapts to Business Context: Powering the Call Score API is Symbl’s Nebula large language model (LLM), which can accept additional input information to adapt how it processes different task instructions and business scenarios. Furthermore, Call Score can capture continuously changing information context during a two-way or multi-party human conversation without explicit instructions from the app or developer.
  • Context-Rich Output: The Call Score API provides context-rich, unbiased explanations accompanying each score, in JSON format.

The biggest differentiating advantage of our Call Score API comes from our generative AI. Symbl’s LLM immerses itself in each conversation, analyzing the multiple conversational dimensions described above to reach the most accurate, human-like conclusion with speed, accuracy, and repeatability. Similar AI solutions may only perform keyword matching and present a score based on match rate; Symbl’s LLM goes deep into a conversation, understands its context, and presents an evaluation score with clear and precise supporting explanations. This level of human-like reasoning is one of Symbl.ai’s key advantages.

Symbl.ai Call Score Enterprise Accelerator

High Level View of Call Score’s Workflow:

Symbl.ai Call Score API Diagram

We designed the Call Score API with an easy-to-understand structure. Developers and builders can integrate Call Score with any application in just two steps:

Step 1 – Process Conversation

Step 2 – Get Call Score

Example: GET Call Score

GET

https://api.symbl.ai/v1/conversations/{conversationId}/callscore

Symbl.ai Call Score Code

Call Score Criteria Explained

Call Score API currently supports two types of conversations: Sales and General. Each type has its own set of criteria for evaluation.

Criteria for Sales calls:
  • Communication and Engagement: Gauges the effectiveness of a sales representative’s communication style and their ability to collaboratively engage with the prospect.
  • Question Handling: Assesses the sales representative’s ability to answer questions and handle objections from the prospect.
  • Sales Process: Measures how well the sales representative adheres to the organization’s sales process and protocols. This involves checking if the representative is following the BANT methodology in qualifying a prospect.
  • Forward Motion: Evaluates the sales conversation regarding how it advanced a sales opportunity.

Criteria for General calls:
  • Communication and Engagement: Evaluates the conversation based on parameters such as politeness, empathy, and active listening.
  • Question Handling: Assesses the participants’ ability to address questions effectively.
Select ‘Sales’ Conversation Type with Call Score

POST https://api.symbl.ai/v1/process/video/url
{
    "url": "https://my-conversation-url",
    "conversationType" : "sales",
    "features": {
        "featureList": ["callScore"]
    }
}

To further increase the precision of the actionable insights generated by Call Score, Opportunity Stage is incorporated as another contextual layer in Call Score’s AI evaluation dimensions. The Opportunity Stage takes into consideration business context and intent changes during a dynamic conversation engagement journey, then adapts the AI model’s scoring behavior accordingly. Six opportunity stages are supported at launch – Qualification, Discovery, Demo, Proposal, Negotiation, and General – each with an adjusted scoring-weight model tailored to perform the most objective and relevant score evaluation.

Process Call Record with ‘Demo’ Opportunity Stage Bias

POST https://api.symbl.ai/v1/process/audio/url
{
    "url": "https://my-conversation-url",
    "conversationType" : "sales",
    "features": {
        "featureList": ["callScore"]
    },
    "metadata": {
        "salesStage": "Demo"
    }
}

We have more exciting news and information to share in our Introducing Insights UI API blog; please check it out.

To learn more about Call Score, please read Symbl.ai Call Score’s technical documentation.

Follow our API reference and try out the APIs on our platform.

The post Introducing Call Score API appeared first on Symbl.ai.
