How to Build an AI Copilot with Symbl.ai https://symbl.ai/developers/blog/how-to-build-an-ai-copilot-with-symbl-ai/ Thu, 16 Nov 2023 00:27:23 +0000

AI Copilots have revolutionized the way we work. They have become indispensable everyday companions for enhancing productivity, creativity, and skill sets across various domains. In this blog, we’ll cover how you can create your own AI Copilot using Symbl.ai’s Nebula LLM, Nebula Embeddings, and other platform capabilities such as ASR and Trackers. 

To illustrate the process, we will focus on creating an AI Sales Copilot that is adapted to your domain and business needs. The AI Sales Copilot will automatically generate meeting notes, enrich CRM data on the go, evaluate sales reps’ performance, answer sales queries, and assist the sales team in real time during prospect calls.

Generate Meeting Notes

In the fast-paced world of sales, taking comprehensive meeting notes during client interactions can be a distracting chore. Moreover, the post-meeting process of converting scribbled notes into a format suitable for CRM entry compounds the workload. 

An AI Sales Copilot that automatically generates meeting notes addresses this challenge. There are two ways to achieve this with Symbl.ai:

  1. Out of the box: As a swift and efficient solution, you can employ Symbl.ai’s prebuilt Insights UI, allowing you to generate meeting notes for your audio and video calls with minimal API calls. Powered by Symbl.ai’s Nebula LLM, Insights UI provides an intuitive interface replete with key details such as meeting summaries, action items, Q&A, objections, and sentiment analysis. Learn more from our previous blog that goes into details of Insights UI.
  2. Customization: For those who prefer a tailored approach, Nebula LLM offers flexibility. With just a few prompts, you can craft meeting notes that precisely match your preferred format and style, ensuring that your notes align seamlessly with your sales process and workflows, as sketched below.
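
As a rough illustration of the customization path, the sketch below sends a transcript to Nebula's generate endpoint with a notes-formatting instruction, written in Python to mirror the Nebula examples later in this archive. The API key, the transcript placeholder, and the exact notes format are assumptions you would replace with your own.

import requests

# Minimal sketch: ask Nebula to produce meeting notes in a custom format.
# The API key and transcript below are placeholders (assumptions).
url = "https://api-nebula.symbl.ai/v1/model/generate"
headers = {"Content-Type": "application/json", "ApiKey": "YOUR_NEBULA_API_KEY"}

payload = {
    "prompt": {
        "instruction": (
            "Write meeting notes with three sections: Summary, Decisions, "
            "and Action Items (with owners). Use bullet points in each section."
        ),
        "conversation": {"text": "PASTE THE MEETING TRANSCRIPT HERE"},
    }
}

response = requests.post(url, headers=headers, json=payload)
print(response.text)  # the generated meeting notes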

Enrich CRM data

Maintaining an accurate and up-to-date Sales CRM system is paramount for successful sales teams. Sales representatives are often burdened with the time-consuming task of manually updating and managing customer data, resulting in incomplete and inaccurate CRM data.

Bring in your own AI Sales Copilot to automate CRM enrichment, extracting vital information from sales calls about contacts within prospect companies and generating concise opportunity summaries. This automation significantly alleviates the manual effort and ensures that your CRM remains a reliable and current resource.

The following architecture diagram illustrates how you can use multiple components from Symbl.ai to automate CRM enrichment:

CRM enrichment with Nebula LLM

Evaluate Sales Performance

Evaluating the performance of sales representatives on an ongoing basis is challenging, as it involves manually tracking numerous variables, from communication skills to adherence to the sales process, which can be time-consuming and prone to biases. 

Automating the evaluation process streamlines the task, saving time and providing consistent, data-driven insights, enabling quicker identification of areas for improvement and more efficient coaching.

With Symbl.ai, you have two approaches to enable the AI Sales Copilot to perform automatic performance evaluations:

  1. Out of the box: Symbl.ai’s Call Score API provides a pre-configured scoring system for assessing sales reps’ performance based on predefined criteria such as Communication & Engagement, Forward Progression, Sales Process, and Question Handling. The results are also available on the pre-built Insights UI. To learn more about Call Score API, refer to our previous blog on Call Score.
  2. Customization: If the criteria or the scoring methodology of the Call Score API does not work for you, use Nebula LLM to score your sales calls according to your custom criteria and sales process. For instance, if your organization adheres to a particular sales methodology like MEDDIC, you can craft a prompt tailored to the MEDDIC framework (as sketched below), resulting in a call score that aligns precisely with your unique criteria.
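
For example, here is a minimal sketch of such a custom scoring prompt, again using the Nebula generate endpoint. The MEDDIC criteria list, the 1 to 10 scale, and the placeholders are illustrative assumptions; adapt them to your own methodology.

import requests

# Minimal sketch: score a sales call against a MEDDIC-style rubric with Nebula.
# The criteria list and scoring scale are assumptions, not a fixed API contract.
url = "https://api-nebula.symbl.ai/v1/model/generate"
headers = {"Content-Type": "application/json", "ApiKey": "YOUR_NEBULA_API_KEY"}

criteria = ["Metrics", "Economic Buyer", "Decision Criteria",
            "Decision Process", "Identify Pain", "Champion"]

instruction = (
    "Score this sales call from 1 to 10 on each of the following criteria, "
    "with a one-sentence justification per criterion: " + ", ".join(criteria)
)

payload = {
    "prompt": {
        "instruction": instruction,
        "conversation": {"text": "PASTE THE SALES CALL TRANSCRIPT HERE"},
    }
}

print(requests.post(url, headers=headers, json=payload).text)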

Sales Q/A bot

Sales representatives often grapple with delays and inefficiencies when responding to prospect inquiries. Such challenges can lead to missed opportunities, reduced customer satisfaction, and diminished chances of closing deals successfully. 

A Sales Q/A bot as part of your Copilot will serve as an everyday companion to answer all queries that sales representatives have on a day to day basis. To build a Q/A bot, the first step is to construct a knowledge base capable of efficiently addressing a myriad of prospect inquiries, spanning product information, customer interactions, internal meetings, and other pertinent data. Symbl.ai’s Embeddings API plays a pivotal role in converting this wealth of information into vectors, which in turn enables efficient semantic search as shown in the architecture diagram below: 

Vectorize domain knowledge with Nebula Embeddings

Once you have your domain knowledge available in an efficient retrieval system, employ Nebula LLM to utilize the domain knowledge when answering sales representatives’ questions. This technique is popularly known as Retrieval Augmented Generation (RAG):

RAG with Nebula

Real-time Assistant

Building on the knowledge base that underpins the Sales Q/A bot above, the subsequent step is to infuse real-time conversations between sales representatives and prospects with this invaluable resource. 

Symbl.ai’s Trackers come into play here, capturing instances where sales representatives may require immediate assistance, such as mentions of competitors, objections, and new product questions. By querying the knowledge base in real time, the AI-driven assistant can supply instant answers, enabling sales representatives to capitalize on the crucial moment when the prospect’s attention is most engaged.

To dive deeper into how to build a real-time assistant as part of your AI Sales Copilot, refer to our detailed blog on Real Time Assist.

Conclusion

To sum it up, we’ve explored how to build an AI Sales Copilot that is tailored to suit the specific needs of your business. Each component of the Copilot can be developed independently, granting you the flexibility to begin where you find most compelling. The architectural frameworks we’ve discussed in this blog can be readily adapted to diverse use cases, spanning contact centers, support teams, and recruitment. If you’re eager to learn more about how to construct your very own AI Copilot, contact us.

Implementing Retrieval-Augmented Generation (RAG) with Nebula: A Comprehensive Guide https://symbl.ai/developers/blog/implementing-retrieval-augmented-generation-rag-with-nebula-a-comprehensive-guide/ Mon, 13 Nov 2023 18:09:50 +0000

Overview

Retrieval-Augmented Generation (RAG)1 is revolutionizing the way we think about machine learning models for natural language processing. By combining retrieval-based and generative models, RAG offers highly contextual, domain-specific responses. This guide will walk you through the steps to implement RAG using Nebula LLM, the Nebula Embedding API, and vector databases.

Why RAG? Comparison to Traditional Models

Generative Models: Generative models, like GPT, are trained on a large corpus of data but do not have the ability to pull in real-time external information. They rely solely on the data they were trained on, which can sometimes be a limitation for tasks that require up-to-date or specialized information.

Retrieval Models: These models are good at pulling specific pieces of information from a large corpus but may not be adept at synthesizing or generating new text based on that information.

RAG: Combines the best of both worlds. It can retrieve real-time or specialized information from an external corpus and then use a generative model to create coherent and contextually relevant text.

Use-cases of RAG

Question Answering

  • Open-Domain QA: When you have a corpus of data that spans multiple subjects or domains, RAG can be used to answer questions that may require piecing together information from multiple documents
  • FAQ Systems: For businesses that have frequently asked questions, RAG can automatically pull the most up-to-date answers from a corpus of documents
  • Research Assistance: In academic or professional research, RAG can help find and compile information from various sources

Document Summarization

  • Executive Summaries: RAG can compile executive summaries of long reports by pulling key insights from latest data
  • News Aggregation: It can pull from multiple news sources to generate a comprehensive summary of a current event
  • Legal Document Summaries: In law, summarizing lengthy contracts or case histories can be efficiently performed

Chatbots

  • Customer Service: A RAG-based chatbot can pull from a knowledge base to answer customer queries, reducing the load on human agents
  • Technical Support: For software or hardware troubleshooting, RAG can pull from a database of common issues and solutions
  • Personal Assistants: RAG can make virtual personal assistants more versatile by enabling them to pull from an extensive database of information

Data Analysis

  • Market Research: RAG can analyze a large corpus of customer reviews, social media mentions, etc., to provide insights
  • Financial Analysis: It can pull historical data and analyst reports to generate insights into stock trends or company performance
  • Healthcare Analytics: In healthcare, RAG can analyze medical records, research papers, and clinical trial data for analytics

Content Generation

  • Article Writing: Journalists or content creators can use RAG to automatically draft articles that incorporate the latest data or references
  • Report Generation: In corporate settings, RAG can generate quarterly or annual reports by pulling data from various internal databases
  • Educational Content: For educational platforms, RAG can generate quizzes, summaries, or study guides based on a corpus of educational material

RAG’s Application in Specific Industries

  • Customer Support: In a customer support setting, RAG can assist agents by pulling from a knowledge base to provide more informed and precise answers to customer queries
  • Healthcare: In healthcare, RAG can assist in pulling patient histories, medical journals, or drug interactions to assist medical professionals
  • Finance: RAG can be used to pull real-time financial data or historical trends to assist in decision-making processes
  • Legal: RAG can assist in document discovery processes by retrieving relevant case law or statutes
  • Retail and E-commerce: RAG can be used to generate personalized product descriptions or recommendations based on user behavior and other external data points

Data Safety in Enterprises with RAG

  • Data Isolation: In an enterprise setting, the corpus of data from which the retriever pulls information can be isolated and secured within the company’s own infrastructure
  • Access Control: Fine-grained access controls can be applied to the data sources to ensure that only authorized models or users can access the sensitive information
  • Data Encryption: Data can be encrypted at rest and in transit to ensure confidentiality and integrity
  • Audit Trails: All data retrieval and generation events can be logged to provide an audit trail for compliance purposes
  • Data Masking: Sensitive information can be masked or redacted in the retrieved documents before they are used for generating responses

Implementation

Prerequisites

  • Check access to Nebula. If you do not have access, you can sign up here
  • Generate an access token

Building a Vector Database of Transcripts

Building a vector database of transcripts
  • Build the Database of Transcripts: Transcribe calls using Symbl.ai Async API and store transcriptions in a database, e.g., S3.
  • Chunk Conversations: Break each transcript into meaningful parts. This could be length-based or context-based. Here is a resource on chunking strategies.
  • Create Embeddings: Use Nebula embedding API to create embeddings for each chunk. Use the below code to create embeddings.
curl --location 'https://api-nebula.symbl.ai/v1/model/embed' \
--header 'ApiKey: <api_key>' \
--header 'Content-Type: application/json' \
--data '{
    "text": "Dan: Definitely, John. The first feature we'\''re introducing ...."
}'
  • Store the Embeddings: Once the embeddings are created, store them in a vector database for retrieval based on queries to the LLMs, along with the transcript chunks and any required metadata useful for adding additional criteria while querying (e.g., customer ID, date/time).
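
The steps above can be tied together in a few lines of code. The sketch below (Python, to match the Nebula example elsewhere in this archive) embeds each chunk with the Nebula Embedding API and stores the vectors in a local FAISS index. The "embedding" response field name, the sample chunks, and the metadata fields are assumptions; check the Embedding API guide for the exact response shape.

import requests
import numpy as np
import faiss  # pip install faiss-cpu

EMBED_URL = "https://api-nebula.symbl.ai/v1/model/embed"
HEADERS = {"Content-Type": "application/json", "ApiKey": "YOUR_NEBULA_API_KEY"}

def embed(text):
    """Embed one transcript chunk with the Nebula Embedding API.
    NOTE: the 'embedding' response field is an assumption; confirm the
    actual response shape in the Embedding API guide."""
    resp = requests.post(EMBED_URL, headers=HEADERS, json={"text": text})
    resp.raise_for_status()
    return np.array(resp.json()["embedding"], dtype="float32")

# Example chunks; in practice these come from your chunking step above.
chunks = [
    "Dan: Definitely, John. The first feature we're introducing ...",
    "John: Great. And how does pricing change for existing customers?",
]

vectors = np.vstack([embed(c) for c in chunks])
faiss.normalize_L2(vectors)                  # cosine similarity via inner product
index = faiss.IndexFlatIP(vectors.shape[1])  # simple in-memory vector index
index.add(vectors)

# Keep the chunk text and metadata alongside the index for later retrieval.
metadata = [{"chunk": c, "customer_id": "acme-123"} for c in chunks]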

Note: Check the Appendix for more details on vector databases2

For more information about using embeddings, see the Embedding API guide.

Generate Response from Nebula LLM

Generating a response from Nebula LLM
  • Query Processing: The user’s query is sent from the UI to the Nebula Embedding API, where Nebula’s embedding model processes it to create a query vector.
  • Vector Matching: This query vector is matched against stored vectors in the vector database to find similar data from conversations based on the query.
  • Context Construction: The matched vectors and the corresponding transcript text chunks stored in the vector database are used to build the prompt context for Nebula LLM. The prompt with this context is then passed to Nebula LLM by your server.
  • Response Generation: The pre-processed query string, along with the newly constructed context from the vector database, is sent to Nebula LLM, which analyzes the context and the query to generate a relevant response. This response is displayed in the user’s UI.
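
Putting the four steps together, a minimal query-time sketch looks like the following. It reuses the embed function, index, and metadata list from the indexing sketch above; the instruction wording and the top-k value are assumptions you can tune.

import requests
import faiss

# Continues from the indexing sketch above (embed, index, metadata).
GENERATE_URL = "https://api-nebula.symbl.ai/v1/model/generate"
HEADERS = {"Content-Type": "application/json", "ApiKey": "YOUR_NEBULA_API_KEY"}

def answer(question, top_k=3):
    # 1. Query processing: turn the question into a query vector.
    query_vec = embed(question).reshape(1, -1)
    faiss.normalize_L2(query_vec)

    # 2. Vector matching: find the most similar transcript chunks.
    _, ids = index.search(query_vec, top_k)
    context = "\n".join(metadata[i]["chunk"] for i in ids[0])

    # 3. Context construction and 4. response generation with Nebula.
    payload = {
        "prompt": {
            "instruction": "Answer the question using only the conversation "
                           "context provided. Question: " + question,
            "conversation": {"text": context},
        }
    }
    resp = requests.post(GENERATE_URL, headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.text

print(answer("What new features were discussed on the call?"))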

Appendix:

  1. Learn more about RAG: https://research.ibm.com/blog/retrieval-augmented-generation-RAG
  2. What kind of vector database?: Options include specialized vector databases like Weaviate, Milvus, Pinecone, Vespa.ai, Chroma, Nomic Atlas, and Faiss. General-purpose databases such as Elasticsearch and Redis, which offer additional metadata support, scale, and speed, can also be considered, as they also provide support for vector embeddings.
  3. Additional Pricing: Open-source databases like Faiss and Milvus are generally free but require manual setup and maintenance. Managed services like Pinecone and cloud-based solutions like Weaviate and Vespa.ai will have a cost based on usage.
  • Here is how you can get started with Milvus 
  • Here is how you can get started with Pinecone

Real-Time Assist with Generative AI: Powered by Nebula LLM https://symbl.ai/developers/blog/real-time-assist-with-generative-ai-powered-by-nebula-llm/ Wed, 01 Nov 2023 23:30:42 +0000

Unleashing the Power of Conversations and Generative AI for Instant Support for Sales and Customer Support teams

We are in a world where immediate, personalized support is not just desired but expected. Providing fast and personalized assistance has always been important across industries, and it has become especially critical in sales and customer support, where the stakes are high and every interaction counts.

Sales representatives are often in high-stakes situations where they need to make quick decisions, access product details, or respond to customer queries on the fly. Any delay or inaccuracy can cost a representative credibility with the prospect and can make or break a deal, losing a potential customer. Meanwhile, in customer support, long wait times and impersonal interactions can lead to customer dissatisfaction and lost opportunities for upselling or retention.

What if you could eliminate these bottlenecks and elevate your sales and support experiences to new heights?

Here is Symbl.ai’s Real-Time Assist! This isn’t just another customer service tool; it’s an automated assistant designed to understand your specific needs—such as immediate answers to queries, real-time guidance during tasks, and efficient customer service—and provide tailored assistance in real-time.

The Technology Behind Real-Time Assist

Symbl.ai’s Real-Time Assist is powered by Generative AI with Nebula LLM, along with the Web SDK and Trackers. From streaming conversations to indexing your knowledge base, Real-Time Assist is built to provide the most accurate and timely assistance.

How Does It Work?

Index the Knowledge Base

Break Down Documents: Your knowledge base may contain a wide range of topics. Break these down into smaller, meaningful chunks.

Vectorize Text: Use Symbl.ai’s Nebula Embedding API to convert these text chunks into vectors via the Nebula embedding model.

Data Storage: Store these indexed vectors and their associated content in a datastore for retrieval based on triggers.

For more on embeddings, check out the Embedding API guide.

Configure the Triggers

Automatic Detection: Symbl.ai detects questions and trackers during an ongoing conversation between the Customer Service Agent (CSA) and the customer.

Customization: Trackers can be configured by customer success managers to identify phrases like ‘competitor mentions’ or ‘overcharge’.

Out-of-the-Box Trackers: Symbl.ai provides 40 default trackers, both general and specific to contact centers.

For more on trackers, see the Trackers guide.
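
As a hedged illustration of the customization step, the sketch below creates a custom tracker through Symbl.ai's Tracker Management API, using an access token from the standard authentication endpoint. The endpoint path, the payload shape, and the vocabulary shown here are assumptions; verify the exact request schema in the Trackers guide.

import requests

# Hedged sketch: create a custom "Competitor Mention" tracker.
# The endpoint and payload shape are assumptions; check the Trackers guide.
ACCESS_TOKEN = "YOUR_SYMBL_ACCESS_TOKEN"  # from https://api.symbl.ai/oauth2/token:generate

tracker = {
    "name": "Competitor Mention",
    "vocabulary": [
        "we are also looking at other vendors",
        "how do you compare to",
        "your competitor",
    ],
}

resp = requests.post(
    "https://api.symbl.ai/v1/manage/trackers",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
    json=tracker,
)
print(resp.status_code, resp.text)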

Stream the Conversation to Symbl.ai

Bi-Directional Stream: Use Symbl.ai’s Web SDK to stream the conversation and display knowledge base results to the CSA.

SDK Installation: Install the Web SDK with a simple npm command and import the latest version.

Event Identification: During the support conversation, when a question or tracker is identified, events are triggered along with the callback response object.

For more on Web SDK implementation, see the Web SDK reference.

Core Features:

Instant Feedback: Get immediate responses to your queries.

Contextual Assistance: Receive support that understands the context of your needs.

Problems Solved by Real-Time Assist:

User Friction: No more searching for help. Real-Time Assist is there when you need it.

Support Efficiency: Complete tasks faster with real-time guidance.

User Experience: Feel understood and supported, enhancing overall satisfaction.

Support Costs: Reduce the need for human intervention, saving on support costs.

Real-Time Assist in Action: Use Cases

For Sales Teams:

Instant Information Access: Get product details, pricing, and competitor information at your fingertips.

Reduced Response Time: Let the AI handle initial queries, freeing you to focus on closing deals.

For Customer Support Teams:

Knowledge Base Access: Instantly pull up articles or solutions, improving first-call resolution rates.

Scripting Assistance: Get AI-suggested scripts based on customer queries.

Compliance Monitoring: Ensure all conversations adhere to industry regulations.

Why Choose Real-Time Assist?

User Retention: Keep your customers coming back with an unmatched user experience.

Increased Revenue: Convert more leads and prospects with streamlined processes and instant access to the right data.

Data-Driven Insights: Make informed decisions with valuable user data.

Scalability: Easily scale to accommodate a growing user base.

How Does It Work?

As interactions between customers and representatives unfold, Real-Time Assist identifies questions, topics, and pre-set markers—such as “payment issues” or “technical support”—that serve as triggers during the conversation.

Utilizing the Nebula Embedding API, these triggers are transformed into vectors, which are then matched against a pre-existing vector database from your knowledge base to find contextual similarities. Once a match is found, the associated content is sent to Nebula LLM. Nebula then synthesizes this information to generate the most relevant and accurate response based on the identified trigger.

This Real-Time, AI-generated guidance is then sent to your backend server via Web SDK and displayed directly on the representative’s dashboard, ensuring that they have the best possible information at their fingertips, exactly when they need it.

Interested in implementing Real-Time Assist for your teams? Here is the step-by-step “How-To” guide for you.

Extract Insights using Symbl.ai’s Generative AI for Recall.ai Meetings https://symbl.ai/developers/blog/extract-insights-symbl-ai-generative-ai-recall-ai-meetings/ Wed, 26 Jul 2023 14:48:31 +0000

We are thrilled to announce an exciting partnership between Symbl.ai, the leader in Understanding and Generative AI for Conversations, and Recall.ai, the universal API for meeting bots. This collaboration marks a significant milestone in empowering developers and organizations to extract actionable insights from their meeting data like never before. By combining the advanced capabilities of Recall.ai with the cutting-edge technology of Symbl.ai and Nebula, our recently announced conversation LLM, we are revolutionizing how businesses understand and leverage their conversational data.

On the heels of that announcement, we will walk you through the step-by-step process of integrating Symbl.ai with Recall.ai. By following these instructions, you can leverage the combined capabilities of Recall.ai and Symbl.ai to unlock actionable insights from your meetings, enabling you to generate new content and identify actions and insights.

Integration Steps:

Step 1: Obtain the Symbl.ai API authorization credentials

  • If you haven’t signed up for a free Symbl.ai account, navigate to the Symbl.ai Platform Sign Up page and create your free account. No credit card or payment information is needed!
  • Sign in to your Symbl.ai account and retrieve your Symbl.ai API credentials (App ID and App Secret) from the platform home page.
  • If you aren’t familiar with the Symbl Platform capabilities, be sure to take a look at the API Playground to take a number of our conversation insight capabilities for a spin, such as sentiment analysis, topic extraction, trackers, and more.
  • If you plan on following along to invoke the API calls via the cURL examples in a terminal window, set the environment variables for each of these as follows:
export SYMBLAI_APP_ID="<YOUR_SYMBLAI_APP_ID>"
export SYMBLAI_APP_SECRET="<YOUR_SYMBLAI_APP_SECRET>"

Step 2: Set up your Recall.ai account

  • Log in to your Recall.ai account and navigate to API Keys if needed.
  • Click the Generate API Key, provide a name for your key, and click Create.
  • Familiarize yourself with Recall.ai’s API endpoints and documentation, which will be used to process audio or video files and retrieve transcriptions.
  • Again, if you plan on exercising the cURL commands, set the environment variable for Recall.ai’s API Key as follows:
export RECALLAI_API_KEY="<YOUR_RECALLAI_API_KEY>"

Step 3: Start a Meeting and have a notetaking bot join

  • The Recall.ai platform supports numerous CPaaS platforms, but we will use Zoom for this example. Start a Zoom meeting and take note of the meeting URL. If you need help finding this, click the little green shield with the checkmark.
  • Instruct a Recall.ai bot to join the Zoom meeting by calling the following API:
curl -X POST https://api.recall.ai/api/v1/bot \
    -H 'Authorization: Token '$RECALLAI_API_KEY'' \
    -H 'Content-Type: application/json' \
    -d '{
          "meeting_url": "https://symbl-ai.zoom.us/j/86101209066?pwd=Z2V1QzJLbGRUZEJrVlk5Zy9PbE1udz09",
          "bot_name": "Bot",
          "transcription_options": {
            "provider": "symbl"
          }
        }'

When you create your Recall.ai bot, you should get back some JSON about that bot. Take note of the Bot ID below on line 2.

{
    "id": "657365b8-04b7-41ba-bad6-3de0da346bb8",
    "video_url": null,
    "recording": "482d3192-6b4b-49ca-aadb-c0cf6957e187",
    "status_changes": [
        {
            "code": "ready",
            "message": null,
            "created_at": "2023-06-20T23:27:28.926697Z",
            "sub_code": null
        }
    ],
    "meeting_metadata": null,
    "meeting_participants": [],
    "meeting_url": null,
    "join_at": null,
    "calendar_meetings": []
}

And then export that ID to an environment variable called RECALLAI_BOT_ID in your terminal like this:

export RECALLAI_BOT_ID="<YOUR_RECALLAI_BOT_ID>"

 

  • Next, say a few sentences. Perhaps give a brief introduction of yourself, where you live, and what your hobbies are. Then instruct the Recall.ai bot to leave by invoking the API call below and terminating the meeting.
curl --request POST \
     --url 'https://api.recall.ai/api/v1/bot/'$RECALLAI_BOT_ID'/leave_call/' \
     --header 'Authorization: Token '$RECALLAI_API_KEY'' \
     --header 'accept: application/json'

Note: Instructing the bot to leave isn’t required. We include this step for completeness to exercise all the Recall.ai APIs.

Step 4: Transcribing the Meeting

There are two options for doing transcription for this meeting.

Option 1: You can use either the Recall.ai API or the Symbl.ai API. To use the Recall.ai API:

  • Give the platform a few moments to finish the transcription. Take the Bot ID from the previous API call and execute the following to obtain the transcription for this conversation.
curl --request GET \
     --url 'https://api.recall.ai/api/v1/bot/'$RECALLAI_BOT_ID'/transcript/' \
     --header 'Authorization: Token '$RECALLAI_API_KEY'' \
     --header 'accept: application/json'

Option 2: Obtaining the transcription using Symbl.ai:

  • To obtain the transcription from the Symbl Platform, we will need the Recall.ai Recording ID first. You can get the ID by invoking this API:
curl --request GET \
     --url 'https://api.recall.ai/api/v1/bot/'$RECALLAI_BOT_ID'' \
     --header 'Authorization: Token '$RECALLAI_API_KEY'' \
     --header 'accept: application/json'

You should get back JSON that looks similar to this:

{
    "id": "657365b8-04b7-41ba-bad6-3de0da346bb8",
    "video_url": "...some URL…",
    "recording": "482d3192-6b4b-49ca-aadb-c0cf6957e187",
    "media_retention_end": "2023-06-28T00:20:26.904067Z",
    "status_changes": [
        {
            "code": "ready",
            "message": null,
            "created_at": "2023-06-20T23:27:28.926697Z",
            "sub_code": null
        },
       …
        {
            "code": "done",
            "message": null,
            "created_at": "2023-06-21T00:20:26.904067Z",
            "sub_code": null
        }
    ],
    "meeting_metadata": {
        "title": "David vonThenen's Zoom Meeting"
    },
    …
}

Take note of the Recording ID on line 4 and export that value to an environment variable like this:

export RECALLAI_RECORDING_ID="<YOUR_RECALLAI_RECORDING_ID>"

  • Then to retrieve the transcription, we will need the Symbl.ai conversation ID. You can retrieve that using the following Recall.ai API:
curl --request GET \
     --url 'https://api.recall.ai/api/v2/recordings/'$RECALLAI_RECORDING_ID'' \
     --header 'Authorization: Token '$RECALLAI_API_KEY'' \
     --header 'accept: application/json'

You should get some JSON back that looks something like this:

{
    "id": "482d3192-6b4b-49ca-aadb-c0cf6957e187",
    "outputs": [
        {
            "id": "d297b8c8-6990-4ab2-8030-0fc797a21d2d",
            "type": "active_speaker_diarized_transcription_symbl",
            "metadata": {
                "connection_id": "c5d6b9c4-3194-4126-b30c-b3f4ddcd40d4",
                "conversation_id": "6207479207952384"
            },
            "endpoints": []
        }
    ],
    "created_at": "2023-06-21T00:11:34.372026Z",
    "expires_at": null
}

 

Take note of the Conversation ID on line 9 and export that to an environment variable like this:

export SYMBLAI_CONVERSATION_ID="<YOUR_SYMBLAI_CONVERSATION_ID>"

  • Login to the Symbl Platform by running the following command:
curl --url https://api.symbl.ai/oauth2/token:generate \
  --header 'Content-Type: application/json' \
  --data '{
    "type": "application",
    "appId": "'"$SYMBLAI_APP_ID"'",
    "appSecret": "'"$SYMBLAI_APP_SECRET"'"
}'

The resulting JSON should look like this:

{
    "accessToken": "...YOUR_ACCESS_TOKEN…",
    "expiresIn": 86400
}

Take note of the accessToken and export it to an environment variable like this:

export SYMBLAI_ACCESS_TOKEN="<YOUR_SYMBLAI_ACCESS_TOKEN>"

 

  • Take that Access Token and Conversation ID and execute the following curl command to get the Symbl Platform transcription of the same meeting.
curl --request GET --url 'https://api.symbl.ai/v1/conversations/'$SYMBLAI_CONVERSATION_ID'/messages' \
     --header 'accept: application/json' \
     --header 'authorization: Bearer '$SYMBLAI_ACCESS_TOKEN''

Step 5 (BONUS): Get Conversation Intelligence with Symbl.ai

Now that you are logged into the Symbl Platform and have the Conversation ID, you can get other conversation insights.

  • If you want to obtain the Trackers discussed in the conversation, make the following API call:
curl --request GET --url 'https://api.symbl.ai/v1/conversations/'$SYMBLAI_CONVERSATION_ID'/trackers' \
     --header 'accept: application/json' \
     --header 'authorization: Bearer '$SYMBLAI_ACCESS_TOKEN''

  • If you want to obtain a Summary of the conversation using Symbl.ai’s summary AI models, execute the following API call:
curl --request GET --url 'https://api.symbl.ai/v1/conversations/'$SYMBLAI_CONVERSATION_ID'/summary' \
     --header 'accept: application/json' \
     --header 'authorization: Bearer '$SYMBLAI_ACCESS_TOKEN''

Step 6 (MEGA BONUS): Using Nebula LLM to Extract More Conversation Data

If you missed the announcement for Nebula’s Private Beta, you can obtain even more conversation data and insights by leveraging Nebula to make queries against Symbl.ai’s Generative AI. If you haven’t already requested beta access, you can do so via the landing page for the Nebula Playground.

  • Once you have your Nebula ApiKey, you can make API calls to Nebula. For example, if you used a prompt such as “Identify the main objectives or goals mentioned in this context concisely in less points. Emphasize on key intents.”, the cURL command would look like this:
curl --location 'https://api-nebula.symbl.ai/v1/model/generate' \
--header 'Content-Type: application/json' \
--header 'ApiKey: '$SYMBLAI_NEBULA_TOKEN'' \
--data '{
    "prompt": {
        "instruction": "Identify the main objectives or goals mentioned in this context concisely in less points. Emphasize on key intents.",
        "conversation": {
            "text": "DROP IN CONVERSATION TEXT HERE"
        }
    },
    "return_scores": false,
    "max_new_tokens": 2048,
    "top_k": 2,
    "penalty_alpha": 0.6
}'
  • The potential of Nebula is endless. Some examples of other such prompts could be:

What could be the customer’s pain points based on the conversation?

What sales opportunities can be identified from this conversation?

What best practices can be derived from this conversation for future customer interactions?

Those are just some examples of prompts that could be used. In my opinion, aspects where Nebula really shines are in the areas of summarization, identifying critical pieces of conversation data from extremely large conversations, and using Nebula to act as a consultant to “interrogate” the conversation to extract even more insights.

Success! You have just integrated Recall.ai with Symbl.ai! Partaaaayy!

Remember to refer to the API documentation provided by both Symbl.ai and Recall.ai for additional features, parameters, and examples to further extend the capabilities of this integration. I highly recommend taking advantage of Symbl.ai’s conversation APIs and, to get the most out of your conversations, leveraging Nebula to elevate your conversation understanding.

Stay tuned for future updates and enhancements as both platforms continue to evolve, helping you stay ahead in the conversation analytics space. If you have any questions or need additional help, don’t hesitate to reach out to the Recall.ai and Symbl.ai support teams. Happy conversing!

July 26 Community Meeting: Meet Nebula, Symbl.ai’s LLM https://symbl.ai/developers/blog/july-26-community-meeting-meet-nebula-symbl-ais-llm/ Thu, 20 Jul 2023 14:18:08 +0000

Mark your calendars for the upcoming Symbl.ai Community Meeting this Wednesday, July 26th! We have an exciting announcement to make: the availability of the Symbl.ai LLM called Nebula. If you’re curious about what Nebula has to offer, this is the perfect opportunity to join our vibrant community and expand your knowledge. Don’t miss out on this groundbreaking session!

Symbl.ai Nebula Large Language Model

The discussion topics will include:

  • The Announcement of Nebula, the Symbl.ai LLM
  • What can Nebula do for me?
  • Demo: Walkthrough of the UI
  • Demo: Walkthrough of the API
  • Discuss the benefits of using Nebula
  • Demo: Integration ideas with Nebula

That’s right! We plan on having three demos in this meeting as a sneak preview of what you can expect in this unbelievable private beta release.

If you want more information before the Community Meeting, I invite you to look at this blog post where Symbl.ai’s CTO Toshish Jawale talks about Nebula, its use cases, and highlights of this announcement. If you are interested in participating in the private beta, you can request access through the following link: https://nebula.symbl.ai/playground.

My goal is to do the proverbial “mic drop” with the demo on integration ideas. The aim is to enable others and get their creative juices flowing with unique community-driven integration ideas! We want to see your integrations in action. If you have an exciting integration idea for Nebula, please tweet with the hashtag #NebulaLLM or drop us a line in the Community Slack under #general.

How Do I Join and Participate in the Meeting?

This month’s meeting will take place on Wednesday, July 26th at 11 am PST – Invite (Convert to your Timezone). We invite you to learn more about Nebula, ask questions, and join the discussion!

How do I join the Community Meeting, you ask? Just…

I hope to see you all there!

Symbl.ai Nebula On-Prem Summary Deployment https://symbl.ai/developers/blog/symbl-ai-nebula-on-prem-summary-deployment/ Thu, 20 Jul 2023 03:46:43 +0000

Enhanced Conversation Intelligence, Data Control, and Optimized Performance with Proprietary Data Protection Peace of Mind

Overview

At Symbl.ai, we offer an on-prem deployment option for our Nebula large language model (LLM), enabling organizations to deploy and utilize our capabilities within their own infrastructure. This deployment includes the Summary feature, which provides four types of conversation summaries: short, long, list, and topic-based. By deploying Symbl.ai on-premise, organizations can have greater control over their data protection, ensure compliance with regulatory requirements, and customize the solution to fit their specific needs.

Key Advantages

Symbl.ai’s on-prem summary model stands out due to its unique strengths and advantages:

Optimized Performance

Our language model is based on the state-of-the-art transformer architecture, designed specifically for summarizing long, multi-party conversations across various domains. We have optimized our models to deliver LLM-level quality while utilizing 115 times fewer parameters than GPT-3. This optimization significantly enhances efficiency and processing speed, resulting in average latencies that are 5 times lower than comparable models.

Average Latency in seconds for each model on the same data and hardware configurations (for Symbl, Falcon, and MPT)

Supports long conversations

Our custom transformer-based model architecture addresses the high variance problem inherent in human conversations. This design ensures consistent performance even at longer sequence lengths, allowing you to process multi-party conversation transcripts up to 3 hours in length whereas mainstream comparable models can only process up to 30 mins of conversations.

Model      Max Conversation Length (approx)
Symbl      180 mins (3 hours)
GPT-3      15 mins
GPT-3.5    30 mins
Falcon     15 mins
MPT        15 mins
Max conversation length supported by the model. Conversation length is calculated based on token length supported by the model averaged over the dataset.

Cost-effective

Symbl.ai on-prem summary deployment enables cost-effective hardware utilization. You can deploy our summarization model on single instances of cheaper GPUs, costing as low as a few dollars per day, while still processing up to thousands of conversations daily. This results in an effective hardware cost of a fraction of a cent per conversation, delivering both efficiency and cost savings.

Data Protection

Our on-prem deployment provides enhanced data protection control. By deploying Symbl.ai on-premises, you can manage sensitive information within your own infrastructure. This level of control safeguards against unintended data access, data leaks, or data breaches.

Secure, Resilient, and Scalable

Our solution follows security best practices, ensuring that it does not require root user access. We provide secure containerization, while you are responsible for securing your infrastructure. The Symbl.ai container includes built-in health checks and logging mechanisms, enabling you to monitor the solution’s health and resilience. This built-in resilience ensures that the system remains robust and stable even during high-demand scenarios. Additionally, our container-based deployment architecture enables horizontal scaling, allowing you to handle increased workloads efficiently while maintaining optimal performance.

Conversation Intelligence

The on-prem deployment of Symbl.ai provides organizations with four types of conversation summaries:

  • Short Summary: Provides a concise overview of a conversation, capturing the key points and highlights. It is useful for quick reference and provides a snapshot of the conversation’s main themes.
  • Long Summary: Offers a detailed and comprehensive overview of the conversation, including deeper analysis and nuanced insights. It enables a thorough understanding of the conversation.
  • List Summary: Presents the conversation’s key points in a bullet-point format, making it easy to scan and extract important information. It provides a structured summary that allows for quick reference and review.
  • Topic-Based Summary: Organizes conversation insights based on the main topics or subjects discussed. It helps identify the primary themes covered in the conversation, making it easier to navigate and focus on specific areas of interest.

Getting Started

To get started with the on-prem deployment of Symbl.ai, organizations should ensure the following:

  • Host Environment: Set up a host environment with reliable internet access to facilitate communication with the Symbl server. This communication is necessary for access token generation, usage reporting, and downloading container updates.
  • Container Orchestration Platform: Choose a container orchestration platform that aligns with your organization’s infrastructure and requirements. Popular choices include Kubernetes or Docker Swarm, which provide the necessary tools for managing and deploying containers.
  • Hardware Requirements: Ensure that your infrastructure meets the hardware requirements for deploying and running Symbl.ai containers. This includes having sufficient CPU, memory, and storage resources to support the containerized deployment.

Next Steps

To explore the on-prem deployment options available with Symbl.ai, contact Sales.

Using Symbl’s Generative AI for RevOps – Observability & Conversation Data Fidelity in your CRM https://symbl.ai/developers/blog/using-symbls-generative-ai-for-revops-observability-conversation-data-fidelity-in-your-crm/ Thu, 20 Jul 2023 03:45:26 +0000

Symbl.ai’s RevOps solution enhances productivity for businesses that use CRM platforms like Salesforce, HubSpot, and Pipedrive. Powered by Symbl’s specialized language model Nebula, organizations can address call observability and CRM data fidelity gaps to strengthen their revenue operations.

One of the significant challenges faced by GTM teams is the lack of accurate information in CRM and the manual effort and time involved with effectively monitoring and capturing CRM data, especially in relation to sales calls.

Low CRM data fidelity frequently leads to:

  • Missed sales opportunities
  • Obscured operational KPIs
  • Inaccurate business forecasts
  • Inability to make accurate data driven decisions

Traditional methods of tracking customer conversations often rely on manual entry or subjective feedback from the GTM teams involved. Such methods are vulnerable to data inaccuracies, human biases and information gaps. Organizations that rely on human efforts to improve CRM data integrity often end up with a low benefit cost ratio.

The Symbl RevOps solution, integrated into your CRM, can significantly increase the benefit-cost ratio for organizations looking to address CRM data gaps. Our generative AI processing auto-populates data to your CRM or similar MarTech and RevOps data platforms by extracting critical information from sales call records. Advantages of using Symbl’s generative AI to enrich CRM data include consistency, unbiased data capture, hyperscale, and high cost efficiency.

Organizations that use Symbl quickly realize the following observability impacts:

  • Increased visibility and accuracy with sales reports and revenue forecasts
  • Quality measurement of revenue, marketing and sales KPIs
  • Improved data ‘fidelity’ driven decisions
  • Enhanced sales process optimization through data-driven insights

From an executive lens, this means effectively connecting the dots between customer calls, CRM data, and revenue forecasts, and correctly forecasting revenue growth. Ensuring insights from calls are stitched in as another source for your forecasts accelerates revenue growth. Capturing these insights into the CRM in near real time also empowers organizations to reduce blind spots from customer interactions and to cross-functionally inform strategic decisions with market signals and competitive intelligence, in addition to enabling immediate coaching opportunities.

Extracting call insights using Symbl’s Generative AI can be done easily with any CRM or RevOps platform. Symbl’s RESTful API architecture embodies our philosophies of programmability and speed to value. A simple integration can be done in just a few API calls. Symbl’s solution workflow is represented in the high-level diagram below:

When integrated within a RevOps workflow, Symbl’s generative AI augments the GTM stack by enabling the following use cases:

  1. Fix data gaps critical to the operations of RevOps platform processes
  2. Generate insights and platform data optimization recommendations
  3. Real-time forecast Q&A ‘prompts’
  4. Automated new data record updates
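
To make the last use case concrete, here is a hedged sketch (in Python, matching the Nebula examples elsewhere in this archive) that asks Nebula to pull CRM-ready fields out of a sales call transcript. The field names, the JSON output request, and the placeholders are illustrative assumptions, and the model's output format is not a guaranteed contract.

import requests

# Hedged sketch: extract CRM-ready fields from a call transcript with Nebula.
# Field names and the requested JSON format are illustrative assumptions.
url = "https://api-nebula.symbl.ai/v1/model/generate"
headers = {"Content-Type": "application/json", "ApiKey": "YOUR_NEBULA_API_KEY"}

instruction = (
    "From this sales call, extract the following fields as JSON: "
    "contact_name, contact_role, opportunity_stage, next_step, "
    "competitors_mentioned, objections."
)

payload = {
    "prompt": {
        "instruction": instruction,
        "conversation": {"text": "PASTE THE CALL TRANSCRIPT HERE"},
    }
}

result = requests.post(url, headers=headers, json=payload)
print(result.text)  # map the extracted fields to your CRM's API from here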

To learn more about building generative AI powered revenue intelligence please contact Sales.

Introducing a New Gen AI Powered Pre-Built Experience for Call Insights https://symbl.ai/developers/blog/introducing-a-new-gen-ai-powered-pre-built-experience-for-call-insights/ Tue, 18 Jul 2023 17:44:03 +0000

The Symbl.ai Conversation Intelligence Platform empowers developers and enterprise builders to use AI to optimize a broad range of business conversations using purpose-built APIs and flexible UIs.  Our technology enables businesses to leverage AI augmented experiences to improve Enterprise productivity. Today we’re excited to announce a new programmable API addition, Insights UI, to the Symbl.ai REST API portfolio aimed at helping builders and developers innovate quickly with Symbl powered customizable solutions for Enterprise organizations.

To help developers with speed to value, we’re launching a new low code option “Insights UI” that is designed to work in conjunction with the recently released Call Score API.  Our new API embodies Symbl’s programmability and customizability API design philosophies — significantly reducing developer efforts while at the same time offering solutions that can be adapted to a broad range of business scenarios.

Individual APIs to Low Code APIs

Insights UI

Insights UI API invokes a customizable pre-built UI presentation layer that can be easily integrated into a front-end application.  Insights UI can be invoked with or without the Call Score component.  Insights UI incorporates other data elements, including Sentiment Analysis, Summary, and Questions, which together with Call Score provide deeper insights, transparency, and engagement analysis of the conversation.

Insights UI supports multiple out of the box UX customizations by giving developers control over the look and feel of the UI.  Besides choosing to include or exclude Call Score, developers can also choose between a record list page view versus a concise single record details view.

Insights UI List Page with Call Score

The list view of Insights UI serves as a repository for users’ call records, displaying summarized information in easy-to-navigate cards.  Insights UI API’s ‘list-page’ command returns a list of engagement records under the App ID account.

GET

https://api.symbl.ai/v1/conversations/experiences/insights/list?includeCallScore=true

Insights List with Call Score

Insights UI Details Page with Call Score

Insights UI API’s ‘details-page’ command returns a single specific conversation engagement record with detailed analysis information.

 

GET

https://api.symbl.ai/v1/conversations/experiences/insights/details/{:conversationId}?includeCallScore=true

Symbl.ai Insights Details with Call Score

Insights UI without Call Score

GET

https://api.symbl.ai/v1/conversations/experiences/insights/list?includeCallScore=false

Insight UI without Call Score
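
For reference, here is a minimal sketch of calling both Insights UI endpoints from Python. It assumes a Bearer access token obtained from the standard Symbl.ai authentication endpoint and a placeholder conversation ID; the response is printed as-is rather than assuming a particular shape.

import requests

# Minimal sketch: fetch Insights UI list and details records.
# ACCESS_TOKEN and the conversation ID are placeholders (assumptions).
ACCESS_TOKEN = "YOUR_SYMBL_ACCESS_TOKEN"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# List page: engagement records under the App ID account.
list_url = "https://api.symbl.ai/v1/conversations/experiences/insights/list"
print(requests.get(list_url, headers=headers,
                   params={"includeCallScore": "true"}).text)

# Details page: a single conversation engagement record.
conversation_id = "YOUR_CONVERSATION_ID"
details_url = ("https://api.symbl.ai/v1/conversations/experiences/insights/"
               f"details/{conversation_id}")
print(requests.get(details_url, headers=headers,
                   params={"includeCallScore": "true"}).text)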

Analysis Components Included with Insights UI

Summary

Summary generates an accurate record of the key moments during a conversation.  The feature allows significant time savings to capture critical information from conversation records.

Symbl Summary

Sentiment

Sentiment measures and tracks over time a conversation’s speakers’ emotional engagements.  This feature enables customer facing organizations to better observe and respond to customers’ subtle signs of concern that may not be reflected as a direct and capturable verbal response.

Symbl Sentiment

Next Steps

Next Steps highlights the specific follow-up actions captured, such as scheduling the next meeting, commitments to send information materials or similar actions detected within an engagement record.

Symbl.ai Insights Next Steps

Objections

Objections is a unique component available only if the developer selects “conversationType”: “sales” as part of the API call.  Objections highlight statements within a customer’s conversation that the AI model deems to be forward motion blockers.

Symbl Insights Objections

Questions and Answers

Q&A highlights general questions and answers within a conversation record.

Symbl Insights Questions and Answers

We have more exciting news and information to share in our Introducing Call Score API blog, please check it out.

To learn more about Insights UI, please read our technical documentation.

Follow our API Reference and try out the APIs on Postman.

Introducing Call Score API https://symbl.ai/developers/blog/introducing-call-score-api/ Tue, 18 Jul 2023 17:43:08 +0000

The Symbl.ai Conversation Intelligence Platform empowers developers and enterprise builders to use AI to optimize a broad range of business conversations using purpose-built APIs and flexible UIs.  Our technology enables businesses to leverage AI Augmented experiences to improve Enterprise productivity. Today we’re excited to announce a new programmable API addition, Call Score, to our REST API portfolio aimed at helping developers innovate quickly with Symbl’s platform.

Previously if a developer wanted to create a call analysis solution using Symbl’s API Library, the developer would have to start with Symbl’s individual APIs. While flexible and powerful, a downside of starting with individual APIs is the amount of coding and development time required.  Not any more. Today Symbl.ai makes available a new low code API option, Call Score, designed to expedite speed-to-value for developers. Our new API embodies Symbl.ai’s programmability and customizability API design philosophies — significantly reducing developer efforts while at the same time offering adaptive solutions for a broad range of business scenarios.

Symbl.ai APIs

Call Score 

Call Score API provides a numerical assessment as well as explanations of conversation quality and participant performance at scale. It provides a single numerical score for each conversation, making it easier for users to identify and compare similar conversations.

Key Benefits of Call Score:

  • Automation at Scale: Call Score API makes it easy for developers to automate call assessments at scale, significantly reducing labor, cost, and time required compared to human led reviews.
  • Adapts to Business Context: Powering Call Score API is Symbl’s Nebula large language model (LLM) that is capable of accepting additional input information to adapt how it processes different task instructions and business scenarios.  Furthermore, Call Score is capable of capturing continuously changing information context during a two-way or multi-party human conversation without the need for explicit instructions from app or developer.
  • Context Rich Output: Call Score API provides context rich, unbiased explanations accompanying each score in JSON format.

The biggest differentiating advantage of our Call Score API comes from our generative AI.  Symbl’s LLM immerses into each conversation engagement by analyzing multiple conversational dimensions described above to reach the most accurate human-like conclusion with speed, accuracy, and repeatability.  Similar AI solutions may only perform keywords matching, then present a score based on match rate; Symbl’s LLM actually goes deep into a conversation, understands the conversation context, then presents an evaluation score with clear and precise supporting explanations.  This level of human-like rationale behavior is one of the key Symbl.ai advantages.

Symbl.ai Call Score Enterprise Accelerator

High Level View of Call Score’s Workflow:

Symbl.ai Call Score API Diagram

We designed the Call Score API with an easy-to-understand structure.  Developers and builders can easily integrate Call Score with any application in just two steps:

Step 1 – Process Conversation

Step 2 – Get Call Score

Example: GET Call Score

GET

https://api.symbl.ai/v1/conversations/{conversationId}/callscore

Symbl.ai Call Score Code
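
The two steps can be scripted end to end as in the hedged sketch below. The process and callscore endpoints come from this post; the job-status polling endpoint and the conversationId/jobId fields in the process response are assumptions based on Symbl.ai's Async API, so confirm them against the API reference.

import time
import requests

# Hedged sketch of the two-step Call Score flow.
ACCESS_TOKEN = "YOUR_SYMBL_ACCESS_TOKEN"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}",
           "Content-Type": "application/json"}

# Step 1 - Process Conversation (audio URL, sales type, callScore feature).
job = requests.post(
    "https://api.symbl.ai/v1/process/audio/url",
    headers=headers,
    json={
        "url": "https://my-conversation-url",
        "conversationType": "sales",
        "features": {"featureList": ["callScore"]},
    },
).json()
conversation_id, job_id = job["conversationId"], job["jobId"]  # assumed fields

# Wait for asynchronous processing to finish (polling endpoint is an assumption).
while requests.get(f"https://api.symbl.ai/v1/job/{job_id}",
                   headers=headers).json().get("status") != "completed":
    time.sleep(10)

# Step 2 - Get Call Score.
score = requests.get(
    f"https://api.symbl.ai/v1/conversations/{conversation_id}/callscore",
    headers=headers,
)
print(score.text)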

    Call Score Criteria Explained

    Call Score API currently supports two types of conversations: Sales and General. Each type has its own set of criteria for evaluation.

    Criteria for Sales calls:
    • Communication and Engagement: Gauges the effectiveness of a sales representative’s communication style and their ability to collaboratively engage with the prospect. 
    • Question Handling: Assesses the sales representative’s ability to answer questions and handle objections from the prospect.
    • Sales Process: Measures how well the sales representative adheres to the organization’s sales process and protocols.  This involves checking if the representative is following the BANT methodology in qualifying a prospect.
    • Forward Motion: Evaluates the sales conversation regarding how it advanced a sales opportunity.
    Criteria for General calls:
    • Communication and Engagement: Evaluates the conversation based on parameters such as politeness, empathy, and active listening.
    • Question Handling: Assesses the participants’ ability to address questions effectively.
    Select ‘Sales’ Conversation Type with Call Score
    POST https://api.symbl.ai/v1/process/video/url
    {
        "url": "https://my-conversation-url",
        "conversationType" : "sales",
        "features": {
            "featureList": ["callScore"]
        }
    }

    To further increase precision in actionable insights generated by Call Score, Opportunity Stage is incorporated as another contextual layer into Call Score’s AI evaluation dimensions.  The Opportunity Stage takes into consideration the business context and intent changes during a dynamic conversation engagement journey, then adapts the AI model’s scoring behavior accordingly. The following opportunity stages are supported at launch – Qualification, Discovery, Demo, Proposal, Negotiation, and General – each with an adjusted scoring weight model tailored to perform the most objective and relevant score evaluation.

    Process Call Record with ‘Demo’ Opportunity Stage Bias
     POST https://api.symbl.ai/v1/process/audio/url
    {
        "url": "https://my-conversation-url",
        "conversationType" : "sales",
        "features": {
            "featureList": ["callScore"]
        },
        "metadata": {
            "salesStage": "Demo"
        }
    }

    We have more exciting news and information to share in our Introducing Insights UI API blog, please check it out.

    To learn more about Call Score, please read Symbl.ai Call Score’s technical documentation

    Follow our API reference and try out the APIs on our platform.

    Symbl.ai LLM – Nebula Private Beta Invitation https://symbl.ai/developers/blog/llm-nebula-private-beta/ Fri, 14 Jul 2023 16:00:00 +0000

    Symbl.ai is excited to announce a Private Beta launch of Nebula, our LLM for natural human conversations. Nebula is intended for businesses and developers who are interested in building generative AI powered experiences and workflows that involve human conversations including sales calls, meetings, customer calls, interviews, emails, chat sessions and other scenarios.

    In mid July Symbl.ai will start making the Nebula LLM available to developer communities through our Private Beta program.  During this Private Beta program, developers can experience a hands-on preview of Nebula’s performance under a variety of use case scenarios (visit our technical documentation for a full list of use case scenario descriptions here).

    Use Case Examples:

    Prompt: “What could be the customer’s pain points based on the conversation?”

    Response:

    Prompt: “What sales opportunities can be identified from this conversation?”

    Response:

    Prompt: “What best practices can be derived from this conversation for future customer interactions?”

    Response:

     

    The Symbl.ai Nebula model takes two inputs in its prompt – an instruction, which is a command or question to the model, and a conversation transcript.  Nebula is a large language model (LLM) trained to understand nuances in human conversations and perform instructed tasks in the context of the conversation.  Human conversations typically involve multiple participants and complex interactions between them across long and distant dialogues.

    Key Highlights:

    • Nebula takes into consideration input instructions, questions, conversation transcripts, and context to generate the output that reflects the intended task result provided in the instruction. This enables the model to generate responses in context of the provided conversation.
    • Nebula can create human-like text, based on the inputs provided, which makes Nebula effective in handling a wide variety of use case scenarios.
    • Nebula provides developers the ability to adjust the generation parameters of the model that control characteristics such as diversity, randomness, and repetition to change the model’s behavior to satisfy a specific use case.
    • Developers can easily integrate Nebula through a simple REST API

    import requests
    import json
    
    url = 'https://api-nebula.symbl.ai/v1/model/generate'
    
    headers = {
        'Content-Type': 'application/json',
        'ApiKey': 'YOUR_API_KEY',
    }
    
    data = {
        "prompt": {
            # Your question or instruction
            "instruction": "What are the customer pain points based on this conversation?",
            # Your conversation transcript
            "conversation": {
                "text": "Representative: Hello, How are you?nCustomer: Hi, good. I am trying to get access to my account but ..."
            }
        }
    }
    
    response = requests.post(url, headers=headers, data=json.dumps(data))
    
    print(response.text)
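
    The generation parameters mentioned above can be passed alongside the prompt. The values below mirror the parameters used in the Nebula cURL example from the Recall.ai integration post earlier in this archive; treat them as a starting point to tune for your use case, extending the data dictionary from the example above.

    # Optional generation parameters (same names as in the cURL example above):
    data["return_scores"] = False
    data["max_new_tokens"] = 2048
    data["top_k"] = 2
    data["penalty_alpha"] = 0.6

    response = requests.post(url, headers=headers, data=json.dumps(data))
    print(response.text)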

    Model Playground

    We’ve created a Model Playground that allows developers to test the Nebula LLM, without writing any code, against various conversations and tasks. Model Playground is a great way to start exploring Nebula’s capabilities.

    • Get started with already available conversation transcripts and prompt suggestions.
    • Use your transcript by pasting your transcript or uploading a text file.
    • Fine tune parameters to find the right combination of generation parameters for your use case.

    To request access to Model Playground, please visit our sign up page here

    Using Nebula in your applications

    Nebula can analyze conversation transcripts and generate responses based on the conversation along with the instruction or question in the prompt. Furthermore, developers can provide Nebula with instructions such as request summaries, follow-up questions, draft emails, issues to review, qualify sales leads, identify and recommend resolutions to customer issues, or even recommend business opportunities. Nebula can also respond to specific questions about the conversation as part of the instruction. Check out docs to learn more.


    For example, Nebula powers generative AI capabilities in Symbl.ai’s Sales Intelligence solution that provides context-aware sales coaching experiences to business organizations. In this example, Nebula performs various tasks via Model API to analyze a conversation with a prospect, identifying themes, sentiments, next steps, questions and answers, and objections, to help generate follow-up responses and identify potential sales opportunities.

    Call for Developers

    At Symbl.ai, we believe that there’s tremendous value in human conversations especially in businesses, and Nebula can help harness that value across various conversation types. We are excited to see what you build with Nebula. We have been working with developers and businesses who have been pushing boundaries by leveraging advanced generative AI for conversation intelligence and we’re eager to offer the power of Nebula to this inspired community. In the same spirit, we encourage you to apply for our Startup program if you are working at a startup or sign up to become a model tester and accelerate your access to the Nebula model. We’d love to see our current and future developers push the boundaries of conversation understanding with Nebula use cases in following areas:

    – Sales

    – Customer Support

    – Meeting Productivity

    – Recruitment

    – Training and Education

    – Workflow Automation

    – Data Analytics and Sciences

    – Healthcare Technology

    – Finance and Insurance Technology

    To register for the Nebula LLM Private Beta Preview, please visit our sign up page.

    For more information on use case scenarios, see our technical API documentation page.

    We’re very excited to work with more AI developers and businesses to bring your vision to life.
