Surbhi Rathore, Author at Symbl.ai
https://symbl.ai/developers/blog/author/surbhi/

Symbl.ai Is Joining Invoca to Power the Future of Revenue Conversations
https://symbl.ai/developers/blog/symbl-ai-is-joining-invoca/ (Wed, 28 May 2025)

Today, I’m excited to share an important milestone in Symbl.ai’s journey: we are joining Invoca, the AI leader in revenue execution platforms.

Since day one, our mission has been to understand and elevate human conversations — moving from automation toward intelligence that acts with context, empathy, and purpose. That mission now continues on a much larger stage.

A Shared Vision for Human-Centered AI

Symbl was founded on a simple belief: conversations are the richest source of human insight. Our platform doesn’t just transcribe or summarize — it understands nuance, intent, emotion, and action across channels and in real time.

Invoca has long shared this perspective. Their platform powers critical revenue conversations for some of the world’s most recognized brands, right at the point of decision. Together, we’re combining deep conversation intelligence with operational execution — enabling revenue teams to deliver smarter, faster, more adaptive customer experiences.

We’re not just enhancing the stack. We’re redefining how people and AI work together to drive outcomes.

Expanding the Platform with Deeper Intelligence

Symbl’s contextual and Agentic AI can now help power Invoca across voice, SMS, and chat. This includes:

  • Agentic AI that understands and responds in real time across modalities
  • Multi-channel conversation intelligence that continuously adapts
  • Real-time decision support for revenue teams

This is what the next generation of AI in revenue looks like — proactive, embedded, and built for action.

To Our Community: Thank You

As Symbl becomes part of Invoca, our focus remains on building intelligence that enhances real-time interactions. Now, we have the opportunity to take that work deeper into the systems where conversations directly impact outcomes — helping teams not just understand what’s happening, but act on it with clarity and speed.

To the developers who built with us — whether you prototyped with our APIs, integrated us into your stack, or shared feedback that shaped our roadmap — thank you. Your input helped push the boundaries of what Symbl could do, and that impact is carrying forward.

We’re bringing your ideas, use cases, and technical insights into the next phase of our journey — where context-aware, agentic intelligence becomes a core part of revenue workflows at scale.

If you’ve been part of the Symbl community, we’re grateful. If you’re new to our work, welcome. We’re still building — and now, with even greater reach and momentum.

Learn more about the acquisition here.

AI for Performance Marketing
https://symbl.ai/developers/blog/ai-for-performance-marketing/ (Thu, 07 Sep 2023)

Integrating the power of Artificial Intelligence (AI) can optimize performance marketing strategies and transform inbound calling. It can create unprecedented opportunities to enhance efficiency by driving workflow automation and creating actionable intelligence for enterprises.

AI for Performance Marketing

Performance marketing is a marketing approach where businesses or enterprises pay a fee to an affiliate or partner for specific actions or outcomes such as clicks, inbound calls, or leads. It depends on channels such as pay-per-call, display advertising, social media, and others. AI empowers businesses to efficiently target and convert customers by analyzing large volumes of data: it can identify top-performing campaigns through analytics, boosting marketing ROI. Moreover, businesses can offer 24/7 customer support by building AI-based chatbots and digital assistants. But there is still a lot of untapped potential.

Transforming performance marketing with real-time intelligence and the power of generative AI

Personalize customer interactions: Performance marketing campaigns focus on quantitative data, for instance highlighting keywords that deliver the best traction or using call analytics and tracking capabilities. With conversation intelligence, companies can level up and tap into qualitative intelligence (customer pain points, preferences, and intent when customers reach out) in real time using APIs, making every customer interaction personalized. Post-call, using generative AI, companies can dissect conversations and build customer profiles based on past conversations to drive targeted efforts in optimizing the sales pipeline for better lead qualification and conversion.

Evaluate the best distributors to work with: Using generative AI, companies can uncover affiliate- or distribution-partner-specific intelligence, for example identifying which partners provide the best results: calls with business-relevant leads, leads with the highest conversion, or the fewest fraudulent or scam callers.

Assist agents in real time: In fast-paced industries such as healthcare or e-commerce, customers have urgent requirements and need immediate assistance, which is an opportunity to convert on the spot. Using conversation intelligence, businesses can take advantage of that by accelerating knowledge-article sharing to convert leads quicker and automating after-call work, such as CRM data entry or creating summaries, to make agents more productive so they can handle more calls.

Ensure compliance: Campaigns need to ensure compliance with regulations such as HIPAA, GDPR, and PCI when handling sensitive customer data, including personal details or credit card numbers. With conversation intelligence APIs, companies can automate the identification and redaction of sensitive data from conversations in real time. Using generative AI, companies can understand the regulatory requirement and identify which part of the conversation was non-compliant in order to take corrective action.

Scale coaching for agents: Periodic call monitoring for quality assurance is done on a sample basis and is time-consuming. With conversational AI, companies can enable targeted coaching for their agents by analyzing conversations at scale: evaluate agent performance over historical calls to identify strengths and weaknesses and design a tailored coaching plan for better performance.

Competitor analysis: Leveraging generative AI, companies can break down conversations into parts and see how customers perceive competitors, including their strengths and weaknesses. They can also evaluate how consumer perception of the brand has evolved over time by analyzing past conversations.

Symbl redefines the way you approach performance marketing

Symbl’s Conversation Intelligence Platform provides state-of-the-art AI models that are purpose-built for conversations and convert unstructured conversations in any format (audio, video, and text) into actionable intelligence. It generates intelligence in real time and asynchronously, can be customized to a domain, and is offered in a Virtual Private Cloud (VPC) environment to ensure data privacy. You can enable the following using Symbl:

Entity extraction:

Identify key entities within conversations for further action. For example, a real estate agency using performance marketing to connect with potential property buyers can use Symbl’s entity detection to detect and auto-populate customer name, occupation, address, and appointment date and time. It can also capture feedback on the agency’s services and mentions of competitors to drive competitive advantage.
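For illustration, here is a minimal sketch of pulling detected entities for an already-processed conversation, assuming Symbl’s Conversation API entities endpoint; the access token and conversation ID are placeholders you would obtain from Symbl’s authentication and Async APIs first.

```python
import requests

# Placeholders: obtain a real access token and conversation ID from
# Symbl's authentication and Async APIs first.
ACCESS_TOKEN = "<your-access-token>"
CONVERSATION_ID = "<your-conversation-id>"

resp = requests.get(
    f"https://api.symbl.ai/v1/conversations/{CONVERSATION_ID}/entities",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()

# Each detected entity carries a category and the matched text,
# e.g. a customer name or an appointment date.
for entity in resp.json().get("entities", []):
    print(entity.get("type"), "->", entity.get("text"))
```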

Evaluate sentiment, intent and engagement:

Track sentiment shifts and engagement changes during conversations. For example, a telecom company using a pay-per-call strategy can detect a caller’s negative sentiment when discussing data limits, then take corrective action by providing a detailed plan explanation and suggesting suitable alternatives to reignite customer interest and raise conversion chances.

Advanced call analytics beyond numbers:

Get call scores for a specific call or a group of calls. See the reasoning behind the numbers used for evaluation, identify specific areas where call quality dropped, and get recommendations on corrective steps using Nebula. For instance, a travel agency attracting vacationers can analyze historical calls to detect drops during discussions about travel insurance and identify corrective steps, such as simplifying explanations of the benefits, to improve conversion rates.

Real-time agent assistance:

Monitor the evolution of customer intent and unanswered questions during dialogues and tailor responses to adapt the conversation. Retrieve relevant articles from the knowledge base based on customer pain points or the topics and themes discussed to accelerate effective issue resolution in real time. Detect instances of sensitive data such as PII, PHI, and PCI in a conversation as it happens and replace them using a custom method of your choice.

Value created for business

Improved lead conversion:

Uncovering customer sentiment, intent, and pain points enables you to adapt conversations and boost conversion rates.

Pipeline management:

Analyze conversations to prioritize leads and determine their potential, ensuring that sales teams focus on leads with the highest probability of conversion, optimizing resource allocation.

Informed decision-making:

Comprehensive call scores and analytics empower data-driven choices: understand which strategies work better and refine campaign messaging or partner selection accordingly, maximizing impact.

Competitive intelligence:

By analyzing competitor mentions, businesses gain consumer insights for refining marketing strategies and differentiating their offerings.

Elevating Human Performance with AI: The Best is Yet to Come
https://symbl.ai/developers/blog/elevating-human-performance/ (Wed, 17 May 2023)

A look at the Symbl.ai journey through its first five years, including key milestones and plans for the future.

Symbl.ai is Five Years Old Today

Today marks five years since we started Symbl.ai to change how businesses adopt AI and transform experiences centered around human conversations. We are grateful to our developer community, partners, and customers who recognized the unique value of conversation understanding and played a pivotal role in the progress of our industry. The rapid evolution in the AI ecosystem continually fuels our passion at Symbl, and keeps us constantly pushing forward. As we celebrate our fifth anniversary, it’s fitting to pause, look back on our journey, and share our exciting vision of the (near) future. 

In 2020, the launch of our self-service developer platform revolutionized real-time AI experiences by providing streaming and async intelligence for key insights in conversations. Then, we introduced the groundbreaking Summarization API, enabling the generation of meeting summaries and short thematic summaries. This breakthrough powered automated note-taking and capturing key moments in real-time. Our release of Trackers empowered customers to semantically and contextually search for different intents, phrases, and themes, leading to applications such as live NPS tracking, intelligent call routing, and real-time assistance. With the feedback from the builder community, we evolved our developer platform with SDKs, a new playground experience, new docs, and a managed library of over 50 intents within the trackers, solidifying our position as an industry leader.

Now, we are on the verge of unlocking the next generation of conversation understanding in a whole new way to elevate human performance, and the best is yet to come.

Going Beyond Transcription

Symbl was one of the first companies to democratize business-specific understanding of conversations via our API platform. We’ve had the privilege of working at the forefront of how AI is used to accelerate outcomes and performance for business conversations. During this time, developers, customers, and partners were wowed by transcription as a new and exciting way to deliver improved results from human conversations. The world has changed a lot since then. Exciting new applications of generative AI unlock deep conversation understanding and supercharge human performance. At Symbl, we’ve been at the center of this innovation since the beginning, and we’re more excited than ever to be at the center of this revolution!

Unraveling the Complexity of Conversations

Understanding conversations is comparable to peeling an onion. It has multiple layers that we may not fully realize, and our understanding depends on the outcomes we aim to achieve. The main problem arises when there is a miscommunication between what is said and how it is interpreted. This issue becomes more significant as it happens on a larger scale during multi-party and virtual conversations, leading to stagnant business growth and increased customer turnover. However, by aligning people’s understanding of the intended meaning, eliminating biases in interpretation, and preventing information loss, we can enhance human performance and generate value at every step.

At Symbl, we are dedicated to innovating and discovering the various layers of intelligence within conversations that directly impact business outcomes. This includes organizing unstructured conversation data, uncovering new insights, extracting essential information and actions, identifying intents and their relationships to other aspects of the conversation, understanding emotions and sentiments, and ensuring the identification of personal information for security and compliance purposes. By tapping into this knowledge in real-time, Symbl empowers businesses to improve their outcomes more rapidly than ever before.

The Best Communication Experiences are Built

Programmability has been core to Symbl since our inception, and we know that today’s builders are not limited to developers. AI evolution will give rise to Builder 2.0, a creative thinker, problem solver, and visionary, irrespective of software-building skills or knowledge. Personalized workflows and applications will be built with machine learning and natural language. Every business will deliver unique value to their customers in every human interaction.

Business-Ready Approach to AI that Makes Humans Smarter

We understand that the journey to perfecting AI that can understand human conversations better than humans is a marathon, not a sprint. This means delivering targeted, business-ready solutions that bring immediate value to businesses while paving the path for advanced achievements.

Building products that can be rapidly adopted at large scale has enabled us to deliver meaningful value to our customers without extensive upfront training cycles. This involves a focus on domain specificity, low latency, memory optimization, and building with foundations for privacy, security, and bias mitigation at the core. Building the AI infrastructure from the ground up also enables us to automatically tune the system with business-specific feedback and deploy it in our customers’ own cloud infrastructure.

Symbl: What to Expect in 2023

Staying true to our core mission to provide the most advanced AI platform for identifying and generating deeper conversation understanding to elevate human performance, we’re excited to roll out a series of new updates throughout 2023 that focus on:

  • Rapidly incorporating domain and business-specific processes and knowledge to deliver highly contextual insights 
  • Increasing the depth of intelligence, expanding value with real-time and asynchronous AI
  • Building blocks and experiences to reduce time to value for specific roles and business outcomes 
  • Access to core AI models for developers to unlock greater creativity in application building

We are extremely excited to share what’s in store.

Community and Collaboration

Humans speak and interact in business processes differently compared to how they write or comment on posts. Business conversations are highly contextual to business goals, with multiple participants, defined by objectives (or not), and involving several modalities. Models built for general-purpose text understanding often struggle to deliver accurate and predictable understanding of human dialog without continued fine-tuning.

To truly comprehend the nuances and subtleties of human conversation, AI systems should be built and evaluated on multiple other foundational and emergent abilities, such as reasoning, adaptation, context, memory, and domain specificity. We believe in the power of collaboration and community involvement. While these are challenging problems, achieving true understanding of multi-party natural human conversations will be a collective effort, and we’re excited to continue pushing the boundaries to unlock iterative value for businesses that can scale.

Onwards and upwards

We are tremendously grateful to our customers, partners, employees, and the wider community for being a part of this incredible journey. At Symbl, we thrive on the responsibility to continually deliver innovation that amplifies human interactions.

We look forward to continuing to work together towards a future of limitless possibilities.

What Is Named Entity Recognition (NER), and What Can You Do with It?
https://symbl.ai/developers/blog/what-is-named-entity-recognition/ (Tue, 16 Aug 2022)

From chatbots to coaching to document processing, there are many use cases for named entity recognition (NER). And with the Natural Language Processing (NLP) market estimated to be valued at $43 billion by 2025, there will be many more.

But first, let’s look at exactly how NLP fuels the named entity recognition process and how NER works.

Named Entity Recognition, a Subset of NLP

NER is a subset of NLP, and NLP is built on AI. NLP is the technology that helps machines understand the way humans speak.

It works by applying calculations to specific features of words and phrases, such as word types and capitalization. Based on these, the AI can identify sentiments and discern what the text means in context.

In the case of NER, NLP and machine learning (ML) serve two different purposes. NLP studies the structure and the language’s rules and interprets the context. ML helps the machines (or algorithms) learn based on the data being fed and improves their ability to understand over time. Combining these two, you have a program to identify and categorize texts.

Types of NER Systems

Currently, four different named entity recognition systems or approaches are used to identify these entities.

Dictionary-Based Systems

In this case, the algorithm is trained based on a dictionary of select words or phrases. Here, the basic string-matching algorithms find relevant entities based on what’s present in the dictionary. 

This system has several limitations. The dictionary has to be updated repeatedly to cover everything you need. Additionally, these systems can’t detect misspelled words and phrases, which might lead to inaccuracies in the output.

Rule-based Systems

The algorithm relies on specific rules in rule-based systems to identify and extract necessary information. It’s based on two types of rules:

  • Pattern-based: Morphology of the word being used.
  • Context-based: Context of the word being used.

For example, a rule might say that any word appearing after a title like Mr./Ms./Mrs./Dr. is a person’s last name.
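A minimal sketch of such a context-based rule in Python (the rule and the sample text are hypothetical):

```python
import re

# Hypothetical rule: a capitalized word following a title
# (Mr./Ms./Mrs./Dr.) is treated as a person's last name.
TITLE_RULE = re.compile(r"\b(?:Mr|Ms|Mrs|Dr)\.\s+([A-Z][a-z]+)")

text = "Dr. Watson met Mrs. Hudson at the clinic."
print(TITLE_RULE.findall(text))  # ['Watson', 'Hudson']
```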

Machine Learning-based Systems

Here, statistical models are used to identify entities. First, the model is trained using annotated documents, after which it can identify specific entities in other documents. 

The time it takes to train the model depends on how complex the terms are, but other than that, it’s a much more practical approach. This approach is preferred as it can identify entities when a word or phrase is misspelled, giving a better output. 

Deep Learning-based Systems

Deep learning (DL) is a fairly new approach to NER, but it’s a lot more powerful than the other approaches. DL-based systems use several models to achieve the desired output, including Bidirectional Encoder Representations from Transformers (BERT), Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Networks (CNN), Generative Pre-trained Transformers (GPT-2 and GPT-3), Pathways Language Model (PaLM), and XLNet. These models read the annotated text to understand its context and train accordingly.

A DL-based system can understand text in more depth and provide accurate output. It also saves time on feature engineering, which is a huge bonus.

How Does NER Work?

Broadly, there are two steps in the named entity recognition extraction process: detecting and categorizing entities.

Detection of Entities

You need to train the algorithm to identify the entity based on your chosen approach. Let’s say you want the algorithm to identify three parameters—name, organization, and location. It’ll have to identify these entities first.

The algorithm can identify where these entities begin and end using the inside-outside-beginning (IOB) tagging approach. It can determine the boundaries and pull the information accordingly.
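For instance, under IOB tagging each token is labeled by where it falls relative to an entity boundary; a hypothetical tagging of one sentence might look like this:

```python
# B- marks the beginning of an entity, I- a continuation,
# and O a token outside any entity.
tagged_tokens = [
    ("Bill",       "B-PER"),  # beginning of a person entity
    ("Gates",      "I-PER"),  # inside the same person entity
    ("founded",    "O"),
    ("Microsoft",  "B-ORG"),
    ("in",         "O"),
    ("Washington", "B-LOC"),
]
```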

To ensure accuracy, the model needs to be trained with the right data.

Categorization of Entities

Once the models are trained, you need to test them on different documents to check their accuracy. The model reads the text and assigns a category to specific words if they meet the criteria. These criteria are predefined, and depending on the approach, the model can get better over time; the scheme can be as complex or as simple as you’d like it to be.

For a more granular understanding, we need to look at what kinds of blocks a typical NER model would have and how that works. 

There are three different blocks that a NER model has, and they are:

  • Noun Phrase Identification: This block identifies all the noun phrases, using dependency parsing and part-of-speech tagging.
  • Phrase Classification: Once the noun phrases are extracted, they are classified into different categories depending on what you need. Examples include name, location, dates, money, time, and organization.
  • Entity Disambiguation: If the algorithm misclassifies entities, you can add a validation layer to ensure accuracy. For this purpose, you can use public knowledge bases like IBM Watson, Wikipedia, and so on.

For example:

Sentence: “Bill Gates is the co-founder of Microsoft, a technology corporation based in Washington, United States.”

Tagging:

(“person”: “Bill Gates”), (“org”: “Microsoft”), (“location”: “Washington”)

Output:

Person = Bill Gates
Organization = Microsoft
Location = Washington
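To make this concrete, here is a runnable sketch using the open-source spaCy library; it is shown purely as an illustration of the detect-and-categorize flow, not as Symbl’s implementation.

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp(
    "Bill Gates is the co-founder of Microsoft, a technology "
    "corporation based in Washington, United States."
)

# Print each detected entity with its predicted category.
for ent in doc.ents:
    print(ent.text, "->", ent.label_)
# Typical output: Bill Gates -> PERSON, Microsoft -> ORG,
# Washington -> GPE, United States -> GPE
```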

Use Cases of NER in Business

NER is extremely useful in any context that requires extracting information from text, audio, or video documents. There’s a potential application of NER in all these cases, be it historical documents, medical transcription, or sales and marketing. Here are a few:

Real-time Agent Coaching

You can use NER-based applications to train call center or customer service employees. Using recordings of real conversations, you can identify which conversations resulted in the most sales.

Using these recordings, you can identify keywords or phrases that repeatedly appear in these calls and classify them under different categories (positive intent, product issue, churn risk, consideration, and so on). Then, you can use custom trackers to identify these entities and generate suggestions.

Coupling it with sentiment analysis (another ML-based approach), you can identify the caller’s intent and direct the conversation accordingly. For example, if the application identifies a keyword that indicates positive buyer intent or potential for churn, it can make recommendations on how to tackle that sales call.

Customer Support

Many customers leave feedback through chat conversations, review sites, or emails. Using NER, companies can identify feedback relevant to a specific department, support personnel, or product and route it to them.

This helps keep all the support requests and feedback in check and automates the process while increasing the quality of service.

Human Resources

The applicant tracking system (ATS) is a widespread human resource (HR) tool. But did you know that these tools identify and filter out resumes using named entity recognition? NER can be used to determine the mention of specific skill sets, degrees, or experience (designation) and pull relevant resumes. 

You can also train managers on specific interview processes and questions to ask by showing them previous call recordings. 

Compliance and Moderation

With the anonymity of the Internet, moderation is getting harder by the day in online communities and forums. Amid the onslaught of spam and bullying, moderators can use NER to filter out problematic posts and remove them as needed.

All moderators have to do is use a NER-based application that is trained on specific trigger words and images to look out for, and the application can do the rest for them.

Search and Indexing

Usually, webinars and other events aren’t as accessible as they should be because of the lack of transcription facilities. 

Using speech recognition together with named entity recognition, conferences and similar events can be transcribed and translated into different languages in real time so that attendees from anywhere can access them.

In this case, NER is used to identify topic clusters based on specific keywords to create a topic hierarchy — using a parent-child hierarchy. This helps in clipping relevant clips for different topics and distributing them. The clips of the questions asked during the webinar can be stored in a database for future reference.

You can also monitor entities like emoji reactions to understand which topics strike a chord with the audience — and plan future webinars accordingly. The transcripts can also be made available online by indexing them so that you can measure the success of the content over time.

Live Captions and Meeting Notes

Companies can integrate NER-based applications with their Zoom or any other video calling application to transcribe conversations in real-time. 

Even better, these call notes could be summarized and recorded in a shared library to be accessible throughout the organization. This fosters collaboration and makes finding reference points for specific tasks and projects easier.

Final Word

Named entity recognition can be a handy tool for any business that wants to automate speech and text recognition. It can identify the text specified, understand its context, and provide an output based on how you want it classified. Additionally, NER can save thousands of hours that are typically wasted by manually sifting through text and speech records. It also helps you make data-driven decisions in varying contexts, ensuring your processes are as efficient as possible. 

By harnessing the power of artificial intelligence and machine learning, NER is paving the way to easier, more accessible communication for enterprises and individuals.

What Is Conversation Intelligence? Definition and Top Benefits
https://symbl.ai/developers/blog/what-is-conversation-intelligence/ (Tue, 09 Aug 2022)

Conversation intelligence isn’t just indexing or keyword matching, nor is it merely paraphrasing engines, transcript analysis, or rudimentary chatbots. These could be considered part of conversation intelligence, but the whole is much greater than the sum of its parts.

With Symbl.ai, you can essentially add a layer of AI/ML on top of multi-modal, unstructured human communication, enabling you to perform and manage key actions without needing to build and scale your own ML model. These actions include generating automated transcripts, recognizing elements within the conversation, performing analytics, and combining all of that into an actionable pipeline. This type of holistic conversation intelligence system can offer your company invaluable insights that you couldn’t have hoped to gain even six or seven years ago.

Below is a non-exhaustive list of Symbl.ai’s conversation intelligence capabilities:

  • Leverage speech-to-text to generate transcripts with speaker diarization, even in real time.
  • Recognize topics in conversations as well as topic hierarchies.
  • Perform sentiment analysis.
  • Recognize and suggest action items and even follow-ups.
  • Identify questions.
  • Recognize the use for and supply trackers.
  • Identify conversation groups.
  • Perform conversation analytics.


So what would an actionable pipeline look like if you’re using Symbl.ai? Imagine you have a meeting that a key decision-maker can’t attend. You can let Symbl.ai:

  1. Run real-time transcription.
  2. Generate an interface where you can view the transcript (as well as who’s speaking), the topics discussed, any questions raised, and any action points and follow-ups required — among other things.
  3. Send all this annotated, neatly categorized information in a fully-packaged bundle.

Better yet, with asynchronous and batch processing, you can always pull up a transcript, an audio recording, or even a video call and do the same thing almost instantly. Additionally, with the right application programming interface (API), you can plug in these AI/ML capabilities and automate workflows. Following the above example, you can forward action items or follow-ups directly into Trello or Jira, for instance, as tasks to be completed.
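As a rough sketch of that workflow, the snippet below pulls action items from a processed conversation via Symbl’s Conversation API and forwards each one to Jira’s standard issue-creation endpoint; all IDs, tokens, and the project key are placeholders.

```python
import requests

SYMBL_TOKEN = "<symbl-access-token>"   # placeholder
CONVERSATION_ID = "<conversation-id>"  # placeholder from a processed call

# 1. Pull the action items detected in the conversation.
items = requests.get(
    f"https://api.symbl.ai/v1/conversations/{CONVERSATION_ID}/action-items",
    headers={"Authorization": f"Bearer {SYMBL_TOKEN}"},
).json().get("actionItems", [])

# 2. Forward each one to a hypothetical Jira project as a task.
for item in items:
    requests.post(
        "https://<your-domain>.atlassian.net/rest/api/2/issue",
        auth=("<jira-user>", "<jira-api-token>"),
        json={"fields": {
            "project": {"key": "TASK"},
            "summary": item["text"],
            "issuetype": {"name": "Task"},
        }},
    )
```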

How Conversation Intelligence Works

At a fundamental level, conversation intelligence at Symbl.ai is straightforward. After plugging in via an API or software development toolkit (SDK), you send recorded conversation data or feed real-time information into the system, and you extract the conversation intelligence you require.

This simplicity, however, belies complex programmable layers stacked on one another. In the foundational layer, there’s natural language processing (NLP) and speech recognition, along with accompanying approaches to formatting and ensuring accuracy. Next are the extraction and abstraction of insights from the recorded conversations. Finally, there’s more advanced domain intelligence that helps developers build directly on the extracted data.

And the speed, scale, and effectiveness of all of this are underpinned by AI and machine learning algorithms. The staple neural networks that power modern-day NLP underwent significant growth in 2010, then again in 2014 and 2015. Finally, a definitive breakthrough in 2016 occurred in the field of machine translation. Since then, technologies like NLP have supercharged conversation intelligence as a practical application of machine learning.

Benefits of Conversation Intelligence

While that all sounds very impressive on paper, there are real, ongoing use cases that illustrate how conversation intelligence can benefit your team and give you an insight into the technology that underpins it.

Optimize Sales Conversations

Conversation intelligence can surface insights and recommend actions to your sales reps to not only close deals faster but also increase average deal size. Sales reps can even make use of real-time coaching during calls with customers. Additionally, some of the more recent innovations in sales involve advanced chatbots that automate structured conversations.

Improve the Customer Experience

Some of the first examples of speech-to-text productization were in contact centers. To this day, real-time agent assistance and coaching use conversational intelligence to improve agent handling and issue resolution times in contact centers — all of which directly impact customer experience and customer loyalty.

Save Time Qualifying Leads

The impressive indexing, sentiment analysis, and conversation analytics capabilities of conversation intelligence can significantly reduce the time it takes to qualify leads. Sentiment analysis can show your sales teams which leads are primed. Identified topics, action items, and follow-ups can direct your pitches by automatically recognizing key opportunities for pushing sales — even at scale.

Enhance Remote Collaboration

Conversation intelligence fosters enhanced collaboration both in real time and asynchronously, in scope and at scale. It can generate transcripts (even in real time), process batches of meeting minutes and other documentation, and analyze multi-modal, unstructured human conversations. It can then present structured insights or forward the data to automate workflows that support teamwork.

Increase ROI Through Various Function and Process Improvements

Conversation intelligence clearly provides a number of function and process improvements. Put together, these directly affect your bottom line by increasing breadth of scale, depth of effort, or typically both, depending on how it’s applied.

Common Challenges of Conversation Intelligence

At Symbl.ai, we use highly customized conversation intelligence solutions to mitigate some of the common challenges faced by conversation intelligence:

  • Technical limitations — Hard boundaries for conversation intelligence include the difficulty of parsing complex human language and of maintaining accuracy in sentiment analysis. Interestingly, both of these issues stem from similar factors: linguistic elements like intonation, jargon, and accents, among other things. Fortunately, ML researchers around the world are continuously improving these technical capabilities.
  • Developing a downstream, task-specific application — Programming a task-specific downstream application from a generic ML model is too high a hurdle for many companies. Symbl.ai addresses this concern by letting you use an extensive collection of prepared APIs, or you can DIY with SDKs. You can literally add Symbl’s ML layer on top of your data so you won’t need to program your own downstream application.
  • Unstructured, unscripted conversations — In the same way templated chatbots and interactive voice response systems can only address scripted concerns, generic conversational intelligence can only process structured data. One of the critical technological leaps made by AI/ML was the ability to parse unstructured and even multi-modal data sources. This is what Symbl.ai does.

It’s true that conversation intelligence is an extremely complex (and constantly evolving) technology that is well outside the scope of most organizations’ ability to develop in-house. That’s why Symbl.ai addresses these challenges and removes the major barriers to entry.

Examples of Conversation Intelligence 

We’ve successfully deployed conversation intelligence at Symbl.ai to provide our clients with solutions tailored to their individual needs. There are many ways you can translate this success to your own organization. 

For example, to bolster revenue and support sales enablement, conversation intelligence can:

  • Offer key conversation drivers during real-time calls.
  • Analyze call behavior and perform agent coaching in real-time or asynchronously.
  • Analyze trends and reveal insights by crunching thousands of calls.

Conversation intelligence can likewise increase productivity in remote collaboration platforms and help build supercharged recruiting processes with better HR intelligence.

And these are just three use cases. Practically every part of every vertical can leverage conversation intelligence to solve unique challenges.

How Symbl.ai Can Help 

Symbl.ai is a comprehensive, domain-agnostic conversation intelligence platform that makes it easy for developers to support use cases like:

  • Real-time agent coaching
  • Compliance and moderation
  • Search and indexing
  • Accessibility and live captions
  • Meeting notes and summarization

The developer-friendly platform doesn’t require extensive data training or manual labeling, which reduces the time needed to implement solutions at scale.
If you want to learn how your team can make use of Symbl.ai’s powerful conversation intelligence capabilities, you can request a demo today.

Symbl.ai Achieves SOC 2 Type II Certification
https://symbl.ai/developers/blog/soc-2-type-2-certification/ (Thu, 28 Jul 2022)

Delivering a highly available and secure conversation intelligence service is core to the Symbl.ai brand promise. As such, all our systems, controls and processes have always been designed with security and availability as critical design points. We have now gone one step further and are pleased to announce we have achieved SOC 2 Type II certification.

What Is SOC 2 Type II Certification?

SOC 2 is an information security compliance standard designed to test and demonstrate the cybersecurity of an organization. To get a SOC 2, companies must create a compliant cybersecurity program and complete an audit with an accredited CPA. The auditor reviews and tests the cybersecurity controls against the SOC 2 standard and writes a report documenting their findings.

Here are a few examples of what you will find in the report:

  • Symbl.ai security policies
  • Symbl.ai encryption protocols for data at rest and in motion
  • Symbl.ai logical and physical access controls
  • Symbl.ai change management process
  • Symbl.ai data backup and disaster recovery strategies
  • Symbl.ai system monitoring, alerts and alarms

Protecting the data and privacy of our customers is a non-negotiable aspect of what we do.

Our SOC 2 Type II certification provides additional assurance that we have all the right controls in place to protect your data and ensure the availability of our service.

To request a copy of Symbl’s SOC 2 Type II report, please email trust@symbl.ai 

Learn about SOC 2

SOC (System and Organization Controls) is a voluntary compliance standard for service organizations. It was developed and is maintained by the American Institute of CPAs (AICPA), and it specifies how organizations should manage customer data. The standard is based on the Trust Services Criteria: security, availability, processing integrity, confidentiality, and privacy.

There are two types of SOC 2 reports. Type I describes an organization’s systems and whether the system design complies with the relevant trust principles. Type II details the operational efficiency of these systems.

Click here for an overview of Symbl’s other security, compliances and encryption practices.

Build Zoom Integration with Symbl on AWS Serverless Infrastructure
https://symbl.ai/developers/blog/build-zoom-integration-aws-serverless-infrastructure/ (Tue, 26 Jul 2022)

The “Zoom Symbl” integration describes how to ingest Zoom Cloud Recordings and send the data into Symbl for processing. The aim is a low-effort, end-to-end integration of Symbl’s intelligence for Zoom recordings, producing meaningful insights such as Topics, Questions, Action Items, and Follow-ups.

The current system architecture is based on serverless components. Therefore, it is easy to build and scale things in a cloud deployment. 

The components described in this document are serverless and are automated using the AWS Serverless Application Model (SAM), an open-source framework for building serverless applications. The SAM template in this document provides a mechanism to build and deploy the solution with ease, as it follows the Infrastructure as Code approach.

High-Level Architecture

The following diagram provides you with the High-Level Architecture used during the application runtime described in this document.

  1. When the meeting is concluded and the recording process is completed by Zoom, a notification event is sent to the AWS-hosted API Gateway front-ending the application. The host is the user responsible for setting up the Zoom meeting and enabling cloud recording.
  2. The AWS-hosted API Gateway initiates a lambda request, which makes an asynchronous call to the “Producer” lambda.
  3. The AWS API Gateway lambda responds to the HTTP request back to Zoom.
  4. The “Producer” lambda does several things. It uses FFmpeg to merge the audio recordings per participant (see the sketch after this list) and uploads the multichannel file to AWS S3 for further processing. Specifically, it:
    1. Persists the participant audio into the Zoom Input AWS S3 bucket.
    2. Saves the Zoom recording metadata into AWS DynamoDB.
    3. Builds the multichannel single audio file using the FFmpeg lambda layer.
    4. Saves the multichannel audio file into the Zoom Output AWS S3 bucket, which enables Symbl to process speaker-separated audio via a single multichannel file.
    5. Initiates an AWS SQS message for further processing.
  5. The “Consumer” lambda executes the following steps:
    1. Reads messages from the AWS SQS queue and performs the concurrency checks.
    2. Reads the audio recording files from the AWS S3 bucket.
    3. Reads the Zoom recording metadata from AWS DynamoDB.
    4. Submits the audio recordings to Symbl using the Async Audio API call with the metadata read from AWS DynamoDB.
  6. When Symbl processes the Async Audio job request, it notifies the AWS API Gateway webhook URL on every status change. The following values are sent to the webhook URL as the job request changes: scheduled, in_progress, completed, or failed. For more info, please refer to the Symbl Webhook documentation.
  7. The AWS API Gateway executes the following steps:
    1. Invokes the “Notifier” lambda for further processing.
    2. Responds to the HTTP request back to Symbl.
  8. The “Notifier” lambda executes the following steps:
    1. Updates the job status in AWS DynamoDB and the concurrent request count for concurrency handling.
    2. If the job status is reported as “completed”, makes a request to the Experience Video Summary UI API and a request to the Conversation Summary API.
    3. Sends an email to the meeting host with the results of the Conversation Summary API and the Experience Video Summary UI URL. You can see an example of the Experience Video Summary UI here.
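To make step 4 more concrete, here is a minimal sketch of the FFmpeg merge performed by the Producer lambda; the function name and file paths are hypothetical, and the real lambda also handles the S3, DynamoDB, and SQS steps listed above.

```python
import subprocess

def merge_participant_audio(input_files, output_file="/tmp/merged.mp3"):
    """Merge per-participant recordings into one multichannel file.

    Assumes the ffmpeg binary is available on PATH via the FFmpeg
    lambda layer.
    """
    cmd = ["ffmpeg", "-y"]
    for path in input_files:
        cmd += ["-i", path]
    # amerge combines the inputs into a single stream with one channel
    # per input, letting Symbl process speaker-separated audio from a
    # single multichannel file.
    cmd += [
        "-filter_complex", f"amerge=inputs={len(input_files)}",
        "-ac", str(len(input_files)),
        output_file,
    ]
    subprocess.run(cmd, check=True)
    return output_file
```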

Prerequisites

Step 1: Create a Custom Zoom App

Now you will create a custom Zoom app. The instructions provided below are required to be completed in order.

This step is mandatory to set up or configure the custom app webhook endpoint for handling the Zoom recording completed event. The ‘Recording Completed‘ event consists of information on the speaker-separated audio per participant. Make sure to follow the below-mentioned steps for creating your custom Zoom App. Note – This integration uses the JWT authorization mechanism.

Steps:

  1. Register Your App
  2. App Information
  3. Generate App Credentials
  4. Set App Features
  5. Activation

Step 2: Download and Build Source Code

In this step, you’ll see how to download and build the source code for accomplishing the Zoom Symbl Integration on AWS.

  1. Clone or Download the Zoom Symbl Integration Source Code
  2. Use Visual Studio Code or any other editor of your choice to open the source code
  3. On VS Code → Terminal → Select New Terminal
  4. Type the command sam build and press the Enter key to build the serverless apps

Below is the screenshot showing the output of the “sam build” command.

(test_venv) ranjandailata@Ranjans-MacBook-Pro Sam % sam build
Building codeuri: /Users/ranjandailata/Downloads/Sam/ZoomSymblWebhook runtime: python3.9 metadata: {} architecture: x86_64 functions: ['ZoomSymblWebhook/ZoomSymblWebhook']

Build Succeeded

Built Artifacts  : .aws-sam/build
Built Template   : .aws-sam/build/template.yaml

Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
[*] Deploy: sam deploy --guided
You should be able to see the below structure with the “build” folder consisting of a list of applications and the template file that helps with the application deployment.

Step 3: Deploying the Application

In this step, you’ll see how to deploy the serverless application using the “sam deploy” command. The application deployment is handled in a guided manner by making use of the command “sam deploy --guided”.

Note – The “Serverless Application” stack deployment incorporates several components (e.g., Lambda Layers, S3, SQS, DynamoDB, and Lambda Functions) required for accomplishing the Zoom Symbl integration. These aspects of the deployment are taken care of by the SAM-guided deployment.

(test_venv) ranjandailata@Ranjans-MacBook-Pro SAM % sam deploy --guided

Configuring SAM deploy
======================

        Looking for config file [samconfig.toml] :  Found
        Reading default arguments  :  Success

        Setting default arguments for 'sam deploy'
        =========================================
        Stack Name [ZoomSymblStack]:
        AWS Region [us-east-1]:
        #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
        Confirm changes before deploy [Y/n]: Y

Continue with the stack deployment and confirm the change set with “y” when prompted.

Step 4: Configuring the Custom Zoom App Webhook Endpoint

The custom Zoom Application has to be configured with the recording completed webhook endpoint, so the application can receive the necessary “recording completed” event information that can be utilized for processing the recordings.

  1. Make sure to log in to the Zoom Developer Account.
  2. Navigate to the Created Apps section.
  3. Select the existing app that you wish to configure. Here’s an example.
  4. Navigate to the “Feature” section.
  5. Under Event Subscription, click on Add Event Subscription.
  6. Select the Event Types → Recording.
  7. Check the “All Recordings have completed” option.
  8. Click on Done.
  9. Specify the Event Notification URL endpoint with the Zoom Recording Complete Lambda API Gateway endpoint.
  10. Click on the Save button.

Step 5: Configuring the Lambda for Asynchronous Execution

Background – The SAM deploy command will help in building the Zoom Symbl Integration Stack. Every time you perform a clean and deploy, you’ll notice the lambdas getting created with a unique name.

Based on the specified architecture, two of the lambdas are invoked asynchronously, so you’ll have to configure each calling lambda with the function name it invokes. Please make sure to update the lambda function names for the “Zoom Recording Completed” lambda and the “Zoom Symbl Webhook Notifier” lambda.

Step 6: Configuring the Lambda Environment Variables

This step is dedicated to configuring the lambda environment variables with the AWS Region, App Secret, Zoom JWT Token, etc. You’ll learn how to configure them.

Here’s a high-level summary of each lambda function and its environment variables, with descriptions that will help you understand and update the required aspects of this integration. Up next, you’ll see a screenshot explaining how to update the environment variables.

| Lambda Function | Environment Variable | Description |
| --- | --- | --- |
| CopyZoomRecordingToS3 (the “Producer” lambda, architecture step 4): copies the Zoom recordings to S3, persists the metadata in DynamoDB, and sends a message to SQS for processing | ZOOM_JWT_TOKEN | Specify your custom Zoom app JWT token |
| | SNS_SYMBL_ZOOM_QUEUE | Set with the appropriate Region and Account ID: https://sqs.${AWS::Region}.amazonaws.com/${AWS::AccountId}/symbl-zoom. Alternatively, open the SQS queue named symbl-zoom and copy the ARN |
| SubmitZoomRecordingToSymbl (the “Consumer” lambda, architecture step 5): submits or posts the Zoom recordings to Symbl | SYMBL_APP_ID | ReplaceWithYourAppId |
| | SYMBL_APP_SECRET | ReplaceWithYourAppSecret |
| | DYNAMO_DB_REGION | Set with the ${AWS::Region}, e.g. us-east-1 |
| | SYMBL_WEBHOOK_URL | ReplaceWithZoomSymblWebhookNotifierApiGatewayUrl |
| ZoomSymblWebhook (the “Notifier” lambda, architecture step 8): post-processes Symbl webhook notifications | SYMBL_APP_ID | ReplaceWithYourSymblAppId |
| | SYMBL_APP_SECRET | ReplaceWithYourAppSecret |
| | DYNAMO_DB_REGION | Set with the ${AWS::Region}, e.g. us-east-1 |
| | GMAILID | ReplaceWithYourGmailEmailId |
| | GPASS | ReplaceWithYourGmailPassword |
Lambda – CopyZoomRecordingToS3 

Lambda – SubmitZoomRecordingToSymbl

Lambda – ZoomSymblWebhook
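As a rough sketch of how the Notifier lambda might use the GMAILID and GPASS variables above to email the host once Symbl reports the job as completed (the function signature and argument values are hypothetical):

```python
import os
import smtplib
from email.message import EmailMessage

def notify_host(job_status, host_email, summary_text, summary_url):
    # Only notify once Symbl reports the Async Audio job as completed.
    if job_status != "completed":
        return

    msg = EmailMessage()
    msg["Subject"] = "Your Zoom meeting summary is ready"
    msg["From"] = os.environ["GMAILID"]
    msg["To"] = host_email
    msg.set_content(f"{summary_text}\n\nFull experience: {summary_url}")

    # GMAILID/GPASS come from the lambda environment variables above.
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
        smtp.login(os.environ["GMAILID"], os.environ["GPASS"])
        smtp.send_message(msg)
```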

Step 7: Running the Application

In this step, you’ll see how to run the application. Please follow the below-mentioned steps for initiating the Zoom meeting, adding participants, recording, and ending the meeting.

  1. Create a Zoom Meeting
  2. If you wish, you may invite participants
  3. Make sure to record the meeting
  4. End the meeting

After the Zoom meeting, you’ll have to wait a couple of minutes for the recordings to be processed; the time varies with factors such as the number of participants and the meeting duration. You should then receive an email notification containing the summary and the meeting URL, where you can visually review the transcription and insights.

Limitations

  1. Please keep in mind the Zoom Cloud Recording limits:
    1. Zoom Cloud Recording Limitation
    2. Zoom Cloud Recordings Per Participant
  2. Zoom license limits. Depending on the license that you have for Zoom, there are restrictions on the number of active participants in a given meeting. Here’s an example. Follow the Zoom Pricing page to get some insights on pricing tiers.
  3. The cloud recording processing time varies by the number of audio files and the meeting length. That said, there is a maximum execution time for a lambda (15 minutes); you cannot handle anything beyond that. In addition to the execution time, there are other constraints like maximum RAM and storage, which cannot exceed 10 GB. The CopyZoomRecordingToS3 lambda produces the multichannel stereo audio file, so it downloads the cloud recording and then merges the files using FFmpeg. Keep these limits in mind while you test the integration.

Troubleshooting

How to roll back the deployment?

In case of errors, if you are unable to deploy and decide to roll back, please run the following command.

aws cloudformation rollback-stack --stack-name ZoomSymblStack

How can I delete a stack?

Please run the following command.

aws cloudformation delete-stack --stack-name ZoomSymblStack

Wish to change the default stack name?

Open the file named “samconfig.toml” and look for stack_name = "ZoomSymblStack". You may specify the relevant stack name that you wish to use.

Unable to deploy the stack due to the capabilities issue?

Open the file named “samconfig.toml” and look for the “capabilities”. Please make sure that the capabilities are specified with "CAPABILITY_IAM CAPABILITY_AUTO_EXPAND"

How to deal with the Gmail authentication error?

Below is one common error that you might face; you might encounter similar issues while programmatically sending emails via Gmail.

Error: (534, b’5.7.14 <https://accounts.google.com/signin/continue?sarp=1&scc=1&plt=AKgnsbu\n5.7.14 YpFVq7pyRFp0RTWibzW52rsRb6u6s44cd5x0VtpGJuYCZynSZRrrDkv5kK1R8D3Smp1sQ\n5.7.14 VkGBcvGmbXvv4v1Guv6jGLZCjKYGyilqL-zEp71dBFjZVU4zowNLpMtgKcDE_V4G>\n5.7.14 Please log in via your web browser and then try again.\n5.7.14  Learn more at\n5.7.14  https://support.google.com/mail/answer/78754 d1-20020a37b401000000b0069fc13ce21esm2500410qkf.79 – gsmtp’)!

How to deal with the Lambda Layer Version mismatch issues?

This integration uses two Lambda Layers, ffmpeg and requests, which were set up as part of the stack. The layer version is automatically incremented by AWS: if you delete a layer and set it up again, the version number will not reset. The “SubmitZoomRecordingToSymbl” and “CopyZoomRecordingToS3” lambdas depend on these layers, so please make a note of the Lambda Layer version and make sure to use it as part of the stack deployment.

How can I monitor the Symbl Job status?

Log in to AWS and then Search for DynamoDB. You’ll see the below-mentioned DynamoDB tables that are being used for processing the recordings.

zoom_recordings_jobs is the table that you need to look for. It keeps track of the job_id, conversation_id, meeting_uuid, and status.

How to get insights after lambda processing? Are there logs?

Log in to AWS and search for CloudWatch. Once on CloudWatch, navigate to the log groups section and search for the keyword zoom. Select the log group of interest to learn more about the ongoing activities of the lambda function.

Moving Forward with Zoom Symbl Integration on AWS Serverless Infrastructure

The Zoom Symbl Integration on the serverless architecture demonstrates how to ingest Zoom Cloud Recordings, asynchronously process them by merging all the audio recordings, and finally submit the merged audio to Symbl using the Async Audio mechanism to extract intelligence such as Topics, Action Items, Follow-ups, and Summaries. For a full-scale, production-ready system, you should set up dedicated infrastructure such as EC2 and handle the workflow outlined in the “High-Level Architecture” section.

For storing the recordings and handling the Symbl concurrency aspects, the recommended best practice is a relational database such as MySQL or Microsoft SQL Server (MSSQL), because relational databases support ACID properties by default and make it easy to extract data and build analytics on top of it.

READ MORE: Your Most Common AWS Lambda Challenges: Integrating with Zoom APIs

Symbl.ai’s Postman Workspace for a Sales Intelligence https://symbl.ai/developers/blog/symbl-postman-workspace-sales-intelligence/ Thu, 07 Jul 2022 04:50:24 +0000

Symbl.ai is one of the most extensible API platforms on the market for adding machine learning to your sales workflows. Symbl.ai’s Conversation Intelligence API platform analyzes voice, video, or text conversations, enabling developers to create new experiences across all the communication touchpoints of the sales process. By processing meeting or call recordings with the Async APIs and fetching a pre-built, customizable video summary experience that condenses the call into a few lines with context, developers can take the existing recordings in their products or solutions and convert them into intelligence for their sales teams.

  • Symbl.ai’s Postman Public Workspace for a Sales Automation Pipeline groups these different API calls together for developers to clone or fork into their own development environments, automating sales processes from cold calls to follow-ups.
  • Symbl.ai’s support for APIs is part of a broad initiative to help developers throughout their journeys. Our APIs are designed to reduce what is often described as the “Time to First API Call”: how long it takes a user to successfully make an API call and receive a response.
  • Symbl.ai’s public workspace on Postman is meant to help you make that first API call right out of the gate. It is designed to reduce the “Time to First API Call” for all of the developers eager to build the next generation of automated sales pipelines.

Here is the link to the workspace.


How to Get Started in the Postman Workspace with Symbl.ai

Register for a Symbl.ai account. Grab both your appId and your appSecret to authenticate with Postman so that you receive your x-api-key. Below is an example of how to authenticate with the Symbl.ai public workspace.
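
Since the original example was a screenshot, here is an equivalent minimal sketch of the authentication call itself. The token endpoint and payload follow Symbl.ai's documented v1 authentication flow at the time of writing; confirm them against the current docs before relying on this.

import requests

APP_ID = "your-appId"          # from the Symbl.ai platform dashboard
APP_SECRET = "your-appSecret"  # from the Symbl.ai platform dashboard

def get_access_token():
    # Exchanges appId/appSecret for a short-lived access token.
    resp = requests.post(
        "https://api.symbl.ai/oauth2/token:generate",
        json={"type": "application", "appId": APP_ID, "appSecret": APP_SECRET},
    )
    resp.raise_for_status()
    return resp.json()["accessToken"]

token = get_access_token()  # used as the bearer token (x-api-key) in later calls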

The Postman collection for the Sales Intelligence use case is shared here.

Async API 

Symbl.ai’s Async API enables developers who record sales calls or meetings to process them asynchronously after a call is completed. Already able to operate on all of the leading real-time communications platforms, from Zoom to Telnyx and Dolby.io to Agora.io, Symbl.ai’s Async API is one of the few APIs on the market for rapidly processing recorded calls anywhere, anytime.
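
As a sketch of that flow, the call below submits a hosted recording URL to the Async Audio API. The endpoint path follows Symbl.ai's documented v1 Async API; the recording URL is a placeholder.

import requests

def submit_recording(token, recording_url):
    # Submits a recording for asynchronous processing and returns the IDs
    # needed to poll the job and query the results later.
    resp = requests.post(
        "https://api.symbl.ai/v1/process/audio/url",
        headers={"Authorization": f"Bearer {token}"},
        json={"url": recording_url},  # e.g. an S3 or Zoom download URL
    )
    resp.raise_for_status()
    body = resp.json()
    return body["jobId"], body["conversationId"]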

Experience API 

After a recorded call is captured, Symbl.ai’s APIs return a conversation ID with which to make additional calls to Symbl.ai’s APIs. Among these is Symbl.ai’s Experience API (i.e., https://api.symbl.ai/v1/conversations/{conversationId}/experience), which provides a readily accessible visualization for any automated sales enablement pipeline. The Experience API provides not just the transcribed voice, video, or text exchanges between the speakers but also a text summary.

Symbl.ai’s Experience API groups together the results of multiple API calls from the Conversation API. Through individual calls to the Conversation API, you receive follow-ups, questions, action items, and topics: key elements that map the different points of interest in a discussion, such as what to buy or sell, at what price, and in what quantity. With the Experience API you receive the results of these API calls together in a single pre-built user interface.

In that respect, the results of Symbl.ai’s Conversation APIs are all displayed as one of the Summary UI’s experiences, enabling the participants in a sales call to revisit the previous call’s points of interest. Since the pre-built Summary UI is fully customizable, you can add your brand, company logo, or stylization.
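
A minimal sketch of requesting that pre-built experience follows. The endpoint path is quoted from the text above (current docs may spell it /experiences), and the experience name "verbose-text-summary" is an assumption drawn from Symbl.ai's docs, so verify both before use.

import requests

def get_summary_ui_url(token, conversation_id):
    # Requests a shareable, pre-built Summary UI for a processed conversation.
    resp = requests.post(
        f"https://api.symbl.ai/v1/conversations/{conversation_id}/experience",
        headers={"Authorization": f"Bearer {token}"},
        json={"name": "verbose-text-summary"},  # assumed experience type
    )
    resp.raise_for_status()
    return resp.json()["url"]  # assumed response field: link to the Summary UI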

Summary API

Symbl.ai’s Conversation Intelligence API platform provides developers with more than just pre-built experiences. You only need to process a conversation once to get access to all of the platform’s AI and understanding tasks: developers can use the Symbl Async APIs to process the recording content a single time, then use the Conversation API to synthesize massive amounts of information for different use cases.

Unlike with AWS, Google, or IBM, you do not need to ingest your data again to access the different language understanding tasks.

Take a sales conversation: one or many conversations between a customer and a sales executive may cover a vast amount of information about a product integration, a request for a new feature, an update, a Software Licensing Agreement, and many more important topics. Symbl.ai’s Summary API distills that vast amount of conversational information into raw, extensible intelligence for automation.

After a recorded call is processed with Symbl.ai’s Async API, a developer who seeks to synthesize those vast amounts of conversation data may invoke Symbl.ai’s Summary API for a description of the call at less than 20% of the size of the call transcript.

This is an abstractive summarization that regenerates information based on understanding and context, unlike extractive methods, which are not well suited to conversation data.
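
As a sketch, fetching that summary is a single GET once async processing has finished. The endpoint follows Symbl.ai's documented Summary API; the response shape noted in the comment is an assumption.

import requests

def get_summary(token, conversation_id):
    # Fetches the abstractive summary of an already-processed conversation.
    resp = requests.get(
        f"https://api.symbl.ai/v1/conversations/{conversation_id}/summary",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    # Assumed shape: {"summary": [{"id": ..., "text": ...}, ...]}
    return [item["text"] for item in resp.json()["summary"]]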

Conclusion

The Async API, together with the Experience API and the Summary API, enables Zoom, Telnyx, Dolby.io, or Voice API developers to double down on the way recorded calls are used as data for the call’s objective. Symbl.ai’s Conversation Intelligence API platform gives developers the ability to group its APIs together to transform the process of analyzing calls into post-meeting insights and summaries.

Symbl.ai’s Workspace for a Sales Automation Pipeline is only one of many ways Symbl.ai’s APIs may be grouped together to provide augmented conversation intelligence to voice, video, or message interchanges.

If you are a developer who just tried this workflow with Postman and Symbl, here are some of the other things you can do beyond this:

  • Using the same conversation ID, call the other endpoints in the Conversation API to fetch sentiments, topics, and analytics as JSON (see the sketch after this list).
  • Push the data fetched via the Conversation API to other tools in the sales pipeline, like Salesforce CRM or a BI dashboard.
  • Run aggregated analytics and queries across multiple conversations to identify patterns in what works in a sales call (e.g., the most trending topic before closing a deal, or the number of customer touchpoints regarding pricing before signing).
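
For the first item in the list above, here is a minimal sketch. The topics endpoint and its sentiment parameter follow Symbl.ai's documented Conversation API; the fields printed at the end are assumptions about the response shape.

import requests

def get_topics_with_sentiment(token, conversation_id):
    # Fetches the topics of a conversation with sentiment attached.
    resp = requests.get(
        f"https://api.symbl.ai/v1/conversations/{conversation_id}/topics",
        headers={"Authorization": f"Bearer {token}"},
        params={"sentiment": "true"},
    )
    resp.raise_for_status()
    return resp.json()["topics"]

token = "your-access-token"  # obtained via the authentication sketch above
for topic in get_topics_with_sentiment(token, "your-conversation-id"):
    print(topic["text"], topic.get("sentiment", {}).get("suggested"))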

Building Hybrid Experiences in Conversation AI with Larry Heck https://symbl.ai/developers/blog/building-hybrid-experiences-in-conversation-ai-with-larry-heck/ Fri, 03 Sep 2021 01:13:32 +0000

We spoke to Larry Heck, the former CEO of Viv Labs and SVP/Head of Bixby North America at Samsung, and now the Rhesa S. Farmer, Jr, Advanced Computing Concepts Chair, Georgia Research Alliance Eminent Scholar, and Professor of Electrical and Computer Engineering at the Georgia Institute of Technology. Read his thoughts on everything from virtual assistants and natural language processing, to the evolution of speech recognition and how to build feedback loops into your conversation intelligence products.

At Symbl.ai, our mission is to help developers easily build and deploy conversation intelligence with ready-to-go toolkits. Part of that mission involves sharing our knowledge so developers can truly understand the magic of conversation-centric AI and all its exciting applications. So, we’re going beyond the tech stack and exploring the exceptional minds at the forefront of conversation intelligence to bring you fresh perspectives and a peek inside the industry. 

To start, we spoke with one of the most accomplished contributors in speech processing, contextual AI, and deep neural networks: Larry Heck.

For a bit more background, Larry Heck was recently the CEO of Viv Labs and SVP & Head of Bixby North America at Samsung. He also led a dialogue research effort for Google Assistant, founded Microsoft’s Cortana, held research and executive roles at Yahoo! and Nuance Communications, and has spent two years on the Symbl.ai Technology Advisory Board helping our team scale conversation intelligence and unlock its business potential.

As a pioneer in speech technology and with decades of industry experience to share, we tapped into Larry Heck’s thoughts on all things conversation AI — from the evolution of context and speech recognition, to the art of building conversation intelligence products. Let’s dig in.

What are your thoughts on human to human and human to machine conversations?

The human to human conversation is something I’ve been thinking about for a long time. Back in my early days at Microsoft, the team I assembled was looking at the problem of learning conversations in a different way, which was more like how a child learns to have a conversation by sitting at the dinner table with their parents.

The child primarily learns through observation, and they can choose when to join the conversation. That ability to opt in when you have something relevant to say is important from a technology perspective. 

Right now, personal assistants don’t have the option to observe how people interact or decide when to join in. All that understanding happens offline. So the problem of understanding meetings opens up entirely new opportunities to teach technology how to gradually understand those conversations over time.

It’s just a wonderful problem to me. I think human to human conversation, particularly in meetings, is going to have a big impact, not only on solving industry problems today, but also on conversational learning on the human-machine side.

Why build human to human understanding with AI in the first place?

There’s a number of applications where you’d want to bring technology into the conversation — to augment it, to help it, to assist it. But without that technology being in the critical path or getting in the way.

Let’s say you’re talking to a friend about where you want to eat. The assistant is invited to listen, but it doesn’t get in the way. It’s just there and available. And when I say to my friend, “I wonder if there are any good Chinese restaurants near Los Altos,” the assistant can contribute to your human-to-human conversation by proactively pulling up a map and showing you Chinese restaurants near Los Altos. It could even suggest a place that’s highly rated. 

So, in this case, my friend and I benefit from the machine being a quiet participant in the meeting until it has something to add. I think it’s super compelling: human to human conversations with augmentation.

“Augmentation technology is enriching the communication channel between humans, it’s like a whole new way of thinking about how people talk with each other.”

What do you think about passive vs. active conversation intelligence systems?

I think, over time, systems can be a bit more interruptive because they’ll know when it’s appropriate to interrupt the conversation or provide relevant information, but until the technology is ready, conversation systems should be more passive.

In your opinion, why hasn’t that evolution in speech technology happened yet?

I think part of it comes down to technical challenges and the other part is business focus. In the early days of Cortana, we weren’t really thinking of going out mobile-first. But then Siri was acquired by Apple and launched on the iPhone, so there was a lot of business pressure to respond to that.

I believe a similar scenario happened at Google when Alexa launched. That’s kind of why the technology has gone this way, rather than on the human to human side and augmentation of meetings. But there are definitely some technical challenges that have been in the way. One of those is open microphone.

Open microphone is a close cousin of open conversation, where you have a lot of disfluencies and partial sentences and all kinds of complexity. Not only does it make natural language understanding difficult, but also speech recognition.

What are your thoughts on advancing speech recognition? 

When I was working on it in the ‘80s, there was a debate on how to advance speech recognition technology. One side of that debate said, well, we can barely recognize digits properly so we should just focus on digits. The other side said we should work on the broader problem and eventually that will help digit recognition. 

Happily, the latter won. And I think it was a lesson learned early on in speech recognition, which continues to this day. As you go up the tech stack and go broader with all the different kinds of context, it can all be brought back to improve speech recognition.

Speaking of context, how do you view its progression in AI?

I think things are really starting to accelerate in terms of the technology stack. Primarily because we’re getting better at recognizing what are the important forms of context and how to leverage them without requiring a lot of manual supervision and data labeling.

So I think we’re going to continue to expand. Especially in terms of climbing up the technology stack, away from low level word transcriptions to understanding conversations and predicting where the conversation will (and should) go next through inference and reasoning. Much of this will be enabled by injecting knowledge into the technology stack — maybe business knowledge that two people are talking about in a meeting.

How should a business go about building robust conversation intelligence systems?

In a business setting, you have to be able to effectively communicate that this type of technology can provide value from day one — but it’s always evolving. If you can provide value early on, even when the technology is perhaps not as polished as it needs to be, you can finetune it and provide more and more value over time. That’s a win. 

When I led the R&D team at Nuance, we learned the importance of that feedback loop pretty quickly. The core technology team needed to be connected directly to the customers and work on building that feedback loop first. 

“Anybody that wants to get into this space of deploying conversation technology has to recognize that the problem is never solved. It’s always evolving.”

How should businesses think about getting that feedback?

It’s important to note that we can’t exclusively rely on feedback to drive the innovation of the product. The product has to be good enough out of the box that the customer gets some value from it, and they feel like it’s worth investing time to provide that feedback.

So spend time on actually getting them motivated to use the product, then make sure the feedback loop goes really fast. You have to show them an incremental value from the feedback they provided about your technology. Otherwise, again, they lose motivation.

In my experience, you can do a lot with a small number of customers. Even in the single digits. You can learn what’s really important and then scale your product from there. It creates a network effect where your core technology gets better from all the different participants, and that network effect is magical when it starts to happen.

Lastly, what are you most excited about over the next two to three years? 

When I was in grad school, my goal was to be able to have a natural conversation with my computer every day when I came into the office. After 29 years, I’m going back to Georgia Tech as a Professor and shifting gears into long term research. So, personally, I’m excited about actually realizing that dream from grad school. 

I really think it’s attainable to have a conversation with my computer, and not only have it recognize what I’m saying and understand what I mean, but have the ability to make higher level inferences and reasoning. And also have knowledge about me and about the world. 

I want to see that kind of evolution in this technology.

Learn more about Larry Heck

To learn more about Larry Heck and his impressive work, check out these sources:

Larry Heck Talks Bixby and 30 Years in Voice

Bixby Developer Day 2018 Korea: The Future of Personal Assistants

Making Personal Assistants Smarter with Samsung and Bixby

Larry Heck’s Google Scholar page

The 5 Dimensions of Conversation Intelligence https://symbl.ai/developers/blog/the-5-dimensions-of-conversation-intelligence/ Sun, 02 May 2021 01:05:25 +0000

There are five main ways in which conversation intelligence (CI) is currently being used or envisaged: real-time action, coaching, predictive analysis, knowledge, and search. All of these aspects of the CI landscape will come together to make humans smarter and more efficient, with little effort on the user’s part other than continuing to undertake natural conversations.

Conversation intelligence provides the ability to analyze natural human-to-human conversations in real time. Going beyond simple natural language processing of voice and text conversations, it allows mission-critical communications to be harnessed, analyzed, and optimized.

With conversation intelligence, you can create products that solve problems. As a developer, it’s useful to understand the overall conversation intelligence landscape so you can determine how to strategically apply human conversation understanding in your applications.

This post will show you the five main areas in which conversation intelligence is used today.

1. Real-time actions

Conversation intelligence systems can be used to listen, understand, and propose actions based on what’s being discussed, such as workflow automations, sending information to other tools and systems of record, and follow-ups. This enhances human productivity and saves time by taking over manual, repetitive tasks. A system that automates these actions in real time opens up one of the most exciting and inspiring aspects of this AI.

You can create a platform for human-machine collaboration where the system recommends real-time actions that can trigger Robotic Process Automation (RPA) systems. This removes mundane tasks by automating them so humans can focus on critical decision-making instead. For example, in an invoice processing workflow, your system can listen for tasks relevant to invoices that need processing, extract the required data, process the invoice, and submit it to an Enterprise Resource Planning (ERP) system.

Imagine bringing these ‘AI real-time actions’ into human to human (H2H) conversations: you don’t need to talk to a bot or train a system to understand common commands or a vocabulary, but instead there’s embedded intelligence in products that you use which passively listens to your conversations and trains itself. This intelligence enhances your productivity by recommending useful actions at just the right moment. For example, during natural conversation in a video call you’d receive real-time suggestions to email a question to another colleague, or the system would automatically do this for you.

So you see, real-time actions from conversation intelligence mean we can look at the world through a different lens and alter how we think about the changing dynamics of communications and the workspace. Rather than you having to tell Alexa to add something to your calendar, the conversation intelligence system could listen to your natural speech and do it automatically.

2. Coaching

Conversation intelligence systems can be used to coach people, making them smarter, by maximizing their efficiency, skills, and knowledge.

In a sales context, your business client might want to know why their top five sales agents are so good. A conversation intelligence system can capture what they talk about in their sales calls, transcribe, and then use deep understanding to pull statistics, like tone, keywords, and the timing of certain phrases. Your client can then use this insight to train their sales agents and optimize their skills, time, and approach, as well as ensure compliance.

There’s similar value for conversation intelligence in customer care industries, for example, where call center speech analytics can automatically surface interactions to review and dig deeper into areas like supervisor escalations, dead air, and hold time. It can also employ voice sentiment analysis to learn what drives a positive or negative customer experience and scale the best practices of top agents.

By being able to review conversations and transcripts alongside evaluation forms, a conversation intelligence system can leave coaching tips for agents within a time-stamped transcript. If you use CI to provide contextual feedback and share results for wider team learning, this will create a focus on real skill development.

Another example of how it’s being used is public speaking coaching, where a conversation intelligence system can provide real-time feedback on key communication metrics, like pace, tone, filler words, energy, conciseness, confidence, and pauses.

3. Predictive analysis

Conversation intelligence systems can be built to use historical and current data to provide transparency in business forecasting, and even in specific functions like sales or customer support, predicting next best actions more accurately and with broader context.

One such example is sales: the ability to predict can be leveraged to maximize the potential of deals, known as deal intelligence. You could build a system that takes data from a CRM and looks at how many times a sales agent has interacted with a customer over four weeks, then combines that with the data from conversations to assess the overall outcome. Your system would be able to use this information to predict whether a deal will close and enable your client to proactively unblock any obstacles.

In a call center, predictive analysis can augment agents with recommendations based on historical data and real-time conversations. Plus, it can improve customer experiences by accurately estimating customer flow to assess the correct call center capacity, highlight issues in real time, and trigger support or special offers to customers based on aggregated analytics. It can even help call center agents use their time more efficiently. For example, if a customer matches 90% of what a previous customer has said, the system will recognize it and predict whether it’s worth investing more time in that particular customer. You can also build proactive compliance monitoring, so that the conversation intelligence constantly monitors customer service agent phone calls, predicts where compliance gaps could arise, and provides action reminders.

4. Knowledge

Everyone has valuable information that can be converted into a knowledge base, whether it’s for an internal team or for customers.

Building chatbots on this automatically generated knowledge base, drawn from conversations across the organization, can help answer recurring questions at scale and save the huge amounts of time invested in onboarding new employees and holding redundant meetings to gather information about older projects.

Imagine building a conversation intelligence system that can pull, recommend, use, and share relevant information from natural human to human conversations in real-time, so it’s surfaced at the exact moment that the user needs it.

Conversation intelligence can be particularly useful for finding hidden answers to questions. It can capture insights from emails, like internal project deadlines or sales figures, and then maximize business efficiency by sharing that knowledge with other members of the team. It can also be smart enough to filter out sensitive information, distinguishing between personal details and business knowledge.

5. Search

An enormous amount of knowledge is shared by people talking. You can build a system that capitalizes on that so you can ask a question using natural language and the system can find the answer. For example, it can search conversations that have been recorded and analyzed, then surface the information in real-time.

There are two types of search:

  • Direct search: You can ask questions in free-flowing natural language, like “What are the statistics of my team’s velocity?”, and the system will search through the knowledge base created from the conversation data and produce the answer.
  • Indirect search or suggestion: The system can automatically do this in real-time based on what you’re saying and how you’re talking. For example, you can say “It would be interesting to know what the team’s velocity statistics are so we can…” and the system understands what information you need and provides the statistics.

To truly capitalize on the conversation data in any business, all of the tools from these categories need to come together. You’ll have noticed similarities among the categories above, so it’s easy to see how, if one aspect isn’t deployed, you miss out on the opportunity to fully optimize the intelligence.

Symbl.ai is taking conversation AI to the next level to enhance the conversation intelligence landscape and make it readily accessible to businesses around the globe. Symbl APIs are easy to implement as plug-and-play on any communication channel. You get quicker results because they do not require any upfront training to generate insights and knowledge. You can use Symbl to integrate highly secure conversation AI into your clients’ products and workflows. Learn more about the solutions that Symbl brings.
