How to Process Audio Recordings with Symbl’s Async API
Process Audio Recording with Symbl Async API

In this blog post, you’ll be guided through processing multi-channel audio using the Async Audio URL API. Once the asynchronous job completes, you can call the Conversation APIs to get insights such as Topics, Follow-Ups, Questions, Action Items, and Trackers. The prerequisites for processing a multi-channel file are the audio channel count and the channel metadata, such as each speaker’s name and email.

Let’s first understand what multi-channel audio is and how it’s constructed. An audio file can be either a mono recording or a stereo/multi-channel recording; a recording that consists of more than one channel of audio is called a stereo or multi-channel recording. There are various tools for building multi-channel audio. You’ll now see how to build a multi-channel recording using FFmpeg, an open-source tool; please follow this tutorial for the full walkthrough.
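
As a quick illustration, a minimal FFmpeg command for merging two mono recordings (one per speaker) into a single two-channel file might look like the sketch below; the file names are placeholders and the amerge filter is just one of several ways to do this:

ffmpeg -i speaker1.wav -i speaker2.wav -filter_complex "amerge=inputs=2" -ac 2 combined.wav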

Before sending an Async Audio URL request to Symbl, there are a few parameters you can add to the request body to expand the returned insights and increase transcript accuracy.

Transcript accuracy: How can this be achieved?

  1. If your recording was made with two or more channels, you can enable Speaker Separated Channel audio processing and add each channel’s speaker name and details using the channel metadata parameters (see the request-body sketch after the notes below).

2. Add custom vocabulary: a list of words and phrases that provide hints to the speech recognition engine.

3. If the recorded audio sample rate is 8 kHz, adding phone mode can improve transcript quality.

Note: If your recording’s sample rate is higher than 8 kHz, there is no need to add this parameter.
Note: More details on these parameters can be found in the Async Audio API documentation.
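
Put together, the transcript-accuracy fields of the request body might look like the following sketch. The channelMetadata and mode fields also appear in the full cURL example later in this post; enableSeparateRecognitionPerChannel and customVocabulary are parameter names to confirm against the Async Audio API documentation, and the vocabulary entries are placeholders:

{
    "enableSeparateRecognitionPerChannel": true,
    "channelMetadata": [
        { "channel": 1, "speaker": { "name": "Robert Bartheon", "email": "robertbartheon@example.com" } },
        { "channel": 2, "speaker": { "name": "John Snow", "email": "johnny@example.com" } }
    ],
    "customVocabulary": ["Symbl", "tracker", "conversation intelligence"],
    "mode": "phone"
}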

More insight options: How can these be enabled?

  • Entities – Adding the parameter "detectEntities": true to the request body will detect entities in the conversation such as location, person, date, number, organization, datetime, and daterange.
  • Trackers – Adding trackers will surface the themes and business insights that you want to trace in the conversation. This can be done by adding a trackers parameter to the request body with a list of objects, each containing the tracker’s name and vocabulary. For example:
"trackers": [
         {
             "name": "Hire interns",
             "vocabulary": [
                 "would like to interview",
                 "hire our candidates",
                 "hire high school interns"
             ]
         },
             {
           "name":"c-level",
           "vocabulary":[
               "CEO",
               "Co-founder",
               "CTO",
               "CFO"
           ]
           }
   ]

Note: You can also enable trackers by first creating them with the Tracker Management API and then either enabling all of them (enableAllTrackers) or selecting specific ones by passing a list of tracker IDs. More details can be found in the Trackers API documentation.
For example:
"enableAllTrackers": true
Or, alternatively, pass tracker IDs created with the Tracker Management API (note: replace the "id" values with your own tracker IDs):

"trackers":[
     { "id": "6581143257219072" },
     { "id": "5044262090571776" },
     { "id": "6191012855676928" },
     { "id": "5512700349120512" }
 ],

  • Summarization (Beta) – This allows you to generate conversation summaries using the Summary API.

Async POST Audio URL, cURL example:

curl --location --request POST 'https://api.symbl.ai/v1/process/audio/url' \
--header 'x-api-key: <Add your accessToken>' \
--header 'Content-Type: application/json' \
--data-raw '{
   "url": "",
   "confidenceThreshold": 0.6,
   "timezoneOffset": 0,
   "name": "",
   "mode": "phone",
   "channelMetadata":[
       {
           "channel": 1,
           "speaker": {
           "name": "Robert Bartheon",
           "email": "robertbartheon@example.com"
           }
       },
       {
           "channel": 2,
           "speaker": {
               "name": "John Snow",
               "email": "johnny@example.com"
           }
       }
   ],
        "trackers": [
        {
            "name": "Hire interns",
            "vocabulary": [
                "would like to interview",
                "hire our candidates",
                "hire high school interns"
            ]
        },
            {
          "name":"c-level",
          "vocabulary":[
              "CEO",
              "Co-founder",
              "CTO",
              "CFO"
          ]
          }
  ],
   "detectEntities": true
 }'
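
A successful request returns a response of roughly this shape: a conversationId (used with the Conversation API calls below) and a jobId (used to check the processing status), for example:

{
    "conversationId": "4530092689588224",
    "jobId": "16d119db-7d2a-4182-b1db-33b27b6a7711"
}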

Getting Conversation Insights once Async API job request is completed:

Below are some of the Conversation API endpoints you can use to get conversation insights.

POST Formatted Transcript

The API returns a formatted transcript in Markdown and SRT format.

Messages

The Messages API returns a list of all the messages in a conversation with an option to get the sentiment of each message and a verbose option to get the word level time stamps from each message in the conversation.
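
For example, a request for messages with sentiment and word-level timestamps might look like the following sketch (the sentiment and verbose query parameters correspond to the options described above; confirm the exact names against the Messages API docs):

curl --location --request GET 'https://api.symbl.ai/v1/conversations/<Add your conversationId>/messages?sentiment=true&verbose=true' \
--header 'x-api-key: <Add your accessToken>'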

Follow-Ups

This is a category of action items with a connotation of following up on a request or task, like sending an email, making a phone call, booking an appointment, or setting up a meeting.

Action Items

This API returns a list of all the action items generated from the conversation. An action item is a specific outcome recognized in the conversation that requires one or more people in the conversation to act in the future.

Questions

This API helps you find explicit questions or requests for information that come up during the conversation, whether answered or not.

Trackers

With this API, users can easily define and modify groups of key phrases and words of their choice, and detect specific or "contextually similar" occurrences of them in any conversation, in both Async and real-time calls. The result is a single JSON structure for the whole conversation containing all matches found, from which users can pull the information relevant to them at any point in the conversation.

Sentiments

Provides a measure of sentiment (positive, negative, or neutral) in the transcript at the message and topic level.
Note: Topic-level sentiment is available post-conversation.

More post-conversation insights and UI:

Analytics:

Provides the functionality of finding speaker ratio, talk time, silence, pace, and overlap in a conversation, and is not limited by the number of participants per channel.

Note: Relevant for conversations with speaker separation.

Experience API

Use the Experience API to create a pre-built Summary UI (a great way to review some of the features without diving deep into every single one).

Entities

This API provides functionality to extract entities from the conversation. Each entity belongs to a category specified by the entity’s associated type. The platform generates entities related to the insight types for datetime and person.

Summarization (Beta)

Summarization captures the key discussion points in a conversation, helping you shorten the time needed to grasp its contents. Using the Summary API, you can create summaries that are succinct and context-based.
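
A minimal request sketch (the exact endpoint path should be verified against the Summary API docs):

curl --location --request GET 'https://api.symbl.ai/v1/conversations/<Add your conversationId>/summary' \
--header 'x-api-key: <Add your accessToken>'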

Comprehensive Action Item (Labs)

Similar to Action Items, but with added details such as speaker identification and context.

Conclusion

In this blog post, you have seen how to process audio recordings using Symbl’s Async Audio URL API, and how to get insights (Topics, Questions, Action Items, Follow-Ups, Trackers, etc.) using the Conversation APIs. In addition to the regular insights, you have seen how Entities and Trackers can be used to get additional insights that help your business needs.

Resources

Async Audio API
Trackers
Summary
Conversation API

How to Extend Scope of Chatbot Conversations with Symbl.ai
As you redefine the conversational AI and customer experience strategy for your business, your biggest area of focus may be extending the scope of chatbots. You may want to make your virtual assistant experience more robust without depending on subject matter experts, but problems can arise, from adding more intents and conversational flows to making these changes with minimal delay. Yet another challenge lies in carrying the context of conversations from virtual assistant interactions to live chat and live calls, and then following all of that up in the different systems of record where the data continues to live and be analyzed across several workflows.

We have seen common use cases for chatbot or virtual assistant experiences across customer support, aiming to save costs for the organization and increase end-user satisfaction. However, the content of those conversations is heavily reliant on subject matter experts who understand end-user pain points across several communication touch points. These subject matter experts scrub the knowledge bases to define how the pain points can be addressed instantly, providing the most accurate, actionable answers based on the conversation context.

In this post, we will cover the life cycle for improving the scope of your chatbot interactions and conversation design as it relates to the value Symbl can add. Our goal is to show different ways in which Symbl as a platform can be used to automate data labeling activities across calls, understand new patterns in real time, and transfer context from one channel to another.

Revisiting the Conversational AI Pipeline

Before we see what Symbl can do for your conversational AI data pipeline, let’s take a look at the opportunities in the life cycle of conversation design and potential areas for enhancements.

  • Creating a data lake and process to capture and archive the unstructured conversation data across live calls, emails, and chat conversations. Some areas to consider are the cost of storage, the scalability of the data store, and how easy it is to correlate it with other forms of data in the business that will eventually drive insights beyond improving chatbot interactions.
  • Converting unstructured data into context-aware, standardized formats like Topics, Questions, Intents, Themes, or Sentiments that can be consumed by your conversation designers to not only extend the scope of the interaction, but also identify new patterns. This format and data structure can keep evolving with your capability to better understand conversation data, which is what Symbl enables you to do. An important area to consider here is time delays and how close to real time this can become, since that determines how soon you can act on newly learned intents. This intelligence will also enable you to identify new patterns, intents, and failover scenarios by building a real-time aggregated monitoring solution.
  • Using real-time learnings (new topics, patterns, and intents), conversation designers can help you scale your existing virtual assistant implementations or add new ones.

Understanding Unstructured Conversation Data and Context

For the rest of this post, we will talk about that second point above and how Symbl, as a platform, enables users to understand data and context from multi-channel unstructured conversations. Building with conversation intelligence technology, you can use machine learning to identify unknown patterns or scale the identification of what is known.

  • If you don't know what you are looking for (for example, when analyzing failed-intent data or scrubbing through call data to label it without human labelers), you can use Sentiments, Topics, Questions, and Action Items to help you classify the conversation and jump to the specific instances. These conversation understanding tasks are built without bias toward a specific domain and identify patterns based on the modality of language used, as well as the context, content, and tone of the conversation. You can calibrate these insights with specific vocabulary or domains for which you are building the conversational AI experience.
  • If you already know what you are looking for but do not know the various ways in which customers are asking questions or looking for specific information, using Trackers is the right approach. Trackers use a zero-shot learning approach to find similarity in language and context with the few examples you already know.

Using Trackers

Trackers are user-defined entities that allow you to track the occurrences of any characteristics or events in a conversation with just a few examples. You can track critical moments in a conversation across several use cases in both real-time (as the conversation is in-progress) and asynchronously (after the conversation is over from recordings). Some use cases for Trackers include when a customer is unhappy, when someone is rude, or a potential sales opportunity. Trackers help you identify emerging trends and gauge the nature of interactions. Please take a look at this Trackers tutorial for more details regarding how to create trackers using the Async APIs.

How does it work?

Run your transcript through the Async Text API, or use audio or video call recordings and process them using the Async Audio API or Async Video API. You can also implement the ingestion process in real time.

For ease of understanding, we will take the example of processing transcripts through Symbl to find out the several ways in which your customers are asking questions for a specific intent.

1. Create a developer account with Symbl.ai and authenticate:

   Generate a valid Symbl accessToken by sending a POST request to oauth2/token:generate with your Symbl App ID and App Secret. If you don't have an App ID and App Secret, you can sign up here.

For example:

curl --location --request POST 'https://api.symbl.ai/oauth2/token:generate' \
--header 'Content-Type: application/json' \
--data-raw '{
   "type": "application",
   "appId": "",
   "appSecret": ""}'

2. Process the transcript. Below is a sample portion of a transcription; you can also relate this to a chat conversation or text message.

a. Send an Async POST text API request, and in the request body add the message content, the trackers to search for, and a webhookUrl for Symbl to signal the chatbot that the job is completed. For example:

curl --location --request POST 'https://api.symbl.ai/v1/process/text' \
--header 'x-api-key: <Add your accessToken>' \
--header 'Content-Type: application/json' \
--data-raw '{
   "messages": [
     {
       "payload": {
         "content": "My order was supposed to arrive three weeks ago, but it didn'''t. Can I have with this please",
         "contentType": "text/plain"
       },
       "from": {
         "name": "Becky",
         "userId": "becky@email.com"
       }
     }
   ],
       "trackers": [
         {
             "name": "Order delay",
             "vocabulary": [
                 "Order was supposed to arrive",
                 "Order did not arrive",
                 "Order is delayed",
                 "Order is not here"
             ]
         },
             {
           "name":"refund",
           "vocabulary":[
               "I want refund",
               "refund please",
               "cancel this order",
               "give me money back"
           ]
           }
   ]
 }'

b. A successful Async POST text request returns a "conversationId" and a "jobId", which should be stored in a database for later use. Async POST response example:

{
   "conversationId": "4530092689588224",
   "jobId": "16d119db-7d2a-4182-b1db-33b27b6a7711"
}
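
If you are not using a webhookUrl, you can poll the job status with the jobId until it reports completed (a sketch; verify the Job API path against Symbl's documentation):

curl --location --request GET 'https://api.symbl.ai/v1/job/<Add your jobId>' \
--header 'x-api-key: <Add your accessToken>'

The response contains a status field such as in_progress or completed.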

c. Once the job status is reported as completed by the webhookUrl (or by checking the job status), send a GET trackers request with the conversationId to check whether one of the provided trackers was found in the message. For example:

curl --location --request GET 'https://api.symbl.ai/v1/conversations/<Add your conversationId>/trackers-detected' \
--header 'x-api-key: <Add your accessToken>'

d. If the tracker was found, store the tracker ID received and run the same tracker across all the other transcript, chat, and text data in your data store to identify all the other ways in which your customers are asking the same question or expressing the same intent, "Order delay." This will help you improve your current NLU implementation to extend the scope of questions, or further inform your conversation design.

Example:

[
   {
       "id": "6116730783924224",
       "name": "Order delay",
       "matches": [
           {
               "messageRefs": [
                   {
                       "id": "6702093319536640",
                       "text": "My order was supposed to arrive three weeks ago, but it didn't.",
                       "offset": 3
                   }
               ],
               "type": "vocabulary",
               "value": "Order was supposed to arrive",
               "insightRefs": []
           }
       ]
   }
]
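
To run the same tracker over the rest of the transcript, chat, and text data in your data store (step d), you can pass the stored tracker ID instead of repeating its name and vocabulary, using the same "trackers" shape shown earlier in this feed; this sketch assumes the detected tracker remains available by ID, as step d suggests:

"trackers": [
    { "id": "6116730783924224" }
]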

Now let's take an example of using the same trackers in your virtual assistant conversation flow to help you identify failover intents, where the question was not identified by your NLU engine.

e. Send an Async PUT text API request with the conversationId received in step 2.b, the new message content from the end user, the same trackers list to search for, and a webhookUrl for Symbl to signal the chatbot that the job is completed:

curl --location --request PUT 'https://api.symbl.ai/v1/process/text/<Add your conversationId created in step 2.b>' \
--header 'x-api-key: <Add your accessToken>' \
--header 'Content-Type: application/json' \
--data-raw '{
   "messages": [
     {
       "payload": {
         "content": "can I please get a refund please?",
         "contentType": "text/plain"
       },
       "from": {
         "name": "Becky",
         "userId": "becky@email.com"
       }
     }
   ],
       "trackers": [
         {
             "name": "Order delay",
             "vocabulary": [
                 "Order was supposed to arrive",
                 "Order did not arrive",
                 "Order is delayed",
                 "Order is not here"
             ]
         },
             {
           "name":"refund",
           "vocabulary":[
               "I want refund",
               "refund please",
               "cancel this order",
               "give me money back"
           ]
           }
   ]
 }'

f. A successful Async PUT text request returns the same conversationId and a new jobId. The jobId can be stored in a database for checking the job status in case a webhookUrl is not used. For example:

{
   "conversationId": "4530092689588224",
   "jobId": "c3cb4253-8961-4519-bf44-c840ce6f0108"
}

g. Once the job status is reported as completed by the webhookUrl (or by checking the job status), send a GET trackers request with the conversationId to check whether one of the provided trackers was found in the message. For example:

curl --location --request GET 'https://api.symbl.ai/v1/conversations/<Add your conversationId>/trackers-detected' \
--header 'x-api-key: <Add your accessToken>'

h. Check whether a new tracker ID was created in the response (in the example below, the "refund" tracker is newly detected, while the "Order delay" tracker was found earlier) and send the relevant pre-made reply to the end user based on the tracker name found:

[
   {
       "id": "5379319225384960",
       "name": "refund",
       "matches": [
           {
               "messageRefs": [
                   {
                       "id": "5865521099571200",
                       "text": "Can I please get a refund please?",
                       "offset": 19
                   }
               ],
               "type": "vocabulary",
               "value": "refund please",
               "insightRefs": [
                   {
                       "text": "Can I please get a refund please?",
                       "offset": 19,
                       "type": "question",
                       "id": "5764795425882112"
                   },
                   {
                       "text": "Can I please get a refund please?",
                       "offset": -1,
                       "type": "question",
                       "id": "5764795425882112"
                   }
               ]
           }
       ]
   },
 
   {
       "id": "6116730783924224",
       "name": "Order delay",
       "matches": [
           {
               "messageRefs": [
                   {
                       "id": "6702093319536640",
                       "text": "My order was supposed to arrive three weeks ago, but it didn't.",
                       "offset": 3
                   }
               ],
               "type": "vocabulary",
               "value": "Order was supposed to arrive",
               "insightRefs": []
           }
       ]
   }
]

Conclusion

In this blog post, you have seen how conversation intelligence and Symbl can be used to augment your conversational AI pipeline with more insights, broader coverage, and extended scope. We took an example of how Trackers can be used to speed up data labeling, serve as a failover intent detection engine, or even inform new conversation patterns for the same intent. With Symbl, you can inject the Async APIs into your existing data pipeline or build a real-time approach as part of your monitoring solution.


WebSockets Versus REST: Which Should You Choose?
It can be difficult to determine what protocol or development structure to use to manage your data needs — especially when you’re unfamiliar with your options. Two widely-used and well-known options are REST and WebSockets.

It’s common for people to ask what the differences and similarities are between REST and WebSockets. So we’ll answer just that.

In this article, we’ll compare WebSockets and REST, helping you make an informed decision on which to use for your next web application.

What Is REST?

REST is a stateless architectural style for building application programming interfaces (APIs). In other words, it’s a way to structure communication between different applications on the Internet using constraints and principles to ensure that these applications understand each other.

Because REST is unidirectional, meaning the REST API only returns data when it is asked to, you must request the proper resource to receive what you're looking for.

To illustrate how REST works, imagine you want to purchase a t-shirt from an e-commerce web store via their website. Because the web store follows modern web development practices, it loads the t-shirts by querying an API. As with most APIs, there’s a client, such as a browser, and a server. The client sends a request, and the server gives a response.

In this case, your browser sends a request that asks about all available t-shirts. Since the site uses a REST API, this request will query a resource through a resource indicator.

A resource indicator is just an HTTP link, such as this one:

http://api.exampletshirtstore.com/shirts

When you query the link, the server will return the description of the resource. Usually, it’s in a format called JSON, but REST doesn’t actually prescribe specific formats. It will look something like this:

{
    "shirts": [
        {
            "name": "Basic",
            "color": "white"
        },
        {
            "name": "Advanced",
            "color": "blue"
        },
        {
            "name": "Luxury",
            "color": "black"
        }
    ]
}

Of course, you won’t see anything like this on the web page (or in the HTML). The browser will render it for you in a more presentable manner.

REST is appealing to developers for several reasons. First, it's simple to use. It's standardized while still allowing for a high degree of flexibility. REST is also a great way to decouple the client and the server: the client doesn't need to know how the database is built; it just needs to know how to query a resource and render it.

What Is WebSocket?

WebSocket is a way to open a two-way communication channel with a server. It enables you to both send messages to the server and receive event-driven messages back.

WebSocket is a wholly different protocol from HTTP, which is what REST APIs usually use. So, while you're accessing REST resources with links that look like regular web links, a WebSocket would use ws:// or wss://.

Usually, a client would send a regular HTTP request together with a request for upgrading to WebSocket. If the server agrees, the client and server are “connected” and can send messages without needing to establish state — meaning the authorization information, metadata, and so on — with each request-response pair.
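
At the wire level, the upgrade exchange looks roughly like this simplified sketch (the key and accept values are the illustrative ones from the WebSocket specification):

GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=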

Think back to our t-shirt store example. With a REST API, you query the database about available products when you load the page. But you have no way to receive alerts about new products without explicitly asking for them again. Since REST is stateless, the server can answer your requests, but it doesn't know you. It can't push new products to you when they are added to the store.

One could imagine solutions for this, such as a refresh button or automatically repeated requests from the client, but none of those give a feeling of real-time interaction. This is the gap that WebSockets fill. Using a WebSocket, you could receive notifications, for example, about new t-shirts being added to the seller’s website.

WebSockets vs. REST

So, what are the differences between REST and WebSocket?

To begin, it’s important to understand that they’re of slightly different categories. REST is a style of building APIs, while a WebSocket is a communications protocol. REST’s use of HTTP links the two, though.

Let’s explore some of the similarities and differences between REST and WebSockets below, as well as some use cases where one of these platforms is better suited for the task at hand.

Architectures

A common web application that uses a REST API consists of three components: the client (browser), the server, and the database. The server receives the API request and translates it for the database, fetches or changes data in the database, and sends a response.

With WebSockets, everything is the same, except you have a special server that’s responsible for communicating via sockets. It’s common to have the HTTP server preprocess incoming connections, which are then moved to that WebSocket server.

Ability to Handle Data and Requests

In some cases, WebSocket can bring certain performance gains over REST. You’ll see this when looking at the differences in client-server messaging in both.

In REST, you put all of the state in the header of the message. The server processes that data and your request, and gives you a response. If you have another data request, you have to send the header again to remind the server of who you are.

In WebSocket, you send the header once and establish a two-way channel of communication. This means you don't need to remind the server of who you are and what the context of your operations is, shortening the overall amount of data transmitted.

You can open a WebSocket and use it to send messages from the server that tell the client how to modify the page when the user takes certain actions. The result is much faster than loading the pages on each request via REST.

Stateful Versus Stateless

A large difference between WebSocket and REST is that WebSockets are stateful, while REST isn’t.

WebSockets are meant for maintaining a bi-directional connection, where each party can send the other messages when they want to. This is impossible to do without establishing some kind of state — in other words, without receiving a response.

REST, in contrast, is focused on responding to requests. Each request is stateless, meaning the server is like a function producing outputs for the client's inputs. As such, REST is much more suited for infrequent, one-directional, atomic connections.

Performance

Developing with REST is simple and easy. Because it’s so straightforward, there’s not a lot you need to add to the development process to make it run more smoothly. One of the biggest benefits of working with REST is that you don’t need to be hands-on with its management.

Developing with WebSockets, in contrast, is much more challenging. This is because you need to manage a stateful system, meaning there are multiple users connected to a server at the same time, with certain statuses, privileges, user groups, and so on. The variables can add some extra labor.

However, when you elect to use WebSockets where they’re most applicable, they shouldn’t add a lot of extra complexity that doesn’t already exist in the problem domain. Rather, they’ll simplify things that are typically very difficult.

REST is very simple, and some of that is because it’s used for simple things. Therefore, using WebSockets for simple applications will just complicate the development process. But for complicated real-time applications, using WebSockets will be much easier and will offer more flexibility than REST.

Dependencies

Both REST APIs and WebSockets are rather easy to set up and maintain if you don’t develop them from scratch.

Since REST APIs are foundational to the modern web, any kind of web framework will enable you to set them up in a trivial amount of time.

Though WebSockets are a little less popular, there are plenty of libraries for them in most major programming languages. For example, JavaScript has Socket.IO, and Python has the websockets library. So, if you choose to use WebSockets, you won't feel a lack of support.

WebSockets vs. REST: Use Cases

Both REST and WebSockets have their use cases where they’re best suited for the job — and others where they’re not as strong. Let’s discuss an instance where you’d want to use each.

When Should You Use REST?

Do you need to get a simple web resource up and running or create a basic API? If so, REST will be enough. REST is well suited to isolated, occasional communications, such as a basic API serving simple GET requests. The simplicity of development will justify any performance downsides, if there are any, that your app might suffer from being stateless.

When Should You Use WebSocket?

If you have an application that needs real-time interaction, or if it has any other events that need to be sent back to users (such as notifications), WebSocket is a tool that can help you manage a lot of the complexity of the process.

For a regular application that's well served by a stateless, request-response-driven architecture, using WebSocket would be a case of over-engineering and would most likely add unnecessary complexity.

Deciding Between WebSocket and REST

REST and WebSocket are two different technologies that are quite popular and well-supported in the world of software development.

REST is the most common way of building APIs. You can find it in virtually every web project, and it is generally the best choice for most simple web projects. But in cases where you’re working with users that need to receive real-time information, like social media updates, real-time analytics, or monitoring data, it’s better to use WebSockets.

Now that you've seen a comprehensive overview of REST and WebSockets and their strengths, weaknesses, and ideal use cases, you're prepared to decide which of these technically different approaches you should use for your next project.

Integrate Conversation Intelligence with Your Communication Stack
If you are building a communication experience that is enabled natively with voice or video, you are probably building either with an open-source communication stack like Jitsi, a cloud API like Twilio or Agora, or with communication platforms like Zoom or Microsoft Teams. In any case, building communication does not end with adding voice or video; it goes beyond enablement. The 2.0 of communication experiences is now defined by conversation intelligence: in real time, before the communication, and after the particular instance has ended. Using the characteristics, content, and tone of conversations, and leveraging the intelligence from single or multiple calls, is redefining the next generation of digital communications.

Conversation content is an untapped data source, from finding the most effective ways to improve your product to learning about your customers' pain points and concerns. Meaningful insights can still be missed or go undocumented when these conversations are not captured or contextually analyzed. If you don't take the time to pass information from calls on to other team members, or if there isn't a method in place to record and distribute feedback to other stakeholders, all of the knowledge that could have helped improve your company and product is never saved and utilized. Conversational intelligence refers to software that analyzes audio or text using artificial intelligence (AI) to obtain data-driven insights from communication between employees and consumers.

Real-world applications for Intelligent Communication Experiences

In order to act on data in real time, conversation data needs to be streamed across various platforms like CRMs, ad platforms, data analytics, attribution systems, and digital experience platforms. It’s then used by revenue teams in marketing, sales, customer service, and e-commerce to improve purchasing experiences and increase conversions and revenue. Some examples of applications across the customer experience lifecycle:
  • Retrieve meaningful insights directly from customers by obtaining detailed information from conversations with them.
  • Better understand customer behavior and map them out in order to improve your services.
  • Predict customer behavior and provide them with exceptional service.
Other use cases include:
  • Management of pipelines. Sales managers can review late-stage conversations so that they can forecast more accurately.
  • Coaching for sales. Managers can examine successful sales calls to learn from top performers and provide coaching/feedback to those who need it.
  • Sharing tribal knowledge. Sales conversations provide a wealth of data that teams can use to influence product roadmaps, messaging, and competitive market intelligence.
  • Improvements to the sales process. Organizations can detect bottlenecks in their sales process and make adjustments.
  • Sales onboarding. To reduce training time, new sales personnel can listen to successful sales calls.
In this blog, we will take an example of how you can set up Symbl and Jitsi and the communication workflow to enable conversation intelligence and analytics in your application. Note that for demo purposes we are using simple JavaScript code run in the browser, but ideally, if you are building a web application with the communication APIs, please use the Web SDK to build the integration. Drop a note to support@symbl.ai if you have any questions on using the Web SDK for your app. Now let's take a quick look at a quick test run with Jitsi and Symbl.ai.

What Is Jitsi?

Jitsi is an open source video conferencing software that allows you to easily build and deploy secure video conferencing solutions. At the heart of Jitsi are Jitsi Videobridge and Jitsi Meet, which let you hold conferences on the internet. Other projects in the community enable features like audio, dial-in, recording, and simulcasting.

What Is Symbl.ai?

Symbl.ai is a conversation intelligence platform that allows developers to natively integrate conversation intelligence into their voice or video applications without building machine learning models. It's an AI-powered, API-first conversation intelligence platform for natural human conversations that works on audio, video, and textual content in real time or on recorded files. Symbl.ai's APIs let you generate highly accurate and contextually relevant real-time sentiment analysis, questions, action items, topics, trackers, and summaries in your applications.

Setting Up Jitsi

For this tutorial, you'll need to run Node.js version 14 or greater and npm version 7 or greater. Download and install Node.js, then clone the Jitsi Meet repository:
# Clone the repository
git clone https://github.com/jitsi/jitsi-meet
cd ./jitsi-meet
npm install
# To build the Jitsi Meet application, just type
make
Run this code with webpack-dev-server for development: when you execute the make dev command in your terminal, the app is served on localhost:8080 in the browser. The default backend deployment is alpha.jitsi.net. If you plan to use a different server, you can use a proxy server to point the Jitsi Meet app to a different backend. To accomplish this, set the WEBPACK_DEV_SERVER_PROXY_TARGET variable:
export WEBPACK_DEV_SERVER_PROXY_TARGET=https://your-example-server.com
make dev
Now the app should be running at https://localhost:8080/. Note that the development certificate is self-signed and browsers may display a certificate error. It’s safe to ignore these warnings and proceed to your website.

Setting Up Symbl.ai

To set up Symbl.ai, create a free account or log in at Symbl.ai. Then, copy your API keys and use these credentials to generate an authentication token for making API queries. To send a recorded conversation or to make a live connection, you need to send conversation data in real time or after the call has ended using one of the following APIs:
  • Async APIs allow you to send text, audio, or video conversations in recorded format.
  • Streaming APIs allow you to connect Symbl.ai on a live call with WebSocket protocol.
  • Telephony APIs allow you to connect Symbl.ai on a live audio conversation with Session Initiation Protocol [SIP] and Public Switched Telephone Network [PSTN].
Finally, you need to get the conversation intelligence. A conversationId should have been returned to you in the previous step. This can now be used with the Conversation API to retrieve any of the following:
  • Speech-to-text (transcripts)
  • Topics
  • Sentiment analysis
  • Action items
  • Follow-ups
  • Questions
  • Trackers
  • Conversation analytics

Integrate Symbl.ai with Jitsi

Building apps and SDKs on Windows is not supported by Jitsi. To address this problem, you’ll need to make use of Debian or Ubuntu. Before integrating Symbl.ai with Jitsi, make sure you have Jitsi running locally on your machine. Then log in to your Symbl.ai dashboard and copy your App ID and App Secret. Make a POST request to https://api.symbl.ai/oauth2/token:generate with a tool like cURL or Postman. Use the following as the POST body:
{
    "type": "application",
    "appId": "YOUR APP ID",
    "appSecret": "YOUR APP SECRET"
}
Once you add your App ID and App Secret, click SEND to generate an accessToken. Now you need to integrate live speech-to-text and AI insights in your browser within your Jitsi meeting using WebSockets. Instead of using the PSTN (Public Switched Telephone Network), which is expensive when it comes to scalability, you'll utilize WebSockets for this integration. Navigate to your Jitsi webpage, open the console, and paste the following:
/**
* The JWT token you get after authenticating with our API.
* Check the Authentication section of the documentation for more details.
*/
const accessToken = "" //your access token from symbl.ai
const uniqueMeetingId = btoa("user@example.com")
const symblEndpoint = `wss://api.symbl.ai/v1/realtime/insights/${uniqueMeetingId}?access_token=${accessToken}`;

const ws = new WebSocket(symblEndpoint);

// Fired when a message is received from the WebSocket server
ws.onmessage = (event) => {
// You can find the conversationId in event.message.data.conversationId;
const data = JSON.parse(event.data);
if (data.type === 'message' && data.message.hasOwnProperty('data')) {
console.log('conversationId', data.message.data.conversationId);
}
if (data.type === 'message_response') {
for (let message of data.messages) {
console.log('Transcript (more accurate): ', message.payload.content);
}
}
if (data.type === 'topic_response') {
for (let topic of data.topics) {
console.log('Topic detected: ', topic.phrases)
}
}
if (data.type === 'insight_response') {
for (let insight of data.insights) {
console.log('Insight detected: ', insight.payload.content);
}
}
if (data.type === 'message' && data.message.hasOwnProperty('punctuated')) {
console.log('Live transcript (less accurate): ', data.message.punctuated.transcript)
}
console.log(`Response type: ${data.type}. Object: `, data);
};

// Fired when the WebSocket closes unexpectedly due to an error or lost connection
ws.onerror = (err) => {
console.error(err);
};

// Fired when the WebSocket connection has been closed
ws.onclose = (event) => {
console.info('Connection to websocket closed');
};

// Fired when the connection succeeds.
ws.onopen = (event) => {
ws.send(JSON.stringify({
type: 'start_request',
meetingTitle: 'Websockets How-to', // Conversation name
insightTypes: ['question', 'action_item'], // Will enable insight generation
config: {
confidenceThreshold: 0.5,
languageCode: 'en-US',
speechRecognition: {
encoding: 'LINEAR16',
sampleRateHertz: 44100,
}
},
speaker: {
userId: 'example@symbl.ai',
name: 'Example Sample',
}
}));
};

const stream = await navigator.mediaDevices.getUserMedia({
audio: true,
video: false
});

/**
* The callback function which fires after a user gives the browser permission to use
* the computer's microphone. Starts a recording session which sends the audio stream to
* the WebSocket endpoint for processing.
*/
const handleSuccess = (stream) => {
const AudioContext = window.AudioContext;
const context = new AudioContext();
const source = context.createMediaStreamSource(stream);
const processor = context.createScriptProcessor(1024, 1, 1);
const gainNode = context.createGain();
source.connect(gainNode);
gainNode.connect(processor);
processor.connect(context.destination);
processor.onaudioprocess = (e) => {
// convert to 16-bit payload
const inputData = e.inputBuffer.getChannelData(0) || new Float32Array(this.bufferSize);
const targetBuffer = new Int16Array(inputData.length);
for (let index = 0; index < inputData.length; index++) {
targetBuffer[index] = 32767 * Math.min(1, inputData[index]);
}
// Send audio stream to websocket.
if (ws.readyState === WebSocket.OPEN) {
ws.send(targetBuffer.buffer);
}
};
};

handleSuccess(stream);
Run this code in your browser's developer console or embed it in an HTML document's <script> element, then press Enter. Whatever is said in the meeting will be transcribed, along with topics and other insights. You should now be able to retrieve conversation insights using the Conversation ID, which you can obtain from the onmessage handler. With it, you can also view conversation topics, action items, and follow-ups. To end the connection, just close your browser, or, if you wish to automate the process, you can add your email so that at the end of a call the insights are emailed directly to you.

Conclusion

In this article you looked at the need for conversation intelligence, the 2.0 of your communication experience, and the importance of using conversation data from voice or video calls in your product or business. Conversation intelligence is a critical part of your communication stack and can boost your growth across all fronts of communication. You were also introduced to Jitsi, an open-source communication stack. You can use it in combination with Symbl.ai's APIs to integrate conversation intelligence into your application. By utilizing Jitsi and Symbl.ai together, you can retrieve meaningful insights, understand customer behavior, and predict customer behavior in order to improve both your product and your sales. If you are building a web-based communication app, please refer to the Web SDK tutorial with Symbl. You can also extend the existing experience with additional intelligence, or build the same integration using cloud APIs for voice and video instead of Jitsi. Please share any feedback in the developer Slack community.

Best Practices For Adding Webhooks To Your Applications
APIs have long been useful for many types of situations that developers regularly encounter. They have enabled you — for example — to expose endpoints and then make the desired requests. However, while this technique has often proven sufficient, it has some limiting factors.

For instance, suppose you want to view the commits in a specific project and repository on GitHub. To do this, you can load the GitHub API and explore the relevant commits. You can also check whether someone has committed code by comparing the current hash against one that you've previously recorded. However, GitHub rate-limits the number of API calls you can make, so polling this way inevitably wastes bandwidth and resources.

Webhooks simplify this problem

 

Webhooks are an event-driven approach that enables applications to send automated messages or information to other applications. They provide a valuable way for software to communicate with other systems by sending out webhook notifications in response to designated events.
In the context of our earlier example, webhooks eliminate the need to make API calls. Instead, you can just tell GitHub to notify you when there are any new commits within the project or repository.

Developers also call webhooks "reverse APIs" because the usual flow is reversed: instead of your application calling an API, the provider calls an endpoint you expose whenever an event occurs. This makes for a lightweight data-sharing approach and a quick setup process that is less resource-intensive and architecture-specific than polling a traditional API.

To ensure the best experience from webhooks, there are several best practices to understand and a few caveats that you should consider. In the following sections, we will introduce some of these practices and some things to keep in mind as you begin to implement your webhooks.

Best Practices for Adding Webhooks to Applications

Handling Authentication

 

Authentication and authorization are essential for keeping webhooks safe from spoofing, alteration, and infiltration within the network. They are critical because the sender doesn't receive a meaningful reply after delivering data to a webhook endpoint. The client receiving the webhook should ensure that the data has not been altered in transit and that the server sending to the webhook endpoint is legitimate. There are certain security and authentication measures users should perform to make webhooks secure.

Sign Webhook Payloads

 

For your clients or consumers to verify that you are who you claim to be and that the data you sent in a webhook event is legitimate, you must sign your webhook payload with a secret key or signature. Companies like Stripe use symmetric secret keys in the request header to sign webhook payloads. This allows users to access the key in their console to confirm the signature at their webhook endpoint.

Here is an example of a Python verification snippet:

import hmac, hashlib
from flask import Response  # assuming a Flask request handler
# webhook_secret (bytes) and payload_body (raw request bytes) are assumed here; HMAC-SHA256 is a common signing scheme
server_sig = hmac.new(webhook_secret, payload_body, hashlib.sha256).hexdigest()
if client_signature != server_sig:
    return ('Signature does not match', 403)
else:
    server_response = "<Response><Say>Huzzah! The Signature has been validated!</Say></Response>"
    return Response(server_response, mimetype='application/xml')

Encrypt Data Sent by Webhooks

 

Since webhook requests are like any other HTTP requests, they are visible and readable as they travel from source to destination during a webhook event. This makes them susceptible to data infiltration, which can lead to interception of authentication tokens and critical messages, and to corruption of the whole data set.

By using secure HTTP communication (HTTPS), developers can keep the message from being read in transit. Providing a secure webhook URL and installing SSL certificates on the client provides a secure transport layer that encrypts all data on the network.

Please note: this is transport-level encryption. Encryption can also happen at the application level, where the payload itself is encrypted with a symmetric key (as mentioned above).

Authenticate Connections and Certificate Pinning

 

Proving the identities of your connections when adding webhooks to your application is imperative. Doing so reduces any information leakage if someone redirects a request. Using Mutual TLS, both the client and the server verify and authenticate each other.

The most secure way to handle webhook payload security is certificate pinning. Both parties provide certificates during the TLS handshake to prove their identities. Pinning ensures that an attacker cannot compromise your data, even if they hold a malicious root certificate. If an attacker redirects a webhook endpoint, the authentication fails, and the attacker doesn't get access to the information within the webhook.

Handling Non-sensitive Data

 

Webhooks are publicly accessible on the Internet, making their security a challenge. You should primarily use webhooks for non-sensitive data, such as letting users know about a status change or non-authenticated tokens. Avoid sending sensitive or private information by webhooks, even when using encryption.

Implement Error Handling and a “Retry” Policy

 

When making POST requests to a client URL, some of the requests fail due to poor routing and domain name system (DNS) issues. As a developer, you’ll want to try a few more times, but only for a limited period with a limited number of attempts.

Using exponential backoff is a better way to approach this problem: it increases the wait time between successive retries instead of retrying at a fixed interval. You should mark an endpoint as broken when it repeatedly fails to respond with a success status code (for example, 2xx) and stop sending requests to it. Broken, inaccessible endpoints should trigger a notification to a developer so they can be fixed.

Also, you can implement a "back off" policy to gradually reduce the number of webhook notifications sent to a destination if the retry feature creates a continuous loop. Any response with a 2xx status code means the delivery succeeded; otherwise, it is retried.

Implement Logging

 

Admin users can use webhook logs to monitor the activity of their webhook account. Webhook logs usually record each failed and successful delivery for each webhook event. This record helps developers know which of the payloads sent out reached the consumer, making it easy to debug and fix any errors that are likely to happen down the road.

Subscriptions with Expiry Date

 

Using expiration time limits is an efficient way to save the time and resources used by webhooks. Subscribed clients should have a period (a week, a month, etc.) after which their subscription expires if they don't renew it. This approach works well because some consumers are no longer interested in the data or do not want to receive it anymore. Since it's not safe to assume, it's best to send automated renewal instructions before retiring their subscriptions. If your application has different types of users, it's a good choice to prioritize premium clients by extending the webhook expiration period.

Handling Duplicate Events

 

Occasionally, webhook endpoints might receive the same event multiple times. To safeguard against duplicated event receipts, developers should ensure that the event processing is idempotent. The best way to achieve this is to log the events you’ve received and avoid processing events that are already logged.

Next Steps

 

Adopting webhooks in your next application is a great way to explore them further. While there are no formal standards for the security and best practices to follow when adding a webhook to an application, the methods introduced here can help you secure your application and ensure your connections are safe when sending data. They can help you achieve security, scalability, and ease of implementation with less bandwidth use and less manual oversight.

Enabling accessibility & automated content management for E-learning platforms with Symbl.ai 👩🏻‍🎓 [Part 2]
In the first part of this tutorial, you saw how to rebuild a video platform like Udemy using api.video with Next.js, React, and Typescript. However, no modern e-learning platform is truly complete until the knowledge is accessible in all forms: audio, video, and text. Additionally, it is also important to have content management capabilities such as subject tags, an overview of a lecture for quick recaps, etc. These simple features, being part of the regular offline learning experience, go a long way in enhancing the experience and engagement on the online platform for both teachers and learners. In the second part of this tutorial, you will learn how to add transcripts/speech-to-text, as well as AI-generated topics and summarization of the video lecture, using Symbl.ai. This demo is built using Next.js, React, and Typescript. The complete code is available on GitHub in case you want to follow along.

Features

This demo includes the following features:
  • Speech-to-Text  –  Video Transcript to make a lecture accessible and more engaging
  • Topics  –  Categorical tagging & Overview of lecture based on actual keywords mentioned
  • Summary  –  Brief overview of a video lecture for a recap or a quick perusal of content

Enabling Accessibility & Automated Content Management for E-learning Platforms

  Before we move on to the code, let us quickly go through the high-level overview of what changes need to be made for this integration to work. If you are already familiar with Symbl, or if you directly want to run the application, you can skip ahead to the demo section.

How Symbl Asynchronous Processing Works

  Symbl.ai consumes the pre-recorded media, such as video lectures on an e-learning platform, through our Async API. The Async API is a REST API that is used to submit your video, audio, or text conversation to Symbl, along with some parameters to specify details about the file format, language, and settings for the intelligence features that you wish to use. Symbl ingests each video and once the processing is complete, you can use the Conversation API to retrieve intelligence for that video as many times as necessary. In a nutshell, the steps for processing a video with Async APIs are –
  1. Submit video file/URL with configuration using a POST Video request (mp4)
  2. Check whether the processing Job is completed in Symbl
  3. Fetch Intelligence using Conversation API endpoints
This needs to be done for each video. Once a video is submitted, you receive a unique Conversation ID from Symbl. This Conversation ID is used to retrieve, update, and delete Symbl insights post-processing, so it needs to be stored on the client side along with its associated video. You also receive a Job ID upon successfully submitting a video. The Job ID is used to check the processing status at Symbl, which is one of the following: scheduled, in_progress, completed, failed.

*Note: Videos sent for processing are not stored with Symbl. Only the generated insights are retained, until they are manually deleted or the associated Symbl account is deleted.

Enhancing the backend 

  So far, we have built a website using Next.js, React, and Typescript. You will now expand to add Symbl-related functionality to the Next.js server, the database to store the video IDs with the conversation IDs, and lastly add in some components to display the intelligence in the UI.

High-level Architecture diagram

  In short, you will be adding a database to the backend with functionality for maintaining it, and functionality for processing all videos with Symbl. This is explained in more detail in the following sections.

Choosing a database

  Choosing an appropriate database for your application is extremely important. Some of the factors to consider are the nature of the data and most frequently used queries, estimated storage volumes, costs, and performance. While using Symbl with api.video, the primary reason we need a database is to link the videos with their insights. The second reason is to keep track of which videos have already been processed with Symbl and which still need to be processed. The bare minimum requirements from the database are the following:
  1. Link a video id with the corresponding conversation ID
  2. Update the status of job processing with Symbl
The Video ID is used as the primary key for querying. The two other essential attributes are conversation ID and Symbl processing status. In addition to this, we also chose to save the Video URLs from api.video in the database for quicker access, and the publishedAt field for the video as a sorting parameter. In general, the values would be as follows:
  • videoId (String) – api.video's alphanumeric Video ID
  • conversationId (String) – Symbl's numeric Conversation ID
  • videoUrl (String) – The mp4 link from api.video, of the form `https://cdn.api.video/vod/${{videoId}}/mp4/source.mp4`
  • publishedAt (String) – The date and time the video was created on the api.video platform, in ISO-8601 UTC format
  • symblStatus (String) – Whether the video has been processed with Symbl. Values used in this demo are pending, in_progress with jobId ${{jobId}}, and completed
To keep things very basic for this demo, we went ahead and just used a plain old JSON file. For actual applications, you can choose a database of your liking based on all the points mentioned in this section.
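For reference, a single record in the demo's JSON-file database might look something like the following (the field names follow the schema above; the values shown are placeholders, not real IDs):

// Hypothetical record in the JSON-file "database"; values are placeholders.
const exampleRecord = {
  videoId: 'vi4k0jvEUuaTdRAEjQ4Jfrgz',
  videoUrl: 'https://cdn.api.video/vod/vi4k0jvEUuaTdRAEjQ4Jfrgz/mp4/source.mp4',
  publishedAt: '2022-05-31T18:08:25.000Z', // ISO-8601 UTC
  conversationId: '4639962491256832',      // filled in once Symbl returns it
  symblStatus: 'completed',                // pending | in_progress with jobId ... | completed
};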

Server functionality

  Now that we have a database, we need to communicate with it and process the videos. We will expand our Next.js server to perform two additional functions:
  1. Maintain the database – add, update, retrieve and delete video-related data
  2. Make sure that all videos are processed completely by Symbl
Both of these are broad functions and can be broken down into smaller functions. The exact breakdown would vary slightly depending on your tech stack and database design. Maintaining the database involves periodically fetching user videos from api.video and updating the database with new videos, as well as updating the existing videos with the Symbl conversation ID and the Symbl processing status (to in_progress and then completed) as appropriate.

To make sure all videos are processed completely by Symbl, you need to check the symblStatus and queue all videos with pending status. In the demo, we only process one video at a time. To process the videos, you need to first fetch an access token from Symbl. Using axios, the function looks something like this:
import axios from 'axios';

const httpClient = axios; // axios is used as the HTTP client here

const fetchSymblToken = async () => {
  try {
    const url = 'https://api.symbl.ai/oauth2/token:generate';
    const data = {
      type: 'application',
      appId: process.env.SYMBL_APP_ID,         // Symbl App ID from the .env file
      appSecret: process.env.SYMBL_APP_SECRET, // Symbl App Secret from the .env file
    };
    const response = await httpClient.post(url, data);
    const token = response.data.accessToken;
    console.log('🦄 Symbl Authentication Successful - ', token);
    return token; // Store the response token
  } catch (error) {
    console.log('🦄 Error fetching Symbl token -', error);
  }
};
You can find full details of the Authentication request in our Docs – Symbl Docs | API Reference – Authentication. The access token generated is valid for 24 hours (86400 seconds). This access token should be passed in the headers of all Symbl requests. The Symbl Async Request to submit a video would look as follows:
const postAsyncVideo = async (accessToken, videoUrl, videoId) => {
  const url = 'https://api.symbl.ai/v1/process/video/url';
  const axiosConfig = {
    headers: {
      'Authorization': `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
      'x-api-key': accessToken,
    },
  };
  const data = {
    url: videoUrl,                // mp4 video link from api.video
    name: videoId ? videoId : '', // use the videoId as the conversation name on Symbl
    enableSummary: true,          // required to generate the Summary later
  };
  try {
    const response = await httpClient.post(url, data, axiosConfig);
    const { conversationId, jobId } = response.data;
    console.log('🦄 Symbl Async Request Successful! ', response.data);
    return { conversationId, jobId };
  } catch (error) {
    console.log('🦄 Error in Async Request -', error);
  }
};
Above you can see that the headers carry the Symbl accessToken we generated in the previous step. The request body contains the mp4 url that we get from api.video, and the videoId is used as the conversation name on Symbl; you can set other parameters as described in the Async POST Video URL API.

Once you submit the job, you need to check whether the processing has been completed – if it has not, you will only get empty or partial results for the insights. In this demo, we used a polling mechanism to check the job status on Symbl, but you can also add a Webhook URL in the body of the request should you wish to. You can find the full request body parameters in our Docs here: API Reference – Async POST Video URL. You can find the full code related to server functionality in the udemy-clone-next-typescript/src/services folder.
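As a rough sketch of that polling step (assuming Symbl's Job API endpoint GET https://api.symbl.ai/v1/job/{jobId}, which returns a status field – check the Job API reference to confirm the exact response shape), and re-using the httpClient from above:

// Poll the Symbl Job API until the job reaches a terminal state.
const pollJobStatus = async (accessToken, jobId, intervalMs = 10000) => {
  const url = `https://api.symbl.ai/v1/job/${jobId}`;
  const axiosConfig = { headers: { 'Authorization': `Bearer ${accessToken}` } };
  while (true) {
    const response = await httpClient.get(url, axiosConfig);
    const { status } = response.data; // scheduled | in_progress | completed | failed
    console.log(`🦄 Job ${jobId} status:`, status);
    if (status === 'completed' || status === 'failed') return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
};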

Fetching Symbl Results into UI Components

  Once the processing is complete, you can now fetch the results on the client-side and display them when the user opens the video. In this section, you will see how to get the Speech-to-Text (Transcripts), Summary, and Topics using the Symbl Conversation APIs and have a look at ways you can effectively display them.

Overview Section

  To make it easier to navigate through a series of lectures in an online course, each video should have a summary and a list of keywords from the lecture as part of the lecture overview.

Summary

To generate the Summary of the video you use the GET Summary request from the Symbl Conversation APIs. Our backend handler function using axios is as follows:
import axios from 'axios';

const handler = async (req, res) => {
  try {
    const { conversationId } = req.query; // conversation id passed as a query parameter
    const result = await axios.get(
      `https://api.symbl.ai/v1/conversations/${conversationId}/summary`, {
        headers: {
          'Authorization': req.headers.authorization, // Symbl access token
          'Content-Type': 'application/json'
        }
      });
    const { summary } = result.data;
    res.status(200).json(summary);
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Failed to fetch summary' });
  }
};
This handler is called in the Video component, and takes in the Symbl access token & conversation id. The response is an array of text strings, which can be interpreted as logical paragraphs of the summary. To display the full summary, traverse the array and display each text as a separate paragraph, or concatenate it and display it as a single paragraph.
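For instance, a minimal React sketch (component and prop names here are illustrative, not the demo's actual code, and it assumes each array element is a plain text string as described above) could render each entry as its own paragraph:

import React from 'react';

// Illustrative component: one paragraph per summary entry.
const SummarySection = ({ summary }) => (
  <div className="summary">
    {summary.map((paragraph, index) => (
      <p key={index}>{paragraph}</p>
    ))}
  </div>
);

export default SummarySection;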

Topics

To retrieve the keyword tags, or topics, use the GET Topics request. Our backend handler function using axios is as follows:
import axios from 'axios';

const handler = async (req, res) => {
  try {
    const { conversationId } = req.query; // pass conversation id to function
    const result = await axios.get(
      `https://api.symbl.ai/v1/conversations/${conversationId}/topics`, {
        headers: {
          'Authorization': req.headers.authorization, // Symbl access token
          'Content-Type': 'application/json'
        }
      });
    const { topics } = result.data;
    res.status(200).json(topics);
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Failed to fetch topics' });
  }
};
This handler is also called in the Video component, and takes in the Symbl access token & conversation id. The response is an array of topics along with some additional information related to them, such as sentiment and confidence score. You can display the topics as a string of tags or utilize the additional information for fancier displays such as word clouds based on confidence score, or sentiment color coding.
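As a simple illustration (component names are hypothetical, and it assumes each topic object carries id and text fields), the tags could be rendered like this:

import React from 'react';

// Illustrative component: render each detected topic as a simple tag.
const TopicTags = ({ topics }) => (
  <div className="topic-tags">
    {topics.map((topic) => (
      <span key={topic.id} className="tag">{topic.text}</span>
    ))}
  </div>
);

export default TopicTags;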

Transcript Tab

Adding a transcript of the video allows learners to access and engage with the video in a textual medium in addition to its audio-visual medium. The transcript also makes it easier to quote parts of the lecture in your notes!

You can fetch the speech-to-text results from Symbl in 3 formats – a JSON Array of sentences, Markdown, and SRT captions. The JSON Array of sentences can be fetched using the GET Speech-to-Text Messages request. This gives you a sentence-level breakdown of the transcript and provides timestamps, speaker, entities, important phrases, and sentiments for each sentence. The Markdown & SRT formats are available as part of the Formatted Transcript. The Markdown format can be used to display the entire transcript in your markdown-compatible UI components, and it has parameters to auto-format the transcript, make it presentable, and highlight insights. The SRT format can be uploaded as captions for your video, and you can configure it to show the speaker's name.

For the transcript tab, either the Speech-to-Text Messages or Markdown format can be used. In this demo, we used the Speech-to-Text Messages request. Our backend handler function using axios is as follows:
import axios from 'axios';

const handler = async (req, res) => {
  try {
    const { conversationId } = req.query; // pass conversation id to function
    const result = await axios.get(
      `https://api.symbl.ai/v1/conversations/${conversationId}/messages`, {
        headers: {
          'Authorization': req.headers.authorization, // Symbl access token
          'Content-Type': 'application/json'
        }
      });
    const { messages } = result.data;
    res.status(200).json(messages);
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Failed to fetch messages' });
  }
};
As before, this handler is called in the Video component and takes in the Symbl access token & conversation id. The transcript is then displayed in the sidebar, which viewers can toggle when they need it. Using the Speech-to-Text Messages instead of Markdown lets us break the transcript into smaller messages that are easier to read while watching a lecture; the Markdown transcript, by contrast, is broken into longer, logical paragraphs that are better for reading through independently.
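To give a rough idea (again with hypothetical component names, assuming each message object exposes id, text, and a from.name speaker field), the sidebar could render the messages like this:

import React from 'react';

// Illustrative sidebar: one line per Speech-to-Text message, with speaker name.
const TranscriptSidebar = ({ messages }) => (
  <aside className="transcript-sidebar">
    {messages.map((message) => (
      <p key={message.id}>
        <strong>{message.from && message.from.name ? message.from.name : 'Speaker'}: </strong>
        {message.text}
      </p>
    ))}
  </aside>
);

export default TranscriptSidebar;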

How to Run the Demo

Pre-requisites

For the second part of this demo, you will need the following:
  • A Symbl account: Sign up to Symbl for free and gather your Symbl credentials i.e. your Symbl App ID and App Secret.
  • An Api.video account: Sign up for a free Api.video and grab your Sandbox API Key.

Clone repo/ Switch branches & install dependencies

In case you have not completed the first part of the demo yet, you can get the completed code by cloning the GitHub repo using the following command:

$ git clone https://github.com/apivideo/udemy-clone-next-typescript.git

And then navigate to the folder containing the code:

$ cd udemy-clone-next-typescript

Switch to the branch containing the Symbl integration code:

$ git checkout api-video-symbl

Install the additional dependencies by running:

$ npm install

Environment Variables

From the Symbl Platform, grab your API Keys (App ID, App Secret) and store them in the .env file as follows:

SYMBL_APP_ID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
SYMBL_APP_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxx

Run the app

To start the development server, run one of the following commands:
# run the development server
npm run dev
# or
yarn dev

Navigating the demo

When the demo application is up and running, you will first be greeted by the landing page. In the top right, you can enter your api.video key and your name. At the bottom, you can see the library of videos that you have uploaded to the Api.video Dashboard. You can click on a video to start learning!

Once the demo application has started, the server will process all videos one by one, so you will not see insights immediately. You can view the logs to check the processing status. After the Transcript, Topics and Summary have been generated for a video, you can access them in the application. You can open the Transcript tab by clicking on the document icon button below the video.

API Reference

Next Steps

To learn more about different Integrations offered at Symbl, go to our Integrations Directory. Explore solutions for different use-cases from our Solutions page. For building more intelligence with Symbl, check out our documentation.

This guide is actively developed, and we love to hear from you! If you liked our integration guide, please star our repo! Please feel free to create an issue or open a pull request with your questions, comments, suggestions, and feedback, or reach out to us at devrelations@symbl.ai, through our Community Slack or our forum.

The post Enabling accessibility & automated content management for E-learning platforms with Symbl.ai 👩🏻‍🎓 [Part 2] appeared first on Symbl.ai.

]]>
How To Embed Symbl.ai Insights with Vonage Video Calling https://symbl.ai/developers/blog/live-symbl-insights-with-video-vonage-calling/ Fri, 06 May 2022 22:05:11 +0000 https://symbl.ai/?p=24186 This article demonstrates how to use Symbl.ai’s Conversation Intelligence capabilities in tandem with a live video calling app built with the Vonage Video API. Vonage is a global leader in cloud communications with its wide range of offerings such as Unified Communications, Contact Centers, Communications APIs, and more. One of the offerings in their Communications […]

The post How To Embed Symbl.ai Insights with Vonage Video Calling appeared first on Symbl.ai.

]]>
This article demonstrates how to use Symbl.ai’s Conversation Intelligence capabilities in tandem with a live video calling app built with the Vonage Video API.

Vonage is a global leader in cloud communications with its wide range of offerings such as Unified Communications, Contact Centers, Communications APIs, and more. One of the offerings in their Communications APIs is the “Vonage Video API”, which allows users to program and customize live video applications.

Video Calls have seen a huge spike in the pandemic era, but they were already on an upwards trend for the past several years, being one of the most preferred ways for many people to connect. However, with this increase in adoption, it has become even more critical for developers of video platforms to take the leap from just facilitating these calls to helping more users actually connect. They need to enable conversations.

Symbl.ai provides out-of-the-box Conversational Intelligence capabilities to deeply analyze the spoken conversations that happen over video applications. One of these capabilities is Live Transcription or Speech-to-Text. This allows you to convert the conversations from a verbal form to a textual form and display it to users. By enabling this feature, you ensure that your app is accessible to all audiences.

Combining Vonage Video and Symbl.ai’s Conversation Intelligence, we demonstrate how to build an accessible, engaging video calling application.

How Symbl.ai’s Features Enhance Vonage Video Calling

 

Before we start building, let’s have a quick look at the available features. This video app integrated with Symbl.ai’s Real-time APIs provides the following out-of-the-box conversational intelligence features.

  • Live Closed Captioning: Live closed captioning is enabled by default and provides a real-time transcription of your audio content.
  • Live Sentiment Graph: Live Sentiment Graph shows how the sentiment of the conversations evolves over the duration of the conversation.
  • Live Conversation Insights: Real-time detection of conversational insights such as actionable phrases, questions asked, follow-ups planned.
  • Post-call Speaker Analytics: After the call ends, the app will show speaker talk-times, ratios, overlaps and silences, and talking speeds per speaker.
  • Post-call Automatic Summary: Using Symbl.ai’s Summarisation capabilities, you can generate a full-conversation summary at the end of the call.
  • Video conferencing with real-time video and audio: This allows for real-time use cases where both the video, audio (and its results from Symbl.ai’s back-end) need to be available in real-time. It can be integrated directly via the browser or server.
  • Enable/Disable camera: After connecting your camera, you can enable or disable the camera when you want.
  • Mute/unmute mic: After you connect to your device’s microphone you can mute or unmute when you want.

To see the end result in action, you can watch this video:

Vonage Video Calling Architecture


The basic video call app has two major parts, a backend server, and a client-side Web application. The backend server serves authentication tokens for Vonage services to the client-side of the application. The client-side of the application then handles the video call sessions, users, and media. This is done using the Vonage Client SDK.

To use Symbl.ai, we need an authentication token from Symbl.ai and the audio streams of the user over a WebSocket. In the demo, we re-use the same backend server to get the Symbl.ai access token.

For the intelligence, we hook into the client-side application and stream audio to Symbl.ai over WebSocket. The conversational intelligence generated is sent back to the client app and then displayed.
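The demo itself uses Symbl.ai's Web SDK for this, but to illustrate the underlying idea, here is a rough browser-side sketch against the raw Streaming API WebSocket (the endpoint and start_request fields follow Symbl's Streaming API documentation; treat the exact shapes as assumptions and verify them against the current API reference):

// Rough sketch: stream audio to Symbl over a WebSocket and receive live results.
const accessToken = '<symbl-access-token-fetched-from-your-backend>';
const connectionId = 'my-unique-meeting-id'; // any unique id for this session

const ws = new WebSocket(
  `wss://api.symbl.ai/v1/streaming/${connectionId}?access_token=${accessToken}`
);

ws.onopen = () => {
  ws.send(JSON.stringify({
    type: 'start_request',
    insightTypes: ['question', 'action_item'],
    config: {
      languageCode: 'en-US',
      speechRecognition: { encoding: 'LINEAR16', sampleRateHertz: 44100 },
    },
    speaker: { userId: 'user@example.com', name: 'Participant' },
  }));
  // After this, raw audio chunks captured from the microphone are sent
  // as binary frames, e.g. ws.send(audioChunk).
};

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  if (data.type === 'message_response') {
    console.log(data.messages); // live transcription – feed into the captions UI
  }
};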

For a more in-depth explanation of the application architecture, see the README of the application.

How to Build the Demo Vonage Video Calling App

 

This app was built using the Vonage Client Web SDK, Symbl.ai's Web SDK (v0.6.0), and Create React App.

Pre-requisites

For this project, you need to first have the following ready:

  • Symbl.ai Account: Sign-up to Symbl.ai Platform and gather your credentials (App ID, App Secret). Symbl.ai offers a free plan that would be more than sufficient for this project.
  • Vonage Account: You have a Vonage Video API account. If not, you can sign up for free here.
  • Node.js v10+: Make sure to install a current version of Node.js, minimum v10.
  • NPM v6+: npm version 6 or later is required.

Integration Steps

Step 1: Set up your code

On your local machine, clone the demo code by running the following command:

git clone https://github.com/nexmo-se/symblAI-demo.git

Once cloned, navigate into the root folder of the code.

cd symblAI-demo

From the root folder, install the node modules using:

npm install

* Note: The full list of dependencies can be found in the package.json file.

Step 2: Setting your credentials in .env file

If you want to run the project in the dev environment, create a .env.development file, otherwise, if you want to run the project in the production environment, then create a .env.production file.

Similar to the .env.example file, populate your .env files with the following variables:

 VIDEO_API_API_KEY= "1111"

 VIDEO_API_API_SECRET="11111"

 SERVER_PORT=5000

 SYMBL_API = "https://api.symbl.ai/oauth2/token:generate"

 appSecret = "1111111"

 appId = "11111"

 # Client Env Variables

 REACT_APP_PALETTE_PRIMARY=

 REACT_APP_PALETTE_SECONDARY=

 REACT_APP_API_URL_DEVELOPMENT=

 REACT_APP_API_URL_PRODUCTION=

Step 3: Running the code

Dev

To run the project in development mode, in one terminal/console window, start the backend server with the following command:

npm run server-dev

In another terminal/console window, start the frontend application by running:

npm start

Open http://localhost:3000 to view the application in the browser.

Production

To run the project in production mode, you can build the bundle file with the following command:

npm run build

Then start the backend server by running:

npm run server-prod

Open http://localhost:5000 to view the application in the browser.

You would first see this screen as the landing page:


Enter a room name and your name and click “Join Call.”

Now we enter the meeting room and you can see yourself on video, and a live sentiment graph on the left of the screen. As the conversation progresses, this graph will update to show the change in sentiment over the course of the call. At the bottom of the screen, you also see live captions. You can use the mic and camera buttons to toggle your connected audio/video devices.

Symbl also picks up Questions, Action Items, and Follow-ups which can be seen under the Sentiment Graph on the left.

Once you are done with the call, you can click the red exit call button to come to the final page. This page will load the speaker analytics and a summary of your call.

*Note: It may take a few seconds for the metrics and summary to be generated and loaded.

API Reference

 

Find comprehensive information about our Streaming APIs in the API Reference section, and about our Web SDK in the SDKs section.

You can find more information on the Vonage Client SDK in their documentation.

Symbl.ai Insights with Vonage: Next Steps

 

To learn more about different Integrations offered at Symbl.ai, go to our Integrations Directory.

Community

This guide is actively developed, and we love to hear from you! If you liked our integration guide, please star our repo!

Please feel free to create an issue or open a pull request with your questions, comments, suggestions, and feedback, or reach out to us at devrelations@symbl.ai, through our Community Slack or our forum.

The post How To Embed Symbl.ai Insights with Vonage Video Calling appeared first on Symbl.ai.

]]>
How-to Build a Twilio Video React App with Closed Captioning and Transcription https://symbl.ai/developers/blog/build-twilio-video-react-app/ Wed, 20 Apr 2022 04:41:30 +0000 https://symbl.ai/?p=23651 This article demonstrates how to build and use the multi-party video-conferencing application using Twillio and Symbl.ai’s Real-time APIs. Twilio is a developer-centric Customer Engagement Platform. Twilio Video allows you to embed customized video into your applications at scale. However, a video application may not be accessible to all audiences and can be arduous to build. […]

The post How-to Build a Twilio Video React App with Closed Captioning and Transcription appeared first on Symbl.ai.

]]>
This article demonstrates how to build and use a multi-party video-conferencing application using Twilio and Symbl.ai's Real-time APIs.

Twilio is a developer-centric Customer Engagement Platform. Twilio Video allows you to embed customized video into your applications at scale. However, a video application may not be accessible to all audiences and can be arduous to build. Modern applications need built-in intelligence to be more inclusive of all audiences.

Symbl.ai provides out-of-the-box Conversational Intelligence capabilities to deeply analyze the spoken conversations that happen over video applications. One of these capabilities is Live Transcription, or Speech-to-Text. This allows you to convert the conversations from a verbal form to textual form and display it to users.

Combining these two platforms, we have built a multi-party video-conferencing application that demonstrates Symbl.ai’s Real-time APIs in the form of Closed Captions.

This application was inspired by Twilio’s video app and is built using Twilio-video.js, Create React App, and Symbl.ai’s Streaming API.

Features

Before we get into the details on how to build the Twilio video conference app, let’s talk about available features. This video app integrated with Symbl.ai’s Real-time APIs provides the following out-of-the-box conversational intelligence features:

  • Live Closed Captioning: Live closed captioning is enabled by default and provides a real-time transcription of your audio content.
  • Real-time Transcription: Symbl.ai offers state-of-the-art Speech-to-Text capability (also called transcription). You can get audio and video conversations transcriptions in real-time.
  • Video conferencing with real-time video and audio: This allows for real-time use cases where both the video, audio (and its results from Symbl.ai’s back-end) need to be available in real-time. It can be integrated directly via the browser or server.
  • Enable/Disable camera: After connecting your camera, you can enable or disable the camera when you want.
  • Mute/unmute mic: After you connect to your device’s microphone you can mute or unmute when you want.
  • Screen sharing: This can be used to capture the screen directly from the web app.
  • Dominant Speaker indicator: The Dominant Speaker refers to the Participant having the highest audio activity at a given time.
  • Network Quality Indicator: Highlights the call panel with the user's current network conditions – Strong, Weak, or Poor.

Supported Platforms

This application is currently supported only on Google Chrome.

How to Build the Video Conference App

This section describes how to run this Symbl.ai-powered Twilio Video Conference App. The code samples are written in cURL.

Prerequisites

Before running the Symbl-Twilio Conference App, ensure that the following prerequisites are met:

  • JS ES6+: The app uses modern JavaScript (ES6+), so make sure your environment supports it.
  • Node.js v10+: Make sure to install a current version of Node.js, minimum v10.
  • NPM v6+: npm version 6 or later is required.
  • Twilio account: You will require a Twilio account, you’ll also need your account SID, available on your Twilio dashboard, and an API key and secret, which you can generate in your console.
  • Symbl Account: Sign up to the Symbl Platform and gather your Symbl credentials, i.e. your App Id and your App Secret.

Integration Steps

This section walks you through the steps necessary to install the Symbl for Twilio Video Conference App and run it on your local machine.

Step 1: Clone App and Install dependencies

On your local machine, clone the repo by running the following command on your terminal/console:

git clone https://github.com/symblai/symbl-twilio-video-react.git

Then navigate to the folder where the code is:

cd symbl-twilio-video-react

From the master folder, install dependencies using the following command:

npm install

* Note: The full list of dependencies can be found in the package.json file.

Step 2: Store your Symbl credentials in App

This application offers two options for authorizing your Symbl account, in the application, or via the included token server. The local token server is managed by server.js.

The default behavior is for your Symbl account to be authorized in-app. A dialog box will be shown automatically if you're opening the app for the first time. In the config.js file, you will find that enableInAppCredentials is set to true. For this option, you are not required to update the .env file with Symbl credentials.


If you are planning to use the included token server to generate your Symbl token, you may disable the in-app App ID/App Secret configuration. You can disable it by setting enableInAppCredentials to false in config.js. In this case, store your Symbl credentials in the .env file as follows:

SYMBL_APP_ID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

SYMBL_APP_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Step 3: Store your Twilio credentials in the .env file:

Your Twilio account will be authorized via the token server, which is managed by server.js. In your Twilio console click on ‘Settings’ and take note of your Account SID. Navigate to Settings/API Keys to generate a new Key SID and Secret. Store it in the .env file as follows:

TWILIO_ACCOUNT_SID=ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

TWILIO_API_KEY_SID=SKxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

TWILIO_API_KEY_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

This completes the first time set-up of the application.

Step 4: Run the app locally

You can now run the app using:

npm start

This will start the local token server and run the app in development mode.

Open http://localhost:3000 to see the application in the browser. The page will reload if you make changes to the source code in src. You will also see any linting errors in the console.

When you run the app, you will be prompted to enter your name and a room name. Once you join a room, it should look like this:

You can change the audio/video device settings as follows:

(Optional) Step 5: Run only the local token server

By default, when you run the npm start script, the token server is also started. However, if you want to run only the token server locally, you can do so with:

npm run server

The token server runs on port 8081 and exposes two GET endpoints: one to generate an access token for Symbl, and one for generating an access token for Twilio.

The Symbl token endpoint expects a GET request at the /symbl-token route with no parameters. You can test it with the following curl command:

 
curl 'localhost:8081/symbl-token'

The response will be a JSON object with accessToken and expiresIn values containing the Symbl access token and its expiry.
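For orientation, a minimal Express-style sketch of such a /symbl-token endpoint could look like the following (this is an illustration of the idea, not the exact implementation in server.js):

// Minimal Express-style sketch of a /symbl-token endpoint.
const express = require('express');
const axios = require('axios');

const app = express();

app.get('/symbl-token', async (req, res) => {
  try {
    const response = await axios.post('https://api.symbl.ai/oauth2/token:generate', {
      type: 'application',
      appId: process.env.SYMBL_APP_ID,
      appSecret: process.env.SYMBL_APP_SECRET,
    });
    const { accessToken, expiresIn } = response.data;
    res.json({ accessToken, expiresIn });
  } catch (err) {
    res.status(500).json({ error: 'Could not generate Symbl token' });
  }
});

app.listen(8081, () => console.log('Token server listening on port 8081'));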

The Twilio token endpoint expects a GET request at the /twilio-token route with the following query parameters:

identity: string,  // the user’s identity

roomName: string   // the room name

The response will be a token that can be used to connect to a room. Test it out with this sample curl command:

 
curl 'localhost:8081/twilio-token?identity=TestName&roomName=TestRoom'

Adding Multiple Participants in a Room

If you want to see how the application behaves with multiple participants, you can simply open localhost:3000 in multiple tabs in your browser and connect to the same room name using different user names.

Each participant must have a unique Twilio token. If the app is deployed using the same set of Twilio credentials for all participants, then each participant needs a unique identity string. Participants may also have their own local installations of the app, but they require the same Twilio Account SID (the Twilio API Key and Secret can be different).

API Reference​

Find comprehensive information about our REST APIs in the API Reference section.

Next Steps

To learn more about different Integrations offered at Symbl.ai, go to our Integrations Directory.

Community

This guide is actively developed, and we love to hear from you! If you liked our integration guide, please star our repo!

Please feel free to create an issue or open a pull request with your questions, comments, suggestions, and feedback, or reach out to us at devrelations@symbl.ai, through our Community Slack or our forum.

The post How-to Build a Twilio Video React App with Closed Captioning and Transcription appeared first on Symbl.ai.

]]>
The Rebirth of Business Intelligence https://symbl.ai/developers/blog/the-rebirth-of-business-intelligence/ Mon, 09 Mar 2020 07:51:56 +0000 https://symbl.ai/?p=6757 Business Intelligence (B.I.) is the infrastructure that a business uses to collect, organize and manage all its data. Everything from simple spreadsheets to the more thorough dashboards, all fall under the umbrella of B.I. While B.I. has been a fundamental part of operations and business strategy decisions, it is traditionally used to present data in […]

The post The Rebirth of Business Intelligence appeared first on Symbl.ai.

]]>
Business Intelligence (B.I.) is the infrastructure that a business uses to collect, organize and manage all its data. Everything from simple spreadsheets to the more thorough dashboards, all fall under the umbrella of B.I. While B.I. has been a fundamental part of operations and business strategy decisions, it is traditionally used to present data in a more readable way. As Michael F. Gorman, professor of operations management and decision science at the University of Dayton in Ohio, said in an article published by CIO Magazine, “[Business Intelligence] doesn’t tell you what to do; it tells you what was and what is.”

As this space has evolved with the adoption of Artificial Intelligence(AI), we’ve identified some of the ways in which AI enhances the simple business intelligence tools into something more powerful.

Where is the B.I. train headed?

Among several important developments thanks to the adoption of machine intelligence, one important aspect is the expanding sources of data. While businesses take into consideration most of the new technologies like IoT, click tracking, and Robotic process automation (RPA) systems that provide waterfalls of useful intel, a lot of effort is spent to discover additional data sources.

With the increase in the number of data sources, the next stop on the data train is at the junction of big data technologies. There is a need for advanced AI to analyze large amounts of data and there are efforts being taken to address it. Apache Hadoop and Spark are some of the most popular open-source frameworks to store and process big data in a distributed environment.

One of the most commonly adopted new business intelligence approaches, due to its significant ROI potential, is predictive business analysis. This is where large amounts of historical data are analyzed to predict future outcomes – one of the most common use cases is the implementation of Next Best Action in call centers. For example, a call center agent might use the historical data from past appointments to reach out to the customer to book their next appointment.

As we continue to explore the avenues of real-time B.I., conversation analysis systems will become crucial so that these predictions based on historical data can be complemented with real-time customer conversation experiences.

Conversational Intelligence – the third eye of B.I.

Sophisticated conversational analysis platforms can contribute immensely to the usefulness of the data sources. Speech to Text platforms have been in existence for a while and are in fact starting to become commoditized, but extracting the nuggets of information and insights can change how the real-time B.I tools deliver value to organizations. This capability extends B.I beyond “what is” and “what was” and is more indicative of “what can be done”. This is where B.I becomes action-oriented.

Prophecies for the unleashed

The influx of data seen by businesses encourages the use of proactive, real-time systems for reporting and analysis that can help with alerts and updates.
Getting the meta-conversation data source: Conversations are the last mile of data being lost today and are heavily underutilized. The meta-level information on customer conversation statistics can give horsepower to B.I. systems for real-time, actionable intelligence – for example, understanding customer sentiment can explain the outcome of an exchange. Imagine having this flow into the customer support organization to influence real-time call center conversations, resulting in service recovery. For supervised learning approaches, it is important to highlight that the quality of an AI model depends on the quality of the data, which is why all available data should be utilized. Sources such as meetings, client interactions, website visitors, chatbots, feedback and review forums, and demand trends are a few gold mines of data.

Consider the infamous 80-20 rule: It is an axiom of business management that “80% of sales come from 20% of clients”. With so much data, it is important to identify what is most important. Whether this identification is done through AI or by domain experts within the company, it is definitely something to keep in mind.

Beyond visual representation and the role of Natural Language Generation (NLG) in B.I.: New developing branches of AI like NLG have a major impact on the usefulness and accessibility of B.I. Jon Arnold, Principal of J Arnold and Associates, says in his article for Enterprise Management 360: “A key reason why BI is a strong use case is that these platforms provide visual representations of the data, but this isn’t always helpful for workers. Not all forms of data can be easily visualized, and visual outputs aren’t always enough. Sometimes a written analysis is what’s needed, and other times voice is the format that works best.”

Stay tuned for a blog that goes beyond traditional B.I and how cognitive RPA and B.I are changing the way enterprises work. If you are interested in learning how Symbl helps B.I. teams collect conversational data by setting up the metadata and determining the right design patterns, contact us for a demo.

The post The Rebirth of Business Intelligence appeared first on Symbl.ai.

]]>