Build your own Salesforce Conversational AI with Twilio Flex
https://symbl.ai/developers/blog/build-your-own-salesforce-conversational-ai-with-twilio-flex/
Symbl Developer Relations Team | Thu, 23 Apr 2020

Did you know you can capture real-time action items from conversations with customers, automatically push those items to a customer database, and never have to worry about missing another important task or feverishly scribbling customer notes?

We will show you how to connect the Symbl voice API to Twilio Media Streams to access real-time raw audio and easily generate actionable insights and tasks from these audio streams. Take this workflow automation one step further—connect the Salesforce Opportunities dashboard to Twilio Flex to automatically send these valuable insights and tasks to the Salesforce CRM platform.

This blog post will guide you step-by-step through the workflow.

Requirements

Before we can get started, you'll need to configure Twilio Media Streams and a Symbl WebSocket server. If you haven't already done so, refer to this blog post first to set up your environment.

Getting Started

At this point, you should be able to stream audio from Twilio Flex through your WebSocket server and generate insights.

To take those generated insights and push them to your Salesforce dashboard in real time, start by updating the `index.js` file in your WebSocket server.

const WebSocket = require("ws");
const express = require("express");
const app = express();
const server = require("http").createServer(app);
const ws = new WebSocket.Server({ server });
const WebSocketClient = require("websocket").client;
const wsc = new WebSocketClient();
const request = require('request');
let connection = undefined;
let client_connection = undefined;
let opportunity_id = undefined;

// Handle WebSocket client failures
wsc.on('connectFailed', (e) => {
    console.error('Connection Failed.', e);
});

// Handle WebSocket connection
ws.on("connection", (conn) => {
    connection = conn;
    connection.on('close', () => {
        console.log('WebSocket closed.');
    });
    connection.on('error', (err) => {
        console.log('WebSocket error.', err);
    });
    connection.on('message', (data) => {
        const msg = JSON.parse(data);
        switch (msg.event) {
            case 'connected':
                console.log(`A new call has connected.`);
                break;
            case "start":
                console.log(`Starting Media Stream ${msg.streamSid}`);
                request.post({
                    'auth': {
                        'bearer': '' // Your Salesforce authentication token
                    },
                    method: 'POST',
                    url: 'https:///services/data/v39.0/sobjects/Opportunity', // Prefix with your Salesforce instance URL
                    body: { // Configure the payload as per your needs
                        "Name": "Prospective Meeting with Magpie",
                        "CloseDate": "2020-04-12",
                        "StageName": "Prospecting"
                    },
                    json: true
                }, (err, response, body) => {
                    opportunity_id = body.id;
                });
                break;
            // ... remaining cases (media, stop) unchanged
        }
    });
});

In the code sample above, we modify the connection start block with a POST request that creates a new opportunity in your Salesforce dashboard with the provided JSON payload. You can configure this payload as needed.
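If you'd rather not hard-code the payload, you can build it at call time. Below is a minimal sketch; `buildOpportunity` is a hypothetical helper (not part of the Salesforce API), and the field names simply follow the example above.

```javascript
// Hypothetical helper: builds the Opportunity payload dynamically instead of
// hard-coding the close date. Field names follow the example above.
function buildOpportunity(name, daysUntilClose) {
    const close = new Date(Date.now() + daysUntilClose * 24 * 60 * 60 * 1000);
    return {
        Name: name,
        CloseDate: close.toISOString().slice(0, 10), // YYYY-MM-DD
        StageName: "Prospecting"
    };
}

console.log(buildOpportunity("Prospective Meeting with Magpie", 30));
```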

  1. Once the request is successful, we save the opportunity ID in a variable so that we can use it to push action items next. (Note: To get your Salesforce authentication token, refer to this guide.)
  2. Next, when action items are detected, we capture those insights and add them to our Salesforce dashboard under the opportunity we created.

To do this, we will modify the `client_connection.on('message')` handler.

// Handle WebSocket client connection
wsc.on("connect", (conn) => {
    client_connection = conn;
    // ... existing close/error handlers unchanged
    client_connection.on('message', (data) => {
        if (data.type === 'utf8') {
            const { utf8Data } = data;
            data = JSON.parse(utf8Data);
            console.log(data, "typeIS: " + data.type);
            if (data.type === "action_insights") {
                request.post({
                    'auth': {
                        'bearer': '' // Your Salesforce authentication token
                    },
                    method: 'POST',
                    url: 'https:///services/data/v39.0/sobjects/Task', // Prefix with your Salesforce instance URL
                    body: {
                        "Subject": data.title,
                        "Status": "Not Started",
                        "WhatId": opportunity_id,
                        "Priority": "high",
                        "Description": data.description
                    },
                    json: true
                }, (err, response, body) => {
                    console.log('request', body);
                });
            }
        }
    });
});

And that’s it! With this integration, your Salesforce dashboard should now have opportunities dynamically created from your calls, and all action items generated will be logged as tasks within the opportunity. If we head over to Salesforce, we can see an opportunity was created: Prospective Meeting with Magpie.

If we dive into that opportunity, we can see that the action items that were generated by Symbl have been logged as tasks in the opportunity, automatically.

You can use other Salesforce APIs with Symbl to customize how your action items and topics are displayed on your opportunity dashboard. For example, you can add call logs like in the image above and show the Topics that were generated from your sales call directly in the Description field. Read about the different Salesforce APIs.
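As a sketch of what such a customization could look like, the snippet below formats Symbl topics into a Description value you could send in a PATCH request to the Opportunity record. `buildOpportunityUpdate` is a hypothetical helper, and the topic objects are assumed to carry a `text` field.

```javascript
// Hypothetical helper: formats Symbl topics into an update payload for the
// opportunity's Description field (assumes each topic object has a `text` field).
function buildOpportunityUpdate(topics) {
    return {
        Description: "Topics discussed: " + topics.map(t => t.text).join(", ")
    };
}

// Example: send this body in a PATCH request to
// /services/data/v39.0/sobjects/Opportunity/<opportunity_id>
console.log(buildOpportunityUpdate([{ text: "budget plan" }, { text: "marketing efforts" }]));
```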

Test out the integration

To test out the integration, navigate to the Twilio Flex tab and click on Launch Flex. On your Flex dashboard, locate your Twilio phone number and call that number from your mobile device. When you accept the call from Flex, the audio will be streamed through Symbl’s WebSocket API, and based on how you’ve configured your API calls for Salesforce, those insights will be logged in your dashboard. Open up your Salesforce dashboard and you’ll see the opportunity being created and insights logged in the opportunity in real time. Check out our API docs if you want to customize this integration further using the APIs that Symbl provides.

Wrapping up

Congratulations! You can now harness the power of Symbl to empower your sales team to focus on having a great conversation experience with customers and be free of distracting activities while on the call. Sign up to start building! Need additional help? You can refer to our API Docs for more information and view our sample projects on GitHub.

The post Build your own Salesforce Conversational AI with Twilio Flex appeared first on Symbl.ai.

How to Use Symbl’s Voice SDK to Generate Insights in Your Own Applications
https://symbl.ai/developers/blog/how-to-use-symbls-voice-sdk-to-generate-insights-in-your-own-applications/
Mon, 09 Mar 2020

Telephony services make the modern workplace, well, work. Enhance your existing telephony system capabilities by integrating Symbl’s Voice SDK.

How: Our SDK analyzes voice conversations on SIP or PSTN networks and generates actionable outcomes through contextual conversation intelligence. Products like call centre applications, audio conferencing, PBX systems, and other communication applications that support a telephony interface can use this SDK to provide real-time or post-conversation intelligence to their customers.

What to expect: You’ll receive a post-conversation summary link in the final response.

How to get the analyzed data: Refer to the Conversation API to get the JSON response of the analyzed conversation in the form of transcripts, action items, topics, follow-ups, questions, and more.

For this guide, we will be using a phone number (PSTN) to make the connection with Symbl. It can be easily interchanged with a SIP URI.

Requirements

Before we can get started, you’ll just need to make sure to have:

  • Node JS installed on your system. Get the latest version here.
  • A Symbl account. Sign up here.

Getting Started

First, create an index.js file where we will be writing all our code.

To work with the SDK, you must have a valid app id and app secret.

Note: If you don’t already have your app id or app secret, log in to the platform to get your credentials.

For any invalid appId and appSecret combination, the SDK will throw Unauthorized errors.

Initialize the Client SDK

1. Use the command below to install and add to your npm project’s package.json.

npm install --save @symblai/language-insights-client-sdk

2. Reference the SDK

import { sdk } from '@symblai/language-insights-client-sdk';

3. Finally, initialize the SDK

sdk.init({
    appId: 'yourAppId',
    appSecret: 'yourAppSecret'
})
.then(() => console.log('SDK Initialized.'))
.catch(err => console.error('Error in initialization.', err));

Connect to Endpoints

Now that we have successfully initialized the SDK, we can begin connecting to endpoints.

This SDK supports dialing through a simple phone number - PSTN endpoint.

What is PSTN?

The Public Switched Telephone Network (PSTN) is the network that carries your calls when you dial in from a landline or cell phone. It refers to the worldwide network of voice-carrying telephone infrastructure, including privately owned and government-owned infrastructure.

endpoint: {
    type: 'pstn',
    phoneNumber: '14083380682', // Phone number to dial in
    dtmf: '6155774313#' // Joining code for conferences
}

For this guide, we will be using PSTN to make the connection. Refer to our blog post [here](https://symbl.ai/blogs/ai) to see how to connect using a SIP URI instead.

The code snippet below dials in using PSTN and hangs up after 60 seconds.

const { sdk } = require('@symblai/language-insights-client-sdk');

sdk.init({
    appId: 'yourAppId',
    appSecret: 'yourAppSecret'
}).then(() => {
    sdk.startEndpoint({
        endpoint: { type: 'pstn', phoneNumber: '' } // Phone number to dial in
    }).then(connection => {
        console.log('Successfully connected.', connection.connectionId);
        // Scheduling stop endpoint call after 60 seconds for demonstration purposes
        setTimeout(() => {
            sdk.stopEndpoint({ connectionId: connection.connectionId }).then(() => {
                console.log('Stopped the connection');
            }).catch(err => console.error('Error while stopping the connection', err));
        }, 60000);
    }).catch(err => console.error('Error while starting the connection', err));
}).catch(err => console.error('Error in SDK initialization.', err));

The code snippet above initializes the SDK, uses sdk.startEndpoint to dial in over a PSTN connection, streams voice data for 60 seconds, and then uses sdk.stopEndpoint to end the connection.

Push Events

Events can be pushed to an ongoing connection to have them processed.

Every event must have a type to define its purpose at a more granular level, usually to indicate different activities associated with the event resource. For example, a speaker event can have the type started_speaking. An event may have additional fields specific to the event.

Currently, Symbl only supports the speaker event which is described below.

Speaker Event

The speaker event is associated with different individual attendees in the meeting or session. An example of a speaker event is shown below.

Speaker Event has the following types:

started_speaking

This event contains the details of the user who started speaking, along with a timestamp in ISO 8601 format of when they started speaking.

const speakerEvent = new SpeakerEvent({
    type: SpeakerEvent.types.startedSpeaking,
    timestamp: new Date().toISOString(),
    user: { userId: 'john@example.com', name: 'John' }
});

stopped_speaking

This event contains the details of the user who stopped speaking, along with a timestamp in ISO 8601 format of when they stopped speaking.

const speakerEvent = new SpeakerEvent({
    type: SpeakerEvent.types.stoppedSpeaking,
    timestamp: new Date().toISOString(),
    user: { userId: 'john@example.com', name: 'John' }
});

A startedSpeaking event is pushed on the ongoing connection. You can use the pushEventOnConnection() method from the SDK to push events.

Complete Example

const { sdk, SpeakerEvent } = require('@symblai/language-insights-client-sdk');

sdk.init({
    appId: 'yourAppId',
    appSecret: 'yourAppSecret',
    basePath: 'https://api.symbl.ai'
}).then(() => {
    console.log('SDK Initialized');
    sdk.startEndpoint({
        endpoint: {
            type: 'pstn',
            phoneNumber: '14087407256',
            dtmf: '6327668#'
        }
    }).then(connection => {
        const connectionId = connection.connectionId;
        console.log('Successfully connected.', connectionId);
        const speakerEvent = new SpeakerEvent({
            type: SpeakerEvent.types.startedSpeaking,
            user: { userId: 'john@example.com', name: 'John' }
        });
        setTimeout(() => {
            speakerEvent.timestamp = new Date().toISOString();
            sdk.pushEventOnConnection(connectionId, speakerEvent.toJSON(), (err) => {
                if (err) {
                    console.error('Error during push event.', err);
                } else {
                    console.log('Event pushed!');
                }
            });
        }, 2000);
        setTimeout(() => {
            speakerEvent.type = SpeakerEvent.types.stoppedSpeaking;
            speakerEvent.timestamp = new Date().toISOString();
            sdk.pushEventOnConnection(connectionId, speakerEvent.toJSON(), (err) => {
                if (err) {
                    console.error('Error during push event.', err);
                } else {
                    console.log('Event pushed!');
                }
            });
        }, 12000);
        // Scheduling stop endpoint call after 90 seconds
        setTimeout(() => {
            sdk.stopEndpoint({ connectionId: connection.connectionId }).then(() => {
                console.log('Stopped the connection');
            }).catch(err => console.error('Error while stopping the connection.', err));
        }, 90000);
    }).catch(err => console.error('Error while starting the connection', err));
}).catch(err => console.error('Error in SDK initialization.', err));

Above is a quick simulated speaker event example that:

1. Initializes the SDK
2. Initiates a connection using PSTN
3. Sends a speaker event of type `startedSpeaking` for user John
4. Sends a speaker event of type `stoppedSpeaking` for user John
5. Ends the connection with the endpoint

Strictly for illustration and understanding purposes, this example pushes events periodically using the setTimeout() method, but in real usage you should detect these events and push them as they occur.

Send Summary Email

An action sendSummaryEmail can be passed at the time of making the startEndpoint() call to send the summary email to the addresses passed in the parameters.emails array. The email will be sent as soon as all pending processing is finished after stopEndpoint() is executed. The code snippet below shows the use of actions to send a summary email on stop.

Optionally, you can send the title of the Meeting and the participants in the meeting which will also be present in the Summary Email.

To send the title of the meeting populate the data.session.name field with the meeting title.

const { sdk, SpeakerEvent } = require('@symblai/language-insights-client-sdk');

sdk.init({
    appId: 'yourAppId',
    appSecret: 'yourAppSecret',
    basePath: 'https://api.symbl.ai'
}).then(() => {
    console.log('SDK Initialized');
    sdk.startEndpoint({
        endpoint: {
            type: 'pstn',
            phoneNumber: '14087407256',
            dtmf: '6327668#'
        },
        actions: [{
            "invokeOn": "stop",
            "name": "sendSummaryEmail",
            "parameters": {
                "emails": [
                    "john@example.com",
                    "mary@example.com",
                    "jennifer@example.com"
                ]
            }
        }],
        data: {
            session: {
                name: 'My Meeting Name' // Title of the meeting; this will be reflected in the summary email
            }
            // ... users list shown below
        }
    });
});

To send the list of meeting attendees, populate the list of attendees in the user objects in `data.session.users` field as shown in the example. To indicate the Organizer or Host of the meeting set the `role` field in the corresponding user object.

Note: Setting the timestamp for speakerEvent is optional, but it's recommended to provide accurate timestamps in the events as they occur for more precision.

users: [
    { user: { name: "John", userId: "john@example.com", role: "organizer" } },
    { user: { name: "Mary", userId: "mary@example.com" } },
    { user: { name: "Jennifer", userId: "jennifer@example.com" } }
]

Output

This is an example of the summary page you can expect to receive at the end of your call.

Tuning your Summary Page

You can choose to tune your summary page with the help of query parameters to play with different configurations and see how the results look.

Query Parameters

You can configure the summary page by passing in the configuration through query parameters in the summary page URL that gets generated at the end of your meeting. See the end of the URL in this example:

`https://meetinginsights.symbl.ai/meeting/#/eyJ1...I0Nz?insights.minScore=0.95&topics.orderBy=position`

| Query Parameter | Default Value | Supported Values | Description |
|---|---|---|---|
| insights.minScore | 0.8 | 0.5 to 1.0 | Minimum score that the summary page should use to render the insights |
| insights.enableAssignee | false | [true, false] | Enable or disable rendering of the assignee and due date of the insight |
| insights.enableAddToCalendarSuggestion | true | [true, false] | Enable or disable the add-to-calendar suggestion when applicable on insights |
| insights.enableInsightTitle | true | [true, false] | Enable or disable the title of an insight. The title indicates the originating person of the insight and its assignee, if any |
| topics.enabled | true | [true, false] | Enable or disable the summary topics in the summary page |
| topics.orderBy | 'score' | ['score', 'position'] | Ordering of the topics. score: order topics by the topic importance score. position: order topics by the position in the transcript where they first surfaced |
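You can also assemble these query parameters programmatically. A small sketch using Node's built-in URLSearchParams; the meeting URL is the truncated example from above, left as-is.

```javascript
// Build the summary page URL with a custom configuration.
const base = "https://meetinginsights.symbl.ai/meeting/#/eyJ1...I0Nz";
const params = new URLSearchParams({
    "insights.minScore": "0.9",
    "topics.orderBy": "position"
});
const summaryUrl = `${base}?${params.toString()}`;
console.log(summaryUrl);
```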

Test Your Integration

Now that you've seen how the SDK works end to end, let's test the integration.

If you've dialed in with your phone number, try speaking the following sentences to see the generated output:

* "Hey, it was nice meeting you yesterday. Let's catch up again next week over coffee sometime in the evening. I would love to discuss the next steps in our strategic roadmap with you."

* "I will set up a meeting with Roger to discuss our budget plan for the next quarter and then plan out how much we can set aside for marketing efforts. I also need to sit down with Jess to talk about the status of the current project. I'll set up a meeting with her probably tomorrow before our standup."

If you've dialed into a meeting, try running any of the following videos with your meeting platform open and view the summary email that gets generated:

At the end, you should receive an email in your inbox (if you've configured your email address correctly) with a link that will take you to your meeting summary page. There you should be able to see the transcript as well as all the insights that were generated.

Wrapping Up

With this output, you can push the data to several downstream channels like RPA, business intelligence platforms, task management systems, and others using the [Conversation API](https://docs.symbl.ai/#conversation-api).

Congratulations! You now know how to use Symbl's Voice SDK to generate your own insights. To recap, in this guide we talked about:

  • installing and initializing the SDK
  • connecting to a phone number through PSTN
  • pushing speaker events
  • configuring the summary page with generated insights
  • tweaking the summary page with query parameters

Sign up to start building!

Need additional help? You can refer to our API Docs for more information and view our sample projects on GitHub.

The post How to Use Symbl’s Voice SDK to Generate Insights in Your Own Applications appeared first on Symbl.ai.

Integrating Symbl Insights with Twilio Media Streams
https://symbl.ai/developers/blog/integrating-symbl-insights-with-twilio-media-streams/
Mon, 09 Mar 2020

Capturing audio and deriving real-time insights is not as hard as you may think. Twilio Media Streams provide real-time raw audio and give developers the flexibility to integrate this audio in the voice stack of choice. Couple that with the power of Symbl, and you can surface actionable insights with customer interactions through the Symbl WebSocket API.

What can you expect upon successful installation?

A post-conversation email with the topics generated, a list of action items, and a link to view the full summary output. This blog post will guide you step-by-step through integrating the Symbl WebSocket API into Twilio Media Streams.

Requirements

Before we can get started, you’ll need

Setting up the Local Server

Twilio Media Streams use the WebSocket API to live stream the audio from the phone call to your application. Let’s get started by setting up a server that can handle WebSocket connections. Open your terminal, create a new project folder, and create an index.js file.

$ mkdir symbl-websocket
$ cd symbl-websocket
$ touch index.js

To handle HTTP requests we will use Node's built-in http module and Express. For WebSocket connections we will be using ws, a lightweight WebSocket client for Node. In the terminal, run this command to install ws, websocket, and Express:

$ npm install ws websocket express

To set up the server, open your index.js file and add the following code.

const WebSocket = require("ws");
const express = require("express");
const app = express();
const server = require("http").createServer(app);
const ws = new WebSocket.Server({
    server
});
const WebSocketClient = require("websocket").client;
const wsc = new WebSocketClient();

// Handle WebSocket connection
ws.on("connection", (conn) => {
    console.log("New Connection Initiated");
});

// Handle HTTP request
app.get("/", (req, res) => res.send("Hello World"));

server.listen(8080, () => console.log("Listening at Port 8080"));

Save and run index.js with

$ node index.js

Open your browser and navigate to http://localhost:8080. Your browser should show Hello World.

Setting up the Symbl WebSocket API

Let’s connect our Twilio number to our WebSocket server. First, we need to modify our server to handle the WebSocket messages that Twilio sends once our phone call starts streaming. There are four main message events we want to listen for:

  • Connected: When Twilio makes a successful WebSocket connection to the server
  • Start: When Twilio starts streaming media packets
  • Media: Encoded media packets (this is the raw audio)
  • Stop: Sent when streaming ends

Modify your index.js file to log messages when each of these events arrives at the server.

const WebSocket = require("ws");
const express = require("express");
const app = express();
const server = require("http").createServer(app);
const ws = new WebSocket.Server({ server });
const WebSocketClient = require("websocket").client;
const wsc = new WebSocketClient();
let connection;
let client_connection;

// Handle WebSocket server failures
wsc.on('connectFailed', (e) => {
    console.error('Connection Failed.', e);
});

// Handle WebSocket server connection
ws.on("connection", (conn) => {
    connection = conn;
    connection.on('close', () => {
        console.log('WebSocket closed.');
    });
    connection.on('error', (err) => {
        console.log('WebSocket error.', err);
    });
    connection.on("message", (data) => {
        const msg = JSON.parse(data);
        if (msg.type === 'utf8') {
            const { utf8Data } = msg;
        }
        switch (msg.event) {
            case "connected":
                console.log(`A new call has connected.`);
                break;
            case "start":
                console.log(`Starting Media Stream ${msg.streamSid}`);
                break;
            case "media":
                if (client_connection) {
                    let buff = Buffer.from(msg.media.payload, 'base64');
                    client_connection.send(buff);
                }
                break;
            case "stop":
                console.log(`Call Has Ended`);
                // Send stop request 
                client_connection.sendUTF(JSON.stringify({
                    "type": "stop_request"
                }));
                client_connection.close();
                break;
        }
    });
});

// Handle WebSocket client connection
wsc.on("connect", (conn) => {
    client_connection = conn;
    client_connection.on('close', () => {
        console.log('WebSocket closed.');
    });
    client_connection.on('error', (err) => {
        console.log('WebSocket error.', err);
    });
    client_connection.on('message', (data) => {
        if (data.type === 'utf8') {
            const { utf8Data } = data;
            data = JSON.parse(utf8Data);
            console.log(utf8Data);
        }
    });

    client_connection.send(JSON.stringify({
        "type": "start_request",
        "insightTypes": ["question", "action_item"],
        "config": {
            "confidenceThreshold": 0.5,
            "timezoneOffset": 480, // Your timezone offset from UTC in minutes
            "languageCode": "en-US",
            "speechRecognition": {
                "encoding": "MULAW",
                "sampleRateHertz": 8000 // Make sure the correct sample rate is
            },
            "meetingTitle": "My meeting"
        },
        "speaker": {
            "userId": "<your_email@example.com>",
            "name": ""
        }
    }));
});

wsc.connect('wss://api.symbl.ai/v1/realtime/insights/121', null, null, {
    'X-API-KEY': '<your_auth_token>'
});

// Handle HTTP Request
app.get("/", (req, res) => res.send("Hello World"));

console.log("Listening at Port 8080");
server.listen(8080);
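A note on the `timezoneOffset` field in the start_request above: instead of hard-coding 480, you can derive it at runtime. This sketch assumes Symbl uses the same sign convention as JavaScript's getTimezoneOffset() (UTC minus local time, in minutes; 480 corresponds to US Pacific Standard Time).

```javascript
// Derive the caller's offset from UTC in minutes at runtime.
const timezoneOffset = new Date().getTimezoneOffset();
console.log(timezoneOffset);
```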

Now we need to set up a Twilio number to start streaming audio to our server. We can control what happens when we call our Twilio number using TwiML; its Stream instruction forks the call audio to a WebSocket URL. Rather than writing an HTTP route that returns TwiML by hand, we will generate it through Twilio Studio in the next section, pointing the stream at our server's public WebSocket URL.

For Twilio to connect to your local server, we need to expose the port to the internet. We'll use ngrok to create a tunnel to our localhost port. In a new terminal window, run the following command:

$ ngrok http 8080

You should get an output with a forwarding address like the one below. Copy the HTTPS URL to your clipboard.

Forwarding https://xxxxxxxx.ngrok.io -> http://localhost:8080
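The stream configuration in Twilio Studio expects a WebSocket URL, so you'll use this hostname with the wss scheme. A tiny sketch of the conversion, using the placeholder hostname from ngrok's output:

```javascript
// Convert ngrok's HTTPS forwarding URL into the wss:// URL used for streaming.
const forwarding = "https://xxxxxxxx.ngrok.io";
const streamUrl = forwarding.replace(/^https/, "wss");
console.log(streamUrl);
```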

Open a new terminal window and run your index.js file.

$ node index.js

Setting up your Twilio Studio

Now that our WebSocket server is ready, the remaining configuration needed to join Symbl to your customer and agent conversations will be done through your Twilio Studio dashboard. Navigate to Studio and create a new flow. Twilio offers three different triggers that you can use to build out this integration; depending on your use case, you can begin the flow from the message, call, or REST API trigger. In our example, we want Symbl to join a voice conversation when a customer calls our agent, so we will use the Incoming Call trigger. First, add the Fork Stream widget and connect it to the Incoming Call trigger. In the widget configuration, the URL should match your ngrok domain. NOTE: Use the WebSocket protocol wss instead of https for the ngrok URL.

(Image: Fork Stream widget connected to the Incoming Call trigger)

Next, connect this widget to the `Flex Agent` widget, which will connect the call to the Flex agent:

(Image: Flex Agent widget)

Finally, we need to end the stream once the call is complete. To do so, add another `Fork Stream` widget with its `stream action` set to `Stop`.

(Image: Fork Stream widget with stream action set to Stop)

Test the integration

To test the integration, navigate to the Flex tab and click Launch Flex. On your Flex dashboard, locate your Twilio phone number and call that number from your mobile device. When the agent accepts the call, the audio will stream through the WebSocket API, and at the end of the call you will get an email with the transcript and insights generated from the conversation.

Wrapping up

What else can you do with the data? You can fetch the analyzed data out of the conversation and push it to downstream channels such as Trello, Slack, or Jira. Use the Conversation API to retrieve the conversation:

GET https://api.symbl.ai/v1/conversations/{conversationId}

This is a sample API call:

  const request = require('request');
  const your_auth_token = '';
  request.get({
      url: 'https://api.symbl.ai/v1/conversations/{conversationId}',
      headers: {
          'x-api-key': your_auth_token
      },
      json: true
  }, (err, response, body) => {
      console.log(body);
  });

The above request returns a response structured like this:

  {
      "id": "5179649407582208",
      "type": "meeting",
      "name": "Project Meeting #2",
      "startTime": "2020-02-12T11:32:08.000Z",
      "endTime": "2020-02-12T11:37:31.134Z",
      "members": [{
          "name": "John",
          "email": "John@example.com"
      }, {
          "name": "Mary",
          "email": "Mary@example.com"
      }, {
          "name": "Roger",
          "email": "Roger@example.com"
      }]
  }
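To give one concrete example of working with this response, the sketch below pulls the attendee names out of the members array; the `conversation` object simply mirrors the sample JSON above.

```javascript
// Sample response shape from GET /v1/conversations/{conversationId}.
const conversation = {
    id: "5179649407582208",
    type: "meeting",
    name: "Project Meeting #2",
    members: [
        { name: "John", email: "John@example.com" },
        { name: "Mary", email: "Mary@example.com" },
        { name: "Roger", email: "Roger@example.com" }
    ]
};

// Join the attendee names for use in a downstream message (e.g. Slack).
const attendees = conversation.members.map(m => m.name).join(", ");
console.log(attendees); // → John, Mary, Roger
```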

Congratulations! You can now harness the power of Symbl and Media Streams to extend your application capabilities. Need additional help? You can refer to our API Docs for more information and view our sample projects on GitHub.

The post Integrating Symbl Insights with Twilio Media Streams appeared first on Symbl.ai.
