Smart Conversations callback information
Note:
Currently, this functionality is only available to select customers for closed beta testing.
When using the Smart Conversations functionality, Machine Learning and Artificial Intelligence analyses are delivered through specific callbacks on the Conversation API.
Smart Conversations trigger
In order to take advantage of Smart Conversations functionality, you must configure your solution to subscribe to the SMART_CONVERSATION webhook. This trigger allows you to subscribe to notifications that provide machine learning analyses of inbound messages from end-users on the underlying channels.
Note:
You can configure, at most, five webhook triggers per Conversation API app. For more information on other Conversation API webhook triggers and callbacks, click here.
In addition to including message identification information, these notifications can deliver the following services and analyses:
Feature | Description |
---|---|
Sentiment Analysis | Provides an assessment of the likelihood that the emotional tone of a message is positive, negative, or neutral. For information on how this analysis is represented in a callback, see the description of the ml_sentiment_result array. |
Natural Language Understanding (NLU) | Provides an assessment of the likelihood that the message corresponds with a specific set of intents. For example, the likelihood that a message is a greeting, a request for information, or an expression of satisfaction or dissatisfaction. For information on how this analysis is represented in a callback, see the description of the ml_nlu_result array. |
Image Comprehension Engine | Provides an analysis of images included in the received message. This includes the identification of probable document types in the image, optical character extraction, and the assignment of values to probable fields identified on the image. For information on how this analysis is represented in a callback, see the description of the ml_image_recognition_result array. |
This information allows you to further customize your solution with automated responses. For example, you can create a chatbot to respond to customers differently based on the intent of the customer message. Additionally, you could program the bot to connect a customer with a human operator in the event that the sentiment of a received message crosses a pre-defined negative threshold.
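As a sketch of the escalation pattern described above, the helper below checks whether any analyzed message is likely negative. The function name and the 0.75 threshold are illustrative choices, not part of the API:

```python
def should_escalate(notification: dict, threshold: float = 0.75) -> bool:
    """Return True when any analyzed message is likely negative.

    `notification` is the smart_conversation_notification object from a
    Smart Conversations callback; `threshold` is an illustrative cut-off.
    """
    analyses = notification.get("analysis_results", {})
    for result in analyses.get("ml_sentiment_result", []):
        for candidate in result.get("results", []):
            if (candidate["sentiment"] == "negative"
                    and candidate["score"] >= threshold):
                return True
    return False
```

A bot could call this on every callback and hand the conversation to a human operator whenever it returns True.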
Smart Conversations callbacks
Each callback dispatched by the Conversation API has a JSON payload with the following top-level properties:
Field | Type | Description |
---|---|---|
project_id | string | The project ID of the app which has subscribed for the callback. |
app_id | string | The ID of the subscribed app. |
accepted_time | ISO 8601 timestamp | Timestamp marking when the channel callback was accepted/received by the Conversation API. |
event_time | ISO 8601 timestamp | Timestamp of the event as provided by the underlying channels. |
message_metadata | string | Metadata associated with the conversation. |
The Smart Conversations callback delivers the machine learning analyses of received messages. The details are given in a top-level smart_conversation_notification field, which is a JSON object with the following properties:
Field | Type | Description |
---|---|---|
contact_id | string | The unique ID of the contact that sent the message. |
channel_identity | string | The channel-specific identifier for the contact. |
channel | string | The channel on which the message was sent. |
message_id | string | The unique ID of the corresponding message. |
conversation_id | string | The unique ID of the corresponding conversation. |
analysis_results | object | The analysis provided by the Smart Conversations machine learning engine(s). The contents of the object are determined by the functionalities that are enabled for your solution. |
Each analysis_results object contains the results of the analyses you've enabled for your solution. For example, if you have enabled sentiment analysis and NLU analysis, you may get a callback similar to the one below:
{
"app_id": "01FW3DP26MEN4JKSME44JDXWC4",
"accepted_time": "2022-07-15T14:31:52.458350165Z",
"event_time": "2022-07-15T14:31:52Z",
"project_id": "0f93046c-91e1-426f-89b7-d03deb8ff872",
"smart_conversation_notification": {
"contact_id": "01FX7MQMZ0HVK5GPK4R0RBS3VT",
"channel_identity": "alphanumeric_identity",
"channel": "TELEGRAM",
"message_id": "01G814BT8NKT7VYQ7FA58MWJ10",
"conversation_id": "01FX7MQNJNYQ3685MFR7KB7HF7",
"analysis_results": {
"ml_sentiment_result": [
{
"message": "Run sentiment & NLU analysis",
"sentiment": "neutral",
"score": 0.97966236,
"results": [
{
"sentiment": "negative",
"score": 0.0039568725
},
{
"sentiment": "neutral",
"score": 0.97966236
},
{
"sentiment": "positive",
"score": 0.016380679
}
]
}
],
"ml_nlu_result": [
{
"message": "Run sentiment & NLU analysis",
"intent": "general.yes_or_agreed",
"score": 0.6248218,
"results": [
{
"intent": "general.yes_or_agreed",
"score": 0.6248218
},
{
"intent": "chitchat.bye",
"score": 0.2360245
},
{
"intent": "chitchat.how_are_you",
"score": 0.06233201
},
{
"intent": "chitchat.greeting",
"score": 0.03595746
},
{
"intent": "chitchat.thank_you",
"score": 0.028020523
},
{
"intent": "general.i_dont_know",
"score": 0.012405818
},
{
"intent": "general.no",
"score": 0.00026780643
},
{
"intent": "chitchat.who_are_you",
"score": 0.00017008775
}
]
}
]
}
},
"message_metadata": ""
}
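A consumer of the callback above might collect the top-level verdicts like this. This is a minimal sketch based on the example payload; the helper name is illustrative, and each lookup is guarded because only the result arrays enabled for your solution are present:

```python
def summarize_analysis(callback: dict) -> dict:
    """Pull the most probable sentiment and intent from a callback payload."""
    analyses = (callback.get("smart_conversation_notification", {})
                        .get("analysis_results", {}))
    summary = {}
    sentiment = analyses.get("ml_sentiment_result")
    if sentiment:
        # The top-level sentiment/score pair is the most probable one.
        summary["sentiment"] = sentiment[0]["sentiment"]
        summary["sentiment_score"] = sentiment[0]["score"]
    nlu = analyses.get("ml_nlu_result")
    if nlu:
        summary["intent"] = nlu[0]["intent"]
        summary["intent_score"] = nlu[0]["score"]
    return summary
```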
The analysis_results field is represented as a JSON object with the following properties:
Field | Type | Description |
---|---|---|
ml_sentiment_result | array | An array that contains the analyses of the sentiments of the corresponding messages. |
ml_nlu_result | array | An array that contains the analyses of the intentions of, and entities within, the corresponding messages. |
ml_image_recognition_result | array | An array that contains the image recognition analyses of the images identified in the corresponding messages. |
The ml_sentiment_result, ml_nlu_result, and ml_image_recognition_result arrays are described below.
The ml_sentiment_result array
The ml_sentiment_result array may be included in your Smart Conversations callback. An example of a Smart Conversations callback payload that includes the ml_sentiment_result array is below:
{
"app_id": "01FW3DP26MEN4JKSME44JDXWC4",
"accepted_time": "2022-07-15T14:27:16.528875627Z",
"event_time": "2022-07-15T14:27:15Z",
"project_id": "0f93046c-91e1-426f-89b7-d03deb8ff872",
"smart_conversation_notification": {
"contact_id": "01FX7MQMZ0HVK5GPK4R0RBS3VT",
"channel_identity": "alphanumeric_identity",
"channel": "TELEGRAM",
"message_id": "01G8143CS9ZJ62H1487GZB7Q2C",
"conversation_id": "01FX7MQNJNYQ3685MFR7KB7HF7",
"analysis_results": {
"ml_sentiment_result": [
{
"message": "run sentiment analysis",
"sentiment": "neutral",
"score": 0.9774604,
"results": [
{
"sentiment": "negative",
"score": 0.0030293926
},
{
"sentiment": "neutral",
"score": 0.9774604
},
{
"sentiment": "positive",
"score": 0.019510288
}
]
}
]
}
},
"message_metadata": ""
}
The ml_sentiment_result field is an array of JSON objects with the following structure:
Field | Type | Description |
---|---|---|
message | string | The message text that was analyzed. |
sentiment | string | The most probable sentiment of the analyzed text. One of positive, negative, or neutral. |
score | float | The likelihood that the assigned sentiment represents the emotional context of the analyzed text. 1 is the maximum value, representing the highest likelihood that the message text matches the sentiment, and 0 is the minimum value, representing the lowest likelihood that the message text matches the sentiment. |
results | array | An array of JSON objects made up of sentiment and score pairs, where the score represents the likelihood that the message communicates the corresponding sentiment. |
Each JSON object in the results array is made up of sentiment and score fields, which are described below:
Field | Type | Description |
---|---|---|
sentiment | string | A potential sentiment of the analyzed text. One of positive, negative, or neutral. |
score | float | The likelihood that the corresponding sentiment represents the emotional context of the analyzed text. 1 is the maximum value, representing the highest likelihood that the message text matches the sentiment, and 0 is the minimum value, representing the lowest likelihood that the message text matches the sentiment. |
The ml_nlu_result array
The ml_nlu_result array may be included with your Smart Conversations callback. An example of a Smart Conversations callback payload that includes the ml_nlu_result array is below:
{
"app_id": "01FW3DP26MEN4JKSME44JDXWC4",
"accepted_time": "2022-07-15T14:29:22.935294279Z",
"event_time": "2022-07-15T14:29:22Z",
"project_id": "0f93046c-91e1-426f-89b7-d03deb8ff872",
"smart_conversation_notification": {
"contact_id": "01FX7MQMZ0HVK5GPK4R0RBS3VT",
"channel_identity": "alphanumeric_identity",
"channel": "TELEGRAM",
"message_id": "01G814786076SGDNHSMB67M3XN",
"conversation_id": "01FX7MQNJNYQ3685MFR7KB7HF7",
"analysis_results": {
"ml_nlu_result": [
{
"message": "run nlu analysis",
"intent": "chitchat.greeting",
"score": 0.5713836,
"results": [
{
"intent": "chitchat.greeting",
"score": 0.5713836
},
{
"intent": "general.yes_or_agreed",
"score": 0.19936033
},
{
"intent": "chitchat.bye",
"score": 0.17034538
},
{
"intent": "chitchat.how_are_you",
"score": 0.029416502
},
{
"intent": "chitchat.thank_you",
"score": 0.027005624
},
{
"intent": "general.i_dont_know",
"score": 0.0020965587
},
{
"intent": "chitchat.who_are_you",
"score": 0.00020547185
},
{
"intent": "general.no",
"score": 0.00018652831
}
]
}
]
}
},
"message_metadata": ""
}
The ml_nlu_result field is an array of JSON objects with the following structure:
Field | Type | Description |
---|---|---|
message | string | The message text that was analyzed. |
intent | string | The most probable intent of the analyzed text. For example, chitchat.greeting, chitchat.bye, chitchat.compliment, chitchat.how_are_you, or general.yes_or_agreed. |
score | float | The likelihood that the assigned intent represents the purpose of the analyzed text. 1 is the maximum value, representing the highest likelihood that the message text matches the intent, and 0 is the minimum value, representing the lowest likelihood that the message text matches the intent. |
results | array | An array of JSON objects made up of intent and score pairs, where the score represents the likelihood that the message has the corresponding intent. |
Each JSON object in the results array is made up of intent and score fields, which are described below:
Field | Type | Description |
---|---|---|
intent | string | A potential intent of the analyzed text. For example, chitchat.greeting, chitchat.bye, chitchat.compliment, chitchat.how_are_you, or general.yes_or_agreed. |
score | float | The likelihood that the corresponding intent represents the purpose of the analyzed text. 1 is the maximum value, representing the highest likelihood that the message text matches the intent, and 0 is the minimum value, representing the lowest likelihood that the message text matches the intent. |
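For example, a bot could route replies on the top intent and fall back when the model is unsure. The reply strings, the fallback behavior, and the 0.5 cut-off below are all illustrative choices, not part of the API:

```python
def route_intent(nlu_result: dict, min_score: float = 0.5) -> str:
    """Map one ml_nlu_result entry to a reply, falling back when unsure."""
    replies = {
        "chitchat.greeting": "Hello! How can I help?",
        "chitchat.bye": "Goodbye!",
        "general.yes_or_agreed": "Great, let's continue.",
    }
    # Low confidence in the top intent: ask the user to rephrase.
    if nlu_result["score"] < min_score:
        return "Sorry, I didn't catch that. Could you rephrase?"
    # Unmapped intents escalate to a human operator.
    return replies.get(nlu_result["intent"],
                       "Let me connect you with a human operator.")
```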
The ml_image_recognition_result array
The ml_image_recognition_result array may be included with your Smart Conversations callback. An example of a Smart Conversations callback payload that includes the ml_image_recognition_result array is below:
{
"app_id": "01FW3DP26MEN4JKSME44JDXWC4",
"accepted_time": "2022-07-15T14:30:18.741258673Z",
"event_time": "2022-07-15T14:30:17Z",
"project_id": "0f93046c-91e1-426f-89b7-d03deb8ff872",
"smart_conversation_notification": {
"contact_id": "01FX7MQMZ0HVK5GPK4R0RBS3VT",
"channel_identity": "alphanumeric_identity",
"channel": "TELEGRAM",
"message_id": "01G8148YQRMAWCABMFWR9EAQFR",
"conversation_id": "01FX7MQNJNYQ3685MFR7KB7HF7",
"analysis_results": {
"ml_image_recognition_result": [
{
"url": "image_url_example",
"document_image_classification": {
"doc_type": "test_document",
"confidence": 1
},
"optical_character_recognition": {
"result": [
{
"data": [
"Characters extracted from one section of an image."
]
},
{
"data": [
"Characters",
"extracted from",
"another section of",
"the image."
]
}
]
},
"document_field_classification": {
"result": {
"date": {
"data": [
"DD/MM/YYYY"
]
},
"zipcode": {
"data": [
"112 18",
"30301"
]
}
}
}
}
]
}
},
"message_metadata": ""
}
Each object in the ml_image_recognition_result array represents an image identified in the processed message. Each JSON object has the following structure:
Field | Type | Description |
---|---|---|
url | string | The URL of the image that was processed. |
document_image_classification | object | An object that identifies a document type within the image, along with a confidence level for that document type. |
optical_character_recognition | object | An object containing a result array that reports the machine learning engine's character extraction results. |
document_field_classification | object | An object containing a result object that reports on all identified fields, as well as the values assigned to those fields. |
The document_image_classification object is described below:
Field | Type | Description |
---|---|---|
doc_type | string | The document type that the analyzed image most likely contains. |
confidence | float | The likelihood that the analyzed image contains the assigned document type. 1 is the maximum value, representing the highest likelihood that the analyzed image contains the assigned document type, and 0 is the minimum value, representing the lowest likelihood that the analyzed image contains the assigned document type. |
The optical_character_recognition object contains a result array. Each object in the result array represents a portion of an image that underwent optical character recognition. The objects in this result array contain data arrays that are described below:
Field | Type | Description |
---|---|---|
data | array | The data array contains the string(s) identified in one section of an analyzed image. |
The document_field_classification object contains a result object. Each object under the result object represents a field that was identified and populated on the analyzed image. These objects contain data arrays that are described below:
Field | Type | Description |
---|---|---|
data | array | The data array contains the string(s) assigned to the corresponding document field. |
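Putting the three sub-objects together, a consumer might flatten one ml_image_recognition_result entry into a simpler shape. This is a sketch based on the example payload above; the helper name and the output shape are illustrative:

```python
def flatten_image_result(image_result: dict) -> dict:
    """Flatten one ml_image_recognition_result entry for easier use."""
    doc = image_result.get("document_image_classification", {})
    ocr_sections = image_result.get(
        "optical_character_recognition", {}).get("result", [])
    fields = image_result.get(
        "document_field_classification", {}).get("result", {})
    return {
        "doc_type": doc.get("doc_type"),
        "confidence": doc.get("confidence"),
        # Join every extracted string, section by section.
        "text": " ".join(line for section in ocr_sections
                         for line in section.get("data", [])),
        # One list of values per identified document field.
        "fields": {name: value.get("data", [])
                   for name, value in fields.items()},
    }
```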