Integrating Custom Channels with the Chat API
Our platform offers a robust Chat API that lets you integrate your AI-powered agents into custom channels, enabling communication across a variety of platforms: proprietary interfaces, third-party tools such as Salesforce or WhatsApp, custom web applications, and voice-based systems.
For instance, if you want your AI assistant to work within Salesforce for handling customer support or sales inquiries, or within WhatsApp for customer engagement, you can integrate our API to make this happen.
Our API lets you interact with your assistant using HTTPS requests, making it compatible with any programming language or development tool. Here’s an overview of how to set up and use the API.
Requirements
To use the Chat API, you’ll need two things:
PROJECT TOKEN: This token is retrieved from your workspace under the "Install Your Widget" section, as explained here: Your Project Token.
PERSONAL ACCESS TOKEN: This token is generated by our Customer Success Team and is linked to your account. It allows you to use the API for a specific project.
Making a Request
To send a request to the assistant, use a POST request to the following endpoint:
Endpoint: https://platform.indigo.ai/chat/:project_token/send
The request must be authenticated using your PERSONAL ACCESS TOKEN in the authorization header. Here's an example of how to make the request using curl:
PROJECT_TOKEN="abcd-1234"
PERSONAL_ACCESS_TOKEN="12345678"
REQUEST_BODY='{...}'
curl -X POST "https://platform.indigo.ai/chat/$PROJECT_TOKEN/send" \
-H "content-type: application/json" \
-H "authorization: Bearer pat-$PERSONAL_ACCESS_TOKEN" \
-d "$REQUEST_BODY"
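The same request can be assembled in any language. Below is a minimal Python sketch; `build_send_request` is an illustrative helper (not part of the API), and the token values are the placeholders from the curl example.

```python
# Build the URL, headers, and body for a /send call, mirroring the curl
# example above. Token values are placeholders.

def build_send_request(project_token: str, personal_access_token: str, body: dict):
    """Return the (url, headers, body) triple for a /send call."""
    url = f"https://platform.indigo.ai/chat/{project_token}/send"
    headers = {
        "content-type": "application/json",
        # the personal access token is sent with a "pat-" prefix
        "authorization": f"Bearer pat-{personal_access_token}",
    }
    return url, headers, body
```

The returned triple can then be passed to any HTTP client, for example `requests.post(url, headers=headers, json=body)`.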
Request Body Structure
The request body is a JSON object that must contain the following required fields:
sender: The unique identifier for the conversation (typically the user ID). This ensures that all messages related to the same conversation are tracked and enables retrieval of the message history.
source: Identifies the message source. This is useful when the assistant is integrated into multiple channels (e.g., WhatsApp, Zendesk, or web).
data: Specifies the type of message and its content. We support three types of data content:
payload: Used when you want to emulate a button click. You provide the label of the destination agent or workflow, plus an optional button label.
{"type": "payload", "payload": "general", "label": "Click here!"}
text: A simple text message to be processed by the assistant.
{"type": "text", "text": "How are you?"}
profile: A set of user information to be associated with the conversation. This can be retrieved from corresponding variables in your workspace.
{"type": "profile", "profile": {"variable1": "some data", "variable2": "other info"}}
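The three required fields can be combined into a full request body. The helper below is an illustrative sketch (the `sender` and `source` values passed to it are up to your integration), validating only that the data type is one of the three documented kinds.

```python
# A minimal sketch of assembling a Chat API request body with the three
# required fields (sender, source, data). Rejects undocumented data types.
import json

VALID_DATA_TYPES = {"payload", "text", "profile"}

def make_message(sender: str, source: str, data: dict) -> str:
    """Serialize a request body, rejecting unknown data types."""
    if data.get("type") not in VALID_DATA_TYPES:
        raise ValueError(f"unsupported data type: {data.get('type')!r}")
    return json.dumps({"sender": sender, "source": source, "data": data})
```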
Starting a Conversation
Every conversation must start with a payload message whose payload value is init
. This is essential to properly initialize the conversation state on our platform. Here's an example of starting a conversation:
PROJECT_TOKEN="abcd-1234"
PERSONAL_ACCESS_TOKEN="12345678"
REQUEST_BODY='{"type": "payload", "payload": "init", "label": "START"}'
curl -X POST "https://platform.indigo.ai/chat/$PROJECT_TOKEN/send" \
-H "content-type: application/json" \
-H "authorization: Bearer pat-$PERSONAL_ACCESS_TOKEN" \
-d "$REQUEST_BODY"
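The curl example above sends the data object directly; per the required fields listed under "Request Body Structure", a complete body would also carry sender and source. The sketch below assembles such a body — the sender and source values are placeholders.

```python
# A sketch of a complete init request body, combining the required fields
# from "Request Body Structure" with the init payload. Sender and source
# values are placeholders for your own identifiers.
import json

def make_init_body(sender: str, source: str) -> str:
    """Serialize the payload message that must open every conversation."""
    return json.dumps({
        "sender": sender,
        "source": source,
        "data": {"type": "payload", "payload": "init", "label": "START"},
    })
```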
Parsing the Response
Our API returns responses in chunks, similar to how OpenAI’s API handles streaming. Each chunk represents a block, or a fragment of one, which is especially useful for complex elements like carousels.
Each response starts with a processing.start chunk and ends with a processing.end chunk. Between these, you will receive one or more content chunks.
Each chunk type corresponds to a different kind of message content (e.g., text, media, buttons). Multiple chunks may be sent in a single response, especially when there are multiple buttons or links.
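A client consuming the stream can collect everything between the two delimiter chunks. This is a minimal sketch operating on already-parsed chunk dicts; the sample stream mirrors the chunk shapes documented in the table that follows.

```python
# Collect the content chunks delimited by processing.start / processing.end.
# Input is a sequence of parsed chunk dicts, each with a "type" key.

def collect_response(chunks):
    """Return the content chunks between processing.start and processing.end."""
    content, inside = [], False
    for chunk in chunks:
        if chunk["type"] == "processing.start":
            inside = True                 # response begins
        elif chunk["type"] == "processing.end":
            inside = False                # response is complete
        elif inside:
            content.append(chunk)         # any content chunk in between
    return content
```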
Below is a table of the different chunk types:
processing.start
The first chunk returned in a response
{ "type": "processing.start" }
processing.end
The last chunk returned in a response
{ "type": "processing.end" }
text
Contains a text block. The text keeps all HTML tags to preserve styling. The is_generated
attribute indicates whether the text is static or generated by an LLM.
{ "type": "text", "data": { "is_caption": false, "text": "<p class=\"slate-p\">Hi 👋 I’m your virtual assistant</p>" }, "is_generated": false }
image
Contains an image block. Media content is hosted on our platform storage if uploaded from a PC.
{ "type": "image", "data": { "alt": "An alt text", "src": "https://platform.indigo.ai/…/image_1.jpg" } }
video
Contains a video block. Media content is hosted on our platform storage if uploaded from a PC.
{ "type": "video", "data": { "alt": "An alt text", "src": "https://platform.indigo.ai/…/video_1.mp4" } }
button
Contains a single button defined in a quick replies block. When multiple buttons are defined, multiple chunks are sent in the response.
{ "type":"button", "data": { "label": "Click here", "payload": "target_answer_label" } }
link
Contains a single URL defined in a quick replies block. When multiple URLs are defined, multiple chunks are sent in the response. When a phone-call button is defined, it’s returned as a link chunk whose url value is tel:{{phone-number}} (e.g., tel:+393333333333).
{ "type":"link", "data": { "label": "Visit our website", "url": "https://indigo.ai" } }
generation.start
The first chunk of an LLM generation. Contains an id for the generation repeated in every chunk for the same generation.
{ "type":"generation.start", "data": { "generation_id": 1234 } }
generation.end
The chunk ending an LLM generation. Contains an id for the generation repeated in every chunk for the same generation.
{ "type":"generation.end", "data": { "generation_id": 1234 } }
generation.chunk
A chunk with the partial content for generated text. Contains an id for the generation, repeated in every chunk for the same generation. Two fields are returned in this chunk:
Chunk, the text appended to the generation.
Deltas, a list of operations to apply to the current text to obtain the updated text. This is helpful when working with HTML converted from generated Markdown, where previously generated text may need to be updated in place. The operations are derived from the Myers difference algorithm.
{ "type":"generation.chunk", "data": { "chunk": "!", "deltas": [ { "command": "mov", "value": 90 }, { "command":"ins", "value":"!" } ], "generation_id":1271 } }
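The exact delta semantics are not spelled out above, so the following is a sketch under assumptions: mov keeps the next N characters of the current text, ins inserts a string, and a del command (not shown in the example) would skip N characters.

```python
# Apply generation.chunk deltas to the previously generated text.
# The command semantics are an ASSUMPTION based on the example above:
#   "mov" copies the next N characters unchanged,
#   "ins" inserts a string at the current position,
#   "del" skips N characters (presumed, not shown in the docs).

def apply_deltas(current: str, deltas: list) -> str:
    out, pos = [], 0
    for op in deltas:
        if op["command"] == "mov":       # keep N characters as-is
            out.append(current[pos:pos + op["value"]])
            pos += op["value"]
        elif op["command"] == "ins":     # insert new text
            out.append(op["value"])
        elif op["command"] == "del":     # drop N characters (assumed)
            pos += op["value"]
    out.append(current[pos:])            # keep any text the script left untouched
    return "".join(out)
```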
carousel.start
The chunk starting a carousel streaming. Contains an id for the carousel repeated in every chunk for the same carousel’s elements.
{ "type":"carousel.start", "data": { "carousel_id": 1234 } }
carousel.end
The chunk ending a carousel streaming.
{ "type":"carousel.end", "data": { "carousel_id": 1234 } }
carousel.card.start
The chunk starting a carousel card. It also contains the main card information, like its text and image content. Contains a card index (from 1 to 10), repeated in every chunk for the same card’s elements.
{ "type": "carousel.card.start", "card_index": 1, "carousel_id": 1234, "data": { "description": "A card", "image": null, "title": "Card 1" } }
carousel.card.end
The chunk ending a carousel card.
{ "type": "carousel.card.end", "card_index": 1, "carousel_id": 1234 }
carousel.card.button
A button contained in a card. Data is the same as button chunk.
{ "type": "carousel.card.button", "card_index": 1, "carousel_id": 1234, "data": { "label": "Button Label", "payload": "target_answer_label" } }
carousel.card.link
A link contained in a card. Data is the same as link chunk.
{ "type": "carousel.card.link", "card_index": 1, "carousel_id": 1234, "data": { "label": "Visit our website", "url": "https://indigo.ai" } }
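Putting the carousel chunks together, a client can rebuild the cards. This is a minimal sketch that handles only the carousel.card.* chunk types documented above, operating on already-parsed chunk dicts.

```python
# Reassemble carousel cards from a stream of parsed chunk dicts.
# Each card opens with carousel.card.start (carrying title/description/image)
# and closes with carousel.card.end; buttons and links arrive in between.

def assemble_carousel(chunks):
    """Group carousel.card.* chunks into a list of card dicts."""
    cards, current = [], None
    for chunk in chunks:
        ctype = chunk["type"]
        if ctype == "carousel.card.start":
            current = dict(chunk["data"])        # title, description, image
            current["buttons"], current["links"] = [], []
        elif ctype == "carousel.card.button" and current is not None:
            current["buttons"].append(chunk["data"])
        elif ctype == "carousel.card.link" and current is not None:
            current["links"].append(chunk["data"])
        elif ctype == "carousel.card.end" and current is not None:
            cards.append(current)
            current = None
    return cards
```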