Making an Outgoing Call to Collect Response

In this section, you will learn how to send outbound voice calls that collect DTMF or speech responses from your recipients.

📘

Getting Started

For more information on how to create a Voice Application, which is a prerequisite for this guide, refer to Creating a Voice Application.

To make an outbound call that collects a response, follow these steps.

Plan out your IVR flow

Send a Voice Call via API

curl --location --request POST 'https://voice.unifonic.com/v1/calls' \
--header 'AppsId: XXXXXXXXXXXXX' \
--header 'Content-Type: application/json' \
--data-raw '{
	"recipient":["+966111111111"],
	"type" : "ivr",
	"callerId" : "+966115219086",
	"ivr":
	{
        "language" : "english",
        "voice" : "male",
        "say":"Did you like our service? Press 1 to say yes. Press 2 to say no.",
        "responseUrl":"https://myresponseurl.com/1", //optional field
        "speechCollectionLanguage":"english",  //optional field
        "onEmptyResponse":"We did not receive your response. Good bye.", //optional field
        "loop":"3" //optional field
	}
}'
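The same request can be sketched in Python using only the standard library. The endpoint, headers, and field names below are taken from the curl example above; the helper names and placeholder values are illustrative assumptions, not part of the official SDK.

```python
import json
import urllib.request

VOICE_API_URL = "https://voice.unifonic.com/v1/calls"

def build_ivr_call(recipient, caller_id, say, response_url=None):
    """Build the request body for an outbound IVR call."""
    ivr = {
        "language": "english",
        "voice": "male",
        "say": say,
    }
    if response_url:
        # Optional: where Unifonic should POST the collected response.
        ivr["responseUrl"] = response_url
    return {
        "recipient": [recipient],
        "type": "ivr",
        "callerId": caller_id,
        "ivr": ivr,
    }

def make_call_request(apps_id, payload):
    """Prepare the HTTP request (sending it requires a valid AppsId)."""
    return urllib.request.Request(
        VOICE_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"AppsId": apps_id, "Content-Type": "application/json"},
        method="POST",
    )

payload = build_ivr_call(
    "+966111111111",
    "+966115219086",
    "Did you like our service? Press 1 to say yes. Press 2 to say no.",
    response_url="https://myresponseurl.com/1",
)
req = make_call_request("XXXXXXXXXXXXX", payload)
# To actually place the call: urllib.request.urlopen(req)
```

Note that the request is only constructed here; sending it requires a valid AppsId from your Voice Application.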

In the code sample above, a say verb was used, which means Unifonic will play a text-to-speech (TTS) rendering of the content specified in the verb. Alternatively, you can specify a play verb with its associated audioId, which means Unifonic will play an audio file that was previously uploaded to your account.

📘

How to upload Audio Files?

You can upload audio files to be used on your Voice calls. Follow this guide to find out more.

You may have noticed the responseUrl field in the request body. When responseUrl is included, Unifonic collects a response from the call and sends it to this callback URL. This allows you to receive the keypad input or speech collected from your end recipient.

📘

What is responseUrl?

The responseUrl should point to a POST endpoint that requires no authorization. This allows Unifonic to send you your recipient's response and collect the next IVR instruction from you.

This is a sample of the request payload which we will be sending to your responseUrl.

{
  "confidence": 0.6, //refers to the accuracy of this speech recognition activity
  "speechResult": "one",
  "digits": "1",
  "recipient": "+966XXXXXXXXX",
  "callerId": "+966XXXXXXXXX"
}
  • The digits field represents the DTMF input returned by the recipient in response to your question. It may be empty if the recipient did not select a DTMF input on their keypad.
  • The speechResult field represents the speech response returned by the recipient, which Unifonic converts to text before sending it to your callback URL. It may be empty if the recipient used DTMF input on their keypad instead.
  • The confidence field represents Unifonic's level of confidence in the speech-to-text conversion.
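The decision logic behind a responseUrl endpoint can be sketched as a plain function, independent of any web framework. The payload field names (digits, speechResult, confidence) come from the sample above; the handler name, reply texts, and the 0.8 confidence threshold are illustrative assumptions.

```python
def next_ivr_instruction(payload):
    """Decide the next IVR instruction from a collected response.

    `payload` is the dict Unifonic POSTs to your responseUrl; the
    return value is the JSON body your endpoint should reply with.
    """
    digits = payload.get("digits", "")
    speech = payload.get("speechResult", "")
    confidence = payload.get("confidence", 0.0)

    # Prefer DTMF input; fall back to speech only when confidence is high.
    if digits == "1" or (speech == "one" and confidence >= 0.8):
        say = "Thank you for your feedback. Goodbye."
    elif digits == "2" or (speech == "two" and confidence >= 0.8):
        say = "We are sorry to hear that. Goodbye."
    else:
        say = "Sorry, we did not understand your response. Goodbye."

    return {"say": say, "language": "english", "voice": "male"}

# Example: recipient pressed 1 on the keypad.
reply = next_ivr_instruction({"digits": "1", "speechResult": "", "confidence": 0.0})
```

To use this in production, expose the function through the web framework of your choice as the POST handler for your responseUrl.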

📘

Tip on using speech collection

If the speechCollectionLanguage field is present in your original request payload, your recipient is expected to respond via speech. By default, Unifonic will collect any speech activity it can hear. The collection sensitivity can be rather high, which means Unifonic may pick up background noise and additional words or phrases that were not intended. A confidence level of 0.8 or above can be considered good and accurate.

Note that the language specified in speechCollectionLanguage determines the speech-recognition engine used to transcribe the end user's spoken words. If the engine does not match the spoken language, the transcribed text may be inaccurate.

Provide us with a response to this POST request

A response is required because it tells us the next IVR instruction after you have analyzed the IVR response posted to your responseUrl/callbackUrl.

🚧

Provide a response within 3 seconds

If a response is not received within approximately 3 seconds, the call will be disconnected with an error message: "Sorry, something went wrong".

Please make sure that your system can handle the traffic if you are making multiple concurrent calls.

Your response itself will be the next call instruction. It can be a simple thank-you message, like the sample below.

{
  "say":"thank you",
  "language":"english",
  "voice":"male"
}


or 

{
  "play":"audioId"  //in case you have a pre-created audio file that you wish to use
}

Ask further questions instead

If one response is not enough and you have follow-up questions, you can continue by repeating this question-and-answer format. This provides the interactivity you want to achieve with Voice IVR.

{
  "say":"We are happy to serve you today. Would you recommend us? Press 1 for Yes. Press 2 for No.",
  "language":"english",
  "voice":"male",
  "responseUrl":"https://myresponseurl.com/2" //optional field
}

or

{
  "play":"audioId",
  "responseUrl":"https://myresponseurl.com/2" //optional field
}

By defining a different responseUrl for each question, you can determine which response belongs to which question. The distinguishing value can go in the path or in a query parameter; either way, it is sent back to you when we receive the response from your end recipient.
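One way to tell the questions apart is to route on the path of the responseUrl. The sketch below assumes the path suffixes /1 and /2 from the examples above; the routing table and handler names are illustrative, not part of the Unifonic API.

```python
def handle_question_1(payload):
    """Response to "Did you like our service?" -> ask the follow-up."""
    return {
        "say": "Would you recommend us? Press 1 for Yes. Press 2 for No.",
        "language": "english",
        "voice": "male",
        "responseUrl": "https://myresponseurl.com/2",
    }

def handle_question_2(payload):
    """Response to the follow-up -> end the call with a thank-you."""
    return {"say": "Thank you for your time. Goodbye.",
            "language": "english", "voice": "male"}

# Map each responseUrl path to the handler for that question.
ROUTES = {
    "/1": handle_question_1,
    "/2": handle_question_2,
}

def dispatch(path, payload):
    """Pick the handler for the path Unifonic posted the response to."""
    handler = ROUTES.get(path)
    if handler is None:
        return {"say": "Goodbye.", "language": "english", "voice": "male"}
    return handler(payload)
```

The same idea works with query parameters instead of path segments; the key point is that each question's responseUrl carries an identifier you can dispatch on.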

Last Steps

Host your responseUrl API server and wait for the responses to come in!

Alternatively, you can skip the coding exercises entirely and use Unifonic Flow Studio to create an IVR workflow that can be triggered with a single API call, with no further coding required. Unifonic will perform all the necessary IVR interactions for you, giving your end user a fully interactive experience.

📘

How do I know if the call has taken place?

The status of each call can be tracked via a webhook.