Integrating a Chatbot: Dialogflow

Understanding a few more key concepts

Now we will step back and discuss what we built, to better understand how it can be extended.

Entities and Fulfillments

So far you have only used some very basic Dialogflow features.

However, imagine you had been using Dialogflow’s prebuilt “Weather” agent. If you ask it “What’s the temperature next Sunday in London”, you might get a reply containing:

  "result": {
    "source": "agent",
    "resolvedQuery": "What's the temperature next Sunday in London?,",
    "action": "weather.temperature",
    "actionIncomplete": false,
    "parameters": {
      "address": {
        "subadmin-area": "London"
      "temperature": "",
      "date-time": "2019-03-31",
      "unit": ""
    // (... parts skipped ...)
    "fulfillment": {
      "speech": "",
      "messages": [
          "type": 0,
          "speech": ""
    "score": 0.8700000047683716

Notice that this result does not contain any fulfillment text, which is what you would have used to make Pepper give an answer.

What it does have, however, is entities - in this case, Dialogflow extracts the address and the date the user is asking about (the "temperature" and "unit" parameters are left empty here), as well as what exactly is being asked ("weather.temperature").

What is missing for a complete answer is fulfillment: passing these entities as parameters to another service that could, in this case, produce the actual answer ("It will be 23°C").

This could be done in two ways:

  • By activating Fulfillment Webhooks in Dialogflow, so that Dialogflow itself calls your web service - see Dialogflow's Fulfillment documentation
  • By doing the fulfillment inside your application: after receiving the response from Dialogflow, use those entities to make a new request to a different web service, then compose an answer depending on the reply

Fulfillment isn’t only for composing an answer; you can also call a web service that will create a calendar item, order a product, etc.
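As a rough sketch of the second approach (fulfillment inside your application), the function below composes an answer from the parameters Dialogflow extracted. The parameter names match the JSON above; `fetchTemperature` is a hypothetical stand-in for whatever weather web service you would actually call.

```kotlin
// Hypothetical stand-in for a call to a real weather web service.
fun fetchTemperature(city: String, date: String): Int = 23

// Compose a spoken answer from the parameters Dialogflow extracted.
fun fulfillWeatherTemperature(parameters: Map<String, String>): String {
    val city = parameters["subadmin-area"] ?: return "Sorry, which city did you mean?"
    val date = parameters["date-time"] ?: return "Sorry, for which day?"
    val temperature = fetchTemperature(city, date)
    return "It will be $temperature°C in $city."
}

fun main() {
    val answer = fulfillWeatherTemperature(mapOf(
        "subadmin-area" to "London",
        "date-time" to "2019-03-31"
    ))
    println(answer) // It will be 23°C in London.
}
```

The early returns cover the case where Dialogflow did not extract a required entity, which is when a real agent would typically ask a follow-up question instead.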


Have a look at the following part of the SimpleSayReaction:

override fun runWith(speechEngine: SpeechEngine) {
    val say = SayBuilder.with(speechEngine).withText(answer).build()
    sayFuture = say.async().run()
    try {
        sayFuture?.get() // Block until action is done
    } catch (e: ExecutionException) {
        Log.e("SimpleSayReaction", "Error during say", e)
    }
}

… note how we’re building a standard “Say” action, but instead of passing it a qiContext as usual, we pass it a speechEngine.

This ensures that the Say action can actually run: if you tried to run a normal “Say” action while a Chat action is running, the “Say” would fail. Providing a special speechEngine is a way of allowing the Say to run at that specific moment.

This answer could also include any other action - animations, tablet display, even navigation. You could use the entity parameters included in the Dialogflow response to decide what to display on the tablet (for example, with the weather example above, you could show the name of the city on Pepper’s tablet).
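As a minimal sketch of that idea, a small helper could pick the tablet text from the parameters in the Dialogflow response. The parameter name matches the weather example above; the fallback text is an assumption.

```kotlin
// Decide what to show on Pepper's tablet from the entities Dialogflow extracted.
// Falls back to a neutral title when no city was recognized (an assumption).
fun tabletTextFor(parameters: Map<String, String>): String {
    val city = parameters["subadmin-area"]
    return if (city.isNullOrEmpty()) "Weather" else "Weather in $city"
}

fun main() {
    println(tabletTextFor(mapOf("subadmin-area" to "London"))) // Weather in London
    println(tabletTextFor(emptyMap()))                         // Weather
}
```

In an actual app, you would then set this string on a view in your activity's layout, on the Android UI thread.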

The Dialogflow session

In the DialogflowChatbot class, you created a random Dialogflow session ID:

class DialogflowChatbot internal constructor(context: QiContext,
                                             credentialsStream: InputStream)
    : BaseChatbot(context) {
    private var dialogflowSessionId = "chatbot-" + UUID.randomUUID().toString()
    // ...

… which means the same session ID is used until the app quits or the focus is lost. This doesn’t matter for this dialogue, because the “jokes” agent does not take context into account, but other, more complex agents can use the session to have a more intelligent back-and-forth (for example, asking the user for confirmation).

In that case, you usually want to generate a new Dialogflow session ID for each new human the robot talks to, and pass it to the chatbot.
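A minimal sketch of that: generate a fresh session ID (using the same format as DialogflowChatbot above) whenever a new person starts talking to the robot. How you detect a new human (for example via QiSDK's human awareness) is up to your app and not shown here.

```kotlin
import java.util.UUID

// Generate a fresh Dialogflow session ID, in the same format as DialogflowChatbot.
fun newDialogflowSessionId(): String = "chatbot-" + UUID.randomUUID().toString()

fun main() {
    // Call this each time a new human engages the robot,
    // so each conversation gets its own Dialogflow context.
    val firstHumanSession = newDialogflowSessionId()
    val secondHumanSession = newDialogflowSessionId()
    println(firstHumanSession != secondHumanSession) // each human gets a distinct session
}
```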