Goal - Make Pepper listen to a human speaker and recognize predefined phrases.
Typical usage - Pepper asks a question and waits for a short answer, chosen from a set of short and predictable replies.
How it works
Defining phrase sets
For example, you can make Pepper listen to a “hello” concept.
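In the QiSDK, such a concept is a PhraseSet grouping several utterances. The idea can be sketched in plain Java; the class name and the list of variants below are illustrative, not part of the SDK:

```java
import java.util.List;
import java.util.Locale;

public class HelloConcept {
    // Hypothetical variants grouped under one "hello" concept;
    // in the QiSDK this grouping is what a PhraseSet represents.
    static final List<String> HELLO_VARIANTS = List.of("hello", "hi", "hey");

    // True if the heard utterance belongs to the "hello" concept.
    static boolean isHello(String heard) {
        return HELLO_VARIANTS.contains(heard.trim().toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println(isHello("Hello"));   // true
        System.out.println(isHello("goodbye")); // false
    }
}
```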
Disabling the body language
By default, Pepper does not stay motionless while listening: he moves slightly to let you know he is listening.
If necessary, you can disable this body language.
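In the QiSDK this is done through a body-language option set when building the listen action. The builder pattern involved can be sketched as follows; the class and method names here are illustrative, not the real API:

```java
public class ListenOptions {
    // Mirrors the idea of a body-language option on a listen builder.
    enum BodyLanguageOption { NEUTRAL, DISABLED }

    static class ListenBuilderSketch {
        // Pepper moves slightly by default while listening.
        private BodyLanguageOption bodyLanguage = BodyLanguageOption.NEUTRAL;

        ListenBuilderSketch withBodyLanguageOption(BodyLanguageOption option) {
            this.bodyLanguage = option;
            return this;
        }

        BodyLanguageOption bodyLanguage() {
            return bodyLanguage;
        }
    }

    public static void main(String[] args) {
        // Ask for a motionless listen by disabling the body language.
        BodyLanguageOption chosen = new ListenBuilderSketch()
                .withBodyLanguageOption(BodyLanguageOption.DISABLED)
                .bodyLanguage();
        System.out.println(chosen); // DISABLED
    }
}
```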
Imagine we want to create an application where we control Pepper's moves using voice commands.
Define Phrases or PhraseSets
We define the phrases Pepper should recognize.
But the user may not say exactly the phrase we expect, so we should also allow some variants.
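As a sketch, each command becomes a group of phrases, playing the role of a PhraseSet. The command names and the exact variants below are our own choices, not mandated by the SDK:

```java
import java.util.List;
import java.util.Map;

public class MovePhraseSets {
    // Each command concept maps to variants a user might actually say.
    static final Map<String, List<String>> PHRASE_SETS = Map.of(
            "forwards", List.of("forwards", "forward", "go forward", "move forward"),
            "backwards", List.of("backwards", "backward", "go back", "move back"));

    public static void main(String[] args) {
        PHRASE_SETS.forEach((command, variants) ->
                System.out.println(command + " <- " + variants));
    }
}
```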
Run the Listen object
Retrieve the heard phrase and the corresponding PhraseSet
If the user says “forwards”, the result contains the heard phrase and the matching “forwards” PhraseSet.
Using these results, we can make Pepper move accordingly.
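Putting the steps together: run the listen, take the heard phrase from the result, find the phrase set it belongs to, and trigger the corresponding move. The helper below simulates that lookup in self-contained Java; the phrase sets and method names are illustrative, not the QiSDK API:

```java
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Optional;

public class VoiceCommandDispatch {
    // Hypothetical phrase sets: command name -> accepted variants.
    static final Map<String, List<String>> PHRASE_SETS = Map.of(
            "forwards", List.of("forwards", "forward", "go forward"),
            "backwards", List.of("backwards", "backward", "go back"));

    // Simulates retrieving the matched phrase set from a listen result:
    // returns the name of the set containing the heard phrase, if any.
    static Optional<String> matchedSet(String heardPhrase) {
        String normalized = heardPhrase.trim().toLowerCase(Locale.ROOT);
        return PHRASE_SETS.entrySet().stream()
                .filter(entry -> entry.getValue().contains(normalized))
                .map(Map.Entry::getKey)
                .findFirst();
    }

    // Dispatches the recognized command to the corresponding move.
    static String react(String heardPhrase) {
        return matchedSet(heardPhrase)
                .map(set -> "moving " + set)
                .orElse("not understood");
    }

    public static void main(String[] args) {
        System.out.println(react("forwards")); // moving forwards
        System.out.println(react("go back"));  // moving backwards
        System.out.println(react("jump"));     // not understood
    }
}
```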
Performance & Limitations
Listen or Chat?
Exclusions with other actions
The microphones on Pepper are unidirectional, so Pepper can only pick up sounds coming from in front of him. This implies that anyone wanting to talk with Pepper should stand in front of him.
Also, Pepper may not be able to hear a human in a noisy environment.