ALAnimatedSpeech API

NAOqi Audio


Namespace : AL


#include <alproxies/alanimatedspeechproxy.h>

Methods

void ALAnimatedSpeechProxy::say(const std::string& text)

Say the given annotated text, playing the animations inserted in it. The current Animated Speech configuration is used.

Parameters:
  • text

    An annotated text (for example: “Hello. ^start(animations/Stand/Gestures/Hey_1) My name is John Doe. Nice to meet you!”).

    For further details, see: Annotated text.
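As a sketch, assuming an already-connected qi session and reusing the animation path from the example above (`annotate` and `say_annotated` are hypothetical helpers, not part of the API):

```python
# Hypothetical helper (not part of the API): wrap a sentence with a
# ^start annotation so the animation begins when the sentence is spoken.
def annotate(animation, text):
    return "^start(%s) %s" % (animation, text)


def say_annotated(session):
    # session is an already-connected qi.Session (see the full example below).
    animated_speech = session.service("ALAnimatedSpeech")
    animated_speech.say(annotate("animations/Stand/Gestures/Hey_1",
                                 "Hello. My name is John Doe."))
```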

void ALAnimatedSpeechProxy::say(const std::string& text, const AL::ALValue& configuration)

Say the given annotated text, playing the animations inserted in it. The given configuration is used; any unset parameters keep their default values.

Here are the configuration parameters:

Key                 Value type  Default value  Possible values                     For further details, see...
“bodyLanguageMode”  string      “contextual”   “disabled”, “random”, “contextual”  body_language

alanimatedspeech_say_with_configuration.py

#!/usr/bin/env python
# -*- encoding: UTF-8 -*-

"""Example: Use say Method"""

import qi
import argparse
import sys


def main(session):
    """
    Say a text with a local configuration.
    """
    # Get the service ALAnimatedSpeech.
    animated_speech = session.service("ALAnimatedSpeech")

    # Set the local configuration.
    configuration = {"bodyLanguageMode": "contextual"}

    # Say the text with the local configuration.
    animated_speech.say("Hello, I am a robot!", configuration)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="127.0.0.1",
                        help="Robot IP address. On robot or Local Naoqi: use '127.0.0.1'.")
    parser.add_argument("--port", type=int, default=9559,
                        help="Naoqi port number")

    args = parser.parse_args()
    session = qi.Session()
    try:
        session.connect("tcp://" + args.ip + ":" + str(args.port))
    except RuntimeError:
        print ("Can't connect to Naoqi at ip \"" + args.ip + "\" on port " + str(args.port) +".\n"
               "Please check your script arguments. Run with -h option for help.")
        sys.exit(1)
    main(session)

void ALAnimatedSpeechProxy::setBodyLanguageMode(unsigned int bodyLanguageMode)

Set the current body language mode.

Parameters:
  • bodyLanguageMode

    The chosen body language mode.

    3 modes exist:

    0 (BODY_LANGUAGE_MODE_DISABLED)

    1 (BODY_LANGUAGE_MODE_RANDOM)

    2 (BODY_LANGUAGE_MODE_CONTEXTUAL)

    For further details, see: Speaking Movement Modes.
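A minimal sketch keeping the integer values above as named constants (the constant names mirror the documented values; `disable_gestures` and the connected session are assumptions, not part of the API):

```python
# Integer mode values, as documented above.
BODY_LANGUAGE_MODE_DISABLED = 0
BODY_LANGUAGE_MODE_RANDOM = 1
BODY_LANGUAGE_MODE_CONTEXTUAL = 2


def disable_gestures(session):
    # session is an already-connected qi.Session.
    animated_speech = session.service("ALAnimatedSpeech")
    animated_speech.setBodyLanguageMode(BODY_LANGUAGE_MODE_DISABLED)
```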

void ALAnimatedSpeechProxy::setBodyLanguageModeFromStr(const std::string& stringBodyLanguageMode)

Set the current body language mode from a string.

Parameters:
  • stringBodyLanguageMode

    The chosen body language mode.

    3 modes exist:

    “disabled”

    “random”

    “contextual”

    For further details, see: Speaking Movement Modes.

unsigned int ALAnimatedSpeechProxy::getBodyLanguageMode()

Get the current body language mode.

Returns: The current body language mode.

3 modes exist:

0 (BODY_LANGUAGE_MODE_DISABLED)

1 (BODY_LANGUAGE_MODE_RANDOM)

2 (BODY_LANGUAGE_MODE_CONTEXTUAL)

For further details, see: Speaking Movement Modes.

std::string ALAnimatedSpeechProxy::getBodyLanguageModeToStr()

Get a string corresponding to the current body language mode.

Returns: The current body language mode, as a string.

3 modes exist:

“disabled”

“random”

“contextual”

For further details, see: Speaking Movement Modes.
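Since the integer and string forms describe the same three modes, a small mapping keeps them consistent in client code; a sketch (`MODE_TO_STRING` and `mode_as_string` are illustrative helpers, not part of the API):

```python
# Mapping between the integer and string forms of the mode, per the lists above.
MODE_TO_STRING = {0: "disabled", 1: "random", 2: "contextual"}
STRING_TO_MODE = dict((name, mode) for mode, name in MODE_TO_STRING.items())


def mode_as_string(session):
    # Same result as getBodyLanguageModeToStr, derived from getBodyLanguageMode.
    animated_speech = session.service("ALAnimatedSpeech")
    return MODE_TO_STRING[animated_speech.getBodyLanguageMode()]
```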

void ALAnimatedSpeechProxy::addTagsToWords(const AL::ALValue& tagsToWords)

Deprecated since version 2.4: use ALSpeakingMovementProxy::addTagsToWords instead.

Link some words to some specific animation tags.

Parameters:
  • tagsToWords – Map of tags to words.

void ALAnimatedSpeechProxy::declareAnimationsPackage(const std::string& animationsPackage)

Deprecated since version 2.2: use ALAnimationPlayerProxy::declarePathForTags instead.

Allows using animations contained in the specified package as tagged animations.

Parameters:
  • animationsPackage – The name of the package containing animations (and only animations).

Note

The animations package must have the following tree pattern:

Stand/ => root folder for the standing animations

Sit/ => root folder for the sitting animations

SitOnPod/ => root folder for the sitting on pod animations

void ALAnimatedSpeechProxy::declareTagForAnimations(const AL::ALValue& tagsToAnimations)

Deprecated since version 2.2: use ALAnimationPlayerProxy::addTagForAnimations instead.

Dynamically associate tags and animations.

Parameters:
  • tagsToAnimations – Map of tags to animations.

void ALAnimatedSpeechProxy::setBodyLanguageEnabled(const bool& enable)

Deprecated since version 1.22: use ALAnimatedSpeechProxy::setBodyLanguageMode instead.

Enable or disable the automatic body language random mode on the speech. When enabled, wherever the text is not annotated with an animation, the robot fills the gap with automatically computed gestures. When disabled, the robot moves only where the text is annotated with animations.

Parameters:
  • enable

    The boolean value: true to enable, false to disable.

    For further details, see: body_language.
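For code still using this deprecated call, the boolean maps naturally onto two of the current modes; a migration sketch (`mode_for_enabled` and `set_body_language_enabled` are hypothetical helpers):

```python
def mode_for_enabled(enable):
    # True  -> "random"   (auto gestures fill unannotated text)
    # False -> "disabled" (the robot moves only on explicit annotations)
    return "random" if enable else "disabled"


def set_body_language_enabled(session, enable):
    # Replacement for the deprecated call, built on the current mode API.
    animated_speech = session.service("ALAnimatedSpeech")
    animated_speech.setBodyLanguageModeFromStr(mode_for_enabled(enable))
```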

bool ALAnimatedSpeechProxy::isBodyLanguageEnabled()

Deprecated since version 1.22: use ALAnimatedSpeechProxy::getBodyLanguageMode instead.

Indicates whether body language is enabled.

Returns: The boolean value: true means it is enabled, false means it is disabled.

For further details, see: body_language.

void ALAnimatedSpeechProxy::setBodyTalkEnabled(const bool& enable)

Deprecated since version 1.18: use ALAnimatedSpeechProxy::setBodyLanguageMode instead.

Enable or disable the automatic body language random mode on the speech.

Parameters:
  • enable

    The boolean value: true to enable, false to disable.

    For further details, see: body_language.

bool ALAnimatedSpeechProxy::isBodyTalkEnabled()

Deprecated since version 1.18: use ALAnimatedSpeechProxy::getBodyLanguageMode instead.

Indicates whether body language is enabled.

Returns: The boolean value: true means it is enabled, false means it is disabled.

For further details, see: body_language.

Events

Event: "ALAnimatedSpeech/EndOfAnimatedSpeech"
callback(std::string eventName, int taskId, std::string subscriberIdentifier)

Raised when an animated speech is done.

Parameters:
  • eventName ( std::string ) – “ALAnimatedSpeech/EndOfAnimatedSpeech”
  • taskId – The ID of the animated speech task that finished.
  • subscriberIdentifier ( std::string ) –
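A sketch of subscribing to this event through ALMemory with the qi Python API, assuming an already-connected session. Note that with `subscriber.signal`, the callback receives only the event value (the task id), not the three-argument ALModule-style signature shown above; the function names here are illustrative:

```python
def on_end_of_animated_speech(task_id):
    # Receives the taskId of the finished animated speech.
    message = "Animated speech task %d is done" % task_id
    print(message)
    return message


def watch_end_of_animated_speech(session):
    # ALMemory relays the event payload to the connected callback.
    memory = session.service("ALMemory")
    subscriber = memory.subscriber("ALAnimatedSpeech/EndOfAnimatedSpeech")
    subscriber.signal.connect(on_end_of_animated_speech)
    # Keep a reference to the subscriber, or the subscription is dropped.
    return subscriber
```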