What is it¶
Human represents a physical person detected by the robot.
Pepper is able to focus on humans and can retrieve some of their characteristics.
How to use it¶
Getting humans around Pepper¶
Get access to the humans found by Pepper:
HumanAwareness humanAwareness = qiContext.getHumanAwareness();
List<Human> humansAround = humanAwareness.getHumansAround();
getHumansAround returns an empty list if Pepper didn’t detect any human.
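Because getHumansAround is a blocking call, prefer the asynchronous variant when calling it from the UI thread. Below is a minimal sketch, assuming a TAG constant for Android logging:

HumanAwareness humanAwareness = qiContext.getHumanAwareness();
humanAwareness.async().getHumansAround().andThenConsume(humansAround -> {
    // Called when the retrieval completes; the list may be empty.
    Log.i(TAG, humansAround.size() + " human(s) around.");
});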
Getting the human engaged by Pepper¶
The human currently engaged by the robot is available via the getEngagedHuman method:
HumanAwareness humanAwareness = qiContext.getHumanAwareness();
Human engagedHuman = humanAwareness.getEngagedHuman();
getEngagedHuman returns null if no human is engaged.
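To be notified when this value changes, a listener can be added. The following is a minimal sketch, assuming the OnEngagedHumanChangedListener callback exposed by HumanAwareness (remember to remove the listener when it is no longer needed):

HumanAwareness humanAwareness = qiContext.getHumanAwareness();
humanAwareness.addOnEngagedHumanChangedListener(engagedHuman -> {
    if (engagedHuman != null) {
        // Pepper is now engaged with this human.
    } else {
        // Pepper is not engaged with anyone anymore.
    }
});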
Getting the human position¶
The human position is available via the getHeadFrame method, which returns the Frame of the human's head:
Human human = ...;
Frame headFrame = human.getHeadFrame();
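This Frame can be combined with other frames from the Actuation service, for example to estimate how far the human is from the robot. A minimal sketch computing the ground distance between the robot and the human's head:

Actuation actuation = qiContext.getActuation();
Frame robotFrame = actuation.robotFrame();
// Transform of the head frame relative to the robot frame.
Transform transform = headFrame.computeTransform(robotFrame).getTransform();
Vector3 translation = transform.getTranslation();
// Distance on the ground plane, in meters.
double distance = Math.sqrt(translation.getX() * translation.getX()
        + translation.getY() * translation.getY());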
Getting the human face picture¶
The human face picture with its accompanying timestamp is available via the getFacePicture method:
Human human = ...;
TimestampedImage timestampedImage = human.getFacePicture();
EncodedImage encodedImage = timestampedImage.getImage();
ByteBuffer imageData = encodedImage.getData();
Long time = timestampedImage.getTime();
The face picture and timestamp correspond to the last available image satisfying the filtering conditions:
- Keep only images of front-facing faces.
- Discard blurry images.
Note that the byte buffer is empty if no face picture satisfying the filtering conditions is available; otherwise, it contains the last picture taken.
Pictures are captured, before filtering, at a frequency of 5 Hz. The provided image is an 8-bit encoded grayscale image with a minimum size of 25x25 pixels.
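The returned ByteBuffer can be decoded into an Android Bitmap for display. A minimal sketch, assuming the buffer holds an image format that BitmapFactory can decode (the decode step yields null on an unreadable buffer):

imageData.rewind();
if (imageData.remaining() > 0) {
    byte[] buffer = new byte[imageData.remaining()];
    imageData.get(buffer);
    Bitmap facePicture = BitmapFactory.decodeByteArray(buffer, 0, buffer.length);
    if (facePicture != null) {
        // Display the picture, e.g. in an ImageView.
    }
}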
The Human object provides some human characteristics:
Human human = ...;
Integer age = human.getEstimatedAge().getYears();
Gender gender = human.getEstimatedGender();
PleasureState pleasureState = human.getEmotion().getPleasure();
ExcitementState excitementState = human.getEmotion().getExcitement();
SmileState smileState = human.getFacialExpressions().getSmile();
AttentionState attentionState = human.getAttention();
EngagementIntentionState engagementIntentionState = human.getEngagementIntention();
The available human characteristics are:
|Characteristics|Represented by …|Based on …|Comment|
|age|an Integer|facial features|Stabilization time required. Takes minimum 1s to stabilize depending on the quality of measurements, so expect to have unknown values before.|
|smile state|SmileState enum|facial features||
|mood|PleasureState enum|facial features, touch, speech semantics||
|excitement state|ExcitementState enum|voice||
|attention state|AttentionState enum|head orientation, gaze direction|These values represent the attention of the human during the previous second. They are given relative to the human's perspective, i.e. when the human is looking to their right (from the user's perspective), the value should be LOOKING_RIGHT.|
|engagement intention state|EngagementIntentionState enum|trajectory, speed, head orientation|This state describes the willingness of the human to interact with the robot. Depending on the engagement intention of the human, various robot behaviors can be created to attract interested humans or directly start interacting with the ones who are proactively searching for an interaction (see the sketch after this table).|
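For example, the engagement intention can be used to pick interaction candidates among the humans around. A minimal sketch, assuming the EngagementIntentionState enum exposes a SEEKING_ENGAGEMENT value:

HumanAwareness humanAwareness = qiContext.getHumanAwareness();
List<Human> humansAround = humanAwareness.getHumansAround();
for (Human human : humansAround) {
    if (human.getEngagementIntention() == EngagementIntentionState.SEEKING_ENGAGEMENT) {
        // This human is proactively looking for an interaction:
        // a good candidate for an engagement or a greeting.
    }
}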
Engaging a human¶
Make Pepper engage a Human so that it will focus on him/her:
Human human = ...;
EngageHuman engageHuman = EngageHumanBuilder.with(qiContext)
        .withHuman(human)
        .build();
engageHuman.async().run();
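The returned Future can be used to know when the engagement ends, for example because the human left. A minimal sketch chaining a callback on the action's completion:

Future<Void> engagement = engageHuman.async().run();
engagement.thenConsume(future -> {
    if (future.isSuccess()) {
        // The engagement ended normally, for example because the human left.
    } else if (future.hasError()) {
        // The engagement could not run or was interrupted by an error.
    }
});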
Performance & Limitations¶
Characteristics refresh rate¶
The age, gender, smile and engagement intention characteristics have a refresh rate of 5Hz, while the other characteristics have a refresh rate of 1Hz.