The ROBOTS exhibition, a productive collaboration

Meeting the SoftBank Robotics Project Manager

The ROBOTS exhibition runs for 5 years from April 2, 2019 at the Cité des Sciences et de l’Industrie (Paris, 19th arrondissement). It invites visitors to reflect on their own vision of robotics and to get acquainted with robotic science and industry. The exhibition shows real robots at work: how they operate, the state of research in the field, and the concrete implications for our society.
SBRE co-designed 3 installations of the exhibition in collaboration with Universcience. Christophe DUPOUY, Project Manager, talks about his experience.

Interview

Christophe DUPOUY, Project Manager

Photo-reportage of ROBOTS, the new permanent exhibition of the Cité des Sciences, La Villette, Paris
On Mondays, when the Museum is closed, the Project Manager comes in to check the robot and application setups, the technical equipment and the performances.

SBR DX - Christophe, you took part in the Cité des Sciences project. When did you join the team and what was your role?
Christophe - I joined the project around the end of November 2018. Universcience and SBRE had been discussing it for about 8 months. I was put in charge of managing the implementation phase.
I attended meetings in December; a lot of discussions were still going on and there were suggestions from both sides. So, from December to January, the Developers’ Manager and I started by writing the specifications for the functional part, in order to create the three robotic applications:

  1. Robots - Portraits with Pepper
  2. Artificial emotions with Pepper
  3. The robotic show hosted by NAO

On one hand, we had to define what was expected, what the client wanted; on the other hand, the associated software functions and the building blocks the dev teams had to produce in response.
The dev teams also had to deal with complicated technical issues raised by the feasibility studies. There were a lot of disagreements during the talks, and in the end very few of them were about the needs. So we handled these topics separately and focused our discussions on the needs.
Here is an example of a technical subject: in a museum, robots are elevated and placed behind a glass pane, so we cannot rely on the robot’s sensors. He won’t see people with his lasers or his sonars, and his cameras won’t work well because of the glass.
So they suggested a LIDAR[1], since SICK is a partner of the exhibition, but we had no previous experience with it. A LIDAR is a device that emits a laser and receives its echo back, which helps determine the distance of an object. However, as there is a lot of data to analyze - up to 10 000 distances per second - we needed a computer between the LIDAR and the robot to process the information and tell the robot things like: “In a given zone, there is someone, or there isn’t”, and even “There has been someone there for a long time, but they are not moving, so it must be an object, so there is no one”.

Please note that the computer and the OS (Windows) on which we developed the data-processing software were chosen by Universcience.
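
To make the logic concrete, here is a minimal sketch of the kind of per-zone filtering described above. The timeout, tolerance and zone handling are illustrative assumptions, not the production Windows software.

    import time

    STATIONARY_TIMEOUT = 30.0  # seconds before a static echo is treated as an object
    MOVEMENT_TOLERANCE = 0.05  # metres of variation still counted as "not moving"

    class ZonePresence(object):
        """Decides whether a LIDAR echo in a zone is a person or an object."""

        def __init__(self):
            self.last_distance = None
            self.still_since = None

        def update(self, distance, now):
            """Return True if a person is considered present in the zone."""
            if distance is None:           # no echo at all in the zone
                self.last_distance = None
                self.still_since = None
                return False
            moved = (self.last_distance is None
                     or abs(distance - self.last_distance) > MOVEMENT_TOLERANCE)
            self.last_distance = distance
            if moved:
                self.still_since = now     # the echo moved: it is someone
                return True
            if self.still_since is None:
                self.still_since = now
            # An echo that has not moved for a long time is probably an object.
            return (now - self.still_since) <= STATIONARY_TIMEOUT

    zone = ZonePresence()
    print(zone.update(1.8, time.time()))   # a new echo appears -> True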

The final result is very impressive, with very high reliability: the robot properly detects people and ignores objects.

Photo-reportage of ROBOTS, the new permanent exhibition of the Cité des Sciences, La Villette, Paris
The proximity of other installations, such as a video playing on a loop, and the distance from the visitors made for a challenging environment for the 'Pepper Portrait' installation.

Did you face any other technical issues?
Another key challenge was to make interactions between the visitors and NAO possible: they had to be able to talk to him and he had to be able to answer. However, the environment was very noisy, and the robot was far from the public and protected in an enclosure, which made things more complicated. We suggested using an external microphone; the model was imposed by Universcience: it is the one used by Jean-Jacques Bourdin for his interviews on the BFM TV channel.
The microphone is connected to a mixing board, which is connected to a bluetooth transmitter, overriding the sound coming from the robot’s own microphones.
It wasn’t simple: we had to ask SBRE’s engineers how to cut off the microphones and replace them with another source. It is neither trivial nor stable; we even have to restart the robot to reconnect the bluetooth transmitter if it gets disconnected.

Photo-reportage of ROBOTS, the new permanent exhibition of the Cité des Sciences, La Villette, Paris
A vintage-looking yet efficient microphone increases the confidence level of what NAO hears at the documentation corner.

We were surprised by the results. The confidence level of the vocal recognition consistently comes close to 90%, without errors or substitutions. By improving the quality of the incoming audio stream, which we always send to Nuance Remote[2], we improved the robot’s understanding. We did not change the whole chain, only the sound-acquisition part.
We had never deployed an external microphone in production before, and it is a real success. I recommend it for any use of robots in a noisy environment.
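
For readers who want to observe recognition confidence themselves, here is an illustrative snippet using the standard WordRecognized event of NAOqi's ALSpeechRecognition. The exhibition's chain goes through ALDialog and Nuance Remote; the vocabulary and robot URL below are examples.

    import qi

    app = qi.Application(url="tcp://127.0.0.1:9559")
    app.start()
    memory = app.session.service("ALMemory")
    asr = app.session.service("ALSpeechRecognition")

    asr.setLanguage("English")
    asr.setVocabulary(["hello", "goodbye"], False)  # example vocabulary

    def on_word(value):
        # "WordRecognized" alternates phrases and confidences: [phrase, conf, ...]
        print("heard %r with confidence %.2f" % (value[0], value[1]))

    subscriber = memory.subscriber("WordRecognized")
    subscriber.signal.connect(on_word)
    asr.subscribe("confidence_probe")  # start the recognition engine
    app.run()                          # process events until interrupted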

We shared this experience (...) to handle future events and demonstrations better.

Do you think that this experience is replicable?
We shared this experience internally to apply this method and handle future events and demonstrations better.

What other exhibition-specific issues did you face?
Robots are in closed spaces and there are a lot of them in this exhibition: at least 20 humanoid, industrial or housekeeping robots. The teams at the Cité des Sciences can’t come and turn them off every night and turn them back on every morning, so it has to be automated. And since our robots cannot turn themselves on, they must stay powered 24/7: they go to Sleep mode when the exhibition closes and start up again when it opens.

“The robots never need to be handled. Zero extra burden!”

The Cité des Sciences has a computerized time management system, the GTC (Gestion Technique Centralisée, or Centralized Technical Management). The opening hours, as well as special events, are saved in an interface. When they restart, the robots connect to this system to know their operating hours.
The robots never need to be handled. Zero extra burden!
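
A minimal sketch of that sleep/wake cycle, assuming the GTC publishes the opening hours as JSON at a known URL (the endpoint and format below are hypothetical); ALMotion's wakeUp and rest calls are the documented NAOqi API.

    import datetime
    import json
    import urllib2  # NAOqi 2.5 robots run Python 2.7
    import qi

    # Hypothetical GTC endpoint returning e.g. {"open": 10, "close": 18}.
    HOURS_URL = "http://gtc.example.org/opening-hours"

    def fetch_opening_hours():
        data = json.load(urllib2.urlopen(HOURS_URL))
        return data["open"], data["close"]

    app = qi.Application(url="tcp://127.0.0.1:9559")
    app.start()
    motion = app.session.service("ALMotion")

    open_h, close_h = fetch_opening_hours()
    if open_h <= datetime.datetime.now().hour < close_h:
        motion.wakeUp()  # stiffen the motors: the exhibition is open
    else:
        motion.rest()    # relax the motors: "Sleep mode" until next opening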

Did you have to adapt to the exhibition’s scenography?
The scenography of the exhibition forced us to properly integrate all the hardware required for the robots to function.
The charger, the LIDAR, the computer, the control unit, the microphone, the mixing board, the amplifiers, the loudspeakers, the bluetooth transceivers, the corresponding cables: none of this hardware could be visible. The scenographers organized spaces for us to place all of these items in.
We had to list them, purchase them, configure them and make them usable, so that, for example, if a mixing board dies, the museum teams can replace it with the proper settings. It is not easy, as it is not only about the robot: it’s a lot of planning and logistics.

Did the museum have a lot of specific demands for the robots’ behaviour?
The mediators and the curators in charge of designing the exhibition wanted an interaction based on the emotions perceived through the behavioural and verbal signals of the interlocutor, as with the Berenson robot.
We first suggested Pepper as a platform to integrate that behavioural and vocal perception algorithm, but the results weren’t satisfying, so we asked the team in charge of interaction and expressivity at SBRE. They pointed us towards our own perception module, which interprets data from the Okao[3] module, which in turn analyzes the images coming from the camera.
The robot perceives the visitor’s emotions by analyzing their facial expressions. He recognises a range of emotions going from joy to anger, as well as surprise and sadness, to which we added a neutral expression.
We tested it and were pleasantly surprised by this module’s accuracy: the confidence level on these 5 emotions is very high. We can follow the variations from one emotion to another and show them graphically, in real time! From there, we were able to design a fun and interactive application.
We added another feature: if the robot captures an extreme emotion (the trigger threshold is customizable), he displays the corresponding image in high definition, with the associated vocal feedback and animation.
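
As an illustration, here is a minimal sketch reading the five expression scores that NAOqi 2.5's documented ALFaceCharacteristics service (built on the Okao module) publishes in ALMemory. The robot URL and the 0.7 threshold are placeholders, not the production values.

    import qi

    app = qi.Application(url="tcp://127.0.0.1:9559")
    app.start()
    memory = app.session.service("ALMemory")
    face_char = app.session.service("ALFaceCharacteristics")

    EMOTIONS = ["neutral", "joy", "surprise", "anger", "sadness"]
    THRESHOLD = 0.7  # illustrative "extreme emotion" trigger

    # Requires ALPeoplePerception to be running (it is by default).
    people = memory.getData("PeoplePerception/PeopleList")
    if people:
        pid = people[0]
        if face_char.analyzeFaceCharacteristics(pid):
            # Scores come back in the order: neutral, happy, surprised, angry, sad.
            scores = memory.getData(
                "PeoplePerception/Person/%d/ExpressionProperties" % pid)
            best = max(range(len(EMOTIONS)), key=lambda i: scores[i])
            print("%s (confidence %.2f)" % (EMOTIONS[best], scores[best]))
            if scores[best] >= THRESHOLD:
                pass  # show the high-definition image + vocal feedback here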

“NAO’s motor skills when he falls and gets back up captivate people.”

Anything else to add?
NAO used to fall a lot. At first, we thought we would have to keep his feet stuck to the floor, and we carried out feasibility studies with a NAO whose upper body could move while his legs were immobilized.
But in the end, NAO can fall and get back up in front of the visitors. It is one of NAO’s great abilities, and it’s spectacular: his motor skills when he falls and gets back up captivate people. So we switched the priorities: a fixed NAO became plan B, and plan A consisted in making sure he didn’t fall too often.

NAO and Pepper differ from the other robots of the exhibition thanks to the richness of their interactions with the public. How did you handle this?
For the Pepper Portrait app [editor’s note: Pepper introduces Rabbit, the research robot of the CNRS, and Keecker, the entertainment robot], the robot has to identify the language spoken by the person facing him, without using voice or face recognition. We had started with a cube to handle, then flags to show the robot, or signs to point at… in the end we found the solution thanks to Universcience and their knowledge of mediation devices. The museographers and scenographers came up with a simple system based on cards with flags on the public’s side and an ArUco code facing the robot, who reads it and interprets it as a language choice[4]. The robot can then give his presentation in French, English or Spanish.

Photo-reportage of ROBOTS, the new permanent exhibition of the Cité des Sciences, La Villette, Paris
With 'Pepper Portrait', the visitor chooses a language by pulling up a flag card showing an ArUco marker, which the robot reads and processes to set the chosen language.
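
The article does not detail the marker-reading code, but a minimal sketch could combine NAOqi's ALVideoDevice with OpenCV's aruco module; the marker-ID-to-language mapping below is hypothetical.

    import numpy as np
    import cv2
    import cv2.aruco as aruco
    import qi

    # Hypothetical mapping between marker IDs and installed languages.
    LANGUAGES = {1: "French", 2: "English", 3: "Spanish"}

    app = qi.Application(url="tcp://127.0.0.1:9559")
    app.start()
    video = app.session.service("ALVideoDevice")
    dialog = app.session.service("ALDialog")

    # Top camera (0), VGA resolution (2), BGR colour space (13), 10 fps.
    handle = video.subscribeCamera("aruco_lang", 0, 2, 13, 10)
    try:
        frame = video.getImageRemote(handle)
        width, height, raw = frame[0], frame[1], frame[6]
        img = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = aruco.detectMarkers(
            gray, aruco.getPredefinedDictionary(aruco.DICT_4X4_50))
        if ids is not None and int(ids[0][0]) in LANGUAGES:
            dialog.setLanguage(LANGUAGES[int(ids[0][0])])
    finally:
        video.unsubscribe(handle)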

Is the robot still mobile in the meantime? Can he move?
NAO doesn’t stand still! He tends to slip and turn, in addition to falling… We placed a marker so that he faces the public: NAO locates it and corrects his position regularly. Without this, he would end up showing his back or his profile to the public, which makes for a very unpleasant experience. This is also replicable, and should even be compulsory.
We used an existing Pepper application as a basis. It’s called Proactive Mobility for NAOqi 2.5[5]. With this app, Pepper can detect people, get closer to them and return to the marker when the interaction is finished.
The adaptation we made here is the ability to define the ArUco marker’s ID and size, as well as the robot’s orientation and distance in relation to it. After several iterations, we found the perfect combination for this environment: the robot repositions himself regularly at 180° and 40 cm away from the 10 cm, ID 128 marker.
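
As an illustration of the repositioning step, assuming the application has already estimated the robot's current distance and bearing to the marker from the ArUco detection (that estimation is not shown), a relative ALMotion.moveTo can correct the drift; the geometry below is simplified.

    import math
    import qi

    TARGET_DISTANCE = 0.40    # 40 cm from the marker, as quoted above
    TARGET_HEADING = math.pi  # 180 degrees: back to the marker, facing the public

    app = qi.Application(url="tcp://127.0.0.1:9559")
    app.start()
    motion = app.session.service("ALMotion")

    def reposition(marker_distance, marker_heading):
        """Correct drift, given the distance (m) and bearing (rad) to the marker."""
        dx = marker_distance - TARGET_DISTANCE   # move forward/backward
        dtheta = TARGET_HEADING - marker_heading # rotate back to the reference
        motion.moveTo(dx, 0.0, dtheta)           # relative move in the robot frame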

“We have good products to answer these needs (...). In the end we didn’t really have any blocking points.”

Do you think that the difficulties encountered while working for the museum have brought to light specific issues of humanoid robotics?
Museums mean glass panes, elevated displays, lights, noise, a lot of people… We have good products to answer the needs of this environment, and in the end we didn’t really meet any blocking points: we always found a solution, either by simplifying or by bypassing. For example, they wanted the robots to turn on and off automatically. Well, no: our robots stay powered at all times and go to sleep mode instead of turning off. They wanted the robots to be able to hear properly? Well, our robots can’t hear properly when there is too much noise, too many people, or when people are too far away, so we added an external microphone.

We are here to tell them what can or cannot be done, but we are not going to tell them what to do; that is the museographer’s job. These people are not beginners: they have been doing this job for 20 years and are used to many audiences, from children and adults to specific audiences like schools. They know which types of devices will work, depending also on the size of the crowd. They know their audience very well and want to create a nice experience. Our job is to answer in terms of feasibility, and sometimes to slightly tone down their wishes by proposing downgraded modes. For instance, the microphone is not omnidirectional and ideally picks up people between 100 and 160 cm tall, so shorter children must step on a table to speak into the microphone and be heard.

They also wanted NAO’s voice to be emitted through speakers, but we had an issue with the bluetooth receivers: we have never been able to make the system stable enough to emit the audio stream reliably. We can receive it but not broadcast it. In the end it wasn’t necessary, as NAO is in a favourable acoustic environment: the cylindrical shape of the booth acts as a resonance box in which the sound reverberates.

For Pepper, however, we initially didn’t plan to broadcast his voice through speakers, but we noticed from the very first days of the exhibition that this didn’t work: there was too much noise, and even at maximum volume with the latest version of the OS, the audience couldn’t hear anything. So we put a lapel microphone near one of his loudspeakers and connected it to a mixing board to process the imperfect sound, then fed it into an amplifier and a speaker. Now it’s perfect: we can hear what the robot says even with a lot of noise around.

Were there any specific developments for the robots’ behaviour?
We developed 3 applications: Robots-portraits, Artificial Emotions (Emotions artificielles) and the robotic show hosted by NAO. They are simple behaviours which must work every day for 5 years. Following the logic of the museum, and to adjust to the influx of visitors in an exhibition of that size, we decided that an interaction shouldn’t last too long and should even encourage the audience to move on.

Robot-portrait, what is a robot made of?

About: What is a robot made of, and how does it work? Whether they are research, entertainment or production tools, robots have a lot in common: a set of sensors that perceive the environment, actuators and effectors to interact with it, a computing unit to coordinate everything, and a power supply. Seen like that, the vacuum cleaner and the humanoid are part of the same family.
Synopsis: Pepper addresses the audience as a whole to explain how three different robots work: Rabbit, the research robot (CNRS), Keecker, the public-service robot (Keecker), and himself. He describes the use for which he was conceived, then shows and explains the different sensors and actuators he is equipped with, how he works, and how his components are organized.
Museographic design: The robots are protected from visitors by a glass enclosure and are placed on a stage.

The Robot-portrait application state machine diagram
A state machine diagram shows the behaviour of the 'Robots-portraits' application.

Artificial emotions

About: If humans and robots share the same world, what uses can we imagine for robots that are able to express emotions and to perceive the emotions expressed by others? Roboticists have made a lot of progress in recent years, especially thanks to developments in the field of learning, when it comes to capturing and interpreting the verbal and behavioural signals that indicate the emotional state of the person they are talking to.
Synopsis: The visitor is invited to sit alone in front of Pepper, who is minimally animated and protected by a glass enclosure. Pepper is equipped with an emotional communication program: he perceives the emotions of his interlocutors by analyzing their facial expressions. The visitor is invited to look at Pepper, who sends back his emotional analysis: “I perceive joy/sadness/anger/surprise/a neutral state”. He expresses himself by speaking and displaying emoticons on his tablet.
The visitor’s face must be evenly lit. The time during which the visitor has to pose is delimited by a starting signal and an ending signal, typically a short sentence such as “Your ears look so nice!”. Silence is needed while Pepper analyzes the data and displays the result.

The 'Pepper Emotion' application state machine diagram

Museographic design: The robot is protected from visitors by a glass enclosure and a guardrail. Removable seats are available for people with reduced mobility.

The robotic exhibition hosted by NAO

NAO welcomes visitors and introduces the various elements of the exhibition.

About: Museographic facility that deals with current topics and social activities in the field of robotics: news, jobs, culture, hobbies.
Synopsis: A NAO robot, the mascot of the place, welcomes the visitors and explains what they will find in the exhibition.
Seating areas can be found in the exhibition, as well as reading desks offering different activities (such as a visual game, “Find the robots”, graphic compositions about jobs in the field of robotics, upcoming training courses, reference books, etc.).
Museographic design: The robot is protected from visitors by a glass enclosure and is placed on a stage.

Are these applications installed on the robots?
Each robot has his own configuration, meaning he belongs to a group (of robots in the ADE[6] - editor’s note) and to user profiles which give access to applications and preferences.
Any Pepper can become Pepper-portrait when given the right configuration. But configuration management is not that simple, because our robots require a large number of applications in order to function and to be autonomous: for time management on one hand, but also to access the configuration menu for error handling, temperature management, standard dialogs, language packages, etc.
There are also preferences. “Pepper Portrait”, for instance, has 8 preferences: the timing between two animations if no one is around, how long he waits before stopping the application if the person leaves, the detection time for the language choice, the default language, the hostname of the computer providing the LIDAR data, the webpage where the opening hours can be found… Everything can be configured, and this was often used to adjust the robot’s behaviour to the environment during the launch phase.
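
On NAOqi 2.5, such preferences are typically read through the documented ALPreferenceManager service; the domain and key names below are illustrative placeholders, not the production ones.

    import qi

    app = qi.Application(url="tcp://127.0.0.1:9559")
    app.start()
    prefs = app.session.service("ALPreferenceManager")

    # Domain and key names are illustrative placeholders.
    DOMAIN = "com.example.pepper-portrait"
    idle_delay   = prefs.getValue(DOMAIN, "idleAnimationDelay")
    default_lang = prefs.getValue(DOMAIN, "defaultLanguage")
    lidar_host   = prefs.getValue(DOMAIN, "lidarHostname")
    hours_url    = prefs.getValue(DOMAIN, "openingHoursUrl")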

The same is true for Pepper emotion, who has his own preferences, like the emotion detection threshold, which we adjusted by testing; and even more so for NAO, with dialog type counters, the reorientation system, his behaviour at startup depending on his initial posture...

There is, of course, a B2B Basic Channel[7] for NAO, which has been deeply reworked. We added to it a broad, generic Q&A and contextual dialogs specific to the Cité des Sciences et de l’Industrie, which we are constantly improving based on the data collected since the exhibition opened to the public.
The precision of the robot’s answers varies: he can answer a question precisely, more approximately if he didn’t hear all the elements of the question, or completely randomly if he didn’t understand the question at all. But he will always answer something. For example, the robot doesn’t know how to multiply, so if you ask him “How much is 2x3?” he will answer with a joke like “Ask your calculator, it knows more than me”. If you ask him whether he prefers a Maserati or a Ferrari, the robot can say he doesn’t like cars, or answer even more vaguely with “I don’t have a preference”. This strategy allows us to cover 90% of the questions in a relevant or funny way.
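
This layered strategy maps naturally onto qichat rules loaded through ALDialog; the topic below is an illustrative paraphrase of the examples quoted above, not the production content.

    import qi

    # Illustrative qichat topic: one precise rule, one approximate rule,
    # and a catch-all so the robot always answers something.
    TOPIC = r"""topic: ~fallback_demo()
    language: enu
    u:(how much is 2 ["times" "x"] 3) Ask your calculator, it knows more than me!
    u:(["Maserati" "Ferrari"]) I don't really like cars.
    u:(e:Dialog/NotUnderstood) I don't have a preference.
    """

    app = qi.Application(url="tcp://127.0.0.1:9559")
    app.start()
    dialog = app.session.service("ALDialog")

    topic_name = dialog.loadTopicContent(TOPIC)  # returns the topic's name
    dialog.activateTopic(topic_name)
    dialog.subscribe("fallback_demo_app")        # start the dialog engine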

Photo-reportage of ROBOTS, the new permanent exhibition of the Cité des Sciences, La Villette, Paris
Billboard of the ROBOTS exhibition of the Cité des Sciences, La Villette, Paris

Notes

1. LIDAR (light detection and ranging), such as the SICK 2D-LiDAR TiM561

2. The robot module Dialog first processes the human input and sends a request to Nuance Remote, a remote ASR (Automatic Speech Recognition) service provided by the company Nuance.

3. Omron's Okao technology is used in mobile phones for face recognition.

4. Pepper usually has several languages installed. For further details, see: Getting the list of installed languages.
He can easily switch from one language to another. For further details, see: Temporarily switching to another language.

5. Proactive Mobility on Github-SBR Labs

6. The Application Distribution Engine (ADE) is an internal technical entry point that enables robot configuration operations: managing applications, installing applications on robots, assigning robots to users, and managing the access rights of users and robots to applications. Partners can access these features via Command Center.

7. A channel is a way to dynamically update applications on the robots. The Basic Channel is a cluster of applications (mainly dialog ones) allowing the user to enjoy a basic interaction with the robot (like answering basic questions, obeying basic commands or launching other applications).
Dialog Q&A is a collection of sentences, phrases and words that allows the robot to give a generic answer when no specific rule has been matched in the Basic Channel.
Some specific dialog content has been added so that NAO can answer questions about the exhibition and the other robots.

Related Content

Reportage of the ROBOTS exhibition

A realistic approach to robotic behaviour development
Questions to ask when designing, prototyping and producing robotic applications for a workable result. In this lesson, we present the Robot Application Development process from a lead programmer's perspective: we review the necessary first steps and the preliminary questions to ask in order to get a good idea of both the robotic ecosystem and the business concept, so you can handle them realistically before starting the development of the application itself.

Christophe DUPOUY
Project Manager
Clara BAILLEHACHE
Dev C Editorial team member

ROBOTS, An exhibition at the Cité des Sciences et de l'Industrie

The ROBOTS exhibition starts on the 2nd of April 2019 and will last for 5 years.
Opening times: from Tuesday to Saturday 10.00 am - 6.00 pm, and 10.00 am - 7.00 pm on Sunday. Closed on Mondays and public holidays.
Access: Cité des Sciences et de l'Industrie - 30, avenue Corentin-Cariou - F-75019 Paris.
More details