We made Pepper the robot play games of skill with AI at SBRE - The SBRE AI Lab (Artificial Intelligence Laboratory) taught Pepper how to successfully throw a ball into a cup and a dart at a dartboard (two instances of exactly the same dynamic problem) using dexterity and a bit of dynamical systems theory. Here is the story of what it takes to bring elementary games and robotics together.
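Both games reduce to the same ballistic targeting problem: pick a release speed and angle so a projectile lands at a given distance. A minimal sketch of that shared model, assuming a point-mass projectile with no air drag and a level launch (the function names and the example distances are illustrative, not the Lab's actual controller):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def release_speed(distance, angle_deg, g=G):
    """Speed needed for a point-mass projectile launched at angle_deg
    (degrees above horizontal) to land `distance` metres away at the
    same height, ignoring air drag."""
    angle = math.radians(angle_deg)
    # Range formula R = v^2 * sin(2*theta) / g, solved for v.
    return math.sqrt(distance * g / math.sin(2 * angle))


def landing_distance(speed, angle_deg, g=G):
    """Inverse check: range of a projectile launched at `speed` m/s."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g


# Same model, two targets: a nearby cup and a farther dartboard
# (example distances only).
v_cup = release_speed(1.5, 45.0)
v_dart = release_speed(2.37, 45.0)
```

Whether the target is a cup or a dartboard, only the numbers change, which is why the two games are the same problem to the robot.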
Experimenting with a flexible and efficient Android animation library on Pepper’s tablet - Light and flexible, Lottie is a popular mobile library for rendering 2D animations on Android devices such as Pepper’s tablet. Pepper is not just a voice user interface: tablet animations are very helpful in improving the efficiency of interactions with the robot. At SoftBank Robotics we have already tried out Lottie, and here is what we found.
CROWDBOT, a European collaborative project, enables robots to freely navigate and assist humans in crowded areas - Today’s mobile robots stop when a human, or any obstacle, gets too close, to avoid impact. This prevents robots from entering packed areas and performing effectively in highly dynamic environments. CROWDBOT aims to fill this gap in our knowledge of close interactions between robots and humans in motion.
How to reduce Android APK size to optimize deployment on a Pepper QiSDK robot - As a developer, you may want to speed up the build and deployment of an application on Pepper from Android Studio. Here is a useful trick to reduce the size of the application package (APK) and therefore shorten installation and deployment time.
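One common way to shrink an APK with the Android Gradle plugin is to enable code and resource shrinking and ship native libraries for a single ABI only. This is a generic sketch, not necessarily the exact trick the article describes, and the ABI name below is a placeholder to verify against your target tablet:

```groovy
android {
    buildTypes {
        release {
            // Strip unused code and resources from the package.
            minifyEnabled true
            shrinkResources true
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                          'proguard-rules.pro'
        }
    }
    splits {
        abi {
            // Ship native libraries for one ABI only.
            // 'armeabi-v7a' is a placeholder: check the ABI of your
            // target device before relying on it.
            enable true
            reset()
            include 'armeabi-v7a'
            universalApk false
        }
    }
}
```

A smaller APK transfers faster over the network to the robot, which is where most of the deploy time usually goes.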
How to use the contextual shortcut Alt+Enter or Option+Enter - Among its many keyboard shortcuts, Android Studio provides a very useful context-aware shortcut to trigger intention actions: Alt+Enter (Windows/Linux) or Option+Enter (Mac).
A thesis on model learning - How can a real, complex robot such as NAO learn new skills from scratch, such as rolling around or sitting up, simply by interacting with its environment? Without access to a simulation, previous state-of-the-art methods would require at least weeks of consistent training to achieve such learning, along with a significant amount of human assistance: someone would need to reset the robot between trials and carry out repairs to keep it working reliably. One of the PhD projects at the SBRE AI Lab investigated this question, and we can now present an original framework that achieves this type of control very quickly.