An international partnership where SBRE puts forward Pepper to answer user-specific needs - How can technology address the worldwide challenge of ageing populations and the complexity of human connections? SBRE committed to the CARESSES program to integrate a system that plans the robot's actions according to a relevant cultural knowledge base.
CROWDBOT, a European collaborative project, enables robots to navigate freely and assist humans in crowded areas. - Today's mobile robots stop whenever a human, or any other obstacle, comes too close, in order to avoid impact. This prevents robots from entering packed areas and performing effectively in highly dynamic environments. CROWDBOT aims to fill the knowledge gap on close interactions between robots and humans in motion.
We made Pepper the robot play games of skill with AI (Artificial Intelligence) at SBRE - The SBRE AI Lab (Artificial Intelligence Laboratory) taught Pepper how to successfully throw a ball into a cup and a dart at a dartboard (dynamically, the two are the same problem) using dexterity and a bit of dynamical systems theory. Here is the story of what it takes to bring elementary games and robotics together.
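To see why the ball-in-a-cup throw and the dart throw are the same dynamic problem, consider the ballistic model both share: once released, the projectile follows the same equations of motion, and only the target position changes. The sketch below (not SBRE's actual controller; the function name and fixed-angle formulation are illustrative assumptions) solves the classic projectile equation for the release speed needed to hit a target at a given distance and height:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_speed(d, h, theta):
    """Release speed (m/s) needed to hit a target at horizontal
    distance d (m) and height h (m, relative to the release point),
    for a fixed launch angle theta (rad).

    Derived from the projectile equations x = v*cos(theta)*t and
    y = v*sin(theta)*t - g*t^2/2, eliminating t at x = d.
    The same model covers a ball aimed at a cup or a dart at a board.
    """
    denom = 2.0 * math.cos(theta) ** 2 * (d * math.tan(theta) - h)
    if denom <= 0:
        raise ValueError("target unreachable at this launch angle")
    return math.sqrt(G * d ** 2 / denom)
```

In practice the robot must also account for release timing and arm dynamics, which is where dexterity and dynamical systems theory come in; the closed-form speed above is only the outermost layer of the problem.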
A research project about Pepper and NAO immersive teleoperation - A new immersive teleoperation solution based on the Extreme Learning Machine (ELM), a machine learning technique, is introduced for the Pepper and NAO robots. Immersive teleoperation is a form of robot remote control that gives the operator the sensation of being inside the teleoperated environment, and even of being the robot itself. The solution is independent of other mapping approaches (e.g. inverse kinematics), and the operator's whole body is used to control the robot. Even with scarce training data, the solution delivers satisfactory results in both precision and computational speed.
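The key property of an ELM, which explains the good results with scarce training data, is that only the linear output layer is trained: the hidden layer's weights are random and fixed, so fitting reduces to a single regularized least-squares solve rather than iterative gradient descent. A minimal generic sketch (not the project's actual model; class name, regularization value, and toy dimensions are assumptions for illustration):

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: one hidden layer with fixed
    random weights; only the linear output layer is trained, in closed
    form via ridge-regularized least squares."""

    def __init__(self, n_in, n_hidden, n_out, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))  # fixed random input weights
        self.b = rng.normal(size=n_hidden)          # fixed random biases
        self.beta = np.zeros((n_hidden, n_out))     # trainable output weights
        self.reg = reg

    def _hidden(self, X):
        # random nonlinear feature expansion of the input
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y):
        H = self._hidden(X)
        # closed-form solve: beta = (H^T H + reg*I)^-1 H^T Y
        A = H.T @ H + self.reg * np.eye(H.shape[1])
        self.beta = np.linalg.solve(A, H.T @ Y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

In a teleoperation setting, `X` would hold operator body-pose features and `Y` the corresponding robot joint targets; because training is a single linear solve, the mapping can be (re)fitted quickly even from a short calibration session.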
A thesis on model learning - How can a real, complex robot such as NAO learn new skills from scratch, such as rolling over or sitting up, simply by interacting with its environment? Without access to a simulation, such learning would require at minimum weeks of consistent training using previous state-of-the-art methods. The training would also require a significant amount of assistance: someone would need to reset the robot between trials and carry out repairs to keep it functioning consistently. One of the PhD projects in the SBRE AI Lab investigated this question, and we are now able to present an original framework that makes this type of control learnable very quickly.
Learning algorithms generally need significant prior information about the task they are attempting to solve. This requirement limits their flexibility and forces engineers to provide appropriate priors at design time. A question then arises naturally: how can we reduce the amount of prior task information the algorithm needs? Answering this question could spark the development of algorithms with greater generalization potential, all while reducing preliminary engineering effort. If we want to discard all a priori knowledge about the task, the learning algorithm must efficiently explore and represent the space of possible outcomes it can achieve.
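One common family of methods for exploring the space of achievable outcomes without task-specific priors is goal-directed exploration (in the spirit of goal babbling): sample a random goal in outcome space, reuse the stored policy whose past outcome is closest, perturb it, and record what it actually achieves. The sketch below is a generic illustration of that idea, not the specific algorithm developed here; the function names, the toy environment, and the [-1, 1] outcome bounds are all assumptions:

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two outcome vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def explore(env, n_iters=200, param_dim=2, noise=0.1, seed=0):
    """Minimal goal-directed exploration sketch.

    env: maps policy parameters -> observed outcome (a 2-D point here).
    The only prior given is the bounding box of the outcome space;
    no reward function or task description is needed.
    """
    rng = random.Random(seed)
    archive = []  # (params, outcome) pairs discovered so far
    # bootstrap with one random policy
    params = [rng.uniform(-1, 1) for _ in range(param_dim)]
    archive.append((params, env(params)))
    for _ in range(n_iters):
        # sample a goal anywhere in outcome space
        goal = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        # reuse the stored policy whose outcome is closest to the goal
        base, _ = min(archive, key=lambda po: dist(po[1], goal))
        # perturb it and record what the perturbed policy achieves
        cand = [p + rng.gauss(0, noise) for p in base]
        archive.append((cand, env(cand)))
    return archive
```

Because goals are drawn over the whole outcome space rather than a single target, the archive tends to spread out and cover what the system can achieve, which is exactly the kind of outcome-space representation the paragraph above calls for.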