Tutorials

We are proud to offer the following tutorials at the 2008 Robotics: Science and Systems Conference.

Please see below for a detailed description of the tutorials.


Skills Capture and Transfer

Duration: HALF DAY (morning)
Location: HG D3.3

Organizers: Carlo Alberto Avizzano, Emanuele Ruffaldi

Abstract: Multimodal systems integrate robotic devices with virtual- and mixed-environment technologies to interact with human beings. This tutorial on Skills Capture and Transfer aims at providing methods to analyze and transfer the skilled component of human activities using multimodal technologies. The topic ranges from the multimodal capture of a human task to the rendering of the skill using haptic interfaces and advanced visualization techniques. At the same time, the data acquired in real time is processed by machine learning algorithms to assess performance, comparing the user with an existing database of skilled users and producing as an outcome the stimuli required to improve the user's execution of the task. The tutorial introduces the topic by presenting the state of the art in multimodal capture technologies, the analysis techniques based on machine learning (dimensionality reduction, hidden Markov models and neural networks), and the descriptors for the evaluation of a skilled performance. The practical part of the tutorial is supported by examples in MATLAB, Simulink and additional Python libraries.

This tutorial is organized within the SKILLS IP.
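To make the analysis step concrete: one of the machine learning tools named above, the hidden Markov model, can score a captured gesture (quantized into a sequence of discrete symbols) against a skill model via the forward algorithm. The Python sketch below is a minimal, generic illustration of that idea; the two-state model, the observation alphabet and all probability values are illustrative assumptions, not material from the tutorial itself.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood log P(obs | model) of a discrete HMM (forward algorithm).

    obs   -- sequence of observation symbols (integers)
    start -- initial state probabilities, start[i]
    trans -- transition probabilities, trans[i][j] = P(state j | state i)
    emit  -- emission probabilities, emit[i][k] = P(symbol k | state i)
    """
    n = len(start)
    # Initialise the forward variables with the first observation.
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    log_p = 0.0
    for o in obs[1:]:
        alpha = [sum(alpha[j] * trans[j][i] for j in range(n)) * emit[i][o]
                 for i in range(n)]
        s = sum(alpha)           # rescale to avoid numerical underflow
        log_p += math.log(s)
        alpha = [a / s for a in alpha]
    return log_p + math.log(sum(alpha))

# Toy "skill model": two hidden states (e.g. smooth vs. hesitant motion)
# and two observation symbols from a quantized trajectory -- assumed values.
start = [0.8, 0.2]
trans = [[0.9, 0.1], [0.2, 0.8]]
emit  = [[0.7, 0.3], [0.1, 0.9]]
score = forward_log_likelihood([0, 0, 1, 0], start, trans, emit)
```

A user's recording can then be compared against models trained on skilled and unskilled performers by picking the model with the highest log-likelihood.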


Tutorial on Integration of Vision and Inertial Sensors

Duration: HALF DAY (morning)
Location: HG D3.1

Organizers: Jorge Dias, Jorge Lobo

Abstract: Inertial sensors coupled to cameras can provide valuable data about camera ego-motion and about how world features are expected to be oriented. Object recognition and tracking benefit from both static and dynamic inertial information. In human vision, several tasks rely on the inertial data provided by the vestibular system; artificial systems should exploit this sensor fusion as well. In this tutorial we first present studies on visuo-vestibular interactions in humans, providing a biological motivation. The complementary information between the inertial and visual sensing modalities for robotic applications will then be presented and discussed. Starting from this biological perspective, fundamental approaches for fusing these sensing data in robotic systems will be presented, together with a survey of recent work in the field.
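A classic, simple way to exploit the complementarity described above is a complementary filter: gyroscope integration gives a smooth, high-rate but drifting angle estimate, while an absolute reference (the accelerometer's gravity direction, or vertical scene features detected by the camera) is noisy but drift-free. The single-axis Python sketch below is a generic illustration of that fusion principle, not the specific methods presented in the tutorial; the blend factor `alpha` is an assumed, typical value.

```python
def complementary_step(angle, gyro_rate, abs_angle, dt, alpha=0.98):
    """One update of a single-axis complementary filter.

    angle     -- current fused angle estimate (rad)
    gyro_rate -- angular rate from the gyroscope (rad/s)
    abs_angle -- drift-free angle from gravity or visual verticals (rad)
    dt        -- sample period (s)
    alpha     -- blend factor: high-pass the gyro, low-pass the reference
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * abs_angle

# Static case: the gyro reads zero while the absolute reference reports a
# constant 0.1 rad tilt; the fused estimate converges towards the reference.
angle = 0.0
for _ in range(500):
    angle = complementary_step(angle, gyro_rate=0.0, abs_angle=0.1, dt=0.01)
```

In a moving system the gyro term tracks fast rotations between camera frames, while the slowly weighted absolute term removes the accumulated integration drift.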