#1 Load-dependent sharing of attention and motor resources of the iCub robot using parallel anticipations
Everyday interactions between humans involve several real-world objects that must be attended to simultaneously. Moreover, most interactions involve multiple coordinated motor actions executed in parallel. Such rich natural interactions demand the ability to share perceptual, cognitive and motor resources in a task- and load-dependent fashion. Even though anticipations of future sensory events are thought to be relevant for sharing limited resources, how they affect perception itself is still the subject of ongoing research. We have developed a psychophysical paradigm demonstrating that humans employ parallel, cognitive-load-dependent anticipations at different levels of perception. Anticipating future sensory stimuli thus appears to be a useful strategy for sharing limited perceptual and motor resources. We provide a mathematical model capable of explaining such parallel anticipations for sharing limited resources.
In this project, we employ this model on the iCub humanoid robot with the goal of enabling the robot to maintain rich interactions with its human counterparts. The cognitive resources of the robot are usually easy to scale by adding more nodes to the cluster the robot runs on, whereas its perceptual and motor resources are more limited. Using our model should enable the robot to share its perceptual and motor resources in multiple-goal interactions with humans.
#1a iCub Reactable-DJ:
The basic idea of this project is to integrate the newly acquired tangible table by building yarp modules on top of the table API, so that the robot and the human can interact with each other to create music, with both participating in the creation process. They will have to choose objects representing instruments and place them on the table to create and modify the music played by it, according to the objects and their positions.
The jam session is a cooperative interactive game which will later be enhanced with emotion detection, feature detection, etc.
1) develop yarp modules for the reactable so that information about which cube is where on the table is made available to the robot
2) use the iCub simulator to develop a virtual representation of the reactable and online updating of the cubes placed on the real table
3) use the simulation to make the robot manipulate the reactable cubes and make first tests with the real robot
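The data that step 1 needs to stream can be sketched as a cube-state record plus a flat text encoding that a yarp port could carry. This is a minimal illustration only: the struct fields, the encoding, and any port name (e.g. /reactable/cubes:o) are assumptions, not the actual Reactable API, whose calls are omitted here.

```cpp
// Minimal sketch of the cube state a reactable yarp module could publish.
// Field layout and text encoding are illustrative assumptions.
#include <sstream>
#include <string>
#include <vector>

struct CubeState {
    int id;        // fiducial id of the tangible object
    double x, y;   // position on the table (normalized coordinates assumed)
    double angle;  // rotation of the cube in radians
};

// Serialize all cubes into one whitespace-separated message.
std::string encodeCubes(const std::vector<CubeState>& cubes) {
    std::ostringstream os;
    for (const CubeState& c : cubes)
        os << c.id << ' ' << c.x << ' ' << c.y << ' ' << c.angle << ' ';
    return os.str();
}

// Parse such a message back into cube states (e.g. on the simulator side).
std::vector<CubeState> decodeCubes(const std::string& msg) {
    std::istringstream is(msg);
    std::vector<CubeState> cubes;
    CubeState c;
    while (is >> c.id >> c.x >> c.y >> c.angle)
        cubes.push_back(c);
    return cubes;
}
```

A round trip of this encoding is what would keep the simulated reactable of step 2 in sync with the real table.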
Requisites: C++ knowledge, some knowledge of linear algebra; knowledge of the OpenCV image processing library would be useful
Maximum 2 students per task (total 6 students)
Supervisors: Martin Inderbitzin, Zenon Mathews, Andre Luvizotto, Marti Sanchez (reactable API)
#1b Hockey demo (max 2 students)
The robot takes the role of a hockey goalkeeper who has to deal with multiple balls thrown at the goal at the same time. This task involves sharing gaze resources to detect multiple moving balls, anticipating their trajectories and triggering multiple motor actions so that as many balls as possible can be stopped before they enter the goal.
Our task can be easily scaled by adding more perceptual, cognitive or motor load, making the interaction scenario even richer. Moreover, as we are using a general model, the capabilities for sharing limited resources are readily available for other interaction scenarios.
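The anticipation step can be illustrated with a minimal sketch: given two timestamped observations of a ball, a constant-velocity extrapolation predicts where it will cross the goal line. The coordinate frame, the constant-velocity assumption, and all names here are illustrative, not the project's actual tracker interface.

```cpp
// Sketch: anticipate where a ball crosses the goal line (x = 0) from two
// observations, assuming constant velocity. Frame and names are illustrative.
#include <cmath>

struct Obs { double t, x, y; };  // time [s], position [m]

// Predicted y at x = 0, or NAN if the ball is not approaching the goal.
double interceptY(const Obs& a, const Obs& b) {
    double dt = b.t - a.t;
    double vx = (b.x - a.x) / dt;
    double vy = (b.y - a.y) / dt;
    if (vx >= 0.0) return NAN;   // moving away from (or parallel to) the goal
    double tHit = -b.x / vx;     // time until the goal line is reached
    return b.y + vy * tHit;
}
```

With several balls in flight, such predicted intercepts (and times-to-arrival) are what a resource-sharing scheme would rank to decide where to look and which motor action to trigger first.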
Requisites: C++ knowledge. Responsible: Zenon Mathews, Martin Inderbitzin
#2 Unlearn Learned Paralysis: implementation of a bimanual task involving 3D trajectories (analog buzz game).
Stroke represents one of the main causes of disability in adults and will be one of the main contributors to the burden of disease in 2030. A common consequence of stroke is hemiparesis: weakness and limited mobility of the upper-body extremities on either the left or right side. To a certain degree, hemiparesis can be seen as a learned paralysis caused by the inability to move the upper body during the period immediately after the stroke, due to swelling and edema of white matter. While the neural pathways needed to perform movements are actually intact, the nervous system of the hemiparetic patient has learned that it cannot perform some movements and refrains from using the affected arm. Recently, standard rehabilitation procedures have been augmented with new interactive technology. The use of virtual reality (VR) offers a series of advantages over conventional therapy. In particular, VR allows restoring the sensorimotor contingencies by letting the patient see the paretic arm respond to intended movements. In this project we will test whether seeing the affected arm moving correctly can counter learned disuse. We will implement a bimanual task in which we can influence the perceived capability of the left and right arm separately. We will test healthy subjects to assess whether facilitating the task for the non-dominant arm can shift its perceived capability. Further, we will test whether this learned capability also transfers to a real-world task.
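One way to realize the per-arm facilitation described above can be sketched as follows (in C++ for brevity; the actual task would be a Unity/C# script). Showing the virtual hand partway between the tracked position and the task target is one assumed implementation of "facilitation", and the factor k is illustrative, not the project's design.

```cpp
// Sketch of per-arm facilitation: the displayed (virtual) hand is blended
// between the tracked hand position and the task target. k = 0 shows the
// raw movement; k = 1 would show a perfect reach. Applying different k to
// the left and right arm manipulates each arm's perceived capability.
struct Vec3 { double x, y, z; };

Vec3 facilitated(const Vec3& tracked, const Vec3& target, double k) {
    return { tracked.x + k * (target.x - tracked.x),
             tracked.y + k * (target.y - tracked.y),
             tracked.z + k * (target.z - tracked.z) };
}
```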
Requisites: Basic programming skills (we will use Unity/C# scripting, which can be learned easily on the job)
Number of students: 2-4
Responsible: Belen Rubio, Jens Nirme, Armin Duff
#3 Kinect delay adaptation. General or specific?
The Kinect is the new controller for the Xbox and it has a noticeable lag (about 160 ms). The human player has to adapt to this delay and time his/her actions ahead of their intended execution. While this might seem a drawback for the recreational user, it represents an opportunity for experimentation: the Kinect allows the player to control an avatar using all of his/her limbs and head; therefore, if we play a game such as beach volleyball, long jump or hurdling, we are adding a general delay to our entire motor control system.
We aim to investigate the extent and nature of any adaptation occurring after prolonged exposure to the Kinect. If such a general adaptation occurs, we want to determine whether it leaves a measurable trace in motor actions not performed with the Kinect controller. Any positive result will shed light on the versatility of our brain as a motor controller.
For this project we will employ paradigms from experimental psychology, developing simple tasks in which the subject has to perform explicit timing operations, in addition to using the commercial games provided with the Xbox Kinect.
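The compensation the player must learn can be illustrated numerically: acting on time despite the lag amounts to extrapolating the delayed observation forward by the lag duration. The 160 ms figure is taken from the text above; the linear extrapolation and all names are illustrative assumptions, not a claim about how the brain (or the Xbox) actually compensates.

```cpp
// Sketch: estimate the current (unobserved) position of a tracked limb
// coordinate from the last two delayed samples, by extrapolating linearly
// over the sensor lag. The lag value comes from the project description.
const double KINECT_LAG = 0.160;  // seconds (approximate, from the text)

double compensate(double prevPos, double prevT, double lastPos, double lastT) {
    double v = (lastPos - prevPos) / (lastT - prevT);  // estimated velocity
    return lastPos + v * KINECT_LAG;                   // extrapolate over the lag
}
```

For example, a hand moving at 1 m/s must be "aimed" roughly 16 cm ahead of where the Kinect currently reports it; a general adaptation would mean subjects keep applying such a lead even outside the Kinect tasks.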
Requisites: Basic programming
Number of students: 2-4
Responsible: Ivan Herrero
#4 Planning and navigation in a real-robot foraging experiment
The objective of this project is to navigate a real robot in a spatial foraging task such as food harvesting or searching for home. In a successful solution the robot will be able to associate task-relevant information with positions and use this information to plan and execute trajectories. The behavioral paradigm will be based on forward simulations in neural networks that could represent space (such as hippocampal place cells). The robot used in this project is a RoboLab10 mobile platform equipped with an array of proximity sensors and a pan-tilt color camera.
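The forward-simulation idea can be illustrated with a toy abstraction: on a small grid, each candidate action is mentally rolled forward and the action whose simulated trajectory reaches the reward site soonest is chosen. The grid, the greedy rollout and the horizon are illustrative stand-ins for the neural place-cell representation the project would actually use.

```cpp
// Toy sketch of planning by forward simulation on a 5x5 grid world.
#include <cstdlib>

const int W = 5, H = 5;
const int DX[4] = {1, -1, 0, 0};
const int DY[4] = {0, 0, 1, -1};

// Simulate a greedy rollout toward the goal (gx, gy); return the number of
// steps needed, or `horizon` if the goal is not reached in time.
int rollout(int x, int y, int gx, int gy, int horizon) {
    for (int step = 0; step < horizon; ++step) {
        if (x == gx && y == gy) return step;
        if (abs(gx - x) >= abs(gy - y)) x += (gx > x) - (gx < x);
        else                            y += (gy > y) - (gy < y);
    }
    return horizon;
}

// Forward-simulate each one-step successor and pick the cheapest action.
int planAction(int x, int y, int gx, int gy) {
    int best = -1, bestCost = 1 << 30;
    for (int a = 0; a < 4; ++a) {
        int nx = x + DX[a], ny = y + DY[a];
        if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;  // stay on grid
        int cost = 1 + rollout(nx, ny, gx, gy, W + H);
        if (cost < bestCost) { bestCost = cost; best = a; }
    }
    return best;  // index into DX/DY
}
```

In the actual system the rollout would be a wave of activity propagated through the place-cell network rather than an explicit grid search, but the decision structure (simulate forward, then act) is the same.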
Requisites: C++ knowledge
Responsible: Cesar Renno-Costa and Encarni Marcos
#5 Embodied 3D visualization of neuronal networks
The goal of this project is to represent the real-time activity of an epuck robot through an embodied interactive system.
The epuck will be driven by a DAC-based iqr simulation. The user will be able to visualize the system's properties in real time (neuronal activity, space plots, etc.) and explore a 3D representation of the running system using the Kinect.
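A minimal sketch of the visualization side, assuming neuronal activity arrives as normalized firing rates in [0,1]: each rate is mapped to a cold-to-hot color so the 3D view can shade network nodes. The mapping is an illustrative assumption; the real module would read the rates from the running iqr simulation.

```cpp
// Sketch: map a normalized firing rate to a blue-to-red color for 3D display.
struct Color { double r, g, b; };  // channels in [0,1]

Color activityColor(double rate) {
    if (rate < 0.0) rate = 0.0;    // clamp out-of-range rates
    if (rate > 1.0) rate = 1.0;
    return { rate, 0.0, 1.0 - rate };  // silent = blue, saturated = red
}
```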
Tools used: Unity, iqr, Kinect
Requisites: Creativity, Basic programming
Number of students: 2-4
Responsible: Pedro Omeda, Eliza Tsaoussi, Vicky Vouloutsi, Alberto Betella