IROS 2015 Workshop

2015 IEEE/RSJ International Conference on Intelligent Robots and Systems
September 28 – October 03, 2015, Hamburg, Germany

Grounding robot autonomy:

Emotional and social interaction in robot behaviour

Organizers: R. Lowe, E. Barakova, E. Billing, J. Broekens

October 2nd, 8:30 AM to 1:00 PM. More info at iros2015.org

NEWS: Check out abstracts for the workshop talks!

The aim of this workshop is to capture emerging trends and common problems related to the interaction autonomy of social robots. In the past, issues related to the constitutive autonomy of social robots have focused on safe interaction with the environment and with humans. Today, we see a shift towards social robots that act in human environments and, to a larger degree, need to act in relation to social and emotional aspects. A nursing robot must not only interact safely with its environment; it should also act in a way that communicates care and respect for patients and supports the social bonds necessary for the task. Such aspects of autonomy apply to educational, companion, and personal-assistant human-robot interaction scenarios. Furthermore, many social signals should be embedded in the functional behaviors of robots rather than added to the behavioral repertoire as specific gestures. Finally, the interpretation of social signals coming from humans should be integrated with the robot's current behavior controller, and this integration is a different process depending on the type of controller.

So far, robots work with what could be called "perceived" emotions and social abilities, i.e., additions to their instrumental abilities. The focus of this workshop is instead the following question: How can emotion and social interaction be grounded in the behavioral repertoire of the robotic system? This includes sub-questions such as: Can a robot have intrinsic emotions? How could emotions, grounded in the embodiment of the robot, provide the robot with socially adaptive behavior? How can the communication of emotions between a robot and a human be grounded?

The workshop welcomes conceptual papers and convincing applications, as well as concrete methods and algorithms relevant for the topic.


Invitation for extended abstracts

The workshop welcomes submissions of two-page abstracts (to be presented as posters at the workshop).

Extended submission deadline: September 14th.

Authors of accepted abstracts will be considered for invitation to a special issue of Adaptive Behavior.

Abstracts are to be submitted via e-mail to robert.lowe@his.se.

Program (Half-day workshop)

Time            Talk
8:30 - 9:00     Talk 1: Christian Balkenius - How to make robots moral
9:00 - 9:30     Talk 2: Lola Cañamero - Grounding Affective Interaction in Robot Autonomy and Adaptation
9:30 - 10:00    Poster teasers
10:00 - 10:30   Coffee break + poster session
10:30 - 11:00   Talk 3: Kenji Suzuki - Humanoid Robot-Assisted Activities based on Affective Feedback
11:00 - 11:30   Talk 4: Stefan Wermter - Neural network models for social robot interaction
11:30 - 12:00   Talk 5: Angelica Lim - Robot Feelings and Learned Emotion Expression
12:00 - 12:50   Panel discussion


Talk 1 - How to make robots moral

By Christian Balkenius

I will describe work towards making robots intrinsically moral, that is, able to do the right thing without being taught explicit rules. This ability depends on three components: (1) the ability to perceive causal relations and assign responsibility and blame, (2) the ability to learn from observations and infer the goals and reactions of others, and (3) the use of emotions such as shame, guilt, and embarrassment in decision making. These concepts will be discussed from the perspective of a triad consisting of two robots that interact while a third, observing robot can choose to interfere by helping or hindering the other two. I will also describe how the behaviors can change when the robots switch roles in different combinations.

Talk 2 - Grounding Affective Interaction in Robot Autonomy and Adaptation

By Lola Cañamero

In this talk I will discuss how affect (motivation and emotion) can be modeled in robots using simple principles stemming from embodied AI and the cybernetics tradition, and how affective interaction emerges from the dynamics of interaction in embodied systems, being inseparable from sensorimotor interaction. I will draw on examples of research from my group in the areas of adaptive behavior and developmental robotics.

Talk 3 - Humanoid Robot-Assisted Activities based on Affective Feedback

By Kenji Suzuki

In this talk, several robot-assisted activities for children with special needs are introduced under our proposed affect-in-the-loop framework. Most intervention approaches in robot-assisted activities have been evaluated based on the participant's social behavior or feedback; in particular, smiling or laughing during the session is often chosen as a significant behavior indicating the participant's positive affect. We have proposed an interactive and iterative learning methodology between humans and cognitive robots, which allows a human trainer to give guidance and subjective evaluation to the robotic agent in a continuous manner, thereby involving the trainer in the robot's online behavior learning. The wearable interface we developed captures this continuous affective feedback: worn on the side of the face, it unobtrusively and continuously detects biosignals (distal EMG). Through real-time pattern classification, facial expressions and affective feedback can be identified from these signals and interpreted as positive or negative responses from the human interacting with the robotic agent. We believe that the novel wearable device and robotic agent can be used effectively for social interaction with humans, allowing vital properties of interaction such as face-to-face communication, emotional feedback, and freedom of movement for the human partner.
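
The abstract does not describe the real-time pattern classification step in detail, so the sketch below only illustrates the general pipeline: extract standard time-domain features from windowed EMG and classify each window as positive or negative affect. The window length, feature set, SVM classifier, and synthetic data are all assumptions made for this sketch, not the authors' implementation.

```python
# Illustrative pipeline for classifying windowed EMG into positive/negative
# affective feedback. All parameters here are assumptions for the sketch.
import numpy as np
from sklearn.svm import SVC

WINDOW = 200  # samples per analysis window (e.g. 200 ms at an assumed 1 kHz)

def emg_features(window):
    """Classic time-domain EMG features for one analysis window."""
    rms = np.sqrt(np.mean(window ** 2))           # root mean square
    mav = np.mean(np.abs(window))                 # mean absolute value
    zc = np.sum(np.diff(np.sign(window)) != 0)    # zero crossings
    wl = np.sum(np.abs(np.diff(window)))          # waveform length
    return np.array([rms, mav, zc, wl])

# Synthetic stand-in data: real use would record labelled windows from the
# face-worn sensor during smiling vs. neutral episodes.
rng = np.random.default_rng(0)
positive = rng.normal(0.0, 1.0, (50, WINDOW))     # stand-in "smile" windows
neutral = rng.normal(0.0, 0.3, (50, WINDOW))      # stand-in "neutral" windows
X = np.array([emg_features(w) for w in np.vstack([positive, neutral])])
y = np.array([1] * 50 + [0] * 50)                 # 1 = positive affect

clf = SVC(kernel="rbf").fit(X, y)

# Online use: classify each incoming window and feed the label to the robot.
new_window = rng.normal(0.0, 1.0, WINDOW)
label = clf.predict([emg_features(new_window)])[0]
print("positive" if label == 1 else "negative")
```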

Kenji Suzuki is currently an associate professor at the Center for Cybernics Research and the principal investigator of the Artificial Intelligence Laboratory at the University of Tsukuba. His research interests include assistive and rehabilitation robotics, cognitive robotics, affective computing, biosignal processing, wearable devices, and social imaging. He received his B.S. in Physics and his M.E. and Dr. Eng. in Pure and Applied Physics from Waseda University, Tokyo, Japan, in 1997, 2000, and 2003, respectively. He has led several projects on medical robots and devices for stroke, spinal cord injury, and dysphagia, as well as wearable devices for children with autism spectrum disorders (ASD). He was a visiting researcher at the Laboratory of Musical Information, University of Genoa, Italy, in 2000, and at LPPA, the Laboratory of Physiology of Perception and Action at the Collège de France, in 2009. He is an active member of the IEEE and ACM.

Talk 4 - Neural network models for social robot interaction

By Stefan Wermter, Knowledge Technology Group

In the Knowledge Technology group we develop neural network models with a particular focus on robot interaction and development. One of our goals is to better understand grounded communication in humans and machines. Another goal is to use the knowledge we gain to improve multisensory integration and social interaction in humanoid robots. In the context of understanding grounded communication, we will present a deep neural network model for emotion expression recognition for human-robot interaction. Furthermore, we will present a model using self-organizing neural networks for human-action recognition in the context of human-robot assistance. In conclusion, we show how emotion perception and multisensory observation of human actions can contribute to future human-robot cooperation.
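
As a purely illustrative sketch, not the Knowledge Technology group's actual model, a deep network for emotion expression recognition has roughly the following shape; the 48x48 grayscale input, layer widths, and seven emotion classes are assumptions made for the example.

```python
# Minimal sketch of an emotion-expression classifier for face crops.
# Architecture details are assumptions, not the group's published model.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_emotions=7):           # e.g. basic emotion categories
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 12 * 12, n_emotions)

    def forward(self, x):                        # x: (batch, 1, 48, 48) faces
        h = self.features(x)                     # -> (batch, 32, 12, 12)
        return self.classifier(h.flatten(1))     # one score per emotion class

model = EmotionCNN()
faces = torch.randn(4, 1, 48, 48)                # stand-in grayscale crops
logits = model(faces)
print(logits.shape)                              # torch.Size([4, 7])
```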

Talk 5 - Robot Feelings and Learned Emotion Expression

By Angelica Lim

In this talk, we will present an architecture for robots to ground emotional expressions in physical feelings. We define "gut feelings" of flourishing and distress, linked to internal states such as battery and motor heat. We present the MEI model, which allows the robot to "learn as infants do" through parent-robot interactions known as motherese, so that emotional expressions evolve based on culture and experience. Experimental results and videos will be shown.
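
The abstract does not specify how internal states map to feelings, so the following is only a toy illustration of the general idea, not the MEI model: battery level and motor heat are combined into a single flourishing/distress scalar that an expression layer could then externalize. The temperature range, the linear mapping, and the combination rule are invented for the example.

```python
# Toy illustration of "gut feelings" grounded in internal physical states.
# The 40-80 C heat range and the combination rule are assumptions made for
# this sketch, not part of the MEI model.
def gut_feeling(battery_level, motor_temp_c):
    """Map battery level in [0, 1] and motor temperature in Celsius to a
    scalar feeling in [-1, 1]: +1 = flourishing, -1 = distress."""
    energy = 2.0 * battery_level - 1.0                       # -1 empty, +1 full
    heat_stress = min(1.0, max(0.0, (motor_temp_c - 40.0) / 40.0))
    # Flourishing requires charge and cool motors; heat pulls toward distress.
    return energy * (1.0 - heat_stress) - heat_stress

print(gut_feeling(0.9, 35.0))   # 0.8    -> flourishing: express positively
print(gut_feeling(0.1, 75.0))   # -0.975 -> distress: express negatively
```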

Hosted by Cognition Reversed - Erik Billing.