HE ERC Proof of Concept Grant 2024-2026
RObot traiNing INdependence
Abstract: RONIN aims to develop a training toolkit, embedded in a robot-assisted training setup, for supporting adolescents with developmental disabilities (DDs) in achieving independent adult life. DDs hinder access to formal education for about 4 million children in Europe, which in turn compromises their independence later in life. This poses a huge challenge for the affected individuals, their families, and society. One of the major issues impeding independence in individuals with disabilities is insufficient support focused on training the social and cognitive skills crucial for leading an independent adult life. RONIN proposes a solution that uses the Embodied Learning approach (via “role-plays” with the robot) and targets skills necessary for attaining an independent adult life: independence in (i) self-care; (ii) interaction with others; and (iii) facing stressful situations, such as exams or job interviews. RONIN’s training toolkit includes the training protocol scripts and objects used in interaction, encoded robot behaviours, and a graphical user interface (GUI) for therapists. Involving a robot in the training has the advantage that users benefit from a robot mediator, which offers a non-intimidating, more predictable, and less complex interaction with less social pressure than interaction with another human. For the therapists, the robot removes the burden of repetitive and lengthy “role-plays”, allowing them to focus on monitoring the progress of the training. The work plan of RONIN consists of designing the training protocol scripts and objects, encoding the robot behaviours, developing the GUI, integrating all components in the setup, and testing the efficacy of the training protocols with clinical populations. Should the training prove efficacious, it will revolutionise the way support is offered to adolescents with DDs, their families, and therapists, and will open a pathway to exploitation.
Total budget: 150.000,00€
Total contribution: 150.000,00€
H2020 ERC - Starting Grant 2017-2022
Intentional stance for social attunement - InStance
Abstract: The InStance project focuses on the question of whether (and under what conditions) people adopt an intentional mindset towards robots, a mindset that is typically adopted towards other humans. An intentional mindset is what the philosopher Daniel Dennett termed the “intentional stance” - predicting and explaining behaviour with reference to the agent’s mental states such as beliefs, desires, and intentions. To give an example: when I see a person gazing at a glass filled with water and extending their arm in its direction, I automatically surmise that the person intends to grasp it, because they feel thirst, believe that water will ease their thirst, and hence want to drink water from the glass. The terms “intend”, “feel”, or “believe” all refer to mental states, and the assumption is that by referring to mental states, I can understand and explain someone else’s behaviour. However, for non-intentional systems (such as man-made artefacts), we often adopt the design stance - assuming that the system has been designed to behave in a particular way (for example, a car slows down when one pushes the brakes not because it intends to be slower, but because it has been designed to slow down when the brake pedal is pushed). Adopting either the intentional stance or the design stance is crucial not only for predicting others’ behaviour but presumably also for becoming engaged in a social interaction. That is, when I adopt the intentional stance, I direct my attention to where somebody is pointing, and hence we establish a joint focus of attention, thereby becoming socially attuned. By contrast, if I see that a machine’s artificial arm is pointing somewhere, I might be unwilling to attend to that location, as I do not believe that the machine wants to show me something, i.e., there is no intentional communicative content in the gesture. This raises the question: to what extent are humans ready to adopt the intentional stance towards robots with a human-like appearance, and to attune socially with them? It might be that once a robot imitates human-like behaviour at the level of subtle (and often implicit) social signals, humans automatically perceive its behaviour as reflecting mental states. This would presumably evoke social cognition mechanisms to the same (or a similar) extent as in human-human interactions, allowing social attunement. By social attunement we mean a collection of mechanisms of social cognition that the brain employs during interactions with others: for example, joint attention or visual-spatial perspective taking. Joint attention is a mechanism through which two or more individuals attend to the same event or object in the environment. Engagement in joint attention often happens by directing others’ attention to where one is attending, for example through gaze direction or a pointing gesture. Visual-spatial perspective taking is a mechanism that allows for taking someone else’s perspective in the representation of space (for example, I understand that my “right” is “left” for my interaction partner, who is sitting opposite me). In daily interactions with other humans we employ such mechanisms automatically. But would we employ similar mechanisms also in interaction with humanoid robots? The objectives of the InStance project are to understand the various factors that contribute to activating mechanisms of social attunement in interaction with humanoid robots, with a special focus on the intentional stance. Does adopting the intentional stance affect social attunement?
What are the conditions for adopting the intentional stance and for social attunement? In the InStance project we focus specifically on subtle characteristics of the robot’s behaviour, such as the human-likeness of its movement parameters, mutual gaze, the contingency of its gaze behaviour, or the social intentions carried in communicative social signals. In addition, we are interested in how cultural embedding or familiarity with artificial agents affects adopting the intentional stance and social attunement. We use interactive protocols with the humanoid robot iCub, and we employ cognitive neuroscience methods and well-controlled experiments. We measure how the human brain responds to the robot’s behaviour and whether we can observe behavioural and neural markers of social attunement (joint attention, visuo-spatial perspective taking, or theory of mind) in interaction with iCub.