Body Models in Humans and Robots

01/20/2022
by Matej Hoffmann et al.

Neurocognitive models of higher-level somatosensory processing have emphasized the role of stored body representations in interpreting real-time sensory signals coming from the body (Longo, Azañón and Haggard, 2010; Tamè, Azañón and Longo, 2019). The need for such stored representations arises from the fact that immediate sensory signals from the body do not specify metric details about body size and shape. Several aspects of somatoperception therefore require that immediate sensory signals be combined with stored body representations. This basic problem is equally true for humanoid robots and, intriguingly, neurocognitive models developed to explain human perception are strikingly similar to those developed independently for localizing touch on humanoid robots such as the iCub, which is equipped with artificial electronic skin over the majority of its body surface (Roncone et al., 2014; Hoffmann, 2021). In this chapter, we review the key features of these models and discuss their similarities to and differences from each other and from other models in the literature. Using robots as embodied computational models is an example of synthetic methodology or 'understanding by building' (e.g., Hoffmann and Pfeifer, 2018), of computational embodied neuroscience (Caligiore et al., 2010), or of the 'synthetic psychology of the self' (Prescott and Camilleri, 2019). Such models have the advantage that they must be worked out in every detail, making the theory explicit and complete. They also offer a way of (pre)validating a theory, beyond comparison with the biological or psychological phenomenon under study, by simply verifying that a particular implementation really performs the task: can the robot localize where it is being touched (see https://youtu.be/pfse424t5mQ)?
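To make the computational problem concrete, below is a minimal, purely illustrative sketch of the kind of combination described above: a stored body representation (taxel positions in link-local coordinates plus link geometry, here an assumed planar two-link arm) is combined with immediate sensory signals (the activated taxel's ID and the current joint angles) to localize a touch in a body-centred frame. All names, the data structure, and the kinematics are hypothetical; this is not the iCub software or the implementation of Roncone et al. (2014).

```python
# Illustrative sketch only: localizing a touched skin taxel by combining a
# stored body representation with immediate sensory signals. The body model,
# taxel IDs, and 2-link planar kinematics are hypothetical assumptions.

import numpy as np

# Stored body representation: for each taxel, the link it sits on and its
# position in that link's local frame. These are exactly the metric details
# that the immediate tactile signal alone does not carry.
BODY_MODEL = {
    "taxel_042": {"link": 1, "local_pos": np.array([0.10, 0.02])},
    "taxel_117": {"link": 2, "local_pos": np.array([0.05, -0.01])},
}
LINK_LENGTHS = [0.30, 0.25]  # metres; assumed planar two-link arm

def rot(theta: float) -> np.ndarray:
    """2-D rotation matrix for a revolute joint angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def localize_touch(taxel_id: str, joint_angles: list) -> np.ndarray:
    """Map an activated taxel to base-frame coordinates via forward kinematics."""
    entry = BODY_MODEL[taxel_id]
    pos = entry["local_pos"].copy()
    # Walk the kinematic chain from the taxel's link back to the base,
    # applying each joint rotation and link offset in turn.
    for link in range(entry["link"], 0, -1):
        offset = np.array([LINK_LENGTHS[link - 2], 0.0]) if link > 1 else np.zeros(2)
        pos = rot(joint_angles[link - 1]) @ pos + offset
    return pos

# The same taxel maps to different spatial locations as posture changes:
print(localize_touch("taxel_042", [0.0, 0.0]))
print(localize_touch("taxel_042", [np.pi / 4, 0.0]))
```

The point of the example is that the tactile signal alone identifies only a location on the skin (a taxel); recovering where that touch is in space requires the stored geometry of the body plus proprioceptive signals about the current posture, which is precisely the combination the neurocognitive and robotic models reviewed in this chapter implement.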
