Event-driven sensors generate information only when there is movement; static stimuli can be perceived only through exploratory actions. In vision, we transfer visual attention models to their event-driven, spiking counterparts to equip iCub with a first low-latency gateway that selects relevant regions for saccades and for computationally intensive inspection. For subsequent object recognition, we explore gradient-based local learning, with the goal of implementing a fully spiking pipeline on neuromorphic hardware. Tactile exploration will follow the exploratory procedures of humans, guided by event-driven proprioceptive and tactile information, using unique neuromorphic multi-modal tactile sensors.
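As an illustration only (not the project's actual attention model), the sketch below shows the kind of computation an event-driven attention stage performs when selecting a saccade target: incoming events are accumulated into a decaying saliency map and the most active location is picked. The sensor resolution, the decay constant, and the event format (x, y, timestamp, polarity) are illustrative assumptions.

```python
import numpy as np

HEIGHT, WIDTH = 240, 304   # assumed event-camera resolution
TAU = 0.05                 # assumed exponential decay constant (seconds)

def saccade_target(events, t_now):
    """Return (row, col) of the most salient location given recent events.

    events: iterable of (x, y, t, polarity) tuples, timestamps in seconds.
    """
    saliency = np.zeros((HEIGHT, WIDTH))
    for x, y, t, _pol in events:
        # Leaky accumulation: older events contribute less to saliency.
        saliency[y, x] += np.exp(-(t_now - t) / TAU)
    return np.unravel_index(np.argmax(saliency), saliency.shape)

# Example: a burst of events around pixel (row 120, col 150) attracts the "saccade".
rng = np.random.default_rng(0)
burst = [(150 + rng.integers(-3, 4), 120 + rng.integers(-3, 4), 0.99, 1)
         for _ in range(200)]
print(saccade_target(burst, t_now=1.0))  # -> a location near (120, 150)
```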
Methods: biologically inspired models of vision (attention, depth, and motion perception) and touch (exploratory procedures, hardware emulation of human glabrous skin tactile afferents), implemented using Spiking Neural Networks and spike-driven learning.
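The following is a minimal sketch, under illustrative assumptions, of the spike-driven learning mentioned above: a single leaky integrate-and-fire (LIF) neuron whose input weights are updated with a surrogate gradient of the spike non-linearity, so the update uses only locally available quantities (input spikes, membrane potential, and an error signal at that neuron). Time constants, the surrogate function, and the target spike train are hypothetical.

```python
import numpy as np

def lif_step(v, spikes_in, w, beta=0.9, v_th=1.0):
    """One LIF time step: leaky integration, threshold, reset.
    Returns post-reset potential, output spike, and pre-reset potential
    (the latter is needed for the surrogate gradient)."""
    v_pre = beta * v + w @ spikes_in      # membrane leak + synaptic input
    spike_out = float(v_pre >= v_th)      # hard threshold (non-differentiable)
    v_post = v_pre * (1.0 - spike_out)    # reset after a spike
    return v_post, spike_out, v_pre

def surrogate_grad(v, v_th=1.0, slope=5.0):
    """Fast-sigmoid surrogate for d(spike)/d(v), replacing the ill-defined
    derivative of the threshold function."""
    return 1.0 / (slope * abs(v - v_th) + 1.0) ** 2

rng = np.random.default_rng(1)
n_in, T, lr = 20, 50, 0.05
w = rng.normal(0.0, 0.3, n_in)
target = (np.arange(T) % 10 == 0).astype(float)   # assumed target spike train

v = 0.0
for t in range(T):
    x = (rng.random(n_in) < 0.2).astype(float)    # Bernoulli input spikes
    v, s, v_pre = lif_step(v, x, w)
    err = s - target[t]                           # local error at this neuron
    w -= lr * err * surrogate_grad(v_pre) * x     # spike-driven local update
```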