Perceiving the motion of a target is essential for successful interaction with a dynamic environment, in which objects and the robot itself move simultaneously. Event cameras make it possible to track fast-moving targets without losing information “between frames,” as a moving object triggers events from all pixels along its spatio-temporal trajectory.
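As an illustration of the kind of data involved (a sketch, not any specific sensor's API or the project's code), the snippet below models each event as a timestamped pixel address with polarity and shows how a moving target leaves a continuous spatio-temporal trail; the generator, names, and event rate are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float  # timestamp in seconds (real sensors give microsecond resolution)
    x: int    # pixel column
    y: int    # pixel row
    p: int    # polarity: +1 for brightness increase, -1 for decrease

def events_from_moving_target(x0, y0, vx, vy, duration=0.1, rate=10_000):
    """Toy generator (illustrative only): a target moving at (vx, vy) px/s
    triggers events continuously along its spatio-temporal trajectory,
    so there is no 'between frames' gap in the measurements."""
    dt = 1.0 / rate
    return [Event(k * dt, int(x0 + vx * k * dt), int(y0 + vy * k * dt), +1)
            for k in range(int(duration * rate))]

stream = events_from_moving_target(x0=10, y0=10, vx=2000, vy=500)
print(len(stream), stream[0], stream[-1])  # dense samples along the trajectory
```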
We develop tracking algorithms robust to event clutter caused by ego-motion, and prediction algorithms that anticipate the trajectory of the target and give the robot sufficient decision time to perform an adequate action. We explore learning frameworks that teach the robot to exploit low-latency, asynchronous visual information so that it can correctly plan actions and their timing in high-speed dynamic tasks. To this aim, we use the air-hockey game with iCub: a constrained 2D environment that is ideal for testing high-speed motion planning, simple to realize in a normal-size laboratory, and characterized by high uncertainty, highly variable trajectories, and the presence of disturbances.
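To make the prediction idea concrete, here is a minimal sketch of probabilistic filtering for trajectory anticipation, assuming a constant-velocity Kalman filter over puck positions extracted from the event stream (e.g., cluster centroids). This is an illustrative example under those assumptions, not the tracker or predictor actually developed in the project.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter (illustrative sketch).

    State: [x, y, vx, vy]. Measurements: noisy (x, y) positions assumed to
    come from the event stream (e.g., centroids of event clusters)."""

    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(4)             # state estimate
        self.P = np.eye(4)               # state covariance
        self.Q = q * np.eye(4)           # process noise
        self.R = r * np.eye(2)           # measurement noise
        self.H = np.array([[1., 0., 0., 0.],
                           [0., 1., 0., 0.]])

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        return self.x[:2]                # predicted position

    def update(self, z):
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def forecast(self, horizon, dt):
        """Roll the motion model forward to anticipate the trajectory,
        giving the robot lead time to plan its action."""
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        x, path = self.x.copy(), []
        for _ in range(int(horizon / dt)):
            x = F @ x
            path.append(x[:2].copy())
        return np.array(path)

# Example usage with fabricated measurements (illustrative only):
kf = ConstantVelocityKF()
kf.update(np.array([0.10, 0.20]))          # first puck centroid from events
kf.predict(dt=0.005)
kf.update(np.array([0.11, 0.21]))          # next centroid, 5 ms later
path = kf.forecast(horizon=0.2, dt=0.005)  # anticipated trajectory, 200 ms ahead
```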
Event-driven cameras can be a powerful tool for the low-latency estimation of the environment that dynamic tasks require. We seek methods to perform computer vision operations with events in a low-latency, real-time fashion. Real-time performance cannot be intrinsically guaranteed, as the number of events changes with motion and clutter. We therefore explore data representations, algorithms, and coding schemes that guarantee real-time, low-latency processing. By decoupling the event-driven update of the data representation from the processing, we can achieve an extremely high update rate and minimal dependence on the event rate.
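One way this decoupling can be sketched: an event-driven producer updates a per-pixel time surface in O(1) per event, while a consumer processes snapshots of it at a fixed rate, so the processing load does not scale with the event rate. The representation, thread structure, and period below are assumptions made for the example, not the project's actual implementation.

```python
import threading
import time
import numpy as np

class TimeSurface:
    """Per-pixel map of the most recent event timestamp (illustrative sketch).

    The event-driven update is a single O(1) memory write per event, so it keeps
    up with bursts caused by fast motion and clutter; the processing reads the
    surface at its own fixed rate, independent of how many events arrived."""

    def __init__(self, width, height):
        self._surface = np.zeros((height, width))
        self._lock = threading.Lock()

    def update(self, x, y, t):
        # Producer side: called once per incoming event.
        with self._lock:
            self._surface[y, x] = t

    def snapshot(self):
        # Consumer side: cheap copy of the current representation.
        with self._lock:
            return self._surface.copy()

def processing_loop(surface, stop, period=0.005):
    """Fixed-rate consumer: runs on the latest snapshot, so its latency
    does not depend on the event rate."""
    while not stop.is_set():
        frame = surface.snapshot()
        # ... run tracking / motion estimation on `frame` here ...
        time.sleep(period)

# Illustrative wiring: events would normally come from the camera driver.
ts = TimeSurface(width=304, height=240)
stop = threading.Event()
consumer = threading.Thread(target=processing_loop, args=(ts, stop), daemon=True)
consumer.start()
for k in range(1000):                 # stand-in for an asynchronous event stream
    ts.update(x=k % 304, y=(2 * k) % 240, t=k * 1e-4)
stop.set()
consumer.join()
```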
Methods: event-driven computer vision, probabilistic filtering for tracking, hybrid event-driven/frame-based vision, motion estimation, reinforcement learning.