Sensory Fusion with AER-based Sensors for Decision Making in Neuromorphic Robots

Single neuromorphic sensors, such as event-based retinas or auditory sensors, have been used in various projects, and many spiking neural network models have been designed to process their data: networks for written digit classification, spoken digit classification, object classification and detection, sound characterization and localization, and many more. However, a sensory fusion algorithm is needed to combine the information collected from different sensors and to obtain a classification output that can be used to make decisions on a robotic platform.

In this group we would like to discuss, from a theoretical point of view, how the human brain carries out sensory fusion, and how we could design a system/network that implements such computation. For this task, neuromorphic platforms like SpiNNaker or DYNAPSE will be used, among other spiking neural network simulators.
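As one possible starting point for that discussion, the following is a minimal sketch written for the Brian2 simulator (an assumption for illustration, not the group's agreed platform) of how two sensor-driven spiking populations could converge onto a single fusion population of leaky integrate-and-fire neurons. Population sizes, firing rates, and weights are illustrative placeholders.

# Minimal fusion sketch in Brian2 (illustrative only). Two Poisson populations
# stand in for visual and auditory event streams and converge onto a
# leaky integrate-and-fire fusion layer.
from brian2 import (ms, mV, Hz, NeuronGroup, PoissonGroup, Synapses,
                    SpikeMonitor, run)

# Hypothetical firing rates standing in for DVS / NAS event streams.
visual = PoissonGroup(100, rates=30*Hz)
auditory = PoissonGroup(100, rates=20*Hz)

# Fusion layer: simple LIF neurons with a 20 ms membrane time constant.
fusion = NeuronGroup(50, 'dv/dt = -v / (20*ms) : volt',
                     threshold='v > 10*mV', reset='v = 0*mV', method='exact')

# Sparse excitatory projections from both modalities onto the same layer.
syn_vis = Synapses(visual, fusion, on_pre='v += 0.5*mV')
syn_vis.connect(p=0.1)
syn_aud = Synapses(auditory, fusion, on_pre='v += 0.5*mV')
syn_aud.connect(p=0.1)

spikes = SpikeMonitor(fusion)
run(200*ms)
print(f'Fusion layer emitted {spikes.num_spikes} spikes in 200 ms')

A structure along these lines could later be ported to PyNN/SpiNNaker or to DYNAPSE once a fusion rule has been agreed on.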

As a next step, we will test the proposed sensory fusion models with artificial inputs and with neuromorphic sensory data (we bring a Dynamic Vision Sensor (DVS) and two Neuromorphic Auditory Sensors (NAS)). Furthermore, we will discuss how the network's output can be translated into motor commands. If progress is fast, a closed-loop system could be tested in simulation or in a real-world environment.
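To make the motor-command discussion concrete, here is a hedged sketch of one possible readout: spike counts from two sub-populations of the fusion layer (hypothetically coding "left" and "right") are compared and mapped to a steering command. The function name, the coding scheme, the threshold, and the winner-take-all rule are all assumptions for illustration only.

import numpy as np

def spikes_to_motor_command(spike_counts, threshold=5):
    """Map per-neuron spike counts from the output layer to a steering command.

    The first half of the array is assumed to code 'turn left' and the
    second half 'turn right'; both the coding scheme and the threshold
    are illustrative placeholders."""
    half = len(spike_counts) // 2
    left = spike_counts[:half].sum()
    right = spike_counts[half:].sum()
    if max(left, right) < threshold:
        return 'stop'  # too little evidence from either side: do nothing
    return 'turn_left' if left > right else 'turn_right'

# Example usage: counts could come from a SpikeMonitor in simulation or
# from a live spike-count callback on the robot.
print(spikes_to_motor_command(np.array([3, 4, 1, 0, 0, 2, 6, 7])))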

Finally, we will carry out a comparative study of the different models, examining their advantages, disadvantages, and results.

Timetable

Day              Time           Location
Tue, 23.04.2019  21:30 - 22:00  Lab
Thu, 25.04.2019  15:00 - 16:00  Lab
Fri, 26.04.2019  15:30 - 16:30  Lab

Moderators

Daniel Gutierrez-Galan
Thorben Schoepe

Members

Enea Ceolini
Giulia D'Angelo
Hector Gonzalez
Álvaro González
Jean-Matthieu Maro
Marco Monforte
Omar Oubari
Nicoletta Risi
Baris Serhan
Jibin Wu
Bojian Yin