Sensory fusion with EMG and DVS/DAVIS sensors
As traditional single-sensor acquisition systems increasingly struggle to meet the demands of complex practical applications, multisensory fusion has attracted a great deal of research. In this workgroup (WG) we propose to fuse EMG and DAVIS signals to classify human upper-limb movements more accurately. The final goal of the WG is to build a mobile app that runs the fusion and recognition in real time on a portable platform.
For the purpose of the WG we need to collect a new dataset of finger movements recorded simultaneously by EMG sensors and a DAVIS camera. The dataset can be processed using two approaches:
1) EMG and DAVIS signals will be processed separately by feature extraction, and the two outputs will be fused at the classification level in a mobile app. This scenario can be divided into multiple tasks:
a) EMG feature extraction (spiking and/or non-spiking algorithms)
b) event- and/or frame-based image feature extraction
c) feature fusion and offline training of the classifier (e.g. SVM/LSTM)
d) app implementation of the trained network
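Tasks (a)-(c) can be sketched roughly as follows. The specific features chosen here (mean absolute value, RMS, and zero crossings for EMG; a coarse event-count histogram over a 128x128 DVS address space) and all array sizes are illustrative assumptions, not decisions of the WG; the concatenated vector would then feed the offline classifier (e.g. an SVM).

```python
import numpy as np

def emg_features(x):
    """Classic non-spiking EMG window features:
    mean absolute value (MAV), RMS, and zero-crossing count."""
    mav = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    zc = int(np.count_nonzero(np.diff(np.sign(x))))
    return np.array([mav, rms, zc], dtype=float)

def event_features(xs, ys, grid=4, size=128):
    """Frame-free DVS feature: normalized event counts on a coarse
    grid over the (assumed) 128x128 sensor address space."""
    hist, _, _ = np.histogram2d(xs, ys, bins=grid,
                                range=[[0, size], [0, size]])
    return hist.ravel() / max(len(xs), 1)

# toy data standing in for one analysis window of the recorded dataset
rng = np.random.default_rng(0)
emg_win = rng.standard_normal(400)       # e.g. 200 ms of EMG at 2 kHz
xs = rng.integers(0, 128, 300)           # toy event x-addresses
ys = rng.integers(0, 128, 300)           # toy event y-addresses

# feature-level fusion: concatenate EMG and DVS features
fused = np.concatenate([emg_features(emg_win), event_features(xs, ys)])
print(fused.shape)  # 3 EMG features + 4x4 event histogram
```

The fused vectors (one per window, with a movement label each) would be stacked into a training matrix for the classifier.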
2) EMG and DAVIS signals will be fused in a spiking recurrent neural network, and an event-based learning algorithm will be used to train the readout for gesture recognition. This scenario can be implemented either in simulation (Brian2) or on a neuromorphic chip (Dynap-se).
Disclaimer: we do not yet see scenario 2 as portable to a mobile app.
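As a rough illustration of scenario 2, the numpy sketch below drives a small recurrent network of discrete-time LIF neurons with a toy two-channel input standing in for the fused EMG/DVS drive. A real implementation would use Brian2 or the Dynap-se rather than this loop, and every size, constant, and weight scale here is an illustrative assumption; the time-averaged firing rates play the role of the features the trained readout would classify.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, dt = 100, 200, 1e-3        # neurons, time steps, step size (s)
tau, v_th = 20e-3, 1.0           # membrane time constant, spike threshold

# input weights for 2 toy channels (EMG drive, DVS event-rate drive)
W_in = np.abs(rng.standard_normal((N, 2))) * 2.0
# weak random recurrent weights
W_rec = rng.standard_normal((N, N)) * (0.1 / np.sqrt(N))

u = np.abs(rng.standard_normal((T, 2)))   # toy fused input signal
v = np.zeros(N)                           # membrane potentials
spikes = np.zeros((T, N))

for t in range(T):
    # synaptic input: external drive plus recurrent spikes from last step
    i_syn = W_in @ u[t] + (W_rec @ spikes[t - 1] if t > 0 else 0.0)
    # leaky integration toward i_syn
    v = v + (dt / tau) * (-v + i_syn)
    fired = v >= v_th
    spikes[t] = fired
    v = np.where(fired, 0.0, v)           # reset fired neurons

# time-averaged rates: the features a trained readout would classify
rates = spikes.mean(axis=0)
print(rates.shape)
```

In Brian2 the same structure would be expressed as `NeuronGroup`/`Synapses` objects with the event-based learning rule training only the readout weights.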
Timetable
| Day | Time | Location |
| --- | --- | --- |
| Wed, 24.04.2019 | 15:30 - 16:00 | Sala Panorama |
| Wed, 24.04.2019 | 19:00 - 20:00 | Sala Panorama |
| Thu, 25.04.2019 | 14:00 - 15:00 | Sala Panorama |
| Fri, 26.04.2019 | 21:30 - 22:30 | Sala Arcate |
| Mon, 29.04.2019 | 19:00 - 20:00 | Sala Panorama |