Reduced-precision computation and stochastic rounding

There is growing interest in reducing the energy usage, memory bandwidth, and memory footprint of conventional machine learning implementations by shrinking the size and complexity of the arithmetic types used, and a number of implementations have already done so successfully. We aim to discuss how far this can go in terms of the precision of the underlying storage and computation, how precision can be reduced without losing the accuracy needed for learning and/or inference, the role of stochasticity (for example, stochastic rounding), and whether these questions overlap with spiking neural networks (where the representation is arguably 1-bit) or other low-energy computation mechanisms.
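Since the group's title names stochastic rounding, a minimal NumPy sketch of the idea may help fix terms. The function name and the choice of a fixed-point grid are illustrative assumptions, not part of any group deliverable: values are rounded up or down at random, with probability proportional to the distance to each neighbouring grid point, so the rounded value is unbiased in expectation.

```python
import numpy as np

def stochastic_round(x, num_frac_bits=8, rng=None):
    """Round to a fixed-point grid with `num_frac_bits` fractional bits.

    Round-to-nearest always picks the closer grid point; stochastic
    rounding instead rounds up with probability equal to the fractional
    remainder, so E[stochastic_round(x)] == x. This unbiasedness is the
    property that makes it attractive for low-precision learning.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = 2.0 ** num_frac_bits
    scaled = np.asarray(x) * scale
    floored = np.floor(scaled)
    # Round up with probability equal to the fractional remainder.
    round_up = rng.random(np.shape(scaled)) < (scaled - floored)
    return (floored + round_up) / scale

# Example: 0.3 is not representable with 2 fractional bits (grid step 0.25).
samples = stochastic_round(np.full(100_000, 0.3), num_frac_bits=2)
print(samples.mean())  # ~0.3 on average, versus 0.25 for round-to-nearest
```

Averaged over many rounding events (for example, across the weight updates of a training run), the bias of round-to-nearest disappears, which is one reason very low-precision storage can still support learning.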


Timetable

Day              Time           Location
Thu, 25.04.2019  16:00 - 17:00  Sala Panorama
Tue, 30.04.2019  16:00 - 17:00  Sala Panorama

Moderator

Michael Hopkins

Members

Alessandro Aimar
Matteo Cartiglia
Erika Covi
Charlotte Frenkel
Arren Glover
Álvaro González
Daniel Gutierrez-Galan
Germain Haessig
Michael Hopkins
Jamie Knight
Laura Kriener
Alexander Kugele
Christian Mayr
Moritz Milde
Mattias Nilsson
Melika Payvand
Ole Richter
Baris Serhan
Jonathan Tapson
Pau Vilimelis Aceituno
Niklas Vollmar
Annika Weisse
Dmitrii Zendrikov