[Seminar] Ms. Giorgia Dellaferrera "Unveiling Principles of Neural Computations: from Artificial to Biological Intelligence, and back"
Ms. Giorgia Dellaferrera
PhD student, IBM Research Zurich - Institute of Neuroinformatics (ETH and UZH)
Unveiling Principles of Neural Computations: from Artificial to Biological Intelligence, and back
Artificial neural networks (ANNs) are learning models inspired by the biological circuits that constitute the animal brain. On the one hand, biological circuits are a fertile source of ideas for advancing the design of ANNs. A striking example is the GRAPES optimizer, which incorporates the biological mechanism of dendritic integration into ANN training. By modulating the error signal according to the distribution of the network’s weights, GRAPES improves the accuracy and convergence rate of neural networks trained with backpropagation (BP) and mitigates their vulnerability to catastrophic forgetting. Furthermore, the principles governing learning in the brain can be used to design training algorithms that serve as alternatives to BP. To comprehensively address the biologically unrealistic aspects of BP, such as the weight transport problem, we developed PEPITA, a training scheme that is more biologically plausible than BP and demonstrates that neural networks can be trained without backward computations.

On the other hand, although ANNs are extremely simplified models compared to biological networks, they offer an unprecedented tool for investigating the mechanisms underlying brain dynamics. ANNs are currently the best models of cortical activity and the finest predictors of neural activity in the ventral stream. We demonstrate that ANN “substitute models” fitted to neural activity can be used to synthesize images that guide small neural populations toward not only a specific response but also a target perceptual behavior.
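To make the forward-only idea concrete, below is a minimal NumPy sketch of a PEPITA-style update rule: a standard forward pass, then a second forward pass on an input modulated by a fixed random projection of the output error, with weight updates computed from the difference of the two passes. The two-layer ReLU network, dimensions, learning rate, and toy regression task are illustrative assumptions, not the actual implementation presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions for a two-layer network
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_hid, n_in))
W2 = rng.normal(0, 1 / np.sqrt(n_hid), (n_out, n_hid))
# Fixed random feedback matrix projecting the output error onto the input
F = rng.normal(0, 0.05, (n_in, n_out))

def forward(x):
    h = np.maximum(W1 @ x, 0.0)  # ReLU hidden layer
    return h, W2 @ h

# Toy regression task: targets from a fixed random linear map
X = rng.normal(size=(64, n_in))
T = (X @ rng.normal(size=(n_in, n_out))) * 0.1

lr = 0.01
losses = []
for epoch in range(200):
    total = 0.0
    for x, t in zip(X, T):
        h, y = forward(x)              # 1) standard forward pass
        e = y - t                      # output error
        x_mod = x - F @ e              # 2) error-modulated input
        h_mod, y_mod = forward(x_mod)  # second forward pass
        # Updates use only forward activations: no backward computation
        W1 -= lr * np.outer(h - h_mod, x_mod)
        W2 -= lr * np.outer(e, h_mod)
        total += 0.5 * float(e @ e)
    losses.append(total / len(X))

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

On this toy task the loss decreases steadily, illustrating that the error reaches the hidden layer through the modulated input rather than through transported weights.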
Meeting ID: 418 871 3911