Speaker: Professor Yuhai Tu, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA; AAAS Fellow; APS Fellow; Chair of the APS Division of Biological Physics (DBIO)
Time: 13:00–14:00, April 1, 2024 (GMT+8)
Venue: B101, Lui Che-woo Building, PKU
Abstract:
Learning happens both in biological neural networks in the brain and in artificial neural networks (ANNs) such as those used in deep learning, which has achieved near- or above-human-level performance on a growing list of specific tasks. However, the two differ significantly in both network architecture and underlying learning rules. On the architecture side, biological neural networks in the brain have recurrent connections between neurons, whereas ANNs in deep learning have a simple feedforward architecture. More importantly, the brain learns by updating synaptic weights through a local learning rule, such as the Hebbian rule, whereas the weight parameters in deep-learning ANN models are updated to minimize a global loss function.
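The contrast between the two learning rules can be made concrete with a minimal NumPy sketch (an illustration, not taken from the talk): a Hebbian update to a weight depends only on the activities of the two neurons that weight connects, while a loss-driven update depends on a global error signal computed from the network's output. The linear layer, target, and learning rate below are all hypothetical choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # presynaptic activity (hypothetical input)
W = rng.normal(size=(3, 4))   # synaptic weight matrix of a single linear layer
eta = 0.1                     # learning rate (illustrative value)

# Local Hebbian rule: dW[i, j] = eta * y[i] * x[j].
# Each weight update uses only the pre- and postsynaptic activities
# of the neurons it connects -- no global error signal is needed.
y = W @ x
dW_hebb = eta * np.outer(y, x)

# Global loss-driven rule (as in deep learning): the update is the
# negative gradient of a loss computed over the whole output.
target = np.ones(3)                       # hypothetical target output
loss = 0.5 * np.sum((W @ x - target) ** 2)
grad = np.outer(W @ x - target, x)        # dL/dW for this linear layer
dW_sgd = -eta * grad                      # one gradient-descent step
```

Note that `dW_hebb[i, j]` can be written down knowing only neuron `i`'s output and neuron `j`'s input, whereas `dW_sgd` requires the error `W @ x - target`, a quantity defined at the level of the whole network's objective.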
In this talk, we will discuss the commonalities and differences between the learning dynamics of biological and artificial neural networks in the context of representational learning, using two examples from the mammalian olfactory system: 1) alignment of neural representations between the two sides of the brain; 2) representational drift in the piriform cortex.
Source: School of Life Sciences, PKU