AI Seminar: Maintaining Plasticity Through Selective Reinitialization, J. Fernando Hernandez Garcia

The AI Seminar is a weekly meeting at the University of Alberta where researchers interested in artificial intelligence (AI) can share their research. Presenters include both local speakers from the University of Alberta and visitors from other institutions. Topics can be related in any way to artificial intelligence, from foundational theoretical work to innovative applications of AI techniques to new fields and problems.

Abstract:
The problem of loss of plasticity in neural networks, in which a network loses its ability to learn from new observations when trained for an extended time, is a major limitation to implementing deep learning systems that learn continually. A tried and tested idea for maintaining plasticity is to sporadically reinitialize low-utility features in the network, an algorithm known as continual backpropagation. However, measuring a feature's utility depends on its connectivity pattern, which makes it difficult to apply continual backpropagation to an arbitrary network. This drawback is removed if one works at the lowest level in a network: the weights. In this talk, I present the successes and failures of continual backpropagation in maintaining plasticity across different architectures. Then, I present a new algorithm, selective weight reinitialization, which successfully maintains plasticity by reinitializing weights instead of features.
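
To make the weight-level idea concrete, below is a minimal PyTorch sketch of selectively reinitializing individual weights rather than whole features. The utility measure (weight magnitude), the replacement fraction, and the function name are illustrative assumptions for this sketch, not details taken from the talk.

```python
import torch
import torch.nn as nn

def selective_weight_reinit(layer: nn.Linear, replace_fraction: float = 0.01) -> None:
    """Sporadically reinitialize the lowest-utility weights of a layer.

    Illustrative sketch only: the utility measure used here (weight
    magnitude) is an assumption, not necessarily the one from the talk.
    """
    with torch.no_grad():
        w = layer.weight
        utility = w.abs()  # assumed per-weight utility measure
        k = max(1, int(replace_fraction * w.numel()))
        # Find the k lowest-utility weights in the flattened weight matrix.
        _, idx = torch.topk(utility.view(-1), k, largest=False)
        # Draw fresh values from the layer's default init distribution.
        fresh = torch.empty_like(w)
        nn.init.kaiming_uniform_(fresh, a=5 ** 0.5)  # PyTorch's nn.Linear default
        w.view(-1)[idx] = fresh.view(-1)[idx]

# Usage: call sporadically during continual training, e.g. every few hundred steps.
layer = nn.Linear(128, 64)
selective_weight_reinit(layer, replace_fraction=0.01)
```

Because the sketch operates on the flattened weight tensor, it needs no knowledge of the feature's connectivity pattern, which is the portability advantage the abstract describes.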

Presenter Bio:
J. Fernando Hernandez-Garcia is a PhD student in the RLAI Lab at the University of Alberta, supervised by Dr. Richard Sutton. He aims to design intelligent systems that continually learn while interacting with the world.
