Purdue University Graduate School

Efficient Continual Learning in Deep Neural Networks

Thesis posted on 2024-05-07, 22:58, authored by Gobinda Saha

Humans exhibit a remarkable ability to adapt continually and to learn new tasks throughout their lifetimes while retaining the knowledge gained from past experience. In stark contrast, artificial neural networks (ANNs) trained under such a continual learning (CL) paradigm forget the information learned in past tasks upon learning new ones, a phenomenon known as ‘Catastrophic Forgetting’ or ‘Catastrophic Interference’. The objective of this thesis is to enable efficient continual learning in deep neural networks (DNNs) while mitigating this forgetting. Toward this goal, first, a continual learning algorithm (SPACE) is proposed that allocates a subset of network filters or neurons to each task using Principal Component Analysis (PCA). Such task-specific network isolation not only ensures zero forgetting but also creates structured sparsity in the network, which enables energy-efficient inference. Second, a faster and more efficient training algorithm for CL is proposed by introducing Gradient Projection Memory (GPM). Here, the most important gradient spaces for each task are computed using Singular Value Decomposition (SVD), and new tasks are learned in directions orthogonal to the GPM to minimize forgetting. Third, to improve new learning while minimizing forgetting, a Scaled Gradient Projection (SGP) method is proposed that, in addition to orthogonal gradient updates, allows scaled updates along the important gradient spaces of past tasks. Next, for continual learning on an online stream of tasks, a memory-efficient experience replay method is proposed that uses saliency maps, which explain the network's decisions, to select the memories replayed during new tasks to prevent forgetting. Finally, a meta-learning-based continual learner, Amphibian, is proposed that achieves fast online continual learning without any experience replay. All the algorithms are evaluated on short and long sequences of tasks drawn from standard image-classification datasets. Overall, the methods proposed in this thesis address critical limitations of DNNs for continual learning and advance the state of the art in this domain.
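The gradient-projection idea behind GPM can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the thesis code: the function names (build_memory, project_orthogonal), the 0.95 energy threshold, and the toy random data are hypothetical. The sketch only captures the core operation the abstract describes: keep an orthonormal basis M of the important subspace found by SVD, then remove the component of each new-task gradient that lies in span(M).

```python
import numpy as np

def build_memory(rep_matrix: np.ndarray, energy: float = 0.95) -> np.ndarray:
    """Hypothetical memory construction: return an orthonormal basis (columns)
    capturing `energy` fraction of the spectral energy of a matrix of stored
    representations, via SVD."""
    U, S, _ = np.linalg.svd(rep_matrix, full_matrices=False)
    cum = np.cumsum(S**2) / np.sum(S**2)
    k = int(np.searchsorted(cum, energy)) + 1  # smallest rank reaching the threshold
    return U[:, :k]

def project_orthogonal(grad: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Remove the component of `grad` in span(M): g - M (M^T g), so the
    update is orthogonal to the memorized gradient space."""
    return grad - M @ (M.T @ grad)

# Toy usage with random data (illustrative only): a fake past-task
# representation matrix and a fake new-task gradient for one layer.
rng = np.random.default_rng(0)
M = build_memory(rng.normal(size=(64, 200)))
g = rng.normal(size=64)
g_proj = project_orthogonal(g, M)
assert np.allclose(M.T @ g_proj, 0.0, atol=1e-8)  # update is orthogonal to the memory
```

Under this reading, SGP would relax the hard orthogonality: instead of zeroing the component of the gradient inside the memory, it would scale it down, permitting some learning along the important directions of past tasks.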


Degree Type

  • Doctor of Philosophy

Department

  • Electrical and Computer Engineering

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Kaushik Roy

Additional Committee Member 2

Anand Raghunathan

Additional Committee Member 3

Vijay Raghunathan

Additional Committee Member 4

Sumeet Gupta