Purdue University Graduate School

Learning and Design Methodologies for Efficient, Robust Neural Networks

thesis
posted on 2019-08-15, 17:52, authored by Priyadarshini Panda
"Can machines think?", the question brought up by Alan Turing, has led to the development of the eld of brain-inspired computing, wherein researchers have put substantial effort in building smarter devices and technology that have the potential of human-like understanding. However, there still remains a large (several orders-of-magnitude) power efficiency gap between the human brain and computers that attempt to emulate some facets of its functionality. In this thesis, we present design techniques that exploit the inherent variability in the difficulty of input data and the correlation of characteristic semantic information among inputs to scale down the computational requirements of a neural network with minimal impact on output quality. While large-scale artificial neural networks have achieved considerable success in a range of applications, there is growing interest in more biologically realistic models, such as, Spiking Neural Networks (SNNs), due to their energy-efficient spike based processing capability. We investigate neuroscienti fic principles to develop novel learning algorithms that can enable SNNs to conduct on-line learning. We developed an auto-encoder based unsupervised learning rule for training deep spiking convolutional networks that yields state-of-the-art results with computationally efficient learning. Further, we propose a novel "learning to forget" rule that addresses the catastrophic forgetting issue predominant with traditional neural computing paradigm and offers a promising solution for real-time lifelong learning without the expensive re-training procedure. Finally, while artificial intelligence grows in this digital age bringing large-scale social disruption, there is a growing security concern in the research community about the vulnerabilities of neural networks towards adversarial attacks. 
To that end, we describe discretization-based solutions, traditionally used for reducing the resource utilization of deep neural networks, as a means of improving adversarial robustness. We also propose a novel noise-learning training strategy as an adversarial defense method. We show that implicit generative modeling of random noise, with the same loss function used during posterior maximization, improves a model's understanding of the data manifold, furthering adversarial robustness. We evaluated and analyzed the behavior of the noise-modeling technique using principal component analysis, which yields metrics that can be generalized to all adversarial defenses.
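The general intuition behind discretization-based defenses of the kind mentioned above can be illustrated with input quantization: reducing the bit depth of inputs collapses many small adversarial perturbations onto the same discrete value as the clean input. The sketch below is a generic illustration of that idea (the `quantize` helper and its parameters are illustrative assumptions, not the exact formulation developed in the thesis):

```python
import numpy as np

def quantize(x, bits=2):
    """Map inputs in [0, 1] onto 2**bits discrete levels.

    Perturbations too small to push a value across a quantization
    boundary are erased: the perturbed input snaps back to the same
    code as the clean input.
    """
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

clean = np.array([0.20, 0.40, 0.70])
perturbed = clean + 0.04  # small adversarial-style perturbation

# With 2-bit quantization (levels 0, 1/3, 2/3, 1), clean and perturbed
# inputs land on identical codes, so the perturbation never reaches
# the network.
same = np.allclose(quantize(clean), quantize(perturbed))
```

The same mechanism that saves memory and compute (fewer representable values) also shrinks the space of effective perturbations, which is why resource-oriented discretization doubles as a robustness tool.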

History

Degree Type

  • Doctor of Philosophy

Department

  • Electrical and Computer Engineering

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Prof. Kaushik Roy

Additional Committee Member 2

Prof. Anand Raghunathan

Additional Committee Member 3

Prof. Byunghoo Jung

Additional Committee Member 4

Prof. Vijay Raghunathan