Purdue University Graduate School

Fast Computation of Wide Neural Networks

posted on 2019-01-02, 18:16, authored by Vineeth Chigarangappa Rangadhamappa
Recent advances in artificial neural networks have demonstrated competitive, human-comparable performance of deep neural networks on tasks like image classification, natural language processing, and time series classification. These large-scale networks pose an enormous computational challenge, especially on resource-constrained devices. The current work proposes a targeted-rank based framework for accelerated computation of wide neural networks. It investigates the problem of rank selection for tensor ring nets to achieve optimal network compression. When applied to a state-of-the-art wide residual network, namely WideResNet, the framework yielded a significant reduction in computational time. The optimally compressed non-parallel WideResNet is almost 2x faster to compute on a CPU, with only 5% degradation in accuracy, compared to a non-parallel implementation of the uncompressed WideResNet.
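To make the compression idea concrete, the sketch below shows how a tensor ring represents a full tensor as a closed chain of small 3-way cores, so the parameter count scales with the ring ranks rather than the product of the mode sizes. This is a minimal illustration in NumPy, not the thesis code; the mode sizes and ranks are hypothetical toy values, and the thesis's contribution (targeted rank selection) is not reproduced here.

```python
import numpy as np

def tr_reconstruct(cores):
    """Contract tensor-ring cores G_k of shape (r_k, n_k, r_{k+1}),
    with r_{d+1} = r_1, back into the full d-way tensor."""
    result = cores[0]  # shape (r_1, n_1, r_2)
    for core in cores[1:]:
        # contract the trailing rank index with the next core's leading rank
        result = np.tensordot(result, core, axes=([-1], [0]))
    # result shape: (r_1, n_1, ..., n_d, r_1); close the ring with a trace
    return np.trace(result, axis1=0, axis2=-1)

# Toy example (hypothetical sizes): a 4x4x4 tensor, all ring ranks equal to 2.
rng = np.random.default_rng(0)
ranks, modes = [2, 2, 2], [4, 4, 4]
cores = [rng.standard_normal((ranks[k], modes[k], ranks[(k + 1) % 3]))
         for k in range(3)]

full = tr_reconstruct(cores)
dense_params = int(np.prod(modes))          # 64 entries in the full tensor
tr_params = sum(c.size for c in cores)      # 3 * (2*4*2) = 48 core entries
print(full.shape, dense_params, tr_params)  # (4, 4, 4) 64 48
```

Even in this tiny example the ring representation is smaller than the dense tensor; for the wide convolutional and fully connected layers in WideResNet the gap grows much larger, which is what makes rank selection the key knob trading compression against accuracy.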


Degree Type

  • Master of Science


Department

  • Industrial Engineering

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Vaneet Aggarwal

Additional Committee Member 2

Juan Wachs

Additional Committee Member 3

Roshanak Nateghi