Purdue University Graduate School

File(s) under embargo (reason: prepared for publications); 17 day(s) until file(s) become available.

Fast and Energy-Efficient Computing with Spiking Neural Networks

thesis
posted on 2023-05-10, 20:07, authored by Wachirawit Ponghiran

Spiking Neural Networks (SNNs) are artificial neural networks designed to closely mimic biological neurons. They use binary values (commonly known as spikes) to represent neuron activity, and this simple communication scheme offers an opportunity to compute only as needed when SNNs are executed on event-driven hardware. Their inherent recurrence allows SNNs to retain information over time while potentially learning with fewer trainable parameters than the commonly used Long Short-Term Memory networks (LSTMs).
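
The neuron model behind this description can be illustrated with a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the standard spiking neuron. All names and parameter values here are illustrative, not the dissertation's actual model; the carried-over membrane potential is the "inherent recurrence" referred to above.

```python
def lif_step(v, x, decay=0.9, threshold=1.0):
    """One time-step of a leaky integrate-and-fire (LIF) neuron.

    v: membrane potential carried over from the previous step (the
       neuron's inherent recurrence); x: input current at this step.
    Returns the updated potential and a binary spike output.
    """
    v = decay * v + x                      # leaky integration of input
    spike = 1 if v >= threshold else 0     # fire when threshold is crossed
    v -= spike * threshold                 # soft reset after a spike
    return v, spike

# Run a short input sequence through a single neuron.
v, spikes = 0.0, []
for x in [0.6, 0.6, 0.6, 0.0, 0.6]:
    v, s = lif_step(v, x)
    spikes.append(s)
# spikes == [0, 1, 0, 0, 1]
```

Because the neuron emits a 0/1 spike rather than a real-valued activation, downstream hardware only needs to do work when a spike actually arrives, which is the basis of the event-driven efficiency claim.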

The first part of this dissertation explores different applications for SNNs and techniques to train them. We first use simple networks of spiking neurons with random, fixed connections, called liquid state machines (LSMs), for reinforcement learning (RL). This allows a complex task to be solved with very few trainable parameters. However, our results also reveal that learning to capture long-term dependencies with simple LSMs is not straightforward. To fully utilize the inherent recurrence of spiking neurons, we discuss the issues that arise when training SNNs from scratch on practical sequential problems and propose modifications to the spiking neuron dynamics to tackle them. We suggest a training scheme in which spiking neurons produce multi-bit outputs (as opposed to binary spikes) to mitigate the gradient mismatch problem that arises from the use of a surrogate function. Alternatively, since training SNNs from scratch can be compute-intensive, we propose a technique that avoids it for sequential tasks by making part of an LSTM spiking.
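
The gradient mismatch mentioned above can be shown with a small numerical sketch (function names and constants are hypothetical): the forward pass applies a hard threshold, while the backward pass substitutes the derivative of a smooth sigmoid, so the function differentiated is not the function evaluated. A quantized multi-bit output is one way to let the forward output track the surrogate more closely.

```python
import math

def heaviside(u):
    # Forward pass: hard threshold -> binary spike. Its derivative is zero
    # almost everywhere, which is why training needs a surrogate.
    return 1.0 if u >= 0.0 else 0.0

def surrogate_grad(u, k=5.0):
    # Backward pass: derivative of a steep sigmoid substituted for the
    # step function's derivative -- the source of the gradient mismatch.
    s = 1.0 / (1.0 + math.exp(-k * u))
    return k * s * (1.0 - s)

def multibit_out(u, bits=2, step=0.5):
    # Multi-bit alternative: quantize the membrane potential into
    # 2**bits levels instead of a single 0/1 spike.
    levels = 2 ** bits - 1
    return min(max(round(u / step), 0), levels) * step

# Binary output jumps from 0 to 1; the multi-bit output changes gradually,
# closer to the smooth surrogate assumed in the backward pass.
outputs = [(heaviside(u), multibit_out(u)) for u in (-0.6, 0.2, 0.4, 0.9)]
```

This is only a sketch of the mismatch, not the dissertation's training scheme; the actual scheme concerns how such multi-bit neurons are trained end to end.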

The latter part of this dissertation highlights a unique, previously under-explored benefit of SNNs, along with architectural changes that improve SNNs' recognition performance. We first investigate the use of SNNs for event-based optical flow prediction. We point out that more frequent optical flow estimates can be made by casting flow estimation as a sequential learning problem. Such fast predictions are realizable in an energy-efficient manner with SNNs that perform event-driven computing. We show that SNNs can be trained to continuously estimate temporally dense optical flow without a network reset. Independently, we investigate skip connections, which have been introduced to SNNs to combat the spike vanishing gradient problem. We find that existing skip connections in SNNs are limited to linking the inputs and outputs of spiking neurons within a single time-step. Hence, we introduce a new type of skip connection that links the inputs and outputs of spiking neurons across different time-steps. We show that these time-skip connections facilitate information flow in SNNs and help improve network prediction accuracy. Lastly, we introduce a method to quickly search for the time-skip connections that yield the best prediction accuracy for a given sequential recognition task.
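
The cross-time skip idea can be sketched as follows. This is a toy single-neuron illustration under assumed parameters (delay, weight, and decay values are hypothetical), not the dissertation's architecture: the spike emitted at step t is routed back as extra input at step t + delay, so information flows across time-steps rather than only within one.

```python
def run_with_time_skip(inputs, delay=2, decay=0.9, threshold=1.0, skip_w=0.5):
    """Toy LIF neuron with a 'time-skip' connection (illustrative):
    the spike from step t - delay is added to the input at step t,
    creating an information path across time-steps."""
    v = 0.0
    spikes = [0.0] * len(inputs)
    for t, x in enumerate(inputs):
        # Cross-time skip input: output from `delay` steps earlier.
        skip_in = skip_w * spikes[t - delay] if t >= delay else 0.0
        v = decay * v + x + skip_in        # leaky integration + skip input
        s = 1.0 if v >= threshold else 0.0
        v -= s * threshold                 # soft reset after a spike
        spikes[t] = s
    return spikes

# With the skip path, earlier spikes boost later steps:
# run_with_time_skip([0.6] * 6) -> [0.0, 1.0, 0.0, 1.0, 1.0, 1.0]
```

Searching over the `delay` values (and which layers carry such paths) is a small combinatorial space, which is what makes a fast search over time-skip connections plausible.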

History

Degree Type

  • Doctor of Philosophy

Department

  • Electrical and Computer Engineering

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Kaushik Roy

Additional Committee Member 2

Anand Raghunathan

Additional Committee Member 3

Vijay Raghunathan

Additional Committee Member 4

Sumeet Gupta
