Purdue University Graduate School

CROSS-LAYER DESIGN TO ENHANCE THE FUNCTIONALITY AND ROBUSTNESS OF COMPUTING-IN-MEMORY FOR DEEP NEURAL NETWORK ACCELERATORS

Thesis
posted on 2025-06-26, 13:35, authored by Akul Malhotra

Compute-in-memory (CiM) has emerged as a promising solution for edge computing, enabling low-latency and energy-efficient computation by mitigating the von Neumann bottleneck. CiM workloads broadly fall into two categories: vector-matrix multiplications (VMMs), which are central to deep neural network (DNN) inference, and in-memory Boolean and arithmetic operations. In VMM CiM, the memory arrays store the weight matrix while the input vectors are applied as wordline signals, enabling in situ computation of the dot products between the weights and the inputs. In-memory Boolean and arithmetic CiM, on the other hand, performs bit-wise operations on operands that are both stored within the memory.
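As a concrete illustration, the minimal NumPy sketch below models an ideal crossbar VMM, assuming a simple linear weight-to-conductance mapping; the conductance range, read voltage, and function names are illustrative choices, not values from the dissertation.

    import numpy as np

    # Illustrative constants; actual values depend on the memory technology.
    G_MIN, G_MAX = 1e-6, 1e-4  # cell conductance range (siemens)
    V_READ = 0.2               # read voltage scale on the wordlines (volts)

    def weights_to_conductances(W):
        """Linearly map a real-valued weight matrix onto the conductance range."""
        w_min, w_max = W.min(), W.max()
        return G_MIN + (W - w_min) / (w_max - w_min) * (G_MAX - G_MIN)

    def crossbar_vmm(x, G):
        """Ideal crossbar VMM: inputs drive wordline voltages, and each bitline
        current sums V_i * G_ij over the rows (Kirchhoff's current law), so
        every column computes a dot product in situ."""
        v = x * V_READ   # encode the input vector as wordline voltages
        return v @ G     # one bitline current per column of the array

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 16))  # weight matrix stored in the array
    x = rng.standard_normal(64)        # input vector applied to the wordlines
    currents = crossbar_vmm(x, weights_to_conductances(W))

The key point of the sketch is that the multiply-accumulate happens in the array itself: no weight is ever moved to a separate compute unit.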

While VMM-based CiM accelerators enable efficient DNN execution, they face significant challenges from crossbar non-idealities and memory faults, both of which degrade computational accuracy. In-memory Boolean and arithmetic CiM, in contrast, is more robust, but it faces its own challenges, such as the limited range of functions it can implement efficiently.
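To make the accuracy impact concrete, the self-contained sketch below perturbs a programmed conductance matrix with two hypothetical non-idealities, Gaussian device-to-device variation and random stuck-at cell faults, and measures the resulting VMM error; this fault model and its parameters are assumptions for illustration, not the dissertation's.

    import numpy as np

    rng = np.random.default_rng(0)
    G_MIN, G_MAX = 1e-6, 1e-4                     # cell conductance range (siemens)
    G = rng.uniform(G_MIN, G_MAX, size=(64, 16))  # programmed conductances
    x = rng.standard_normal(64)                   # wordline inputs

    def apply_non_idealities(G, sigma=0.05, fault_rate=0.01):
        """Perturb programmed conductances with Gaussian device variation
        and random stuck-at-G_MIN cell faults (an assumed fault model)."""
        G_var = G * (1 + sigma * rng.standard_normal(G.shape))
        stuck = rng.random(G.shape) < fault_rate
        return np.where(stuck, G_MIN, G_var)

    ideal = x @ G                         # fault-free VMM output
    faulty = x @ apply_non_idealities(G)  # output under the assumed faults
    rel_err = np.linalg.norm(faulty - ideal) / np.linalg.norm(ideal)
    print(f"relative VMM error under this fault model: {rel_err:.2%}")

Because the analog outputs are never individually checked, such errors propagate silently into DNN activations, which is why mitigation must be designed in across the stack rather than patched at one layer.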

This dissertation investigates these issues and proposes cross-layer design strategies to enhance the functionality and robustness of CiM accelerators across multiple memory technologies.


Degree Type

  • Doctor of Philosophy

Department

  • Electrical and Computer Engineering

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Sumeet Kumar Gupta

Additional Committee Member 2

Anand Raghunathan

Additional Committee Member 3

Kaushik Roy

Additional Committee Member 4

Vijay Raghunathan
