Purdue University Graduate School

Adversarial Attacks On Graph Convolutional Transformer With EHR Data

Thesis posted on 2024-04-28, 22:12, authored by Siddhartha Pothukuchi

This research explores adversarial attacks on Graph Convolutional Transformer (GCT) models that utilize Electronic Health Record (EHR) data. As deep learning models become increasingly integral to healthcare, securing their robustness against adversarial threats is critical. The research assesses the susceptibility of GCT models to two specific adversarial attacks, the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA), and examines their effect on the model's mortality and readmission predictions. Through experiments conducted with the MIMIC-III and eICU datasets, the study finds that although the GCT model performs strongly on EHR data under normal conditions, its performance degrades sharply under attack: accuracy falls from 86% on clean test data to about 57%, and the area under the curve (AUC) drops from 0.86 to 0.51. These findings, averaged across both datasets and attack methods, underscore the urgent need for effective adversarial defense mechanisms in AI systems used in healthcare. This thesis contributes to the field by identifying these vulnerabilities and suggesting strategies to enhance the resilience of GCT models against adversarial manipulation.
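For context on the attacks named above, the following is a minimal sketch of the single-step FGSM perturbation in PyTorch, not the thesis's actual implementation. The `model`, `loss_fn`, and `epsilon` budget are placeholders, and in an EHR setting such as the GCT the perturbation would more plausibly be applied to continuous feature embeddings than to raw categorical codes. JSMA differs in that it iteratively perturbs only the features with the highest Jacobian-based saliency rather than all features at once.

```python
import torch

def fgsm_perturb(model, inputs, labels, loss_fn, epsilon=0.05):
    """Single-step FGSM: nudge inputs along the sign of the loss gradient.

    `epsilon` is a hypothetical perturbation budget, not a value taken
    from the thesis.
    """
    # Detach from any prior graph and track gradients on the inputs.
    inputs = inputs.clone().detach().requires_grad_(True)
    loss = loss_fn(model(inputs), labels)  # forward pass on clean inputs
    loss.backward()                        # gradient of loss w.r.t. inputs
    # Move every feature one step in the direction that increases the loss.
    return (inputs + epsilon * inputs.grad.sign()).detach()
```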

History

Degree Type

  • Master of Science

Department

  • Computer and Information Technology

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Dr. Baijian Yang

Additional Committee Member 2

Dr. Marcus K. Rogers

Additional Committee Member 3

Dr. Tatiana Ringenberg
