Purdue University Graduate School

DEFENDING BERT AGAINST MISSPELLINGS

Thesis posted on 2021-04-06, 13:53, authored by Nivedita Nighojkar
Defending Natural Language Processing (NLP) models against adversarial attacks is challenging because text data is discrete. Given the variety of NLP applications, however, it is important to make text processing models more robust and secure. This thesis develops techniques that help text processing models such as BERT combat adversarial samples containing misspellings. The resulting models are more robust than off-the-shelf spelling checkers.
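To illustrate the kind of adversarial samples the abstract refers to, the sketch below generates simple character-level misspellings (adjacent-character swaps, deletions, and substitutions). This is a minimal, hypothetical example of misspelling-style perturbations in general — the thesis's own attack and defense methods are not specified here, and the function and parameter names (`perturb_word`, `perturb_sentence`, `rate`) are illustrative assumptions.

```python
import random

def perturb_word(word, rng):
    """Apply one character-level edit (swap, delete, or substitute) to a word.
    Words shorter than 3 characters are left unchanged, and the first and
    last characters are never deleted or substituted, mimicking typos that
    humans still read correctly."""
    if len(word) < 3:
        return word
    i = rng.randrange(1, len(word) - 1)
    op = rng.choice(["swap", "delete", "substitute"])
    if op == "swap":
        # Swap the character at i with its right neighbour.
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "delete":
        return word[:i] + word[i + 1:]
    # Substitute with a random lowercase letter.
    return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i + 1:]

def perturb_sentence(sentence, rate=0.5, seed=0):
    """Misspell roughly `rate` of the whitespace-separated words."""
    rng = random.Random(seed)
    return " ".join(
        perturb_word(w, rng) if rng.random() < rate else w
        for w in sentence.split()
    )
```

Feeding such perturbed sentences to a classifier alongside the clean originals is one common way to measure (and, via adversarial training, improve) robustness to misspellings.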

History

Degree Type

  • Master of Science

Department

  • Computer and Information Technology

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

John Springer

Additional Committee Member 2

Jin Kocsis

Additional Committee Member 3

Eric Dietz
