<p dir="ltr">This thesis investigates the robustness of parameter-efficient fine-tuning (PEFT) methods, specifically the Low-Rank Adaptation (LoRA) technique, against adversarial attacks in natural language processing (NLP) classification tasks. The motivation is to assess whether the efficiency benefits of LoRA compromise model security when exposed to adversarial manipulations. Experiments were conducted on multiple transformer-based models such as BERT , DistilBERT , ALBERT , RoBERTa , GPT-Neo, and LLAMA3. Each model was fine-tuned using both full fine-tuning and LoRA-based PEFT on three benchmark datasets including SST-2 , AG-News , and IMDB . Robustness was evaluated using three prominent adversarial attack algorithmsTextFooler, TextBugger, and BAE. Performance was measured using metrics such as classification accuracy under attack, Attack Success Rate (ASR), perturbed word percentage (PWP%), and average queries (AQ). Results indicate that all models are vulnerable to adversarial examples regardless of the fine-tuning strategy, with ASR frequently exceeding 90% and in some cases reaching 100%. Accuracy under attack often dropped by 70–90 percentage points; for instance, BERTs accuracy fell from 99.7% to 0.0% on AG-News under TextFooler. Perturbed word percentages ranged from 15% to 40%, and attacks generally required fewer queries to succeed on LoRA models compared to full models. For example, on IMDB under TextBugger, LoRA-BERT required 427 average queries, while full BERT required 552. TextFooler was particularly effective, often causing significant drops in model performance. Larger models such as LLAMA3 and GPT-Neo showed slightly higher resilience compared to smaller models like DistilBERT. Importantly, LoRA fine-tuned models demonstrated robustness comparable to their fully fine-tuned counterparts and, in some cases, performed better in terms of accuracy under attack. These findings suggest that LoRA’s efficiency gains do not inherently weaken model robustness. However, the consistently high attack success rates across all configurations highlight the necessity for integrating stronger adversarial defenses into NLP systems.</p>