Purdue University Graduate School

Interpretable natural language processing models with deep hierarchical structures and effective statistical training

posted on 2023-11-03, 19:18 authored by Zhaoxin Luo

The research focuses on improving natural language processing (NLP) models by integrating the hierarchical structure of language, which is essential for understanding and generating human language. The main contributions of the study are:

  1. Hierarchical RNN Model: Development of a deep recurrent neural network (RNN) model that captures both explicit and implicit hierarchical structures in language.
  2. Hierarchical Attention Mechanism: Use of a multi-level attention mechanism that helps the model prioritize relevant information at each level of the hierarchy.
  3. Latent Indicators and Efficient Training: Integration of latent indicators via the Expectation-Maximization (EM) algorithm, with computational complexity reduced through bootstrap sampling and layered training strategies.
  4. Sequence-to-Sequence Model for Translation: Extension of the model to machine translation, including a novel pre-training technique and a hierarchical decoding strategy that stabilizes latent indicators during generation.
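The multi-level attention idea in contribution 2 can be illustrated with a minimal sketch: word-level attention pools word embeddings into sentence vectors, and sentence-level attention then pools those into a single document vector. This is a generic hierarchical-attention illustration, not the thesis's actual model; the query vectors and dimensions below are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(vectors, query):
    """Score each row of `vectors` against `query` and return the
    attention-weighted average (a simple dot-product attention pooling)."""
    scores = vectors @ query       # (n,)
    weights = softmax(scores)      # (n,) attention weights summing to 1
    return weights @ vectors       # (d,) pooled vector

# Toy document: 2 sentences, each a matrix of 4-dim word embeddings.
rng = np.random.default_rng(0)
doc = [rng.normal(size=(3, 4)), rng.normal(size=(5, 4))]

word_query = rng.normal(size=4)  # hypothetical learned word-level query
sent_query = rng.normal(size=4)  # hypothetical learned sentence-level query

# Level 1: pool words into one vector per sentence.
sent_vecs = np.stack([attention_pool(s, word_query) for s in doc])
# Level 2: pool sentence vectors into a single document vector.
doc_vec = attention_pool(sent_vecs, sent_query)
print(doc_vec.shape)  # prints (4,)
```

The two pooling levels mirror the explicit hierarchy of language (words within sentences, sentences within a document); in a trained model the query vectors would be learned jointly with the rest of the network.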

The study reports performance on various NLP tasks comparable to that of larger models, with the added benefit of increased interpretability.


Degree Type

  • Doctor of Philosophy


Department

  • Statistics

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Michael Zhu

Additional Committee Member 2

Faming Liang

Additional Committee Member 3

Xiao Wang

Additional Committee Member 4

Vinayak Rao