Purdue University Graduate School
Dissertation_Senwei_final_v2.pdf (30.38 MB)
Posted on 2022-06-17, 18:47, authored by Senwei Liang

High-dimensional regression problems are ubiquitous in science and engineering. Deep learning has become a critical tool for solving a wide range of high-dimensional problems with surprising performance. Although neural networks have good theoretical properties in terms of approximation and optimization, numerically obtaining an accurate neural network solution remains challenging due to the highly non-convex objective function and the implicit bias of least-squares optimization. This dissertation addresses two topics in high-dimensional regression via efficient deep learning algorithms: solving PDEs on high-dimensional domains and data-driven dynamical modeling.

In the first topic, we aim to develop efficient neural network solvers for PDE problems. First, we focus on network structures that increase efficiency. We propose a data-driven activation function, called the reproducing activation function, which can reproduce traditional approximation tools and enables faster convergence of deep neural network training at a smaller parameter cost. Second, we apply neural networks to mitigate the numerical issues that hamper traditional approaches. As an example, we develop a neural network solver for elliptic PDEs on unknown manifolds and verify its effectiveness on large-scale problems.
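The idea of an activation that "reproduces traditional approximation tools" can be illustrated with a minimal, hypothetical sketch: a trainable combination of ReLU powers (piecewise polynomials) and a sinusoid (spectral modes), so that particular parameter choices recover classical basis functions. The function name and parameterization below are illustrative assumptions, not the dissertation's actual construction.

```python
import math

def reproducing_activation(x, a, b, c, w):
    """Hypothetical reproducing activation unit: a trainable mix of
    ReLU, squared ReLU, and a sinusoid. Special parameter settings
    reproduce classical tools, e.g. b = c = 0 gives a scaled ReLU,
    and a = b = 0 gives a pure Fourier mode sin(w * x)."""
    relu = max(x, 0.0)
    return a * relu + b * relu ** 2 + c * math.sin(w * x)

# b = c = 0 reduces the unit to a plain ReLU:
y_relu = reproducing_activation(1.5, 1.0, 0.0, 0.0, 1.0)   # 1.5
# a = b = 0 reduces it to a Fourier mode sin(w * x):
y_sine = reproducing_activation(0.5, 0.0, 0.0, 1.0, 2.0)   # sin(1.0)
```

In a network, `a`, `b`, `c`, and `w` would be trained per neuron alongside the weights, letting the same architecture interpolate between polynomial-like and spectral-like approximants.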

In the second topic, we aim to enhance the accuracy of learning dynamical systems from data by incorporating prior knowledge. For the missing-dynamics problem, we take advantage of known partial dynamics and propose a framework that approximates a map from the memory of the resolved and identifiable unresolved variables to the missing components of the resolved dynamics. With this framework, we achieve low error in predicting the missing components, enabling accurate prediction of the resolved variables. For recovering Hamiltonian dynamics, we exploit the energy-conservation property and learn the conserved Hamiltonian function instead of its associated vector field. To better learn the Hamiltonian from stiff dynamics, we identify and split the training data into stiff and non-stiff portions, and adopt different learning strategies for each portion to reduce the training error.
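The step of learning the Hamiltonian rather than the vector field rests on a standard identity: once a scalar H(q, p) is learned, the dynamics follow from dq/dt = ∂H/∂p and dp/dt = -∂H/∂q. The sketch below illustrates this recovery with central finite differences and a known Hamiltonian standing in for a trained network; an actual implementation would use automatic differentiation, and all names here are assumptions.

```python
def hamiltonian_vector_field(H, q, p, eps=1e-6):
    """Recover the dynamics implied by a (learned) scalar Hamiltonian:
    dq/dt = dH/dp, dp/dt = -dH/dq. Central finite differences stand in
    for the automatic differentiation a trained network would use."""
    dH_dq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    dH_dp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    return dH_dp, -dH_dq

# Harmonic oscillator H = (p^2 + q^2) / 2, so dq/dt = p and dp/dt = -q.
H = lambda q, p: 0.5 * (p ** 2 + q ** 2)
dq, dp = hamiltonian_vector_field(H, 1.0, 0.0)  # approximately (0.0, -1.0)
```

Because the learned object is H itself, trajectories integrated from this vector field conserve energy by construction up to discretization error, which is the prior the abstract refers to.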


Degree Type

  • Doctor of Philosophy


Department

  • Mathematics

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Haizhao Yang

Additional Committee Member 2

Jingwei Hu

Additional Committee Member 3

Di Qi

Additional Committee Member 4

Xiangxiong Zhang
