Purdue University Graduate School
thesis
Posted on 2023-12-01, 16:44, authored by Sihui Wang

Traditionally, spatial data refers to observations made at various spatial locations, and interpolation is commonly the central aim of the analysis. The scope of such data has expanded notably, however, to settings where the spatial location plays the role of input variables and the observed response variable is the model outcome, as in computer experiments and recommender systems. Spatial prediction, pervasive across many disciplines, often employs linear prediction because of its simplicity. Kriging, originally developed in mining engineering, has found utility in diverse fields such as environmental sciences, hydrology, natural resources, remote sensing, and computer experiments, among others. In these applications, Gaussian processes have emerged as a powerful tool. As with kernel learning methods in machine learning, Kriging requires inverting the covariance matrix of the observed random variables.
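The role of the covariance matrix can be seen in a minimal simple-Kriging sketch. The exponential covariance, the simulated field, and all parameter values below are illustrative assumptions, not choices made in the thesis; the point is only that the predictor and its variance both require solving a linear system in the n-by-n covariance matrix.

```python
import numpy as np

def exp_cov(a, b, sigma2=1.0, phi=1.0):
    """Exponential covariance between location arrays a (n,d) and b (m,d)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / phi)

rng = np.random.default_rng(0)
s = rng.uniform(0.0, 1.0, size=(50, 2))          # observed locations
K = exp_cov(s, s)                                 # n x n covariance matrix
z = rng.multivariate_normal(np.zeros(50), K)      # one simulated realization

s0 = np.array([[0.5, 0.5]])                       # prediction location
k0 = exp_cov(s, s0)                               # covariances with the data
w = np.linalg.solve(K, k0)                        # Kriging weights via K^{-1} k0
z_hat = float(w.T @ z)                            # simple-Kriging prediction
var_hat = float(exp_cov(s0, s0) - k0.T @ w)       # Kriging prediction variance
```

Every prediction reuses the same factorization of K, so the cost is dominated by that single n-by-n solve, which is exactly what becomes prohibitive, and numerically fragile, for large n.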

A primary challenge in spatial data analysis, in this expansive sense, is handling the large covariance matrix that appears in the best linear prediction, or Kriging, and in the Gaussian likelihood function. Recent studies have shown that the covariance matrix can become ill-conditioned as its dimension grows. This finding underscores the need for alternative methodologies for analyzing large spatial data that avoid relying on the full covariance matrix. Although various strategies, such as covariance tapering, block diagonal matrices, and the traditional low-rank model with perturbation, have been proposed to address the computational hurdles of large spatial data, not all of them resolve the problem of an ill-conditioned covariance matrix.
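The ill-conditioning is easy to reproduce. The following sketch, with an assumed squared-exponential covariance on a one-dimensional grid (not a kernel or data set from the thesis), shows the condition number of the covariance matrix growing rapidly as the grid is refined:

```python
import numpy as np

def sqexp_cov(x, phi=0.5):
    """Squared-exponential covariance on a 1-D location vector x."""
    d = np.abs(x[:, None] - x[None, :])
    return np.exp(-((d / phi) ** 2))

# Condition number of the covariance matrix as the grid is refined.
for n in (10, 20, 40):
    x = np.linspace(0.0, 1.0, n)
    K = sqexp_cov(x)
    print(n, np.linalg.cond(K))
```

For smooth kernels the eigenvalues decay so quickly that the condition number reaches the limits of double precision well before n is large, which is why direct inversion of the full covariance matrix is unreliable.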

In this thesis, we examine two promising strategies for the analysis of large-scale spatial data. The first is the low-rank approximation, a tactic that exists in multiple forms. Traditional low-rank models employ perturbation to handle the ill-conditioned covariance matrix but fall short in data prediction accuracy. We propose the use of a pseudo-inverse for the low-rank model as an alternative to full Kriging in handling massive spatial data. We will demonstrate that the prediction variance of the proposed low-rank model can be comparable to that of full Kriging, while offering computational cost benefits. Furthermore, our proposed low-rank model surpasses the traditional low-rank model in data interpolation. Consequently, when full Kriging is untenable due to an ill-conditioned covariance matrix, our proposed low-rank model becomes a viable alternative for interpolating large spatial data sets with high precision.
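The pseudo-inverse idea can be illustrated with a toy rank-r covariance. The basis matrix, random-effects covariance, and dimensions below are hypothetical stand-ins, not the thesis construction: with rank r < n the low-rank covariance is singular, so the ordinary inverse used in Kriging does not exist, yet the Moore-Penrose pseudo-inverse still yields a well-defined linear predictor.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 100, 5
S = rng.standard_normal((n, r))      # spatial basis functions at the n data sites
A = np.eye(r)                        # covariance of the r random effects
K_lr = S @ A @ S.T                   # rank-r covariance: singular since r < n

# The pseudo-inverse replaces K^{-1} in the Kriging weights K^{-1} k0.
K_pinv = np.linalg.pinv(K_lr)

# Moore-Penrose defining property K K^+ K = K holds to machine precision.
err = np.linalg.norm(K_lr @ K_pinv @ K_lr - K_lr)
```

Because the pseudo-inverse can be computed from the n-by-r basis rather than the full n-by-n matrix, the predictor inherits the computational savings of the low-rank structure while avoiding the artificial perturbation term.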

The second strategy harnesses deep learning for spatial interpolation. We explore machine learning approaches suited to modeling voluminous spatial data. In contrast to most existing research, which applies deep learning only to the mean function of spatial data, we concentrate on capturing the spatial correlation. This approach holds potential for effectively modeling non-stationary spatial phenomena. Because Kriging assumes an unknown constant mean and is the best linear unbiased predictor under that assumption, we expect it to perform better in stationary cases. Conversely, DeepKriging, with its flexible structure for both the mean function and the spatial basis functions, exhibits enhanced performance on nonstationary data.
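The basis-function step of a DeepKriging-style model can be sketched as follows. The Gaussian radial basis, the 5-by-5 knot grid, and the bandwidth are assumptions made for illustration (DeepKriging itself uses other basis families): each location is mapped to a feature vector of basis evaluations, and a neural network (omitted here) then regresses the response on these features together with any covariates.

```python
import numpy as np

def rbf_embed(s, knots, bandwidth=0.2):
    """Gaussian radial basis features: one column per knot."""
    d = np.linalg.norm(s[:, None, :] - knots[None, :, :], axis=-1)
    return np.exp(-((d / bandwidth) ** 2))

# Fixed knots on a 5 x 5 grid over the unit square (an illustrative layout).
knots = np.array([[i / 4, j / 4] for i in range(5) for j in range(5)])
s = np.random.default_rng(2).uniform(0.0, 1.0, size=(200, 2))
features = rbf_embed(s, knots)   # (200, 25) matrix fed to the network
```

Embedding the coordinates this way lets the network represent spatially varying structure, which is what gives the approach its edge on nonstationary data.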

Degree Type

  • Doctor of Philosophy

Department

  • Statistics

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Hao Zhang

Additional Committee Member 2

Faming Liang

Additional Committee Member 3

Vinayak Rao

Additional Committee Member 4

Jun Xie
