The aim of this thesis is to systematically study statistical guarantees for two
representative non-convex optimization problems arising in the statistics community.
The first one is the high-dimensional Gaussian mixture model, which is motivated by
the estimation of multiple graphical models arising from heterogeneous observations.
The second one is the low-rank tensor estimation model, which is motivated by
the high-dimensional interaction model. Both optimal statistical rates and numerical
comparisons are studied in depth.
In the first part of my thesis, we consider joint estimation of multiple graphical
models arising from heterogeneous and high-dimensional observations. Unlike most
previous approaches, which assume that the cluster structure is given in advance, an
appealing feature of our method is that it learns the cluster structure while estimating
the heterogeneous graphical models. This is achieved via a high-dimensional version of
the Expectation Conditional Maximization (ECM) algorithm. A joint graphical lasso
penalty is imposed on the conditional maximization step to extract both homogeneity
and heterogeneity components across all clusters.
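To fix ideas, this penalized conditional maximization step takes the following schematic form (the notation here is introduced only for illustration, and the exact weighting and tuning are deferred to the main text):
\[
\{\widehat{\Theta}^{(k)}\}_{k=1}^{K} \in \operatorname*{arg\,min}_{\Theta^{(k)} \succ 0}\; \sum_{k=1}^{K} \widehat{\pi}_k \Big[ \operatorname{tr}\big(\widehat{S}_k \Theta^{(k)}\big) - \log\det \Theta^{(k)} \Big] + \lambda_1 \sum_{k=1}^{K} \sum_{i \neq j} \big|\theta_{ij}^{(k)}\big| + \lambda_2 \sum_{i \neq j} \Big( \sum_{k=1}^{K} \big(\theta_{ij}^{(k)}\big)^2 \Big)^{1/2},
\]
where $\widehat{S}_k$ and $\widehat{\pi}_k$ denote the responsibility-weighted sample covariance and cluster proportion produced by the E-step. The lasso term promotes sparsity within each cluster (heterogeneity), while the group term encourages a shared sparsity pattern across clusters (homogeneity).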
Our algorithm is computationally efficient due to fast sparse learning routines and
can be implemented without prior unsupervised learning of the cluster structure. The
superior performance of our method is demonstrated by extensive experiments, and its
application to a Glioblastoma cancer dataset reveals new insights into the disease. In
theory, a non-asymptotic error bound is established for the output directly from our
high-dimensional ECM algorithm; it consists of two quantities, the statistical error
(statistical accuracy) and the optimization error (computational complexity). Such a
result gives a theoretical guideline for terminating our ECM iterations.
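Schematically, the bound takes the contraction-plus-noise form familiar from non-convex EM analyses (constants and norms are suppressed here; the precise statement is given in the corresponding chapter): if $\widehat{\Theta}^{(t)}$ denotes the output after $t$ ECM iterations and $\Theta^{*}$ the truth, then with high probability
\[
\big\| \widehat{\Theta}^{(t)} - \Theta^{*} \big\| \;\lesssim\; \underbrace{\kappa^{t}\, \big\| \widehat{\Theta}^{(0)} - \Theta^{*} \big\|}_{\text{optimization error}} \;+\; \underbrace{\frac{1}{1-\kappa}\, \varepsilon_{\mathrm{stat}}}_{\text{statistical error}}
\]
for some contraction factor $\kappa \in (0,1)$. Once the geometrically decaying optimization error falls below the statistical error, further iterations bring no gain in statistical accuracy, which is precisely the stopping guideline above.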
In the second part of my thesis, we propose a general framework for sparse and low-rank tensor estimation from cubic sketchings.
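Concretely, the measurements take the form (illustrative notation)
\[
y_i = \big\langle \mathcal{A}_i, \mathcal{T}^{*} \big\rangle + \varepsilon_i, \qquad \mathcal{A}_i = x_i \circ x_i \circ x_i, \qquad i = 1, \ldots, n,
\]
where $\mathcal{T}^{*} \in \mathbb{R}^{p \times p \times p}$ is the unknown sparse and low-rank tensor, $x_i \in \mathbb{R}^{p}$ are random sketching vectors, $\circ$ denotes the vector outer product, and $\varepsilon_i$ is observation noise.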
A two-stage non-convex implementation
is developed, based on a sparse tensor decomposition for initialization followed by
thresholded gradient descent for refinement, which ensures exact recovery in the
noiseless case and stable recovery in the noisy case with high probability.
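As an illustration of the refinement stage, the following minimal sketch shows generic thresholded gradient descent on a sparse least-squares problem. It is a toy vector analogue only, and all names below are hypothetical; the actual procedure in the thesis operates on sparse tensor factors.
\begin{verbatim}
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    if s <= 0:
        return out
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def thresholded_gradient_descent(grad_fn, theta0, s, step=0.4, n_iter=200):
    """Alternate a gradient step with hard thresholding so that
    every iterate stays s-sparse."""
    theta = hard_threshold(theta0, s)
    for _ in range(n_iter):
        theta = hard_threshold(theta - step * grad_fn(theta), s)
    return theta

# Toy usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
n, p, s = 200, 50, 5
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:s] = 1.0
y = X @ beta_star + 0.01 * rng.standard_normal(n)
grad = lambda b: X.T @ (X @ b - y) / n   # gradient of (1/2n)||y - Xb||^2
beta_hat = thresholded_gradient_descent(grad, np.zeros(p), s)
\end{verbatim}
The same alternation, a gradient step followed by projection onto the sparse and low-rank constraint set, is what drives the exact and stable recovery guarantees mentioned above.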
The non-asymptotic analysis sheds light on the interplay between optimization error
and statistical error, and the proposed procedure is shown to be rate-optimal under
certain conditions. As a technical by-product, novel high-order concentration
inequalities are derived for studying high-moment sub-Gaussian tensors.
An interesting tensor formulation illustrates the potential application to high-order
interaction pursuit in high-dimensional linear regression.
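Schematically (again with illustrative notation), a linear regression with up to third-order interactions,
\[
y = \sum_{j} \beta_j x_j + \sum_{j \le k} \gamma_{jk}\, x_j x_k + \sum_{j \le k \le l} \delta_{jkl}\, x_j x_k x_l + \varepsilon,
\]
can be written compactly as $y = \langle \mathcal{T}, \tilde{x} \circ \tilde{x} \circ \tilde{x} \rangle + \varepsilon$ for a coefficient tensor $\mathcal{T}$, where $\tilde{x} = (1, x^{\top})^{\top}$ appends a constant coordinate so that main effects and pairwise interactions are absorbed into the cubic form. Each response is then exactly a cubic sketching of $\mathcal{T}$, and sparse low-order interaction structure translates into sparsity and low-rankness of $\mathcal{T}$.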