HIGHER ORDER OPTIMIZATION TECHNIQUES FOR MACHINE LEARNING
First-order methods such as Stochastic Gradient Descent (SGD) are the methods of choice for solving non-convex optimization problems in machine learning. These methods primarily rely on the gradient of the loss function to estimate a descent direction. However, they have a number of drawbacks, including convergence to saddle points (as opposed to minima), slow convergence, and sensitivity to hyperparameter tuning. In contrast, second-order methods, which use curvature information in addition to the gradient, have been shown to achieve faster convergence rates in theory. In machine learning applications, they offer faster (up to quadratic) convergence, stability with respect to parameter tuning, and robustness to problem conditioning. In spite of these advantages, first-order methods remain the common choice because of their simplicity of implementation and low per-iteration cost: the need to generate and use curvature information, in the form of a dense Hessian matrix, makes each iteration of a second-order method substantially more expensive.
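The contrast between a first-order and a curvature-scaled update can be illustrated on a toy ill-conditioned quadratic. This is a minimal NumPy sketch for intuition only, not code from the dissertation; the quadratic and step size are assumptions chosen to make the conditioning effect visible.

```python
import numpy as np

# Toy ill-conditioned quadratic loss f(w) = 0.5 * w^T H w,
# standing in for a badly conditioned training objective.
H = np.diag([1.0, 100.0])          # condition number 100

def grad(w):
    return H @ w

w_gd = np.array([1.0, 1.0])        # gradient-descent iterate
w_nt = np.array([1.0, 1.0])        # Newton iterate
lr = 1.0 / 100.0                   # GD step size limited by the largest curvature

for _ in range(50):
    w_gd = w_gd - lr * grad(w_gd)                  # first-order update
    w_nt = w_nt - np.linalg.solve(H, grad(w_nt))   # curvature-scaled update

# Newton reaches the minimizer of a quadratic in one step, while GD
# crawls along the low-curvature direction.
print(np.linalg.norm(w_nt), np.linalg.norm(w_gd))
```

The same per-iteration gap motivates second-order methods, but here the Hessian solve is trivial; for a dense d-by-d Hessian it costs O(d^3), which is the per-iteration expense the abstract refers to.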
In this work, we address three key problems associated with second-order methods: (i) what is the best way to incorporate curvature information into the optimization procedure; (ii) how do we reduce the operation count of each iteration of a second-order method while maintaining its superior convergence properties; and (iii) how do we leverage high-performance computing platforms to significantly accelerate second-order methods. To answer the first question, we propose and validate the use of Fisher information matrices in second-order methods to significantly accelerate convergence. The second question is answered through the use of statistical sampling techniques that suitably sample matrices to reduce per-iteration cost without impacting convergence. The third question is addressed through the use of graphics processing units (GPUs) in distributed platforms to deliver state-of-the-art solvers.
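To make the first two ideas concrete, the sketch below forms the Fisher information matrix of a logistic-regression model and a cheaper estimate of it from a uniform row sample. It is an illustrative assumption, not the thesis's algorithm: the function names are hypothetical, and uniform sampling is only a stand-in for the statistical sampling schemes the work actually develops.

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_logistic(X, w):
    """Fisher information for logistic regression:
    F = (1/n) * X^T diag(p * (1 - p)) X, with p = sigmoid(X @ w)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    d = p * (1.0 - p)
    return (X * d[:, None]).T @ X / X.shape[0]

def subsampled_fisher(X, w, s):
    """Cheaper curvature estimate from s uniformly sampled rows
    (a hypothetical stand-in for the thesis's sampling techniques)."""
    idx = rng.choice(X.shape[0], size=s, replace=False)
    return fisher_logistic(X[idx], w)

n, d = 10_000, 20
X = rng.standard_normal((n, d))
w = rng.standard_normal(d)

F_full = fisher_logistic(X, w)              # O(n * d^2) to form
F_hat = subsampled_fisher(X, w, s=1_000)    # a tenth of the row cost
err = np.linalg.norm(F_hat - F_full, 2) / np.linalg.norm(F_full, 2)
```

Because the sampled matrix stays close to the full one in spectral norm, a damped Newton-type step `np.linalg.solve(F_hat + lam * np.eye(d), g)` retains most of the curvature information at a fraction of the per-iteration cost, which is the trade-off questions (i) and (ii) target.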
Through this work, we show that our solvers significantly improve on state-of-the-art optimization techniques for training machine learning models. We demonstrate improvements in training time (over an order of magnitude in wall-clock time), in the generalization properties of the learned models, and in robustness to problem conditioning.
Degree Type
- Doctor of Philosophy
Department
- Electrical and Computer Engineering
Campus location
- West Lafayette
Advisor/Supervisor/Committee Chair
- Dr. Charlie Y. Hu
Advisor/Supervisor/Committee co-chair
- Dr. Ananth Y. Grama
Additional Committee Member 2
- Dr. David F. Gleich
Additional Committee Member 3
- Dr. Petros Drineas
Additional Committee Member 4
- Dr. Mithuna S. Thottethodi
Categories
- Digital processor architectures
- Other information and computing sciences not elsewhere classified
- Artificial intelligence not elsewhere classified
- Applied computing not elsewhere classified
- Distributed computing and systems software not elsewhere classified
- Software engineering not elsewhere classified
- Optimisation