Purdue University Graduate School
LINEAR CONVERGENCE OF DISTRIBUTED GRADIENT TRACKING SCHEMES UNDER THE KL PROPERTY

thesis
posted on 2022-06-10, 13:47, authored by Tejaskumar Pradipbhai Tamboli
We study decentralized multiagent optimization over networks modeled as undirected graphs. The optimization problem consists of minimizing a (nonconvex) smooth function plus a convex extended-value function, which enforces constraints or extra structure on the solution (e.g., sparsity or low rank). We further assume that the objective function satisfies the Kurdyka-Łojasiewicz (KL) property with exponent in [0, 1). The KL property is satisfied by several (nonconvex) functions of practical interest, e.g., those arising in machine learning applications; in the centralized setting, it permits strong convergence guarantees. Here we establish the first convergence result of this type for distributed algorithms, specifically the distributed gradient-tracking-based algorithm SONATA, first proposed in [1]. When the KL exponent is in (0, 1/2], the sequence generated by SONATA is proved to converge to a stationary solution of the problem at an R-linear rate, whereas a sublinear rate is certified when the KL exponent is in (1/2, 1). This matches the convergence behavior of centralized proximal-gradient algorithms. Numerical results validate our theoretical findings.
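The gradient-tracking idea underlying algorithms of the SONATA family can be illustrated on a toy problem. The sketch below is a generic gradient-tracking iteration (the DIGing/NEXT-style updates), not the thesis's exact SONATA scheme; the ring graph, local quadratic costs, and step size are illustrative assumptions. Each agent mixes its iterate with its neighbors' and moves along a tracker of the network-average gradient; since the summed quadratic objective is strongly convex (KL exponent 1/2), linear convergence to the global minimizer is expected.

```python
import numpy as np

# Toy gradient-tracking sketch (DIGing-style; hypothetical setup, not SONATA itself).
# Agent i holds f_i(x) = 0.5 * a_i * (x - b_i)^2, so sum_i f_i is strongly convex.
np.random.seed(0)
n = 5                                   # agents on a ring graph
a = np.random.uniform(1.0, 2.0, n)
b = np.random.uniform(-1.0, 1.0, n)
x_star = np.sum(a * b) / np.sum(a)      # minimizer of sum_i f_i

# Doubly stochastic mixing matrix for the ring (each agent averages with 2 neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def grad(i, x):
    return a[i] * (x - b[i])

x = np.zeros(n)                                        # local iterates
y = np.array([grad(i, x[i]) for i in range(n)])        # gradient trackers
alpha = 0.1                                            # step size (assumed small enough)

for _ in range(200):
    x_new = W @ x - alpha * y
    # Tracking update: mix trackers, then add the change in the local gradient,
    # so each y_i follows the network-average gradient.
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new

print(np.max(np.abs(x - x_star)))   # all agents end up close to the global minimizer
```

With a strongly convex objective (KL exponent 1/2) the error decays geometrically, which is the R-linear regime the abstract refers to; for KL exponents in (1/2, 1) only a sublinear rate is certified.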

History

Degree Type

  • Master of Science

Department

  • Industrial Engineering

Campus location

  • West Lafayette

Advisor/Supervisor/Committee Chair

Gesualdo Scutari

Additional Committee Member 2

Harsha Honappa

Additional Committee Member 3

Vaneet Aggarwal
