LINEAR CONVERGENCE OF DISTRIBUTED GRADIENT TRACKING SCHEMES UNDER THE KL PROPERTY
We study decentralized multiagent optimization over networks modeled as undirected
graphs. The optimization problem consists of minimizing a (nonconvex) smooth function
plus a convex extended-value function, which enforces constraints or extra structure on the
solution (e.g., sparsity or low rank). We further assume that the objective function satisfies the
Kurdyka-Łojasiewicz (KL) property, with exponent in [0, 1). The KL property is satisfied
by several (nonconvex) functions of practical interest, e.g., those arising in machine learning
applications; in the centralized setting, it makes strong convergence guarantees attainable.
Here we establish the first convergence result of this type for distributed algorithms,
specifically the distributed gradient-tracking-based algorithm SONATA, first proposed in
[1]. When the KL exponent is in (0, 1/2], the sequence generated by SONATA is proved to converge to a
stationary solution of the problem at an R-linear rate, whereas a sublinear rate is certified
when the KL exponent is in (1/2, 1). This matches the convergence behavior of centralized proximal-gradient
algorithms. Numerical results validate our theoretical findings.
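For concreteness, the problem class and the KL property described above can be written as follows. This is a standard formulation for decentralized composite optimization; the symbols (m agents, local losses f_i, regularizer G, exponent θ) are illustrative notation, not taken verbatim from the thesis.

```latex
% Decentralized optimization over a network of m agents, each holding
% a smooth (possibly nonconvex) local loss f_i; G is convex and
% extended-valued (encoding constraints or structure such as sparsity):
\[
  \min_{x \in \mathbb{R}^n} \; U(x) \triangleq \frac{1}{m}\sum_{i=1}^{m} f_i(x) \;+\; G(x).
\]
% U satisfies the KL property at a stationary point x^* with exponent
% \theta \in [0,1) if, for all x near x^* and some constant c > 0,
\[
  \bigl(U(x) - U(x^\ast)\bigr)^{\theta} \;\le\; c \,\operatorname{dist}\bigl(0, \partial U(x)\bigr).
\]
% \theta \in (0, 1/2] corresponds to the R-linear rate certified for
% SONATA; \theta \in (1/2, 1) corresponds to the sublinear regime,
% mirroring centralized proximal-gradient results.
```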
Degree Type
- Master of Science
Department
- Industrial Engineering
Campus location
- West Lafayette