Poster
Achieving Linear Convergence with Parameter-Free Algorithms in Decentralized Optimization
Ilya Kuruzov · Gesualdo Scutari · Alexander Gasnikov
This paper tackles (strongly) convex smooth optimization in a network of agents without a centralized server. Existing decentralized algorithms require knowledge of problem and network parameters, such as the Lipschitz constants of the local gradients or network connectivity information, which agents usually cannot access. This forces conservative hyperparameter choices, leading to slow convergence or even divergence. This paper introduces a decentralized algorithm that eliminates the need for such parameter-specific tuning. Our approach employs an operator splitting technique with a novel variable metric, enabling a local backtracking line-search that adaptively selects the learning rate without global information or extensive communication. The result is convergence guarantees, and a dependence on optimization and network parameters, that compare favorably with existing nonadaptive methods. Notably, our method is the first adaptive decentralized algorithm to achieve linear convergence for strongly convex functions. Numerical experiments on machine learning problems demonstrate superior performance in convergence speed and scalability, particularly in scenarios where conventional methods fail due to stepsize limitations.
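The paper's actual method (operator splitting with a variable metric) is not reproduced here. As a rough illustration of the general idea of local, parameter-free stepsize selection in a decentralized setting, the following is a minimal sketch of a gradient-tracking loop in which each agent picks its own stepsize via an Armijo backtracking test on its private loss. The ring topology, quadratic losses, and all names are illustrative assumptions, and no convergence guarantee is claimed for this naive combination.

```python
# Illustrative sketch only: decentralized gradient tracking where each agent
# chooses its stepsize with a local Armijo backtracking test on its private
# objective. This is NOT the paper's algorithm; losses, topology, and
# parameters below are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 8, 5

# Private strongly convex quadratics f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((dim, dim)) + 2 * np.eye(dim) for _ in range(n_agents)]
b = [rng.standard_normal(dim) for _ in range(n_agents)]

def f(i, x):     # local objective of agent i
    r = A[i] @ x - b[i]
    return 0.5 * r @ r

def grad(i, x):  # local gradient of agent i
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring network.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

def local_backtracking(i, x, d, alpha0=1.0, beta=0.5, c=1e-4, max_trials=30):
    """Armijo backtracking using only agent i's private loss and gradient."""
    alpha, g = alpha0, grad(i, x)
    for _ in range(max_trials):
        if f(i, x + alpha * d) <= f(i, x) + c * alpha * g @ d:
            break
        alpha *= beta
    return alpha

# Gradient tracking: x holds local iterates, y tracks the average gradient.
x = np.zeros((n_agents, dim))
y = np.array([grad(i, x[i]) for i in range(n_agents)])
g_old = y.copy()

for k in range(200):
    # Each agent backtracks along its tracked direction -y_i (local info only).
    alphas = np.array([local_backtracking(i, x[i], -y[i]) for i in range(n_agents)])
    x_new = W @ x - alphas[:, None] * y
    g_new = np.array([grad(i, x_new[i]) for i in range(n_agents)])
    y = W @ y + g_new - g_old
    x, g_old = x_new, g_new

# Centralized solution of min_x sum_i f_i(x), for reference.
H = sum(Ai.T @ Ai for Ai in A)
x_star = np.linalg.solve(H, sum(Ai.T @ bi for Ai, bi in zip(A, b)))
print("consensus gap:", np.linalg.norm(x - x.mean(axis=0)))
print("dist to x*   :", np.linalg.norm(x.mean(axis=0) - x_star))
```

The sketch shows why purely local line search is attractive: no agent needs the global Lipschitz constant or spectral information about W. Making such a search provably sound, with linear convergence in the strongly convex case, is what the paper's variable-metric operator-splitting construction provides.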