Stochastic subgradient algorithms for strongly convex optimization over distributed networks

dc.citation.epage: 260
dc.citation.issueNumber: 4
dc.citation.spage: 248
dc.citation.volumeNumber: 4
dc.contributor.author: Sayin, M. O.
dc.contributor.author: Vanli, N. D.
dc.contributor.author: Kozat, S. S.
dc.contributor.author: Başar, T.
dc.date.accessioned: 2018-04-12T11:02:51Z
dc.date.available: 2018-04-12T11:02:51Z
dc.date.issued: 2017
dc.department: Department of Electrical and Electronics Engineering
dc.description.abstract: We study diffusion- and consensus-based optimization of a sum of unknown convex objective functions over distributed networks. The only access to these functions is through stochastic gradient oracles, each of which is available at a different node, and a limited number of gradient oracle calls is allowed at each node. In this framework, we introduce a convex optimization algorithm based on stochastic subgradient descent (SSD) updates. We use a carefully designed time-dependent weighted averaging of the SSD iterates, which yields a convergence rate of O(N√N / ((1−σ)T)) after T gradient updates for each node on a network of N nodes, where 0 ≤ σ < 1 denotes the second largest singular value of the communication matrix. This rate of convergence matches the performance lower bound up to constant terms. As with the SSD algorithm, the computational complexity of the proposed algorithm scales linearly with the dimensionality of the data. Furthermore, the communication load of the proposed method is the same as that of the SSD algorithm. Thus, the proposed algorithm is highly efficient in terms of complexity and communication load. We illustrate the merits of the algorithm against state-of-the-art methods on benchmark real-life data sets. © 2017 IEEE.
dc.description.provenance: Made available in DSpace on 2018-04-12T11:02:51Z (GMT). No. of bitstreams: 1. bilkent-research-paper.pdf: 179475 bytes, checksum: ea0bedeb05ac9ccfb983c327e155f0c2 (MD5). Previous issue date: 2017
dc.identifier.doi: 10.1109/TNSE.2017.2713396
dc.identifier.issn: 2327-4697
dc.identifier.uri: http://hdl.handle.net/11693/37101
dc.language.iso: English
dc.publisher: IEEE Computer Society
dc.relation.isversionof: http://dx.doi.org/10.1109/TNSE.2017.2713396
dc.source.title: IEEE Transactions on Network Science and Engineering
dc.subject: Consensus strategies
dc.subject: Convex optimization
dc.subject: Diffusion strategies
dc.subject: Distributed processing
dc.subject: Online learning
dc.title: Stochastic subgradient algorithms for strongly convex optimization over distributed networks
dc.type: Article
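
Illustration: the abstract above describes diffusion-based SSD over a network with a time-dependent weighted averaging of the iterates. The following minimal Python sketch shows that general idea on a toy problem; the ring network, quadratic local losses, step sizes, weights proportional to t, and all names are assumptions chosen for the example, not the authors' published algorithm or code.

import numpy as np

# Toy sketch of diffusion-style distributed SSD with time-weighted
# iterate averaging (weights proportional to t). All problem data
# below are assumptions for illustration only.

rng = np.random.default_rng(0)
N, d, T = 4, 5, 1000          # nodes, dimension, gradient calls per node
mu = 1.0                      # strong convexity parameter (assumed)

# Doubly stochastic communication matrix for a ring of N nodes.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

# Node i privately holds f_i(w) = 0.5 * mu * ||w - c_i||^2;
# the network objective is the sum over all nodes, minimized at
# the mean of the c_i.
targets = rng.normal(size=(N, d))

w = np.zeros((N, d))          # current iterate at each node (one row per node)
w_avg = np.zeros((N, d))      # time-weighted average of the iterates

for t in range(1, T + 1):
    # Noisy subgradient of each node's local objective.
    grads = mu * (w - targets) + 0.1 * rng.normal(size=(N, d))
    # Diffusion step: combine neighbors' iterates, then descend locally.
    w = W @ w - (1.0 / (mu * t)) * grads
    # Weighted averaging with weights ~ t (they sum to 1 over t = 1..T),
    # which down-weights early, inaccurate iterates.
    w_avg += (2 * t / (T * (T + 1))) * w

# Each node's averaged iterate should approach the network optimum.
err = np.linalg.norm(w_avg - targets.mean(axis=0), axis=1)
print("per-node error vs. network optimum:", err)
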

Files

Original bundle

Name: Stochastic Subgradient Algorithms for Strongly.pdf
Size: 1.34 MB
Format: Adobe Portable Document Format
Description: Full printable version