
      Stochastic subgradient algorithms for strongly convex optimization over distributed networks

Author(s): Sayin, M. O.; Vanli, N. D.; Kozat, S. S.; Başar, T.
Date: 2017
Source Title: IEEE Transactions on Network Science and Engineering
Print ISSN: 2327-4697
Publisher: IEEE Computer Society
Volume: 4
Issue: 4
Pages: 248-260
Language: English
Type: Article
      Abstract
We study diffusion- and consensus-based optimization of a sum of unknown convex objective functions over distributed networks. The only access to these functions is through stochastic gradient oracles, each of which is available only at a different node, and a limited number of gradient oracle calls is allowed at each node. In this framework, we introduce a convex optimization algorithm based on stochastic subgradient descent (SSD) updates. We use a carefully designed time-dependent weighted averaging of the SSD iterates, which yields a convergence rate of O(N√N / ((1−σ)T)) after T gradient updates for each node on a network of N nodes, where 0 ≤ σ < 1 denotes the second largest singular value of the communication matrix. This rate of convergence matches the performance lower bound up to constant terms. As with the SSD algorithm, the computational complexity of the proposed algorithm scales linearly with the dimensionality of the data. Furthermore, the communication load of the proposed method is the same as that of the SSD algorithm. Thus, the proposed algorithm is highly efficient in terms of both complexity and communication load. We illustrate the merits of the algorithm against state-of-the-art methods on benchmark real-life data sets. © 2017 IEEE.
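
      The abstract describes consensus-style SSD updates combined with a time-dependent weighted averaging of the iterates. The following is a minimal Python sketch of that general pattern, not the paper's exact method: the 1/(μt) step size, the linearly growing averaging weights, and the uniform mixing matrix are all illustrative assumptions.

      import numpy as np

      def distributed_ssd(subgrad_oracles, W, dim, T, mu=1.0):
          """Consensus-style SSD with time-weighted averaging (illustrative sketch).

          subgrad_oracles -- list of N callables; oracle i returns a stochastic
                             subgradient of node i's local objective at a point x
          W               -- (N, N) doubly stochastic communication matrix
          dim             -- dimensionality of the decision variable
          T               -- number of gradient updates per node
          mu              -- assumed strong-convexity parameter (illustrative)
          """
          N = len(subgrad_oracles)
          X = np.zeros((N, dim))        # current iterate at each node
          X_avg = np.zeros((N, dim))    # time-weighted running average per node
          weight_sum = 0.0

          for t in range(1, T + 1):
              # Communication step: each node mixes its neighbors' iterates.
              X = W @ X
              # Local SSD step; a 1/(mu*t) step size is a standard choice for
              # strongly convex objectives (an assumption here, not the paper's).
              eta = 1.0 / (mu * t)
              for i in range(N):
                  X[i] -= eta * subgrad_oracles[i](X[i])
              # Time-dependent weighted averaging: weight iterate t by t, so
              # later, more accurate iterates dominate (illustrative weights).
              weight_sum += t
              X_avg += (t / weight_sum) * (X - X_avg)

          return X_avg

      # Toy usage: N nodes with noisy gradients of f_i(x) = 0.5 * ||x - c_i||^2,
      # whose sum is minimized at the mean of the centers c_i.
      rng = np.random.default_rng(0)
      N, dim = 4, 3
      centers = rng.normal(size=(N, dim))
      oracles = [(lambda c: lambda x: (x - c) + 0.1 * rng.normal(size=dim))(c)
                 for c in centers]
      W = np.full((N, N), 1.0 / N)   # fully connected, uniform mixing (sigma = 0)
      x_hat = distributed_ssd(oracles, W, dim, T=500)
      # Each row of x_hat should be close to centers.mean(axis=0).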
      Keywords
      Consensus strategies
      Convex optimization
      Diffusion strategies
      Distributed processing
      Online learning
Permalink: http://hdl.handle.net/11693/37101
Published Version (Please cite this version): http://dx.doi.org/10.1109/TNSE.2017.2713396
Collections: Department of Electrical and Electronics Engineering