
      Sequential nonlinear learning for distributed multiagent systems via extreme learning machines

      Author: Vanli, N. D.; Sayin, M. O.; Delibalta, I.; Kozat, S. S.
      Date: 2017
      Source Title: IEEE Transactions on Neural Networks and Learning Systems
      Print ISSN: 2162-237X
      Publisher: Institute of Electrical and Electronics Engineers Inc.
      Volume: 28
      Issue: 3
      Pages: 546 - 558
      Language: English
      Type: Article
      Abstract
      We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data revealed to it. The aim of the multiagent system, however, is to train the SLFN at each agent as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for applications involving big data. © 2016 IEEE.
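      As a rough illustration of the kind of update the abstract describes, the sketch below combines an ELM-style random hidden layer with a neighbor-averaging (consensus) step followed by a local subgradient step on each agent's output weights. The squared loss, ring topology, doubly stochastic mixing matrix, step-size schedule, and all parameter names are assumptions made for this example only, not details taken from the paper.

import numpy as np

# Minimal sketch of a distributed, subgradient-based ELM update (illustrative only).
rng = np.random.default_rng(0)
n_agents, n_features, n_hidden, T = 4, 5, 20, 1000

# ELM hidden layer: random, fixed weights, assumed shared by all agents.
W = rng.standard_normal((n_features, n_hidden))
b = rng.standard_normal(n_hidden)
hidden = lambda x: np.tanh(x @ W + b)           # SLFN feature map h(x)

# Doubly stochastic mixing matrix for a ring: each agent averages with its two neighbors.
A = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    A[i, i] = 0.5
    A[i, (i - 1) % n_agents] = 0.25
    A[i, (i + 1) % n_agents] = 0.25

beta = np.zeros((n_agents, n_hidden))           # output weights, one row per agent

for t in range(1, T + 1):
    mu = 1.0 / np.sqrt(t)                       # diminishing step size (illustrative)
    new_beta = np.zeros_like(beta)
    for i in range(n_agents):
        # Each agent sees only its own sample (x_i, y_i) at time t.
        x_i = rng.standard_normal(n_features)
        y_i = x_i.sum()                         # toy target for the example
        h = hidden(x_i)
        grad = (h @ beta[i] - y_i) * h          # (sub)gradient of 0.5 * (h @ beta - y)**2
        # Consensus step (average neighbors' weights) followed by a local subgradient step.
        new_beta[i] = A[i] @ beta - mu * grad
    beta = new_beta

# After T rounds the agents' output weights should be close to one another (consensus).
print("max pairwise weight gap:", np.abs(beta[:, None] - beta[None, :]).max())

      In this sketch only the output weights are learned, which is the defining ELM simplification: the hidden layer is drawn once at random and kept fixed, so each consensus-plus-subgradient step is a cheap linear update on a single weight vector per agent.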
      Keywords: Distributed systems; Extreme learning machine (ELM); Multiagent optimization; Sequential learning; Single hidden layer feedforward neural networks (SLFNs)
      Permalink: http://hdl.handle.net/11693/37102
      Published Version (please cite this version): http://dx.doi.org/10.1109/TNNLS.2016.2536649
      Collections: Department of Electrical and Electronics Engineering
