Sequential nonlinear learning for distributed multiagent systems via extreme learning machines
dc.citation.epage | 558 | en_US |
dc.citation.issueNumber | 3 | en_US |
dc.citation.spage | 546 | en_US |
dc.citation.volumeNumber | 28 | en_US |
dc.contributor.author | Vanli, N. D. | en_US |
dc.contributor.author | Sayin, M. O. | en_US |
dc.contributor.author | Delibalta, I. | en_US |
dc.contributor.author | Kozat, S. S. | en_US |
dc.date.accessioned | 2018-04-12T11:02:53Z | |
dc.date.available | 2018-04-12T11:02:53Z | |
dc.date.issued | 2017 | en_US |
dc.department | Department of Electrical and Electronics Engineering | en_US |
dc.description.abstract | We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data revealed to it. On the other hand, the aim of the multiagent system is to train the SLFN at each agent to perform as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for applications involving big data. © 2016 IEEE. | en_US |
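The scheme described in the abstract combines two ingredients: each agent takes a local subgradient step on the output weights of its own SLFN (the hidden layer is drawn randomly and fixed, as in extreme learning machines), and agents average their weights with neighbors over the network. The toy sketch below illustrates that pattern only; it is not the paper's algorithm. The squared loss, the ring-network mixing matrix `W`, the step-size schedule `0.1/sqrt(t)`, and all dimensions are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_features, n_hidden = 4, 3, 16

# ELM: hidden-layer weights are drawn randomly once and never trained.
A = rng.standard_normal((n_features, n_hidden))
b = rng.standard_normal(n_hidden)

def hidden(x):
    """Fixed random feature map with sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(x @ A + b)))

# Doubly stochastic mixing matrix for a ring of 4 agents (assumed topology).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def elm_step(w, xs, ys, t):
    """One round: consensus averaging, then a local subgradient step per agent."""
    mixed = W @ w                      # exchange/average weights with neighbors
    eta = 0.1 / np.sqrt(t)             # diminishing step size (assumed schedule)
    new_w = np.empty_like(w)
    for i in range(n_agents):
        h = hidden(xs[i])
        err = h @ mixed[i] - ys[i]
        new_w[i] = mixed[i] - eta * err * h   # gradient of 0.5 * err**2
    return new_w

# Synthetic stream: each agent sees its own noisy samples of a common target.
w_true = rng.standard_normal(n_hidden)
w = np.zeros((n_agents, n_hidden))    # each agent's output-weight vector
for t in range(1, 2001):
    xs = rng.standard_normal((n_agents, n_features))
    ys = hidden(xs) @ w_true + 0.01 * rng.standard_normal(n_agents)
    w = elm_step(w, xs, ys, t)

# Mixing drives the agents toward consensus on a common model.
spread = np.max(np.abs(w - w.mean(axis=0)))

# Evaluate the consensus model on fresh data against a zero predictor.
x_test = rng.standard_normal((200, n_features))
h_test = hidden(x_test)
y_test = h_test @ w_true
mse_trained = np.mean((h_test @ w.mean(axis=0) - y_test) ** 2)
mse_zero = np.mean(y_test ** 2)
```

After enough rounds the agents' weight vectors nearly agree (small `spread`) and the consensus model predicts far better than the trivial zero predictor, mirroring the abstract's claim that each agent approaches the centralized batch solution.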
dc.description.provenance | Made available in DSpace on 2018-04-12T11:02:53Z (GMT). No. of bitstreams: 1 bilkent-research-paper.pdf: 179475 bytes, checksum: ea0bedeb05ac9ccfb983c327e155f0c2 (MD5) Previous issue date: 2017 | en |
dc.identifier.doi | 10.1109/TNNLS.2016.2536649 | en_US |
dc.identifier.issn | 2162-237X | |
dc.identifier.uri | http://hdl.handle.net/11693/37102 | |
dc.language.iso | English | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1109/TNNLS.2016.2536649 | en_US |
dc.source.title | IEEE Transactions on Neural Networks and Learning Systems | en_US |
dc.subject | Distributed systems | en_US |
dc.subject | Extreme learning machine (ELM) | en_US |
dc.subject | Multiagent optimization | en_US |
dc.subject | Sequential learning | en_US |
dc.subject | Single hidden layer feedforward neural networks (SLFNs) | en_US |
dc.title | Sequential nonlinear learning for distributed multiagent systems via extreme learning machines | en_US |
dc.type | Article | en_US |
Files
Original bundle
1 - 1 of 1
- Name: Sequential Nonlinear Learning for Distributed.pdf
- Size: 3.28 MB
- Format: Adobe Portable Document Format
- Description: Full printable version