Distributed online learning via cooperative contextual bandits

dc.citation.epage: 3714
dc.citation.issueNumber: 14
dc.citation.spage: 3700
dc.citation.volumeNumber: 63
dc.contributor.author: Tekin, C.
dc.contributor.author: Schaar, Mihaela van der
dc.date.accessioned: 2019-02-13T07:40:29Z
dc.date.available: 2019-02-13T07:40:29Z
dc.date.issued: 2015-07-15
dc.department: Department of Electrical and Electronics Engineering
dc.description.abstract: In this paper, we propose a novel framework for decentralized, online learning by many learners. At each moment of time, an instance characterized by a certain context may arrive at each learner; based on the context, the learner can select one of its own actions (which yields a reward and provides information) or request assistance from another learner. In the latter case, the requester pays a cost and receives the reward, while the provider learns the information. In our framework, learners are modeled as cooperative contextual bandits. Each learner seeks to maximize the expected reward from its arrivals, which involves trading off the reward received from its own actions, the information learned from its own actions, the reward received from the actions requested of others, and the cost paid for these actions, taking into account what it has learned about the value of assistance from each other learner. We develop distributed online learning algorithms and provide analytic bounds that compare their efficiency with a complete-knowledge (oracle) benchmark, in which the expected reward of every action in every context is known by every learner. Our estimates show that the regret (the loss incurred by the algorithm) is sublinear in time. Our theoretical framework can be used in many practical applications, including Big Data mining, event detection in surveillance sensor networks, and distributed online recommendation systems. (A toy sketch of this interaction pattern appears after the metadata record below.)
dc.identifier.doi: 10.1109/TSP.2015.2430837
dc.identifier.eissn: 1941-0476
dc.identifier.issn: 1053-587X
dc.identifier.uri: http://hdl.handle.net/11693/49380
dc.language.iso: English
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.isversionof: http://doi.org/10.1109/TSP.2015.2430837
dc.source.title: IEEE Transactions on Signal Processing
dc.subject: Contextual bandits
dc.subject: Cooperative learning
dc.subject: Distributed learning
dc.subject: Multi-user bandits
dc.subject: Multi-user learning
dc.subject: Online learning
dc.title: Distributed online learning via cooperative contextual bandits
dc.type: Article
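
The sketch below is a minimal toy illustration of the interaction pattern described in the abstract, not the algorithms analyzed in the paper. Each learner treats its own arms plus "ask learner j" as candidate choices, discretizes the context space into cells, and keeps a UCB-style index per (cell, choice); requesting help incurs a fixed cost. The learner count, reward model, ASK_COST, CONTEXT_CELLS, and the index rule are all illustrative assumptions.

# Toy cooperative contextual bandit sketch (illustrative only; not the paper's algorithm).
import math
import random

NUM_LEARNERS = 3          # number of cooperating learners
ARMS_PER_LEARNER = 2      # actions owned by each learner
CONTEXT_CELLS = 4         # uniform partition of the context space [0, 1)
ASK_COST = 0.1            # cost paid when requesting another learner's action
HORIZON = 20000
rng = random.Random(0)


def expected_reward(learner, arm, context):
    """Toy ground truth: each (learner, arm) pair prefers a different context region."""
    peak = (learner * ARMS_PER_LEARNER + arm + 0.5) / (NUM_LEARNERS * ARMS_PER_LEARNER)
    return max(0.0, 1.0 - 2.0 * abs(context - peak))


class Learner:
    """One learner: chooses among its own arms and 'ask learner j' options."""

    def __init__(self, idx):
        self.idx = idx
        # Choices: ("own", arm) for each local arm, ("ask", j) for each other learner.
        self.choices = [("own", a) for a in range(ARMS_PER_LEARNER)]
        self.choices += [("ask", j) for j in range(NUM_LEARNERS) if j != idx]
        self.counts = [[0] * len(self.choices) for _ in range(CONTEXT_CELLS)]
        self.means = [[0.0] * len(self.choices) for _ in range(CONTEXT_CELLS)]

    def select(self, context, t):
        """Pick the choice with the highest UCB-style index in the context's cell."""
        cell = min(int(context * CONTEXT_CELLS), CONTEXT_CELLS - 1)
        best, best_index = 0, -float("inf")
        for k in range(len(self.choices)):
            n = self.counts[cell][k]
            if n == 0:                      # force exploration of untried choices
                return cell, k
            index = self.means[cell][k] + math.sqrt(2.0 * math.log(t + 1) / n)
            if index > best_index:
                best, best_index = k, index
        return cell, best

    def update(self, cell, k, net_reward):
        """Incrementally update the sample-mean estimate of the chosen option."""
        self.counts[cell][k] += 1
        n = self.counts[cell][k]
        self.means[cell][k] += (net_reward - self.means[cell][k]) / n


def best_local_arm(learner_idx, context):
    """When asked for help, a learner plays its best own arm for that context.
    For brevity this toy uses the true best arm; the paper's learners must learn it."""
    return max(range(ARMS_PER_LEARNER),
               key=lambda a: expected_reward(learner_idx, a, context))


learners = [Learner(i) for i in range(NUM_LEARNERS)]
total_reward = 0.0
for t in range(HORIZON):
    i = t % NUM_LEARNERS                   # round-robin arrivals for simplicity
    context = rng.random()
    cell, k = learners[i].select(context, t)
    kind, target = learners[i].choices[k]
    if kind == "own":
        mean = expected_reward(i, target, context)
        net = 1.0 if rng.random() < mean else 0.0
    else:                                  # ask learner `target` and pay the cost
        arm = best_local_arm(target, context)
        mean = expected_reward(target, arm, context)
        net = (1.0 if rng.random() < mean else 0.0) - ASK_COST
    learners[i].update(cell, k, net)
    total_reward += net

print(f"average net reward per arrival: {total_reward / HORIZON:.3f}")

In this toy setup the requester keeps the (cost-adjusted) reward while the provider's arm generates the observation, mirroring the requester-pays, provider-learns split described in the abstract; the paper's actual algorithms and regret bounds are considerably more refined.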

Files

Original bundle

Name: Distributed_Online_Learning_via_Cooperative.pdf
Size: 3.38 MB
Format: Adobe Portable Document Format
Description: Full printable version

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission