RELEAF: an algorithm for learning and exploiting relevance

dc.citation.epage: 15
dc.citation.spage: 1
dc.contributor.author: Tekin, C.
dc.contributor.author: Schaar, Mihaela van der
dc.date.accessioned: 2019-02-13T06:54:00Z
dc.date.available: 2019-02-13T06:54:00Z
dc.date.issued: 2015-02
dc.department: Department of Electrical and Electronics Engineering
dc.description.abstract: Recommender systems, medical diagnosis, network security, etc., require ongoing learning and decision-making in real time. These -- and many others -- are prime examples of both the opportunities and the difficulties presented by Big Data: the available information often arrives from a variety of sources and has diverse features, so learning from all the sources may be valuable, but integrating what is learned is subject to the curse of dimensionality. This paper develops and analyzes algorithms that allow efficient learning and decision-making while avoiding the curse of dimensionality. We formalize the information available to the learner/decision-maker at a particular time as a context vector which the learner should consider when taking actions. In general the context vector is very high dimensional, but in many settings the most relevant information is embedded in only a few relevant dimensions. If these relevant dimensions were known in advance, the problem would be simple -- but they are not. Moreover, the relevant dimensions may be different for different actions. Our algorithm learns the relevant dimensions for each action and makes decisions based on what it has learned. Formally, we build on the structure of a contextual multi-armed bandit by adding and exploiting a relevance relation. We prove a general regret bound for our algorithm whose time order depends only on the maximum number of relevant dimensions among all the actions; in the special case where the relevance relation is single-valued (a function), the bound reduces to Õ(T^{2(√2-1)}). In the absence of a relevance relation, the best known contextual bandit algorithms achieve regret Õ(T^{(D+1)/(D+2)}), where D is the full dimension of the context vector.
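The abstract describes a contextual bandit in which each action's reward depends on only a few relevant dimensions of a high-dimensional context. The toy sketch below is not the authors' RELEAF algorithm; the arm/bin structure, the spread-based relevance guess, and names such as relevant_dim and expected_reward are illustrative assumptions. It shows one simple way such per-action relevance learning can be simulated: each arm discretizes every context dimension, keeps per-bin reward estimates, and acts on the dimension whose estimates vary the most.

# Illustrative sketch (not the paper's RELEAF implementation): a toy
# contextual bandit where each arm's expected reward depends on only one
# "relevant" dimension of the context.  The learner keeps a discretized
# reward estimate per (arm, dimension, bin) and, for each arm, trusts the
# dimension whose bin means spread out the most -- a crude stand-in for
# learning the relevance relation.
import numpy as np

rng = np.random.default_rng(0)

D = 10          # full context dimension
n_arms = 3
n_bins = 4      # discretization of each context dimension
T = 20000

# Hidden ground truth: one relevant dimension per arm (unknown to the learner).
relevant_dim = rng.integers(0, D, size=n_arms)

def expected_reward(arm, context):
    # Reward depends only on the arm's relevant dimension.
    return context[relevant_dim[arm]]

# Per (arm, dimension, bin): running mean reward and observation counts.
means = np.zeros((n_arms, D, n_bins))
counts = np.zeros((n_arms, D, n_bins))

total_reward = 0.0
for t in range(1, T + 1):
    context = rng.random(D)
    bins = np.minimum((context * n_bins).astype(int), n_bins - 1)

    # Score each arm via its currently most "informative" dimension: high
    # variation across bins suggests that dimension actually drives reward.
    scores = np.empty(n_arms)
    for a in range(n_arms):
        spread = means[a].max(axis=1) - means[a].min(axis=1)   # per dimension
        d_hat = int(np.argmax(spread))                          # guessed relevant dim
        c = counts[a, d_hat, bins[d_hat]]
        bonus = np.sqrt(2 * np.log(t) / c) if c > 0 else np.inf  # UCB-style bonus
        scores[a] = means[a, d_hat, bins[d_hat]] + bonus
    arm = int(np.argmax(scores))

    reward = expected_reward(arm, context) + 0.1 * rng.standard_normal()
    total_reward += reward

    # Update the chosen arm's estimate in every dimension's observed bin.
    for d in range(D):
        b = bins[d]
        counts[arm, d, b] += 1
        means[arm, d, b] += (reward - means[arm, d, b]) / counts[arm, d, b]

print("average reward:", total_reward / T)
print("guessed relevant dims:",
      [int(np.argmax(means[a].max(axis=1) - means[a].min(axis=1))) for a in range(n_arms)])

Because the learner only ever needs good estimates along one dimension per arm, its sample complexity scales with the number of relevant dimensions rather than with D, which is the intuition behind the Õ(T^{2(√2-1)}) versus Õ(T^{(D+1)/(D+2)}) comparison in the abstract.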
dc.identifier.uri: http://hdl.handle.net/11693/49374
dc.language.iso: English
dc.publisher: Cornell University
dc.source.title: IEEE Journal of Selected Topics in Signal Processing
dc.subject: Contextual bandits
dc.subject: Regret
dc.subject: Dimensionality reduction
dc.subject: Learning relevance
dc.subject: Recommender systems
dc.subject: Online learning
dc.subject: Active learning
dc.title: RELEAF: an algorithm for learning and exploiting relevance
dc.type: Article

Files

Original bundle

Name: RELEAF_An_Algorithm_for_Learning.pdf
Size: 311.5 KB
Format: Adobe Portable Document Format
Description: Full printable version

License bundle

Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed upon to submission