Analysis of Thompson sampling for combinatorial multi-armed bandit with probabilistically triggered arms

Date
2020
Source Title
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019
Publisher
PMLR
Language
English
Abstract

We analyze the regret of combinatorial Thompson sampling (CTS) for the combinatorial multi-armed bandit with probabilistically triggered arms under the semi-bandit feedback setting. We assume that the learner has access to an exact optimization oracle but does not know the expected base arm outcomes beforehand. When the expected reward function is Lipschitz continuous in the expected base arm outcomes, we derive an O(∑_{i=1}^{m} log T / (p_i Δ_i)) regret bound for CTS, where m denotes the number of base arms, p_i denotes the minimum non-zero triggering probability of base arm i, and Δ_i denotes the minimum suboptimality gap of base arm i. We also compare CTS with combinatorial upper confidence bound (CUCB) via numerical experiments on a cascading bandit problem.
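To make the setting concrete, below is a minimal Python sketch of CTS on a cascading bandit, the experimental setting mentioned in the abstract. All names, the Beta posteriors, and the top-K oracle are illustrative assumptions for this simple instance, not the paper's code: in a cascading bandit the exact oracle is just "pick the K items with the largest expected click probabilities," and the probabilistically triggered arms are the items the user examines up to and including the first click.

```python
import random

def cts_cascading(click_probs, K, horizon, seed=0):
    """Illustrative CTS sketch on a cascading bandit (hypothetical helper).

    Each base arm i has an unknown Bernoulli click probability, tracked with
    a Beta(a[i], b[i]) posterior. Semi-bandit feedback: only the outcomes of
    the triggered arms (items examined before and including the first click)
    are observed and used to update the posteriors.
    """
    rng = random.Random(seed)
    m = len(click_probs)
    a = [1] * m  # Beta posterior: 1 + observed clicks
    b = [1] * m  # Beta posterior: 1 + observed non-clicks
    total_reward = 0
    for _ in range(horizon):
        # Sample expected base arm outcomes from the posteriors.
        theta = [rng.betavariate(a[i], b[i]) for i in range(m)]
        # Exact oracle for the cascading bandit: top-K by sampled outcome.
        super_arm = sorted(range(m), key=lambda i: theta[i], reverse=True)[:K]
        # Play the super-arm; only triggered arms yield feedback.
        for i in super_arm:
            clicked = rng.random() < click_probs[i]
            if clicked:
                a[i] += 1
                total_reward += 1
                break  # items after the first click are never triggered
            b[i] += 1
    return total_reward, a, b
```

Note that at least one base arm is triggered in every round (the first item in the ranking is always examined), which is why the analysis depends on the minimum non-zero triggering probabilities p_i rather than allowing them to vanish.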
