
      Gambler's ruin bandit problem

      Author
      Akbarzadeh, Nima
      Tekin, Cem
      Date
      2017
      Source Title
      Proceedings of the 54th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2016
      Publisher
      IEEE
      Pages
      1236 - 1243
      Language
      English
      Type
      Conference Paper
      Item Usage Stats
      167 views, 172 downloads
      Abstract
      In this paper, we propose a new multi-armed bandit problem called the Gambler's Ruin Bandit Problem (GRBP). In the GRBP, the learner proceeds in a sequence of rounds, where each round is a Markov Decision Process (MDP) with two actions (arms): a continuation action that moves the learner randomly over the state space around the current state, and a terminal action that moves the learner directly into one of the two terminal states (goal and dead-end state). The current round ends when a terminal state is reached, and the learner receives a positive reward only when the goal state is reached. The objective of the learner is to maximize its long-term reward (the expected number of times the goal state is reached) without any prior knowledge of the state transition probabilities. We first prove a result on the form of the optimal policy for the GRBP. Then, we define the regret of the learner with respect to an omnipotent oracle, which acts optimally in each round, and prove that this regret increases logarithmically over rounds. We also identify a condition under which the learner's regret is bounded. A potential application of the GRBP is optimal medical treatment assignment, in which the continuation action corresponds to a conservative treatment and the terminal action corresponds to a risky treatment such as surgery.
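      To make the round dynamics described in the abstract concrete, the following is a minimal, self-contained Python sketch of a single GRBP round: a random walk over a small state space under the continuation action, and a direct jump to the goal or dead-end state under the terminal action. All names and numbers here (the state-space size, the p_up and q_goal_base probabilities, and the threshold_policy rule) are illustrative assumptions for the sketch, not quantities or methods taken from the paper.

      import random

      # Illustrative sketch of a GRBP round; parameters are assumptions, not paper values.
      GOAL, DEAD_END = "goal", "dead_end"

      def continuation_step(state, n_states, p_up=0.5):
          """Continuation action: random walk around the current state."""
          step = 1 if random.random() < p_up else -1
          return min(max(state + step, 0), n_states - 1)

      def terminal_step(state, n_states, q_goal_base=0.3):
          """Terminal action: jump directly to a terminal state.
          Success probability is assumed here to grow with the current state."""
          q_goal = min(1.0, q_goal_base + 0.5 * state / (n_states - 1))
          return GOAL if random.random() < q_goal else DEAD_END

      def play_round(policy, n_states=10, start=5, max_steps=1000):
          """Play one round until a terminal state is reached.
          Returns 1 if the goal state is reached, else 0."""
          state = start
          for _ in range(max_steps):
              if state == 0:              # assume state 0 is the dead-end state
                  return 0
              if state == n_states - 1:   # assume the top state is the goal state
                  return 1
              if policy(state) == "continue":
                  state = continuation_step(state, n_states)
              else:
                  return 1 if terminal_step(state, n_states) == GOAL else 0
          return 0

      def threshold_policy(state, threshold=7):
          """Illustrative rule: take the risky terminal action only from high states."""
          return "terminate" if state >= threshold else "continue"

      if __name__ == "__main__":
          rounds = 10_000
          wins = sum(play_round(threshold_policy) for _ in range(rounds))
          print(f"Goal reached in {wins}/{rounds} rounds")

      Running the script estimates how often a fixed threshold rule reaches the goal under the assumed transition probabilities; a learner in the actual GRBP would instead have to estimate these probabilities from the outcomes of past rounds, which is what drives the regret analysis in the paper.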
      Keywords
      Learning algorithms
      Bandit problems
      Conservative treatments
      Markov decision processes
      Medical treatment
      Multi-armed bandit problem
      Optimal policies
      Prior knowledge
      State transition probabilities
      Permalink
      http://hdl.handle.net/11693/37637
      Published Version (Please cite this version)
      http://dx.doi.org/10.1109/ALLERTON.2016.7852376
      Collections
      • Department of Electrical and Electronics Engineering