Browsing by Subject "Multi-armed bandit problem"

Now showing 1 - 4 of 4

Adaptive ambulance redeployment via multi-armed bandits (Open Access)
    (2019-09) Şahin, Ümitcan
Emergency Medical Services (EMS) provide the necessary resources when there is a need for immediate medical attention and play a significant role in saving lives in the case of a life-threatening event. It is therefore essential to design an EMS system in which arrival times to calls are as short as possible. This task includes the ambulance redeployment problem: deciding where to deploy ambulances so as to minimize arrival times and increase the coverage of demand points. In contrast to many conventional redeployment methods, where optimization is the primary concern, we propose a learning-based approach in which ambulances are redeployed without any a priori knowledge of the call distributions and the travel times; these uncertainties are learned along the way. We cast the ambulance redeployment problem as a multi-armed bandit (MAB) problem and propose various context-free and contextual MAB algorithms that learn to optimize redeployment locations via exploration and exploitation. We investigate the concept of risk aversion in ambulance redeployment and propose a risk-averse MAB algorithm. We construct a data-driven simulator consisting of a graph-based redeployment network and a Markov traffic model, and compare the performance of the algorithms on this simulator. Furthermore, we conduct more realistic simulations by modeling the city of Ankara, Turkey, and running the algorithms on this new model. Our results show that, under the same conditions, the presented MAB algorithms perform favorably against a method based on dynamic redeployment and comparably to a static allocation method that knows the true dynamics of the simulation setup beforehand.
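
To make the context-free case concrete, here is a minimal UCB1-style sketch in Python for choosing among candidate redeployment locations. The location count, horizon, and reward signal (the negative of a noisy response time) are placeholder assumptions for illustration, not the thesis's simulator or its actual algorithms.

import math, random

def ucb1_redeploy(n_locations, rounds, sample_reward):
    # UCB1 over candidate redeployment locations:
    # balance exploring untried locations with exploiting good ones.
    counts = [0] * n_locations
    means = [0.0] * n_locations
    for t in range(1, rounds + 1):
        if t <= n_locations:
            loc = t - 1                    # try every location once first
        else:
            loc = max(range(n_locations),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = sample_reward(loc)             # observed only after deploying there
        counts[loc] += 1
        means[loc] += (r - means[loc]) / counts[loc]
    return means

# Toy usage (hypothetical): 5 stations, reward = minus a noisy response time.
estimates = ucb1_redeploy(5, 2000, lambda i: -random.gauss(8 + 2 * i, 1.0))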

An efficient bandit algorithm for general weight assignments (Open Access)
    (IEEE, 2017) Gökçesu, Kaan; Ergen, Tolga; Çiftçi, S.; Kozat, Süleyman Serdar
In this paper, we study the adversarial multi-armed bandit problem and present an efficient, generally implementable bandit arm selection structure. Since we make no statistical assumptions on the bandit arm losses, the results in the paper are guaranteed to hold in an individual-sequence manner. The introduced framework achieves the optimal regret bounds by employing general weight assignments on bandit arm selection sequences. Hence, it can be used for a wide range of applications.
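
For a concrete instance of a weight-assignment scheme in the adversarial setting, below is a minimal sketch of classical Exp3-style exponential weighting with importance-weighted loss estimates; this is one well-known member of the family rather than the paper's specific construction, and the learning rate, exploration mix, and loss oracle are illustrative assumptions.

import math, random

def exp3(n_arms, rounds, observe_loss, eta=0.05, gamma=0.1):
    # Exp3: maintain exponential weights over arms; mix in a little
    # uniform exploration so sampling probabilities stay bounded away from 0.
    weights = [1.0] * n_arms
    for t in range(rounds):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        loss = observe_loss(arm, t)            # only the pulled arm's loss is seen
        # importance-weighted (unbiased) loss estimate for the pulled arm
        weights[arm] *= math.exp(-eta * loss / probs[arm])
    return weights

# Toy usage (hypothetical): losses in [0, 1] with no statistical assumptions.
w = exp3(4, 5000, lambda arm, t: 0.9 if arm != (t // 1000) % 4 else 0.1)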

Gambler's ruin bandit problem (Open Access)
    (IEEE, 2017) Akbarzadeh, Nima; Tekin, Cem
In this paper, we propose a new multi-armed bandit problem called the Gambler's Ruin Bandit Problem (GRBP). In the GRBP, the learner proceeds in a sequence of rounds, where each round is a Markov decision process (MDP) with two actions (arms): a continuation action that moves the learner randomly over the state space around the current state, and a terminal action that moves the learner directly into one of the two terminal states (the goal and the dead-end state). The current round ends when a terminal state is reached, and the learner incurs a positive reward only when the goal state is reached. The objective of the learner is to maximize its long-term reward (the expected number of times the goal state is reached) without any prior knowledge of the state transition probabilities. We first prove a result on the form of the optimal policy for the GRBP. Then, we define the regret of the learner with respect to an omnipotent oracle that acts optimally in each round, and prove that it increases logarithmically over rounds. We also identify a condition under which the learner's regret is bounded. A potential application of the GRBP is optimal medical treatment assignment, in which the continuation action corresponds to a conservative treatment and the terminal action corresponds to a risky treatment such as surgery.
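
To make the round structure concrete, here is a toy simulation of a single GRBP round under an illustrative threshold policy (take the terminal action once the state is high enough). The state-space size, transition probabilities, and threshold are all made-up values; the paper's actual characterization of the optimal policy is not reproduced here.

import random

def grbp_round(n_states=10, start=5, p_up=0.55, q_goal=0.4, threshold=7):
    # States 1..n_states-1 are interior; 0 is the dead-end, n_states the goal.
    # Returns 1 if the goal is reached (the only rewarded outcome), else 0.
    s = start
    while 0 < s < n_states:
        if s >= threshold:
            # terminal action: jump directly to one of the terminal states
            return 1 if random.random() < q_goal else 0
        # continuation action: random walk around the current state
        s += 1 if random.random() < p_up else -1
    return 1 if s == n_states else 0

# Toy usage: estimate this policy's per-round success probability.
success_rate = sum(grbp_round() for _ in range(10_000)) / 10_000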

Multi-armed bandits with probing (Open Access)
    (IEEE, 2024) Elumar, Eray Can; Tekin, Cem; Yağan, Osman
The multi-armed bandit problem is a sequential decision-making problem in which an agent must choose between multiple actions to maximize its cumulative reward over time while facing uncertainty about the rewards associated with each action. The challenge lies in balancing the exploration of potentially higher-rewarding actions with the exploitation of known high-reward actions. We consider a multi-armed bandit problem with probes, where before pulling an arm the decision-maker is allowed to probe one of the $K$ arms at a cost $c \geq 0$ to observe its reward. We introduce a new regret definition based on the expected reward of the optimal action. We develop UCBP, a novel algorithm that utilizes this probing strategy to achieve a gap-independent regret upper bound that scales with the number of rounds $T$ as $O(\sqrt{KT \log T})$, and an order-optimal gap-dependent upper bound of $O(K \log T)$. As baselines, we introduce UCB-naive-probe, a naive UCB-based approach with a gap-independent regret upper bound of $O(\sqrt{KT \log T})$ and a gap-dependent regret bound of $O(K^{2} \log T)$, and TSP, the Thompson sampling version of UCBP. In empirical simulations, UCBP outperforms UCB-naive-probe and performs similarly to TSP, verifying the utility of the UCBP and TSP algorithms in practical settings.
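
The abstract does not spell out UCBP's internals, so the sketch below is only a naive probe-then-pull heuristic in the spirit of the UCB-naive-probe baseline: probe the arm with the highest UCB index at cost c, then pull it only if the probed reward looks better than the best empirical mean. The probe rule, fallback rule, and cost value are all assumptions, not the paper's algorithm.

import math, random

def probe_then_pull(n_arms, rounds, draw_reward, cost=0.05):
    # Naive probe-then-pull heuristic (NOT the paper's UCBP).
    counts = [1] * n_arms
    means = [draw_reward(i) for i in range(n_arms)]   # one initial pull per arm
    net = sum(means)
    for t in range(n_arms + 1, rounds + 1):
        ucb = [means[i] + math.sqrt(2 * math.log(t) / counts[i])
               for i in range(n_arms)]
        j = max(range(n_arms), key=lambda i: ucb[i])
        probed = draw_reward(j)            # probe reveals arm j's reward this round
        net -= cost                        # probing costs c >= 0
        best = max(range(n_arms), key=lambda i: means[i])
        if probed >= means[best]:
            pulled, reward = j, probed                 # keep the probed reward
        else:
            pulled, reward = best, draw_reward(best)   # fall back to the best-looking arm
        counts[pulled] += 1
        means[pulled] += (reward - means[pulled]) / counts[pulled]
        if pulled != j:                    # the probe outcome still refines arm j
            counts[j] += 1
            means[j] += (probed - means[j]) / counts[j]
        net += reward
    return net

# Toy usage (hypothetical): Bernoulli arms with unknown means.
total = probe_then_pull(5, 3000, lambda i: float(random.random() < 0.3 + 0.1 * i))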
