Adaptive ambulance redeployment via multi-armed bandits
buir.advisor | Tekin, Cem | |
dc.contributor.author | Şahin, Ümitcan | |
dc.date.accessioned | 2019-09-10T11:45:31Z | |
dc.date.available | 2019-09-10T11:45:31Z | |
dc.date.copyright | 2019-09 | |
dc.date.issued | 2019-09 | |
dc.date.submitted | 2019-09-06 | |
dc.description | Cataloged from PDF version of article. | en_US |
dc.description | Thesis (M.S.): İhsan Doğramacı Bilkent University, Department of Electrical and Electronics Engineering, 2019. | en_US |
dc.description | Includes bibliographical references (leaves 64-68). | en_US |
dc.description.abstract | Emergency Medical Services (EMS) provide the necessary resources when there is a need for immediate medical attention and play a significant role in saving lives in the case of a life-threatening event. Therefore, it is necessary to design an EMS system where the arrival times to calls are as short as possible. This task includes the ambulance redeployment problem, which consists of methods for deploying ambulances to certain locations in order to minimize the arrival time and increase the coverage of the demand points. As opposed to many conventional redeployment methods where optimization is the primary concern, we propose a learning-based approach in which ambulances are redeployed without any a priori knowledge of the call distributions and the travel times, and these uncertainties are learned on the way. We cast the ambulance redeployment problem as a multi-armed bandit (MAB) problem, and propose various context-free and contextual MAB algorithms that learn to optimize redeployment locations via exploration and exploitation. We investigate the concept of risk aversion in ambulance redeployment and propose a risk-averse MAB algorithm. We construct a data-driven simulator that consists of a graph-based redeployment network and a Markov traffic model, and compare the performances of the algorithms on this simulator. Furthermore, we conduct more realistic simulations by modeling the city of Ankara, Turkey, and running the algorithms in this new model. Our results show that, given the same conditions, the presented MAB algorithms perform favorably against a method based on dynamic redeployment and similarly to a static allocation method that knows the true dynamics of the simulation setup beforehand. | en_US |
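The context-free MAB formulation described in the abstract can be illustrated with a minimal sketch: candidate redeployment locations act as arms, and the (negative) response time of each dispatch is the reward. The sketch below uses standard UCB1 as a stand-in learner; the true mean response times, the Gaussian noise model, and all names are illustrative assumptions, not the thesis's actual algorithms or data.

```python
import math
import random

def ucb1_redeploy(mean_response_time, horizon=5000, seed=0):
    """Illustrative UCB1 over candidate redeployment locations.

    mean_response_time: assumed true mean response times (minutes),
    unknown to the learner; each pull returns a noisy sample.
    Reward is the negative response time, so the learner is driven
    toward the location with the shortest expected response.
    """
    rng = random.Random(seed)
    k = len(mean_response_time)
    counts = [0] * k      # times each location was chosen
    sums = [0.0] * k      # cumulative reward per location

    def pull(i):
        # noisy observed response time -> reward (negated)
        t = max(0.0, rng.gauss(mean_response_time[i], 1.0))
        return -t

    for i in range(k):    # initialisation: try each location once
        sums[i] += pull(i)
        counts[i] += 1
    for n in range(k, horizon):
        # UCB index: empirical mean reward + exploration bonus
        best = max(range(k), key=lambda i: sums[i] / counts[i]
                   + math.sqrt(2 * math.log(n) / counts[i]))
        sums[best] += pull(best)
        counts[best] += 1
    return counts

# Hypothetical scenario: location 0 has the shortest mean response.
counts = ucb1_redeploy([4.0, 6.5, 9.0])
```

After the horizon, `counts` concentrates on the location with the shortest expected response time, which is the exploration/exploitation trade-off the abstract refers to.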
dc.description.provenance | Submitted by Betül Özen (ozen@bilkent.edu.tr) on 2019-09-10T11:45:31Z No. of bitstreams: 1 ucsahin_thesis.pdf: 3071530 bytes, checksum: 6a36fc4a4c4ddc8beec7441c44d86bc3 (MD5) | en |
dc.description.provenance | Made available in DSpace on 2019-09-10T11:45:31Z (GMT). No. of bitstreams: 1 ucsahin_thesis.pdf: 3071530 bytes, checksum: 6a36fc4a4c4ddc8beec7441c44d86bc3 (MD5) Previous issue date: 2019-09 | en |
dc.description.statementofresponsibility | by Ümitcan Şahin | en_US |
dc.format.extent | xii, 68 leaves : illustrations, charts (some color) ; 30 cm. | en_US |
dc.identifier.itemid | B134431 | |
dc.identifier.uri | http://hdl.handle.net/11693/52402 | |
dc.language.iso | English | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Ambulance redeployment | en_US |
dc.subject | Online learning | en_US |
dc.subject | Multi-armed bandit problem | en_US |
dc.subject | Contextual multi-armed bandit problem | en_US |
dc.subject | Risk-aversion | en_US |
dc.title | Adaptive ambulance redeployment via multi-armed bandits | en_US |
dc.title.alternative | Çok kollu haydutlar ile uyarlanabilir ambulans konumlandırma | en_US |
dc.type | Thesis | en_US |
thesis.degree.discipline | Electrical and Electronics Engineering | |
thesis.degree.grantor | Bilkent University | |
thesis.degree.level | Master's | |
thesis.degree.name | MS (Master of Science) | |
Files
Original bundle
- Name: ucsahin_thesis.pdf
- Size: 2.93 MB
- Format: Adobe Portable Document Format
- Description: Full printable version
License bundle
- Name: license.txt
- Size: 1.71 KB
- Format:
- Description: Item-specific license agreed upon to submission