Adaptive ensemble learning with confidence bounds for personalized diagnosis

dc.citation.epage: 468
dc.citation.spage: 462
dc.citation.volumeNumber: WS-16-08
dc.contributor.author: Tekin, Cem
dc.contributor.author: Yoon, J.
dc.contributor.author: Van Der Schaar, M.
dc.coverage.spatial: Phoenix, Arizona, USA
dc.date.accessioned: 2018-04-12T11:42:52Z
dc.date.available: 2018-04-12T11:42:52Z
dc.date.issued: 2016
dc.department: Department of Electrical and Electronics Engineering
dc.description: Date of Conference: 12-13 February 2016
dc.description: Conference Name: 30th AAAI Conference on Artificial Intelligence, 2016
dc.description.abstract: With the advances in the field of medical informatics, automated clinical decision support systems are becoming the de facto standard in personalized diagnosis. In order to establish high accuracy and confidence in personalized diagnosis, massive amounts of distributed, heterogeneous, correlated and high-dimensional patient data from different sources such as wearable sensors, mobile applications, Electronic Health Record (EHR) databases etc. need to be processed. This requires learning both locally and globally due to privacy constraints and/or distributed nature of the multimodal medical data. In the last decade, a large number of meta-learning techniques have been proposed in which local learners make online predictions based on their locally-collected data instances, and feed these predictions to an ensemble learner, which fuses them and issues a global prediction. However, most of these works do not provide performance guarantees or, when they do, these guarantees are asymptotic. None of these existing works provide confidence estimates about the issued predictions or rate of learning guarantees for the ensemble learner. In this paper, we provide a systematic ensemble learning method called Hedged Bandits, which comes with both long run (asymptotic) and short run (rate of learning) performance guarantees. Moreover, we show that our proposed method outperforms all existing ensemble learning techniques, even in the presence of concept drift.
dc.identifier.uri: http://hdl.handle.net/11693/37524
dc.language.iso: English
dc.publisher: AAAI Press
dc.source.title: Proceedings of the 30th AAAI Conference on Artificial Intelligence, 2016
dc.subject: Artificial intelligence
dc.subject: Big data
dc.subject: Cognitive systems
dc.subject: Computer games
dc.subject: Computer programming
dc.subject: Computer systems programming
dc.subject: Data mining
dc.subject: Decision support systems
dc.subject: Machine learning
dc.subject: Ensemble learning
dc.subject: Online learning
dc.subject: Hospital data processing
dc.subject: Hybrid systems
dc.subject: Learning algorithms
dc.subject: Learning systems
dc.subject: Population statistics
dc.subject: Clinical decision support systems
dc.subject: Electronic health record
dc.subject: Global predictions
dc.subject: Medical informatics
dc.subject: Meta-learning techniques
dc.subject: Mobile applications
dc.subject: Performance guarantees
dc.subject: Privacy constraints
dc.subject: Confidence bounds
dc.title: Adaptive ensemble learning with confidence bounds for personalized diagnosis
dc.type: Conference Paper
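The abstract describes local learners streaming online predictions to an ensemble learner that fuses them into a global prediction. A minimal sketch of one classical fusion rule of this kind, exponential weights (the Hedge algorithm), is shown below. The function name, the 0/1 prediction format, and the learning rate `eta` are illustrative assumptions; this is not the paper's exact Hedged Bandits method, only the standard weighted-fusion idea it builds on.

```python
import math

def hedge_ensemble(local_predictions, labels, eta=0.5):
    """Online exponential-weights (Hedge) fusion of local learners.

    local_predictions: one list per round, containing a 0/1 prediction
    from each local learner. labels: the true 0/1 label per round.
    Returns the fused ensemble predictions and the final learner weights.
    """
    n_learners = len(local_predictions[0])
    weights = [1.0] * n_learners  # every local learner starts equally trusted
    fused = []
    for preds, y in zip(local_predictions, labels):
        total = sum(weights)
        # Weighted-majority vote of the local learners' predictions.
        score = sum(w * p for w, p in zip(weights, preds)) / total
        fused.append(1 if score >= 0.5 else 0)
        # Multiplicatively down-weight learners that erred this round,
        # so accurate learners dominate future fused predictions.
        weights = [w * math.exp(-eta * (p != y))
                   for w, p in zip(weights, preds)]
    return fused, weights
```

After a few rounds, a consistently accurate local learner's weight stays near 1 while an inaccurate one's weight decays exponentially, which is what gives exponential-weights methods their non-asymptotic regret guarantees.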

Files

Original bundle
Name: Adaptive ensemble learning with confidence bounds for personalized diagnosis.pdf
Size: 868.17 KB
Format: Adobe Portable Document Format
Description: Full printable version