Non-uniformly sampled sequential data processing

buir.advisor: Kozat, Süleyman Serdar
dc.contributor.author: Şahin, Safa Onur
dc.date.accessioned: 2019-09-24T11:03:57Z
dc.date.available: 2019-09-24T11:03:57Z
dc.date.copyright: 2019-09
dc.date.issued: 2019-09
dc.date.submitted: 2019-09-20
dc.description: Cataloged from PDF version of article.
dc.description: Thesis (Master's): İhsan Doğramacı Bilkent University, Department of Electrical and Electronics Engineering, 2019.
dc.description: Includes bibliographical references (leaves 53-55).
dc.description.abstract: We study classification and regression for variable-length sequential data that is either non-uniformly sampled or contains missing samples. Most sequential data processing studies assume the data sequence is uniformly sampled and complete, i.e., contains no missing input values. However, non-uniformly sampled sequences and the missing data problem appear in a wide range of fields such as medical imaging and financial data. These problems are usually addressed with certain preprocessing techniques, statistical assumptions, and imputation methods. However, such approaches suffer since the statistical assumptions do not hold in general, and the imputation of artificially generated and unrelated inputs deteriorates the model. To mitigate these problems, in Chapter 2 we introduce a novel Long Short-Term Memory (LSTM) architecture. In particular, we extend the classical LSTM network with additional time gates, which incorporate the time information as a nonlinear scaling factor on the conventional gates. We also provide the forward-pass and backward-pass update equations for the proposed LSTM architecture, and we show that our approach is superior to the classical LSTM architecture when there is correlation between time samples. In Chapter 3, we investigate regression for variable-length sequential data containing missing samples and introduce a novel tree architecture based on LSTM networks. In this architecture, we employ a variable number of LSTM networks, which use only the existing inputs in the sequence, in a tree-like structure without any statistical assumptions or imputations on the missing data. In particular, we incorporate the missingness information by selecting a subset of these LSTM networks based on the presence pattern of a certain number of previous inputs.
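The abstract's time-gate idea — scaling the conventional LSTM gates by a nonlinear function of the elapsed time between samples — can be sketched as below. This is a minimal illustrative step function, not the thesis's actual equations: the log-compressed time feature, the choice to scale only the input and forget gates, and the parameter names (`W`, `U`, `b`, `Wt`, `bt`) are all assumptions made here for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_gated_lstm_step(x, dt, h_prev, c_prev, params):
    """One step of a time-gated LSTM cell (illustrative sketch only).

    The conventional input and forget gates are multiplied by nonlinear
    time gates computed from `dt`, the elapsed time since the previous
    sample, so non-uniform sampling modulates how much new input is
    admitted and how much old state is retained.
    """
    W, U, b = params["W"], params["U"], params["b"]   # stacked gate weights
    Wt, bt = params["Wt"], params["bt"]               # time-gate weights (assumed scalar per gate)
    H = h_prev.shape[0]

    z = W @ x + U @ h_prev + b          # pre-activations for i, f, o, g
    i = sigmoid(z[0:H])                 # input gate
    f = sigmoid(z[H:2*H])               # forget gate
    o = sigmoid(z[2*H:3*H])             # output gate
    g = np.tanh(z[3*H:4*H])             # candidate cell input

    # Nonlinear time gates: a sigmoid of a log-compressed elapsed time
    # (log1p is an assumption; any monotone nonlinear feature would do).
    t_feat = np.log1p(dt)
    ti = sigmoid(Wt[0] * t_feat + bt[0])   # scales the input gate
    tf = sigmoid(Wt[1] * t_feat + bt[1])   # scales the forget gate

    c = (tf * f) * c_prev + (ti * i) * g   # time-scaled state update
    h = o * np.tanh(c)
    return h, c
```

With `dt = 0` for every step and the time gates saturated at 1, this reduces to a standard LSTM cell update, which is the sense in which the abstract calls the time gates an extension of the classical architecture.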
dc.description.provenance: Submitted by Betül Özen (ozen@bilkent.edu.tr) and made available in DSpace on 2019-09-24T11:03:57Z. 1 bitstream: Thesis_SafaOnurSahin.pdf (2747681 bytes, MD5 checksum fc1c99b2eda0ef422ce80df8dafe7ee5). Previous issue date: 2019-09.
dc.description.statementofresponsibility: by Safa Onur Şahin
dc.format.extent: x, 55 leaves : charts ; 30 cm.
dc.identifier.itemid: B153023
dc.identifier.uri: http://hdl.handle.net/11693/52493
dc.language.iso: English
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Long short-term memory
dc.subject: Recurrent neural networks
dc.subject: Non-uniform sampling
dc.subject: Missing data
dc.subject: Supervised learning
dc.title: Non-uniformly sampled sequential data processing
dc.title.alternative: Düzgün olmayan şekilde örneklenmiş sıralı verinin işlenmesi
dc.type: Thesis
thesis.degree.discipline: Electrical and Electronic Engineering
thesis.degree.grantor: Bilkent University
thesis.degree.level: Master's
thesis.degree.name: MS (Master of Science)

Files

Original bundle:
- Name: Thesis_SafaOnurSahin.pdf
- Size: 2.62 MB
- Format: Adobe Portable Document Format
- Description: Full printable version

License bundle:
- Name: license.txt
- Size: 1.71 KB
- Description: Item-specific license agreed upon at submission