Author: Akkuş, Aynur
Date accessioned: 2016-01-08
Date available: 2016-01-08
Date issued: 1996
URI: http://hdl.handle.net/11693/17759
Publisher: Ankara : Department of Computer Engineering and Information Science and the Institute of Engineering and Science of Bilkent University, 1996.
Note: Thesis (Master's) -- Bilkent University, 1996.
Note: Includes bibliographical references, leaves 98-104.
Abstract: This thesis presents several learning algorithms for multi-concept descriptions in the form of disjoint feature intervals, called Feature Interval Learning (FIL) algorithms. These algorithms are batch supervised inductive learning algorithms and use feature projections of the training instances to represent the induced classification knowledge. The projections can be generalized into disjoint feature intervals; therefore, the learned concept description is a set of disjoint intervals for each feature separately. The classification of an unseen instance is based on weighted majority voting among the local predictions of the features. To handle noisy instances, several extensions are developed that assign weights to intervals rather than to features. An empirical evaluation of the FIL algorithms is presented and compared with some other similar classification algorithms. Although the FIL algorithms achieve accuracies comparable to those of the other algorithms, their average running times are much lower. This thesis also presents a new adaptation of the well-known k-NN classification algorithm to the feature projections approach, called k-NNFP for k-Nearest Neighbor on Feature Projections, which is based on majority voting among the individual classifications made by the projections of the training set on each feature, and compares it with the k-NN algorithm on some real-world and artificial datasets.
Physical description: xiv, 108 leaves
Language: English
Rights: info:eu-repo/semantics/openAccess
Keywords: machine learning; supervised learning; inductive learning; batch learning; feature projections; voting
Call number: QA76.9.A43 A35 1996
Subjects: Computer algorithms; Machine learning; Inductive learning; Supervised learning
Title: Batch learning of disjoint feature intervals
Type: Thesis
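
The abstract describes classification by majority voting over per-feature projections of the training set (the k-NNFP idea). The following is a minimal illustrative sketch of that voting scheme, assuming numeric features; the function name, tie handling, and the toy dataset are hypothetical and not taken from the thesis itself.

```python
from collections import Counter
import numpy as np

def knnfp_predict(X_train, y_train, x_query, k=3):
    """Sketch of k-NN on Feature Projections (k-NNFP):
    each feature casts votes with the classes of its k nearest
    projected training values; the class with the most votes
    across all features is predicted."""
    votes = Counter()
    n_features = X_train.shape[1]
    for f in range(n_features):
        # Distance is measured on this feature's projection only.
        dist = np.abs(X_train[:, f] - x_query[f])
        nearest = np.argsort(dist)[:k]
        # Each of the k nearest neighbours on this feature casts one vote.
        votes.update(y_train[nearest])
    # Ties are broken arbitrarily here (an assumption of this sketch).
    return votes.most_common(1)[0][0]

# Hypothetical usage with a tiny artificial dataset.
X = np.array([[1.0, 5.0], [1.2, 5.1], [8.0, 0.5], [7.9, 0.4]])
y = np.array(["A", "A", "B", "B"])
print(knnfp_predict(X, y, np.array([1.1, 4.9]), k=2))  # expected: "A"
```

The FIL algorithms described in the abstract differ in that each feature's projection is first generalized into disjoint class intervals, and the per-feature (or per-interval) votes are weighted, but the final prediction is likewise a majority vote combined across features.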