Browsing by Subject "inductive learning"
Item Open Access
Batch learning of disjoint feature intervals (Bilkent University, 1996) Akkuş, Aynur
This thesis presents several learning algorithms for multi-concept descriptions in the form of disjoint feature intervals, called Feature Interval Learning (FIL) algorithms. These algorithms are batch supervised inductive learning algorithms and use feature projections of the training instances to represent the classification knowledge induced. These projections can be generalized into disjoint feature intervals, so the concept description learned is a set of disjoint intervals for each feature separately. The classification of an unseen instance is based on weighted majority voting among the local predictions of the features. To handle noisy instances, several extensions are developed that assign weights to intervals rather than to features. An empirical evaluation of the FIL algorithms is presented and compared with some other similar classification algorithms. Although the FIL algorithms achieve accuracies comparable to those of the other algorithms, their average running times are much lower. This thesis also presents a new adaptation of the well-known k-NN classification algorithm to the feature projections approach, called k-NNFP (k-Nearest Neighbor on Feature Projections). It is based on majority voting over the individual classifications made by the projections of the training set on each feature, and it is compared with the k-NN algorithm on some real-world and artificial datasets.

Item Open Access
Classification with overlapping feature intervals (Bilkent University, 1995) Koç, Hakime Ünsal
This thesis presents a new form of exemplar-based learning method, based on overlapping feature intervals. Classification with Overlapping Feature Intervals (COFI) is the particular implementation of this technique. In this incremental, inductive, and supervised learning method, the basic unit of the representation is an interval. The COFI algorithm learns the projections of the intervals on each class dimension for each feature. An interval is initially a point on a class dimension; it can then be expanded through generalization. No specialization of intervals is done on class dimensions by this algorithm. Classification in the COFI algorithm is based on majority voting among the local predictions made individually by each feature.

Item Open Access
Learning with feature partitions (Bilkent University, 1993) Şirin, İzzet
This thesis presents a new methodology of learning from examples, based on feature partitioning. Classification by Feature Partitioning (CFP) is a particular implementation of this technique; it is an inductive, incremental, and supervised learning method. Learning in CFP is accomplished by storing the objects separately in each feature dimension as disjoint partitions of values. A partition, the basic unit of representation, is initially a point in the feature dimension and is expanded through generalization. The CFP algorithm specializes a partition by subdividing it into two subpartitions. A theoretical (with respect to the PAC model) and empirical evaluation of CFP is presented and compared with some other similar techniques.
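The idea shared by these theses, classification by majority voting over per-feature predictions on feature projections, can be illustrated with a minimal sketch of the k-NNFP scheme described in the first abstract. This is not the original implementation; all names, the toy dataset, and the simple distance-sort are illustrative assumptions.

```python
from collections import Counter

def knnfp_predict(train_X, train_y, query, k=3):
    """Sketch of k-NN on Feature Projections (k-NNFP):
    each feature votes independently using the k nearest
    training projections along that single dimension, and
    the final class is the majority over the feature votes."""
    n_features = len(query)
    votes = Counter()
    for f in range(n_features):
        # Rank training instances by distance to the query
        # along this one feature only (its projection).
        neighbors = sorted(
            range(len(train_X)),
            key=lambda i: abs(train_X[i][f] - query[f]),
        )[:k]
        # This feature's local prediction: majority class
        # among its k nearest projections.
        local = Counter(train_y[i] for i in neighbors)
        votes[local.most_common(1)[0][0]] += 1
    # Final prediction: majority voting among the features.
    return votes.most_common(1)[0][0]

# Toy example (hypothetical data): two classes that are
# well separated on feature 0 but mixed on feature 1.
train_X = [(1.0, 5.0), (1.2, 3.0), (8.0, 4.0), (8.5, 6.0)]
train_y = ["a", "a", "b", "b"]
print(knnfp_predict(train_X, train_y, (1.1, 4.0), k=3))  # → a
```

A standard k-NN would compute one distance over all features jointly; here each feature classifies on its own, which is what makes the feature-projection family robust to irrelevant individual features at the cost of ignoring feature interactions.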