Browsing by Subject "Inductive learning"
Now showing 1 - 3 of 3
Open Access: Feature interval learning algorithms for classification (Elsevier BV, 2010). Dayanik, A.
This paper presents Feature Interval Learning (FIL) algorithms, which represent multi-concept descriptions in the form of disjoint feature intervals. The FIL algorithms are batch supervised inductive learning algorithms and use feature projections of the training instances to represent the induced classification knowledge. The concept description is learned separately for each feature and takes the form of a set of disjoint intervals. The class of an unseen instance is determined by weighted-majority voting of the feature predictions. The basic FIL algorithm is enhanced with adaptive interval and feature weight schemes in order to handle noisy and irrelevant features. The algorithms are empirically evaluated on twelve data sets from the UCI repository and compared with the k-NN, k-NNFP, and NBC classification algorithms. The experiments demonstrate that the FIL algorithms are robust to irrelevant features and missing feature values, and achieve accuracy comparable to the best of the existing algorithms with significantly lower average running times. © 2010 Elsevier B.V. All rights reserved.

Open Access: Learning feature-projection based classifiers (Pergamon Press, 2012-03). Dayanik, A.
This paper aims at designing better-performing feature-projection based classification algorithms and presents two new such algorithms. These algorithms are batch supervised learning algorithms and represent induced classification knowledge as feature intervals. In both algorithms, each feature participates in the classification by giving real-valued votes to classes, and the prediction for an unseen example is the class receiving the highest vote. The first algorithm, OFP.MC, learns on each feature pairwise disjoint intervals which minimize the feature classification error. The second algorithm, GFP.MC, constructs feature intervals by greedily improving the feature classification error. The new algorithms are empirically evaluated on twenty datasets from the UCI repository and compared with existing feature-projection based classification algorithms (FILIF, VFI5, CFP, k-NNFP, and NBC). The experiments demonstrate that the OFP.MC algorithm outperforms the other feature-projection based classification algorithms. The GFP.MC algorithm is slightly inferior to OFP.MC, but for datasets with a large number of instances it reduces the space requirement of the OFP.MC algorithm. Unlike the other feature-projection based classification algorithms considered here, the new algorithms are insensitive to boundary noise. © 2011 Elsevier Ltd. All rights reserved.

Open Access: Non-incremental classification learning algorithms based on voting feature intervals (1997-08). Demiröz, Gülşen
Learning is one of the necessary abilities of an intelligent agent. This thesis proposes several learning algorithms for multi-concept descriptions in the form of feature intervals, called Voting Feature Intervals (VFI) algorithms. These algorithms are non-incremental classification learning algorithms and use a feature-projection based knowledge representation for the classification knowledge induced from a set of preclassified examples. The learned concept description is a set of intervals constructed separately for each feature, and each interval carries classification information for all classes.
The classification of an unseen instance is based on a voting scheme in which each feature distributes its vote among all classes. Empirical evaluation of the VFI algorithms has shown that they are the best-performing algorithms among previously developed feature-projection based methods in terms of classification accuracy. To further improve accuracy, genetic algorithms are developed to learn the optimum feature weights for any given classifier. A new crossover operator, called continuous uniform crossover, is also proposed in this thesis for use in the weight-learning genetic algorithm. Since the explanation ability of a learning system is as important as its accuracy, the VFI classifiers are supplemented with a facility that conveys what they have learned to humans in a comprehensible way.
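
All three items above build on the same feature-projection idea: the training values of each feature are summarized as class-labeled intervals, and an unseen instance is classified by letting every feature cast a vote through the interval its value falls into, with a weighted-majority decision across features. The Python sketch below illustrates that shared mechanism only; it uses simple equal-width bins in place of the learned intervals and uniform feature weights, so it is a minimal illustration of the general voting scheme, not a reimplementation of the FIL, OFP.MC/GFP.MC, or VFI algorithms, whose interval-construction and vote-weighting rules differ.

```python
# Minimal sketch of feature-projection voting classification.
# Assumption: equal-width bins stand in for the learned intervals;
# the published FIL/VFI algorithms construct intervals differently.
import numpy as np

class FeatureIntervalVoter:
    def __init__(self, n_intervals=10):
        self.n_intervals = n_intervals

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        n_features = X.shape[1]
        self.edges_ = []   # interval boundaries, one array per feature
        self.votes_ = []   # per-interval class vote distribution, one array per feature
        for f in range(n_features):
            edges = np.linspace(X[:, f].min(), X[:, f].max(), self.n_intervals + 1)
            idx = np.clip(np.digitize(X[:, f], edges[1:-1]), 0, self.n_intervals - 1)
            counts = np.zeros((self.n_intervals, len(self.classes_)))
            for i, c in zip(idx, y):
                counts[i, np.searchsorted(self.classes_, c)] += 1
            # Normalize each interval's class counts into a vote distribution.
            totals = counts.sum(axis=1, keepdims=True)
            votes = np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
            self.edges_.append(edges)
            self.votes_.append(votes)
        # Uniform feature weights here; FIL's adaptive weighting or the
        # GA-learned weights of the VFI thesis would replace this vector.
        self.weights_ = np.ones(n_features)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        preds = []
        for x in X:
            total = np.zeros(len(self.classes_))
            for f, (edges, votes) in enumerate(zip(self.edges_, self.votes_)):
                i = np.clip(np.digitize(x[f], edges[1:-1]), 0, self.n_intervals - 1)
                total += self.weights_[f] * votes[i]   # each feature casts its vote
            preds.append(self.classes_[np.argmax(total)])
        return np.array(preds)
```

Calling fit on a numeric training matrix and then predict on unseen rows reproduces the weighted-majority-vote step the abstracts describe; the accuracy results reported in the items rest on the published interval-construction methods, not on the fixed bins used here.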
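The third item also mentions learning per-feature weights with a genetic algorithm that uses a continuous uniform crossover operator. The thesis defines that operator precisely; the sketch below only illustrates the general setup under two stated assumptions: fitness is measured as validation accuracy with candidate weights plugged into the classifier (reusing the weights_ attribute of the sketch above), and crossover draws each child weight uniformly at random between the two parents' values, which is one plausible reading of "continuous uniform" but may differ from the thesis. Names such as evolve_weights and uniform_blend_crossover are illustrative, not taken from the thesis.

```python
# Hedged sketch of GA-based feature-weight learning for a voting classifier.
# Assumptions (not from the thesis): fitness = validation accuracy with the
# candidate weight vector installed; crossover samples each child weight
# uniformly between the parents' values; truncation selection, Gaussian mutation.
import numpy as np

rng = np.random.default_rng(0)

def uniform_blend_crossover(parent_a, parent_b):
    """Child weight i is drawn uniformly from [min(a_i, b_i), max(a_i, b_i)]."""
    lo, hi = np.minimum(parent_a, parent_b), np.maximum(parent_a, parent_b)
    return rng.uniform(lo, hi)

def evolve_weights(classifier, X_val, y_val, n_features,
                   pop_size=20, generations=30, mutation_sd=0.05):
    def fitness(w):
        classifier.weights_ = w                       # install candidate weights
        return np.mean(classifier.predict(X_val) == y_val)

    population = rng.uniform(0.0, 1.0, size=(pop_size, n_features))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[: pop_size // 2]]  # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = uniform_blend_crossover(a, b)
            child = np.clip(child + rng.normal(0, mutation_sd, n_features), 0, 1)
            children.append(child)
        population = np.vstack([parents, children])
    best = population[np.argmax([fitness(w) for w in population])]
    classifier.weights_ = best
    return best
```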