Clustered linear regression
Date
2002
Source Title
Knowledge-Based Systems
Print ISSN
0950-7051
Publisher
Elsevier
Volume
15
Issue
3
Pages
169 - 175
Language
English
Type
Article
Abstract
Clustered linear regression (CLR) is a new machine learning algorithm that improves the accuracy of classical linear regression by partitioning the training space into subspaces. CLR makes some assumptions about the domain and the data set. First, the target value is assumed to be a function of the feature values. Second, this function has a linear approximation in each subspace. Finally, there are enough training instances to determine the subspaces and their linear approximations successfully. Tests indicate that when these assumptions hold, CLR outperforms all other well-known machine-learning algorithms. Partitioning may continue until a linear approximation fits all the instances in the training set, which generally occurs when the number of instances in a subspace is less than or equal to the number of features plus one. Otherwise, each new subspace will have a better-fitting linear approximation. However, this causes overfitting and gives less accurate results on test instances. The stopping condition can be defined as no significant decrease, or an increase, in relative error. CLR uses a small portion of the training instances to determine the number of subspaces. The need for a large number of training instances makes this algorithm suitable for data mining applications. © 2002 Elsevier Science B.V. All rights reserved.
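The abstract describes partitioning the training space into subspaces and fitting a separate linear approximation in each. A minimal sketch of that idea follows; it is not the authors' implementation, and the use of k-means for the partitioning step, the function names, and the parameters are all assumptions made for illustration:

```python
import numpy as np

def clustered_linear_regression(X, y, k, iters=20, seed=0):
    # Illustrative sketch: partition the training space with k-means,
    # then fit an ordinary-least-squares model inside each subspace.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each training instance to its nearest subspace center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    # One linear approximation per subspace (intercept column appended).
    Xb = np.hstack([X, np.ones((len(X), 1))])
    models = []
    for j in range(k):
        mask = labels == j
        if mask.any():
            w, *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        else:
            w = np.zeros(Xb.shape[1])  # empty subspace: trivial fallback
        models.append(w)
    return centers, models

def clr_predict(X, centers, models):
    # Route each instance to the linear model of its nearest center.
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.array([Xb[i] @ models[labels[i]] for i in range(len(X))])
```

Increasing `k` can only improve the training fit, which mirrors the overfitting risk the abstract warns about; a fuller implementation would stop refining the partition once relative error on held-out instances shows no significant decrease.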
Keywords
Clustering
Linear Regression
Eager Approach
Machine Learning Algorithm
Approximation Theory
Data Mining
Learning Algorithms
Regression Analysis
Clustered Linear Regression
Learning Systems