Browsing by Subject "Regression Analysis"
Now showing 1 - 7 of 7
Item Open Access
Application of a customized pathway-focused microarray for gene expression profiling of cellular homeostasis upon exposure to nicotine in PC12 cells (2004) Konu, Ö.; Xu, X.; Ma, J. Z.; Kane, J.; Wang, J.; Shi, S. J.; Li, M. D.
Maintenance of cellular homeostasis is integral to the appropriate regulation of cellular signaling, cell growth and cell division. In this study, we report the development and quality assessment of a pathway-focused microarray comprising genes involved in cellular homeostasis. Since nicotine is known to strongly modulate intracellular calcium homeostasis, we tested the applicability of the homeostatic pathway-focused microarray to gene expression in PC-12 cells treated with 1 mM nicotine for 48 h relative to untreated control cells. We first provided a detailed description of the focused array with respect to its gene and pathway content and then assessed array quality using a robust regression procedure that excludes unreliable measurements while decreasing the number of false positives. As a result, the mean correlation coefficient between duplicate measurements of the arrays used in this study (control vs. nicotine treatment, three samples each) increased from 0.974 ± 0.017 to 0.995 ± 0.002. Furthermore, we found that nicotine affected various structural and signaling components of the AKT/PKB signaling pathway as well as protein synthesis and degradation processes in PC-12 cells. Since modulation of intracellular calcium concentrations ([Ca2+]i) and phosphatidylinositol signaling is important in various biological processes such as neurotransmitter release and tissue pathogenesis, including tumor formation, we expect that the homeostatic pathway-focused microarray can be used to identify unique gene expression profiles in comparative studies of drugs of abuse and diverse environmental stimuli such as starvation and oxidative stress. © 2003 Elsevier B.V. All rights reserved.

Item Open Access
Clustered linear regression (Elsevier, 2002) Ari, B.; Güvenir, H. A.
Clustered linear regression (CLR) is a new machine learning algorithm that improves the accuracy of classical linear regression by partitioning the training space into subspaces. CLR makes some assumptions about the domain and the data set: first, the target value is assumed to be a function of the feature values; second, this function can be approximated linearly within each subspace; finally, there are enough training instances to determine the subspaces and their linear approximations successfully. Tests indicate that when these assumptions hold, CLR outperforms all other well-known machine learning algorithms. Partitioning may continue until a linear approximation fits all the instances in the training set, which generally occurs when the number of instances in a subspace is less than or equal to the number of features plus one. Otherwise, each new subspace will have a better-fitting linear approximation. However, this causes overfitting and gives less accurate results for the test instances. The stopping point can be determined as the point where there is no significant decrease, or there is an increase, in relative error. CLR uses a small portion of the training instances to determine the number of subspaces. Because CLR requires a large number of training instances, it is well suited to data mining applications. © 2002 Elsevier Science B.V. All rights reserved.
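Below is a minimal Python sketch of the partitioning idea described in the CLR abstract above. It is not the authors' implementation: the median split search, the relative-error stopping threshold (min_drop) and the function names are illustrative assumptions introduced here.

```python
# Minimal sketch of the partitioning idea behind clustered linear regression (CLR).
# NOT the paper's algorithm: split search, stopping rule and names are illustrative.
import numpy as np

def fit_linear(X, y):
    """Least-squares linear fit with an intercept; returns the coefficient vector."""
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def sse(X, y, coef):
    """Summed squared error of a linear model on a subspace."""
    A = np.hstack([X, np.ones((len(X), 1))])
    return float(np.sum((A @ coef - y) ** 2))

def clr_fit(X, y, min_drop=0.05):
    """Recursively split the training space while the best axis-aligned median split
    reduces the summed squared error of per-subspace linear fits by more than min_drop."""
    coef = fit_linear(X, y)
    parent_err = sse(X, y, coef)
    # Stop once a linear model can fit the subspace exactly
    # (roughly: instances <= features + 1, as noted in the abstract).
    if len(X) <= X.shape[1] + 1 or parent_err == 0.0:
        return {"coef": coef}
    best = None
    for j in range(X.shape[1]):
        thr = np.median(X[:, j])
        left, right = X[:, j] <= thr, X[:, j] > thr
        if left.sum() <= X.shape[1] + 1 or right.sum() <= X.shape[1] + 1:
            continue
        err = (sse(X[left], y[left], fit_linear(X[left], y[left]))
               + sse(X[right], y[right], fit_linear(X[right], y[right])))
        if best is None or err < best[0]:
            best = (err, j, thr, left, right)
    if best is None or best[0] > (1.0 - min_drop) * parent_err:
        return {"coef": coef}  # no significant decrease in error: stop partitioning
    _, j, thr, left, right = best
    return {"feature": j, "threshold": thr,
            "left": clr_fit(X[left], y[left], min_drop),
            "right": clr_fit(X[right], y[right], min_drop)}

def clr_predict(node, x):
    """Route a query down the partition tree, then apply that subspace's linear model."""
    while "coef" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return float(np.append(x, 1.0) @ node["coef"])
```

The stopping test mirrors the condition sketched in the abstract: partitioning halts either when a subspace has no more instances than features plus one, or when the best split no longer yields a significant drop in error.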
Item Open Access
Differences in the accumulation and distribution profile of heavy metals and metalloid between male and female crayfish (Astacus leptodactylus) (2013) Tunca, E.; Ucuncu, E.; Ozkan, A. D.; Ulger, Z. E.; Cansizoǧlu, A. E.; Tekinay, T.
Concentrations of selected heavy metals and a metalloid were measured by ICP-MS in crayfish (Astacus leptodactylus) collected from Lake Hirfanli, Turkey. Aluminum (Al), chromium (52Cr, 53Cr), copper (63Cu, 65Cu), manganese (Mn), nickel (Ni) and arsenic (As) were measured in the exoskeleton, gills, hepatopancreas and abdominal muscle tissues of 60 crayfish of both genders. With the exception of Al, differences were found between male and female cohorts in the accumulation trends of the above-mentioned elements in the four tissues. It was also noted that the accumulation rates of Ni and As were significantly lower in the gill tissue of females compared to males, and no significant difference was observed for the Cu isotopes in female crayfish. Cluster analysis (CA) recovered similar results for both genders, with the links between the accumulations of Ni and As being notable. Accumulation models were described separately for male and female crayfish using regression analysis, and are presented for models with R² > 0.85. © 2013 Springer Science+Business Media New York.

Item Open Access
An eager regression method based on best feature projections (Springer, Berlin, Heidelberg, 2001) Aydın, Tolga; Güvenir, H. Altay
This paper describes a machine learning method called Regression by Selecting Best Feature Projections (RSBFP). In the training phase, RSBFP projects the training data onto each feature dimension and estimates the predictive power of each feature by constructing simple linear regression lines: one per continuous feature and one per category of each categorical feature, since the predictive power of a continuous feature is constant whereas it varies across the distinct values of a categorical feature. The simple linear regression lines are then sorted according to their predictive power. In the querying phase, the best linear regression line, and thus the best feature projection, is selected to make predictions. © Springer-Verlag Berlin Heidelberg 2001.
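Below is a minimal Python sketch of the training and querying phases described in the RSBFP abstract above, restricted to continuous features for brevity. Using the training mean squared error of each per-feature line as its "predictive power", and predicting from only the single best-ranked line, are illustrative assumptions rather than the paper's exact procedure.

```python
# Minimal sketch of regression by selecting best feature projections (RSBFP),
# continuous features only. The ranking criterion is an illustrative choice.
import numpy as np

def rsbfp_train(X, y):
    """Fit one simple linear regression line per feature projection and rank
    the projections by how well each line fits the training data."""
    models = []
    for j in range(X.shape[1]):
        slope, intercept = np.polyfit(X[:, j], y, deg=1)
        mse = float(np.mean((slope * X[:, j] + intercept - y) ** 2))
        models.append((mse, j, slope, intercept))
    models.sort()  # lowest training error (best predictive power) first
    return models

def rsbfp_predict(models, x):
    """Predict the target using the single best-ranked feature projection."""
    _, j, slope, intercept = models[0]
    return slope * x[j] + intercept
```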
Item Open Access
Instance-based regression by partitioning feature projections (Springer, 2004) Uysal, İ.; Güvenir, H. A.
A new instance-based learning method is presented for regression problems with high-dimensional data. As an instance-based approach, the conventional method, KNN, is very popular for classification. Although KNN performs well on classification tasks, it does not perform as well on regression problems. We have developed a new instance-based method, called Regression by Partitioning Feature Projections (RPFP), designed to meet the need for a lazy method that achieves high accuracy on regression problems. RPFP gives better performance than well-known eager approaches from machine learning and statistics, such as MARS, rule-based regression, and regression tree induction systems. The most important property of RPFP is that it is a projection-based approach that can handle interactions. We show that it outperforms existing eager and lazy approaches on many domains when there are many missing values in the training data.

Item Open Access
An overview of regression techniques for knowledge discovery (Cambridge University Press, 1999) Uysal, İ.; Güvenir, H. A.
Predicting or learning numeric features is called regression in the statistical literature, and it is the subject of research in both machine learning and statistics. This paper reviews the important regression techniques and algorithms developed by both communities. Regression is important for many applications, since many real-life problems can be modeled as regression problems. The review includes Locally Weighted Regression (LWR), rule-based regression, Projection Pursuit Regression (PPR), instance-based regression, Multivariate Adaptive Regression Splines (MARS) and recursive partitioning regression methods that induce regression trees (CART, RETIS and M5).

Item Open Access
Regression on feature projections (Elsevier, 2000) Guvenir, H. A.; Uysal, I.
This paper describes a machine learning method, called Regression on Feature Projections (RFP), for predicting a real-valued target feature given the values of multiple predictive features. In RFP, training consists of simply storing the projections of the training instances on each feature separately. The target value for a query point is predicted through two averaging procedures executed sequentially. The first averaging process finds the individual prediction of each feature using the K-Nearest Neighbor (KNN) algorithm. The second averaging process combines the predictions of all features. During the first step, each feature is assigned a weight that reflects its predictive ability at the local query point. These weights, computed for each query point, are used in the second step and give the method an adaptive, context-sensitive nature. We have compared RFP with KNN and rule-based regression algorithms. Results on real data sets show that RFP achieves better or comparable accuracy and is faster than both KNN and rule-based regression.
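Below is a minimal Python sketch of the two sequential averaging steps described in the RFP abstract above. The local feature weight used here (inverse variance of the k neighbours' target values) and the parameter names are illustrative assumptions, not the weighting scheme defined in the paper.

```python
# Minimal sketch of the two-step averaging in regression on feature projections (RFP).
# The local weighting rule below is an illustrative stand-in, not the paper's scheme.
import numpy as np

def rfp_predict(X, y, x_query, k=5):
    """Step 1: a KNN prediction on each feature projection separately.
    Step 2: combine the per-feature predictions with query-local weights."""
    preds, weights = [], []
    for j in range(X.shape[1]):
        # k nearest training instances on this feature projection only.
        idx = np.argsort(np.abs(X[:, j] - x_query[j]))[:k]
        neighbours = y[idx]
        preds.append(neighbours.mean())
        # Features whose neighbours agree closely get larger weight at this query point.
        weights.append(1.0 / (neighbours.var() + 1e-8))
    return float(np.average(preds, weights=weights))
```

Because the weights are recomputed from the neighbours around each query point, the combination step adapts to the local region of the query, which is the context-sensitive behaviour the abstract describes.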