Browsing by Subject "Explainable artificial intelligence"
Now showing 1 - 3 of 3
Item Open Access
Feature selection using stochastic approximation with Barzilai and Borwein non-monotone gains (Elsevier Ltd, 2021-08)
Aksakallı, V.; Yenice, Z. D.; Malekipirbazari, Milad; Kargar, Kamyar

With the recent emergence of machine learning problems with massive numbers of features, feature selection (FS) has become an increasingly important tool for mitigating the effects of the so-called curse of dimensionality. FS aims to eliminate redundant and irrelevant features, yielding models that are faster to train, easier to understand, and less prone to overfitting. This study presents a wrapper FS method based on Simultaneous Perturbation Stochastic Approximation (SPSA) with Barzilai and Borwein (BB) non-monotone gains within a pseudo-gradient descent framework, wherein performance is measured via cross-validation. We illustrate that SPSA with BB gains (SPSA-BB) provides dramatic improvements in the number of iterations to convergence, with minimal degradation in cross-validated error, over the current state-of-the-art approach with monotone gains (SPSA-MON). In addition, SPSA-BB requires only one internal parameter, eliminating the need for careful fine-tuning of the numerous internal parameters of SPSA-MON or of comparable meta-heuristic FS methods such as genetic algorithms (GA). Our particular implementation includes gradient averaging as well as gain smoothing for better convergence properties. We present computational experiments on various public datasets with Nearest Neighbors and Naive Bayes classifiers as wrappers, comparing SPSA-BB against the full set of features, SPSA-MON, and seven popular meta-heuristic FS algorithms, including GA and particle swarm optimization. Our results indicate that SPSA-BB converges to a good feature set in about 50 iterations on average, regardless of the number of features (whether a dozen or more than 1000), and that its performance is quite competitive.
SPSA-BB can be considered extremely fast for a wrapper method, and it therefore stands as a high-performing new feature selection method that is also computationally feasible in practice.

Item Open Access
Performance comparison of feature selection and extraction methods with random instance selection (Elsevier Ltd, 2021-10-01)
Malekipirbazari, Milad; Aksakallı, V.; Shafqat, W.; Eberhard, A.

In pattern recognition, irrelevant and redundant features, together with a large number of noisy instances in the underlying dataset, decrease the performance of trained models and make the training process considerably slower, if not practically infeasible. To combat this so-called curse of dimensionality, one option is to resort to feature selection (FS) methods, designed to select the features that contribute the most to model performance; another is to utilize feature extraction (FE) methods, which map the original feature space into a new space of lower dimensionality. Together, these are called feature reduction (FR) methods. On the other hand, deploying an FR method on a dataset with a massive number of instances can become a major challenge, from both memory and run-time perspectives, due to the complex numerical computations involved. The research question we consider in this study is a simple yet novel one: do these FR methods really need the whole set of instances (WSI) for best performance, or can we achieve similar performance levels by selecting a much smaller random subset of the WSI before deploying an FR method? We provide empirical evidence, based on comprehensive computational experiments, that the answer to this critical research question is in the affirmative. Specifically, with simple random instance selection followed by FR, the amount of data needed to train a classifier can be reduced drastically with minimal impact on classification performance.
We also provide recommendations on which FS/FE method to use in conjunction with which classifier.

Item Open Access
xDBTagger: explainable natural language interface to databases using keyword mappings and schema graph (Springer, 2023-08-23)
Usta, A.; Karakayalı, A.; Ulusoy, Özgür

Recently, numerous studies have attacked the natural language interfaces to databases (NLIDB) problem, either as conventional pipeline-based or as end-to-end deep-learning-based solutions. Although each approach has its own advantages and drawbacks, both exhibit a black-box nature, which makes it difficult for potential users to comprehend the rationale behind the decisions the intelligent system makes to produce the translated SQL. Given that NLIDB targets users with little to no technical background, interpretable and explainable solutions become crucial, yet they have been overlooked in recent studies. To this end, we propose xDBTagger, an explainable hybrid translation pipeline that explains the decisions made along the way to the user, both textually and visually. We also evaluate xDBTagger quantitatively on three real-world relational databases. The evaluation results indicate that, in addition to being lightweight, fast, and fully explainable, xDBTagger is also competitive in translation accuracy with both pipeline-based and end-to-end deep learning approaches.
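To make the first abstract's method concrete, the wrapper idea behind SPSA-BB can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it uses a synthetic regression problem, a simple hold-out error in place of cross-validation, least squares in place of the Nearest Neighbors/Naive Bayes wrappers, and a crude clip in place of the paper's gain-smoothing scheme; all sizes and constants are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first 5 of 30 features carry signal (an illustrative
# assumption, not one of the paper's benchmark datasets).
n, p = 400, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 1.0
y = X @ beta + 0.1 * rng.normal(size=n)
Xtr, Xva, ytr, yva = X[:300], X[300:], y[:300], y[300:]

def wrapper_error(scores):
    """Hold-out MSE of least squares on the features whose importance
    score rounds to 1 (the paper uses cross-validated wrapper error)."""
    idx = np.flatnonzero(scores >= 0.5)
    if idx.size == 0:
        return float(np.mean(yva ** 2))
    coef, *_ = np.linalg.lstsq(Xtr[:, idx], ytr, rcond=None)
    return float(np.mean((yva - Xva[:, idx] @ coef) ** 2))

w = np.full(p, 0.5)   # feature importance scores, rounded to a 0/1 mask
c = 0.1               # SPSA perturbation size
gain = 0.2            # initial gain; the BB formula takes over after step 1
g_prev = s_prev = None
best_mask, best_err = (w >= 0.5).astype(int), wrapper_error(w)

for _ in range(60):
    # Simultaneous perturbation: one +/-1 vector gives a full pseudo-gradient
    # from just two loss evaluations, regardless of p.
    delta = rng.choice([-1.0, 1.0], size=p)
    g = (wrapper_error(w + c * delta) - wrapper_error(w - c * delta)) / (2 * c) * delta
    if g_prev is not None:
        # Barzilai-Borwein non-monotone gain: |s^T s / s^T (g_k - g_{k-1})|.
        denom = s_prev @ (g - g_prev)
        if abs(denom) > 1e-12:
            gain = abs((s_prev @ s_prev) / denom)
    w_new = np.clip(w - np.clip(gain, 0.0, 1.0) * g, 0.0, 1.0)
    g_prev, s_prev = g, w_new - w
    w = w_new
    err = wrapper_error(w)
    if err < best_err:
        best_mask, best_err = (w >= 0.5).astype(int), err

print(best_mask, round(best_err, 4))
```

The single BB step size is what replaces SPSA-MON's decaying-gain schedule and its associated tuning parameters; the paper's gradient averaging is omitted here for brevity.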
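The second abstract's random-instance-selection idea can likewise be illustrated with PCA standing in for the FE method. Again a sketch under stated assumptions: the low-rank synthetic data, the 10% sampling rate, and the subspace-angle comparison are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Low-rank synthetic data: 5,000 instances, 50 features, intrinsic
# dimension 5 (illustrative sizes only).
n, p, r = 5000, 50, 5
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, p)) + 0.05 * rng.normal(size=(n, p))

def pca_components(A, k):
    """Top-k principal directions via SVD of the centred data."""
    A = A - A.mean(axis=0)
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    return vt[:k]

# FE on the whole set of instances (WSI) ...
V_full = pca_components(X, r)

# ... versus FE after random instance selection: a 10% random subset.
sub = rng.choice(n, size=n // 10, replace=False)
V_sub = pca_components(X[sub], r)

# Singular values of V_full @ V_sub.T are the cosines of the principal
# angles between the two 5-dimensional subspaces; values near 1.0 mean
# the subsample recovered essentially the same feature space.
cosines = np.linalg.svd(V_full @ V_sub.T, compute_uv=False)
print(np.round(cosines, 3))
```

When the recovered subspaces nearly coincide, a classifier trained on the subsample-derived features should behave much like one trained on WSI-derived features, which is the empirical claim the paper tests at scale.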