Scholarly Publications - Industrial Engineering
Permanent URI for this collection: https://hdl.handle.net/11693/115612
Recent Submissions
Item Open Access Deblurring images by Huber Lasso (Institute of Electrical and Electronics Engineers Inc., 2025)
Pınar, Mustafa Çelebi; Yayla, Emre Can
We propose a proximal-gradient deblurring method that replaces the least-squares data term in FISTA with the Huber loss and augments it with momentum acceleration. The resulting algorithms, called HISTA and FHISTA, combine robust Huber fidelity with an absolute-value sparsity penalty and Nesterov-style extrapolation. Experiments on twelve benchmark images blurred by a Gaussian kernel and contaminated with Gaussian noise show that FHISTA improves PSNR by roughly five decibels over classical ISTA. The method is easy to implement, uses a modest number of hyper-parameters, and demonstrates strong resilience to outliers.

Item Open Access Using plunging-type testing to investigate process mechanics at micro scale machining (Elsevier BV, 2025)
Adeeb, Syed Ahsan; Karpat, Yigit
In plunging-type tests, a cutting tool is given a sinusoidal movement while the work material, with a web on its surface, is rotated at a constant speed. If the amplitude and feed rate of the cutting tool and the rotational speed of the work material are set correctly, the plunging test can be completed within a single rotation. As a result, a detailed investigation of the different episodes of micro-scale machining, such as rubbing, plowing, and shearing, can be conducted with a single test. Combined with force measurements and cut-chip morphology, the process mechanics can be investigated in detail. This study conducted plunging tests on an ultra-precision CNC machine with a diamond cutting tool on commercially pure titanium. The differences between the tangential and normal forces observed during the plunge-in and pull-out periods corresponding to the same amplitude were analyzed using an analytical model. Resultant forces during the pull-out phase are larger than those observed in the plunge-in phase, which is attributed to an increase in cut chip thickness.
A computational model of the plunging-type experiment has also been developed based on the findings of the analytical model. The proposed hybrid approach may be useful for improving the identification of material constitutive model parameters from micro-scale machining experiments.

Item Embargo Maintaining fairness in stochastic chemotherapy scheduling (Elsevier Ltd, 2025-04-23)
Çelik, Batuhan; Gul, Serhat; Karsu, Özlem
Chemotherapy scheduling is hard to manage under uncertainty in infusion durations, and focusing on expected performance measure values may lead to unfavorable outcomes for some patients. In this study, we aim to design daily patient appointment schedules that provide a fair environment with respect to patient waiting times. We propose using a metric that encourages both fairness and efficiency in waiting time allocations. To optimize this metric, we formulate a two-stage stochastic mixed-integer nonlinear programming model. We employ a binary search algorithm to identify the optimal schedule and then propose a modified binary search algorithm (MBSA) to improve computational efficiency. Moreover, to address the stochastic feasibility problems arising at each MBSA iteration, we introduce a novel reduce-and-augment algorithm that utilizes scenario set reduction and augmentation methods. We use real data from a major oncology hospital to show the efficacy of MBSA. We compare the schedules identified by MBSA with both the baseline schedules from the oncology hospital and those generated by commonly employed scheduling heuristics. Finally, we highlight the significance of accounting for uncertainty in infusion durations to maintain fairness when creating appointment schedules.

Item Open Access Path-regularity and martingale properties of set-valued stochastic integrals (AIMS Press, 2025-10-10)
Ararat, Çağın; Ma, Jin
In this paper, we study the path-regularity and martingale properties of the set-valued stochastic integrals defined in our previous work [4].
Such integrals differ in some fundamental ways from the well-known Aumann-Itô stochastic integrals and are much better suited to representing set-valued martingales, and hence potentially useful in the study of set-valued backward stochastic differential equations. However, similar to the Aumann-Itô integral, the new integral is in general only a set-valued submartingale, and very little is known about the path regularity of the associated indefinite integral, much less about sufficient conditions under which the integral is a true martingale. In this paper, we first establish the existence of right- and left-continuous modifications of set-valued submartingales in continuous time and apply the results to set-valued stochastic integrals. Moreover, we show that a set-valued stochastic integral is a martingale if and only if the set of terminal values of the stochastic integrals associated with the integrand is closed and decomposable. Finally, as a particular example, we study the set-valued martingale given by the conditional expectation of a set-valued random variable. We show that when the random variable is a convex random polytope, the conditional expectation of a vertex remains a vertex of the set-valued conditional expectation if and only if the random polytope has a deterministic normal fan.

Item Open Access Distributionally robust optimal allocation with costly verification (Institute for Operations Research and the Management Sciences (INFORMS), 2025-12)
Bayrak, Halil İbrahim; Koçyiğit, Cağıl; Kuhn, Daniel; Pınar, Mustafa Çelebi
We consider the mechanism design problem of a principal allocating a single good to one of several agents without monetary transfers. Each agent desires the good and uses it to create value for the principal. We designate this value as the agent’s private type. Even though the principal does not know the agents’ types, she can verify them at a cost.
The allocation of the good thus depends on the agents’ self-declared types and the results of any verification performed, and the principal’s payoff equals her value of the allocation minus the verification costs. It is known that if the agents’ types are independent, then a favored-agent mechanism maximizes her expected payoff. However, this result relies on the unrealistic assumption that the agents’ types follow known independent probability distributions. In contrast, we assume here that the agents’ types are governed by an ambiguous joint probability distribution belonging to a commonly known ambiguity set and that the principal maximizes her worst-case expected payoff. We study support-only ambiguity sets, which contain all distributions supported on a rectangle; Markov ambiguity sets, which contain all distributions in a support-only ambiguity set satisfying some first-order moment bounds; and Markov ambiguity sets with independent types, which contain all distributions in a Markov ambiguity set under which the agents’ types are mutually independent. In all cases, we construct explicit favored-agent mechanisms that are not only optimal but also Pareto robustly optimal.

Item Open Access Identification of tangential and normal forces in micro end milling through machine learning analysis of force signals (Inderscience Publishers, 2025-11-25)
Karpat, Yiğit
Developing digital twins of manufacturing processes, such as computer numerical control (CNC) machining, is vital because of their importance for creating high value-added parts. Tool condition monitoring has been an important research topic in this context, with a major focus on analysing machining force signals. Micro-milling is a complex process due to contributing factors such as tool runout, deflection, edge radius, elastic recovery of materials, microstructure effects, and machining dynamics.
This paper focuses on machine learning analysis of force signals to identify the normal and tangential forces acting on the micro end mill. A machine learning algorithm based on Gaussian process regression (GPR) is used to identify the normal and tangential forces as functions of uncut chip thickness. The novelty of this approach is that the identified variation of the normal force with uncut chip thickness reveals information about the minimum uncut chip thickness and the edge radius. Monitoring the variation of these characteristic points on the force curves can be used to identify tool wear and predict the remaining useful tool life.

Item Open Access Local upper bounds based on polyhedral ordering cones (Elsevier B.V., 2025-12-15)
Eichfelder, Gabriele; Ulus, Firdevs
The concept of local upper bounds plays an important role in numerical algorithms for nonconvex, integer, and mixed-integer multiobjective optimization with respect to the componentwise partial ordering, that is, where the ordering cone is the nonnegative orthant. In this paper, we answer the question of whether and how this concept can be extended to arbitrary ordering cones. We define local upper bounds with respect to a closed, pointed, solid, convex cone and study their properties. We show that for special polyhedral ordering cones the concept of local upper bounds can be as practical as it is for the nonnegative orthant.

Item Open Access Evaluating the machine learning models in predicting intensive care unit discharge for neurosurgical patients undergoing craniotomy: a big data analysis (Springer, 2025-05-06)
Khaniyev, Taghi; Cekic, Efecan; Koç, Muhammet Abdullah; Doğan, İlke; Hanalioglu, Sahin
Background: Predicting intensive care unit (ICU) discharge for neurosurgical patients is crucial for optimizing bed resources, reducing costs, and improving outcomes. Our study aims to develop and validate machine learning (ML) models to predict ICU discharge within 24 h for patients undergoing craniotomy.
Methods: A total of 2,742 patients undergoing craniotomy were identified from the Medical Information Mart for Intensive Care dataset using diagnosis-related group and International Classification of Diseases codes. Demographic, clinical, laboratory, and radiological data were collected and preprocessed. Textual clinical examinations were converted into numerical scales. Data were split into training (70%), validation (15%), and test (15%) sets. Four ML models, logistic regression (LR), decision tree, random forest, and neural network (NN), were trained and evaluated. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), average precision (AP), accuracy, and F1 scores. Shapley Additive Explanations (SHAP) were used to analyze feature importance. Statistical analyses were performed using R (version 4.2.1) and ML analyses with Python (version 3.8), using the scikit-learn, tensorflow, and shap packages. Results: The cohort included 2,742 patients (mean age 58.2 years; first and third quartiles 47–70 years), with 53.4% being male (n = 1,464). Total ICU stay was 15,645 bed days (mean length of stay 4.7 days), and total hospital stay was 32,008 bed days (mean length of stay 10.8 days). Random forest demonstrated the highest performance (AUC 0.831, AP 0.561, accuracy 0.827, F1-score 0.339) on the test set. NN achieved an AUC of 0.824, with an AP, accuracy, and F1-score of 0.558, 0.830, and 0.383, respectively. LR achieved an AUC of 0.821 and an accuracy of 0.829. The decision tree model showed the lowest performance (AUC 0.813, accuracy 0.822). Key predictors identified by SHAP analysis included the Glasgow Coma Scale, respiratory-related parameters (i.e., tidal volume, respiratory effort), intracranial pressure, arterial pH, and the Richmond Agitation-Sedation Scale. Conclusions: Random forest and NN predict ICU discharge well, whereas LR is interpretable but less accurate. Numeric conversion of clinical data improved performance.
This study offers a framework for predictions using clinical, radiological, and demographic features, with SHAP enhancing transparency.

Item Open Access A hybrid model to analyze stress distributions at the tool and workpiece interface during drilling of thick CFRP laminates considering thermal effects (Springer UK, 2025-06-18)
Shariar, Fahim; Karagüzel, Umut; Karpat, Yiğit
Drilling is employed as a machining method to meet the demands for producing functional CFRP structures without compromising their unique and desirable material properties. Because of the intrinsic material properties of CFRP and drill-induced damage, drilling CFRP remains a challenging task. This study investigates the stress distributions at the tool-workpiece interface during dry drilling of CFRP. A better understanding of the contact pressure and tangential stress distributions on the cutting edge of drills is necessary for a better selection of process parameters. The drill margin region, which directly affects the hole wall quality, has been included in the analysis. Drilling experiments were conducted to measure thrust force, torque, and temperature for different cutting parameter configurations. Finite element-based thermal models were used to estimate the hole wall surface temperature during drilling. The analytical cutting force model is coupled with the temperature distribution from the FE model to analyze the variation of the contact pressure and tangential stress distributions along the tip of the drill, together with the thermal effects on contact pressure during drilling.

Item Open Access Rolling lookahead learning for optimal classification trees (Taylor and Francis Ltd., 2026-02-02)
Organ, Zeynel Batuhan; Kayış, Enis; Khaniyev, Taghi
Classification trees continue to be widely adopted in machine learning applications due to their inherently interpretable nature and scalability.
We propose a rolling subtree lookahead algorithm that combines the relative scalability of myopic approaches with the foresight of optimal approaches in constructing trees. The limited foresight embedded in our algorithm aims to address the learning pathologies that may arise in optimal approaches. At the heart of our algorithm lies a novel two-depth optimal binary classification tree formulation that is flexible enough to handle any loss function. We show that the feasible region of this formulation is an integral polyhedron, so its LP relaxation yields an optimal solution. Through extensive computational analyses, we demonstrate that our approach achieves better performance than both existing optimization-based solutions, which are subject to practical computational limitations, and computationally efficient myopic approaches in 981 out of 1610 problem instances, improving the out-of-sample accuracy by up to 14.4% and 23.6%, respectively.

Item Embargo Exact solution algorithms for biobjective mixed integer programming problems (Wiley-Blackwell Publishing Ltd., 2025)
Emre, Deniz; Karsu, Özlem; Ulus, Firdevs
We consider criterion space algorithms for biobjective mixed integer programs. The algorithms solve scalarization models in order to explore predetermined regions of the objective space, called boxes, each defined by two nondominated points. When exploring a box, the algorithms exploit information on its corner points and choose the scalarization problem accordingly, so as to detect line segments quickly without having to solve many scalarizations. We propose three algorithms: the first creates new boxes immediately when it finds a nondominated point, whereas the second conducts additional operations after obtaining a nondominated point via the Pascoletti–Serafini scalarization. The third algorithm is another variant that uses the computational advantage of dichotomic search whenever possible.
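As an aside, the dichotomic-search step mentioned in this abstract can be illustrated on a toy instance. The sketch below is purely illustrative (in the actual algorithms each scalarization is a mixed integer linear program, not an enumeration over a finite point list): it recursively bisects on weight vectors to recover the supported nondominated points of a biobjective minimization problem.

```python
def weighted_sum_argmin(points, w1, w2):
    """Scalarization oracle: point minimizing w1*z1 + w2*z2.
    A MILP solve in the real algorithm; here a toy enumeration."""
    return min(points, key=lambda p: w1 * p[0] + w2 * p[1])

def dichotomic_search(points):
    """Find the supported nondominated points of a biobjective
    minimization problem by recursively bisecting on weights."""
    z_left = weighted_sum_argmin(points, 1.0, 1e-6)   # near-lexicographic min of z1
    z_right = weighted_sum_argmin(points, 1e-6, 1.0)  # near-lexicographic min of z2
    found = {z_left, z_right}

    def explore(a, b):
        # weight vector normal to the segment joining a (smaller z1) and b
        w1, w2 = a[1] - b[1], b[0] - a[0]
        if w1 <= 0 or w2 <= 0:
            return
        c = weighted_sum_argmin(points, w1, w2)
        # a new supported point strictly below the segment through a and b?
        if w1 * c[0] + w2 * c[1] < w1 * a[0] + w2 * a[1] - 1e-9:
            found.add(c)
            explore(a, c)
            explore(c, b)

    explore(z_left, z_right)
    return sorted(found)

pts = [(1, 9), (2, 6), (4, 4), (6, 3), (9, 1), (5, 8), (7, 7)]
print(dichotomic_search(pts))   # -> [(1, 9), (2, 6), (4, 4), (9, 1)]
```

Note that the unsupported nondominated point (6, 3) lies above the lower convex envelope and is not found, which is exactly why criterion space algorithms combine dichotomic search with other scalarizations such as Pascoletti–Serafini.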
Our computational experiments demonstrate the computational feasibility of the algorithms and show that the number of mixed integer linear programming models solved is significantly lower compared to similar approaches in the literature. The results further validate the use of the Pascoletti–Serafini scalarization, which aims to enhance the representativeness of the solutions under time and cardinality limits. We observe that the third variant is particularly effective in finding a representative subset of the nondominated solutions under such limits.

Item Open Access Global solution algorithms for DC programming via polyhedral approximations of convex functions (Springer New York LLC, 2025-09-19)
Pirani, Fahaar M.; Ulus, Firdevs
We consider difference-of-convex (DC) programming problems and propose three algorithms to solve them globally. The main working mechanism of the proposed algorithms is to generate polyhedral underestimators of convex functions. Two of these algorithms generate a ‘fine’ polyhedral approximation of the first convex component over the compact feasible region of the DC programming problem. We prove the finiteness of these algorithms and establish the convergence rate of one of them. Moreover, we show that using the polyhedral approximation of the first component, it is possible to compute an approximate global solution of the corresponding DC programming problem without further computational effort. The third algorithm also computes a polyhedral underestimator of the first component of the DC function. Different from the first two algorithms, the third algorithm refines the approximation locally until it finds an approximate global solution to the DC programming problem. It is shown that, for any positive approximation error, the third algorithm stops after finitely many iterations.
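The polyhedral-underestimator mechanism at the core of such algorithms can be sketched in a few lines. In the minimal sketch below, the DC function, sample points, and tolerances are all illustrative (the actual algorithms refine the approximation adaptively rather than using a fixed sample): subgradient cuts of the first convex component give a piecewise-affine function that bounds the DC objective from below.

```python
def polyhedral_underestimator(g, dg, sample_points):
    """Piecewise-affine underestimator of a convex g built from first-order
    (subgradient) cuts: g(x) >= g(x0) + dg(x0) * (x - x0) for each sample x0."""
    cuts = [(g(x0), dg(x0), x0) for x0 in sample_points]
    return lambda x: max(gx + s * (x - x0) for gx, s, x0 in cuts)

# toy DC instance f = g - h with g(x) = x^2 and h(x) = |x - 1|, both convex
g, dg, h = (lambda x: x * x), (lambda x: 2 * x), (lambda x: abs(x - 1))
ell = polyhedral_underestimator(g, dg, sample_points=[-2, -1, 0, 1, 2])

grid = [i / 100 for i in range(-200, 201)]
underestimates = all(ell(x) <= g(x) + 1e-12 for x in grid)  # cuts never overshoot
lower_bound = min(ell(x) - h(x) for x in grid)   # bounds the minimum of g - h
true_min = min(g(x) - h(x) for x in grid)
print(underestimates, lower_bound <= true_min + 1e-12)
```

Since the cuts underestimate g pointwise, minimizing ell - h over the feasible region yields a valid global lower bound for g - h, which is the quantity these algorithms drive toward the true minimum.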
Computational results based on test instances from the literature are provided.

Item Embargo Lagrangian relaxation for airport gate assignment problem (Elsevier Ltd., 2025-11-25)
Okur, Göksu Ece; Karsu, Özlem; Solyalı, Oğuz
In this paper, we focus on the airport gate assignment problem, which minimizes the total walking distance of passengers while ensuring that the number of aircraft assigned to the apron is at its minimum. We utilize an alternative formulation for the problem compared to the ones in the literature and propose approaches based on Lagrangian relaxation so as to obtain tight lower bounds. The method also harnesses the power of a good initial upper bound and provides good-quality solutions. To the best of our knowledge, the current studies in the literature rely only on upper bounds or linear relaxation lower bounds, which are hard to obtain when the problem size is large, to assess the quality of heuristic solutions. We propose using the tighter Lagrangian relaxation-based bounds as a better reference for assessing solution quality. Our computational experiments demonstrate that the new formulation we propose yields tighter lower bounds compared to previous formulations in the literature. Moreover, our Lagrangian relaxation-based method returns even stronger lower bounds than those obtained by solving mixed integer programming formulations or their linear relaxations with an off-the-shelf solver.

Item Embargo Optimal hour-ahead commitment and storage decisions of wind power producers (Elsevier BV, 2025-07-30)
Karakoyun, Ece Cigdem; Avci, Harun; Huh, Woonghee Tim; Kocaman, Ayse Selin; Nadar, Emre
Renewable energy generators often rely on their battery deployments to meet their commitments in electricity markets. We consider the joint energy commitment and storage problem for a wind farm paired with a battery.
The power producer decides, in each hour of a finite planning horizon, how much energy to commit to dispatching or purchasing for the next hour, how much wind energy to generate, and how much energy to charge or discharge. The power producer pays a penalty cost if they do not fully meet their commitment. Using a Markov decision process model under uncertainty in electricity price (assumed to be positive) and wind speed, we first prove the optimality of a state-dependent threshold policy for the power producer’s problem. This policy partitions the state space into several disjoint domains, each associated with a different action type, making it optimal to bring the storage and commitment levels to different threshold pairs in each domain. We then employ our structural results to develop a heuristic solution procedure for a more general setting in which the electricity price can also be negative. Numerical results show the high efficiency and scalability of this procedure. It provides solutions with an average deviation of only 0.3% from optimality and achieves a speedup of two to three orders of magnitude compared to the standard dynamic programming algorithm, reducing computation times from several hours to just a few minutes.

Item Open Access Beyond grids: multi-objective Bayesian optimization with adaptive discretization (Transactions on Machine Learning Research, 2025-09-24)
Nika, Andi; Elahi, Sepehr; Ararat, Çağın; Tekin, Cem
We consider the problem of optimizing a vector-valued objective function f sampled from a Gaussian process (GP) whose index set is a well-behaved, compact metric space (X, d) of designs. We assume that f is not known beforehand and that evaluating f at design x results in a noisy observation of f(x).
Since identifying the Pareto optimal designs via exhaustive search is infeasible when the cardinality of X is large, we propose an algorithm, called Adaptive ϵ-PAL, that exploits the smoothness of the GP-sampled function and the structure of (X, d) to learn fast. In essence, Adaptive ϵ-PAL employs a tree-based adaptive discretization technique to identify an ϵ-accurate Pareto set of designs in as few evaluations as possible. We provide both information-type and metric dimension-type bounds on the sample complexity of ϵ-accurate Pareto set identification. We also experimentally show that our algorithm outperforms other Pareto set identification methods.

Item Open Access Minimizers of sparsity regularized least absolute deviations (Springer New York LLC, 2025-10-11)
Akkaya, Deniz; Pınar, Mustafa Çelebi
Sparse solutions to linear systems of equations affected by noise or modeling errors are considered. In contrast to the standard $ℓ_2-ℓ_0$ formulation, we consider an $ℓ_1-ℓ_0$ formulation to better handle outliers in the data. A sparse solution to the system that minimizes the $ℓ_1$-norm of the residual error is sought. Sparsity is controlled using an $ℓ_0$-norm term weighted by a positive parameter. A detailed study of the local and global minimizers is given. A simple necessary condition for global optimality and conditions for monitoring the sparsity level of the minimizers are derived.
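The objective studied here can be written down and probed directly on a toy instance. The sketch below is purely illustrative (the data, candidate grid, and brute-force search are inventions for this example; the paper characterizes minimizers analytically rather than enumerating): it evaluates F(x) = ||Ax - b||_1 + λ||x||_0 and shows the sparse solution surviving a gross outlier in b.

```python
from itertools import product

def lad_l0(A, b, x, lam):
    """F(x) = ||Ax - b||_1 + lam * ||x||_0 : l1 residual plus weighted sparsity."""
    res = [sum(aij * xj for aij, xj in zip(row, x)) - bi
           for row, bi in zip(A, b)]
    return sum(abs(r) for r in res) + lam * sum(v != 0 for v in x)

# tiny illustrative instance with one gross outlier in b
A = [[1, 0], [0, 1], [1, 1], [1, -1]]
b = [2, 0, 2, 12]          # exact b for x = (2, 0) would be [2, 0, 2, 2]

grid = [i / 2 for i in range(-12, 13)]      # candidate entries -6.0 .. 6.0
best = min(product(grid, repeat=2), key=lambda x: lad_l0(A, b, x, lam=1.0))
print(best)   # -> (2.0, 0.0): the sparse vector survives the outlier
```

With the l1 data term, the outlier contributes only its absolute residual, so the sparse candidate remains the grid minimizer; a least-squares data term would be pulled much harder toward the corrupted entry.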
An upper bound on the maximum entry of a globally optimal solution permits an exact mixed-integer programming (MIP) formulation with constraints derived from the analysis.

Item Open Access Predicting mortality in subarachnoid hemorrhage patients using big data and machine learning: a nationwide study in Türkiye (Multidisciplinary Digital Publishing Institute (MDPI), 2025-02-10)
Khaniyev, Taghi; Çekiç, Efecan; Geçici, Neslihan Nisa; Can, Sinem; Ata, Naim; Ülgü, Mustafa Mahir; Birinci, Suayip; Işıkay, Ahmet Ilkay; Bakır, Abdurrahman; Arat, Anıl; Hanalıoğlu, Şahin
Background/Objective: Subarachnoid hemorrhage (SAH) is associated with high morbidity and mortality rates, necessitating prognostic algorithms to guide decisions. Our study evaluates the use of machine learning (ML) models for predicting 1-month and 1-year mortality among SAH patients using the national electronic health records (EHR) system. Methods: A retrospective cohort of 29,274 SAH patients, identified through the national EHR system from January 2017 to December 2022, was analyzed, with mortality data obtained from the central civil registration system in Türkiye. Variables (n = 102) included pre-admission (n = 65) and post-admission (n = 37) data, such as patient demographics, clinical presentation, comorbidities, laboratory results, and complications. We employed logistic regression (LR), decision trees (DTs), random forests (RFs), and artificial neural networks (ANNs). Model performance was evaluated using the area under the curve (AUC), average precision, and accuracy. Feature significance analysis was conducted using LR. Results: The average age was 56.23 ± 16.45 years (47.8% female). The overall mortality rate was 22.8% at 1 month and 33.3% at 1 year. One-month mortality increased from 20.9% to 24.57% (p < 0.001), and 1-year mortality rose from 30.85% to 35.55% (p < 0.001), in the post-COVID period compared to the pre-COVID period.
For 1-month mortality prediction, the ANN, LR, RF, and DT models achieved AUCs of 0.946, 0.942, 0.931, and 0.916, with accuracies of 0.905, 0.901, 0.893, and 0.885, respectively. For 1-year mortality, the AUCs were 0.941, 0.927, 0.926, and 0.907, with accuracies of 0.884, 0.875, 0.861, and 0.851, respectively. Key predictors of mortality included age, cardiopulmonary arrest, abnormal laboratory results at presentation (such as abnormal glucose and lactate levels), and pre-existing comorbidities. Incorporating post-admission features (n = 37) alongside pre-admission features (n = 65) improved model performance for both 1-month and 1-year mortality predictions, with average AUC improvements of 0.093 ± 0.011 and 0.089 ± 0.012, respectively. Conclusions: Our study demonstrates the effectiveness of ML models in predicting mortality in SAH patients using big data. The robustness, interpretability, and feature significance analysis of the LR models validate their importance. Including post-admission data significantly improved the performance of all models. Our results demonstrate the utility of big data analytics in population-level health outcome studies.

Item Open Access Learning the Pareto set under incomplete preferences: pure exploration in vector bandits (2024-11-26)
Karagözlü, Efe Mert; Yıldırım, Yaşar Cahit; Ararat, Çağın; Tekin, Cem; Dasgupta, S.; Mandt, S.; Li, Y.
We study pure exploration in bandit problems with vector-valued rewards, where the goal is to (approximately) identify the Pareto set of arms given incomplete preferences induced by a polyhedral convex cone. We address the open problem of designing sample-efficient learning algorithms for such problems. We propose Pareto Vector Bandits (PaVeBa), an adaptive elimination algorithm that nearly matches the gap-dependent and worst-case lower bounds on the sample complexity of (ε, δ)-PAC Pareto set identification.
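The cone-induced dominance relation behind this formulation can be sketched concretely. In the minimal sketch below, the mean reward vectors and the cone matrix W are illustrative inventions (PaVeBa itself works with noisy samples and confidence regions rather than known means): an arm is Pareto optimal when no other arm's mean exceeds it by a nonzero direction inside the cone C = {d : Wd >= 0}.

```python
def cone_dominates(u, v, W):
    """u dominates v w.r.t. the cone C = {d : W d >= 0} (maximization)
    iff the difference u - v is a nonzero element of C."""
    d = [ui - vi for ui, vi in zip(u, v)]
    return any(x != 0 for x in d) and all(
        sum(w * x for w, x in zip(row, d)) >= 0 for row in W)

def pareto_set(means, W):
    """Indices of arms whose mean vector no other arm cone-dominates."""
    return [i for i, u in enumerate(means)
            if not any(cone_dominates(v, u, W)
                       for j, v in enumerate(means) if j != i)]

means = [(3.0, 1.0), (1.0, 3.0), (2.0, 2.0), (1.0, 1.0)]
W_orthant = [[1, 0], [0, 1]]            # identity W: componentwise dominance
print(pareto_set(means, W_orthant))     # -> [0, 1, 2]
print(pareto_set(means, [[1, 1], [1, 0]]))  # a wider cone prunes more arms -> [0]
```

The identity W recovers the usual componentwise Pareto set; enlarging the cone encodes stronger (though still incomplete) preferences and shrinks the set of incomparable arms.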
Finally, we provide an in-depth numerical investigation of PaVeBa and its heuristic variants by comparing them with state-of-the-art multi-objective and vector optimization algorithms on several real-world datasets with conflicting objectives.

Item Open Access A literature review on inventory pooling with applications (MDPI AG, 2025-01-20)
Yılmaz, Özlem
In this paper, we provide a review of academic research on inventory pooling published between 2010 and 2024, with a particular emphasis on studies that focus on real-world applications. The review analyzes the research conducted over the past 14 years, evaluates the outcomes of these applied studies, and identifies gaps in the literature. The contribution of this work is twofold: firstly, it provides insights into the extent to which theoretical advancements in inventory pooling have been implemented in practice; secondly, it provides practitioners with an overview of recent real-world applications across various industrial contexts. The findings highlight the impact of inventory pooling on cost savings, service level improvements, inventory optimization in diverse sectors, and sustainability. Additionally, this paper examines the contributions of inventory pooling to economic, environmental, and social sustainability, offering a comprehensive analysis of its role in fostering sustainable practices across supply chains. Finally, the paper discusses practical challenges encountered in implementation and suggests directions for future research in this domain.

Item Open Access An integrated price- and incentive-based demand response program for smart residential buildings: a robust multi-objective model (Elsevier BV, 2024-10-15)
Talebi, Hossein; Kazemi, Aliyeh; Shakouri, G. Hamed; Kocaman, Ayşe Selin; Caldwell, Nigel
Residential buildings consume a significant amount of energy, emphasizing the importance of optimizing energy usage.
Demand-side management (DSM) helps consumers and producers manage energy consumption through incentives and pricing. This study develops a new mathematical model for DSM in smart residential buildings. The extant literature commonly considers only a single objective function, ignores uncertainties, and applies only one price- or incentive-based program to load management in smart residential buildings. This study develops a multi-objective mixed-integer linear programming (MILP) model that applies both price- and incentive-based programs and accounts for uncertainties. The objectives are cost reduction, peak load minimization, user comfort improvement, and load factor maximization. The model can determine optimal schedules for household appliances and power exchange within buildings. The study shows that participating in the incentive-based program in a four-household residential complex yielded a 2% decrease in electricity costs and a 1% reduction in peak load while upholding comfort and load factor levels compared to non-participation. When extended to an eight-household complex, the potential benefits include an 8.3% decrease in electricity cost and a 2.6% reduction in peak load, highlighting the program’s effectiveness in residential energy management strategies.
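The load-shifting idea behind such price-based programs reduces to a minimal sketch. The tariff, appliance duration, and power rating below are illustrative inventions (the paper's model is a multi-objective MILP under uncertainty, not this single-appliance search): a shiftable appliance is moved to the contiguous window with the lowest time-of-use cost.

```python
def schedule_appliance(prices, duration, power_kw):
    """Cheapest contiguous start slot for a shiftable appliance under
    time-of-use prices; a tiny stand-in for a price-based DSM model."""
    costs = [sum(prices[t:t + duration]) * power_kw
             for t in range(len(prices) - duration + 1)]
    start = min(range(len(costs)), key=costs.__getitem__)
    return start, costs[start]

# illustrative hourly tariff (currency units per kWh) with an evening peak
tou = [0.10, 0.10, 0.08, 0.08, 0.12, 0.20, 0.30, 0.30, 0.25, 0.15]
start, cost = schedule_appliance(tou, duration=3, power_kw=2.0)
print(start, round(cost, 2))   # the run is shifted into the cheap morning slots
```

A full DSM model adds binary on/off variables per appliance and time slot, comfort windows, and peak-load coupling constraints; this sketch only shows why shifting against the tariff lowers cost.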