Author: Fatollahi, Alireza
Date accessioned: 2024-03-19
Date available: 2024-03-19
Date issued: 2023-02-21
ISSN: 1879-4912
Handle: https://hdl.handle.net/11693/114936

Abstract: There has been a lively debate in the philosophy of science over predictivism: the thesis that successfully predicting a given body of data provides stronger evidence for a theory than merely accommodating the same body of data. I argue for a very strong version of the thesis using statistical results on the so-called "model selection" problem. This is the problem of finding the optimal model (family of hypotheses) given a body of data. The key idea that I will borrow from the statistical literature is that the level of support a hypothesis, H, receives from a body of data, D, is inversely related to the number of adjustable parameters of the model from which H was constructed. I will argue that when D is not essential to the design of H (i.e., when it is predicted), the model to which H belongs has fewer adjustable parameters than when D is essential to the design of H (i.e., when it is accommodated). This, I argue, yields an argument for a very strong version of predictivism.

Language: en
Keywords: Predictivism; Model selection; Akaike information criterion; Bayesian information criterion
Title: Predictivism and model selection
Type: Article
DOI: 10.1007/s13194-023-00512-1
eISSN: 1879-4920
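
Note (not part of the repository record): as a reading aid, below are the standard textbook forms of the two model-selection criteria named in the keywords, which make explicit the penalty on adjustable parameters that the abstract appeals to. The symbols k (number of adjustable parameters), n (number of data points), and \hat{L} (maximized likelihood) are the conventional ones, not notation taken from the paper itself.

% Standard definitions of the Akaike and Bayesian information criteria;
% lower scores indicate a better model.
% k = number of adjustable parameters, n = sample size,
% \hat{L} = maximized likelihood of the model given the data D.
\begin{align}
  \mathrm{AIC} &= 2k - 2\ln \hat{L} \\
  \mathrm{BIC} &= k \ln n - 2\ln \hat{L}
\end{align}
% Both criteria trade fit (\ln \hat{L}) against complexity (k), which is the
% sense in which support is inversely related to the number of adjustable
% parameters of the model from which the hypothesis was constructed.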