Title: The effects of feedback on judgmental interval predictions
Authors: Bolger, F.; Önkal-Atay, D.
Date issued: 2004
Date deposited: 2015-07-28
Type: Article
Language: English
ISSN: 0169-2070
DOI: 10.1016/S0169-2070(03)00009-8
URI: http://hdl.handle.net/11693/11251
Keywords: Judgmental forecasting; Calibration; Feedback; Overconfidence; Confidence intervals

Abstract: The majority of studies of probability judgment have found that judgments tend to be overconfident and that the degree of overconfidence is greater the more difficult the task. Further, these effects have been resistant to attempts to 'debias' via feedback. We propose that under favourable conditions, provision of appropriate feedback should lead to significant improvements in calibration, and the current study aims to demonstrate this effect. To this end, participants first specified ranges within which the true values of time series would fall with a given probability. After receiving feedback, forecasters constructed intervals for new series, changing their probability values if desired. The series varied systematically in terms of their characteristics, including amount of noise, presentation scale, and existence of trend. Results show that forecasts were initially overconfident but improved significantly after feedback. Further, this improvement was not simply due to 'hedging', i.e. shifting to very high probability estimates and extremely wide intervals; rather, it seems that calibration improvement was chiefly obtained by forecasters learning to evaluate the extent of the noise in the series.

© 2003 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.