Browsing by Author "Goodwin, P."
Now showing 1 - 8 of 8
Item (Open Access): Antecedents and effects of trust in forecasting advice (Elsevier, 2013)
Goodwin, P.; Gönül, M. S.; Önkal, D.
Forecasting support systems (FSSs) have little value if users distrust the information and advice that they offer. Two experiments were used to investigate: (i) factors that influence the levels of users' stated trust in advice provided by an FSS, when this advice is provided in the form of interval forecasts; (ii) the extent to which stated trust is associated with users' modifications of the provided forecasts; and (iii) the consequences of these modifications for the calibration of the interval forecasts. Stated trust was influenced by the levels of noise in time series and by whether a trend was present, but was unaffected by the presence or absence of point forecasts. It was also higher when the intervals were framed as 'best-case/worst-case' forecasts and when the FSS provided explanations. Absence of trust was associated with a tendency to narrow the provided prediction intervals, which reduced their calibration.

Item (Open Access): Aviation risk perception: a comparison between experts and novices (Wiley-Blackwell Publishing, 2004)
Thomson, M. E.; Önkal, D.; Avcioğlu, A.; Goodwin, P.
This article describes an exploratory investigation of the risk perceptions of experts and novices in relation to helicopter operations, under conditions where the participants are matched on various characteristics previously found to affect perceptions, such as demographic, gender, and background factors. The study reports considerable evidence of perceptual differences between the two participant groups (i.e., expert pilots and candidate pilots). We find that the experts' perceptions of relative risks are more veridical, in terms of their higher correlation with the true relative frequencies.
A significant positive correlation between flight hours and contextual risk-taking tendency is also shown, leading the experienced pilots' choices toward risky alternatives in the scenarios, a potential result of overconfidence based on superior task performance. Possible explanations are offered for the findings, and potential avenues for future research are identified.

Item (Open Access): Contrast effects in judgmental forecasting when assessing the implications of worst and best case scenarios (John Wiley & Sons, Ltd., 2019)
Goodwin, P.; Gönül, S.; Önkal, D.; Kocabıyıkoğlu, A.; Göğüş, Celile Itır
Two experiments investigated whether individuals' forecasts of the demand for products and of a stock market index, made assuming a best or worst case scenario, depend on whether they have seen a single scenario in isolation or have also seen a second scenario presenting an opposing view of the future. Normatively, scenarios should be regarded as belonging to different plausible future worlds, so the judged implications of one scenario should not be affected when other scenarios are available. However, the results provided evidence of contrast effects, in that the presentation of a second "opposite" scenario led to more extreme forecasts consistent with the polarity of the original scenario. In addition, people were more confident about their forecasts based on a given scenario when two opposing scenarios were available.
We examine the implications of our findings for the elicitation of point forecasts and judgmental prediction intervals, and for the biases that are often associated with them.

Item (Open Access): Expectations, use and judgmental adjustment of external financial and economic forecasts: an empirical investigation (John Wiley & Sons Ltd., 2009)
Gönül, S.; Önkal, D.; Goodwin, P.
A survey of 124 users of externally produced financial and economic forecasts in Turkey investigated their expectations and perceptions of forecast quality and their reasons for judgmentally adjusting forecasts. Expectations and quality perceptions mainly related to the timeliness of forecasts, the provision of a clear, justifiable rationale, and accuracy; cost was less important. Forecasts were frequently adjusted when they lacked a justifiable explanation, when users felt they could integrate their own knowledge into the forecast, or when users perceived a need to take responsibility for the forecast. Forecasts were less frequently adjusted when they came from a well-known source and were based on sound explanations and assumptions. The presence of feedback on accuracy reduced the influence of these factors. The seniority and experience of users had little effect on their attitudes or propensity to make adjustments.

Item (Open Access): Feedback-labelling synergies in judgmental stock price forecasting (Elsevier, 2004)
Goodwin, P.; Önkal-Atay, D.; Thomson, M. E.; Pollock, A. C.; Macaulay, A.
Research has suggested that outcome feedback is less effective than other forms of feedback in promoting learning by users of decision support systems. However, if circumstances can be identified where the effectiveness of outcome feedback can be improved, this offers considerable advantages, given its lower computational demands, ease of understanding and immediacy.
An experiment in stock price forecasting was used to compare the effectiveness of outcome and performance feedback: (i) when different forms of probability forecast were required, and (ii) with and without the presence of contextual information provided as labels. For interval forecasts, the effectiveness of outcome feedback came close to that of performance feedback, as long as labels were provided. For directional probability forecasts, outcome feedback was not effective, even when labels were supplied. Implications are discussed and future research directions are suggested.

Item (Open Access): Judgemental forecasting: a review of progress over the last 25 years (Elsevier, 2006)
Lawrence, M.; Goodwin, P.; O'Connor, M.; Önkal, D.
The past 25 years have seen phenomenal growth of interest in judgemental approaches to forecasting and a significant change of attitude on the part of researchers towards the role of judgement. While judgement was previously thought to be the enemy of accuracy, today it is recognised as an indispensable component of forecasting, and much research attention has been directed at understanding and improving its use. Human judgement can be demonstrated to provide a significant benefit to forecasting accuracy, but it can also be subject to many biases. Much of the research has been directed at understanding and managing these strengths and weaknesses. An indication of the explosion of research interest in this area is that over 200 studies are referenced in this review.

Item (Open Access): The relative influence of advice from human experts and statistical methods on forecast adjustments (John Wiley & Sons Ltd, 2009)
Önkal, D.; Goodwin, P.; Thomson, M.; Gönül, S.; Pollock, A.
Decision makers and forecasters often receive advice from different sources, including human experts and statistical methods.
This research examines, in the context of stock price forecasting, how the apparent source of advice affects the attention paid to it when the mode of delivery is identical for both sources. In Study 1, two groups of participants were given the same advised point and interval forecasts. One group was told that these were the advice of a human expert; the other, that they were generated by a statistical forecasting method. The participants were then asked to adjust forecasts they had previously made in light of this advice. While in both cases the advice led to improved point forecast accuracy and better calibration of the prediction intervals, the advice that apparently emanated from a statistical method was discounted much more severely. In Study 2, participants were provided with advice from two sources. When participants were told that both sources were human experts, or that both were statistical methods, the apparently statistical advice had the same influence on the adjusted estimates as the advice that appeared to come from a human expert. However, when the apparent sources of advice were different, much greater attention was paid to the advice that apparently came from a human expert. Theories of advice utilization are used to identify why the advice of a human expert is likely to be preferred to advice from a statistical method.

Item (Open Access): Why should I trust your forecasts? (International Institute of Forecasters, 2012)
Gönül, M. S.; Önkal, D.; Goodwin, P.
Mistrust is a serious problem for organizations. So much has been written about functional biases and misaligned incentives that one wonders how anyone can trust a forecast provider. Now, however, we have some studies that shed new light on the factors that can build or impede trust in forecasting.
In this article, Sinan, Dilek, and Paul discuss the latest research findings on the steps you can take to improve trust and reduce dysfunctional behavior in the forecasting function. Their conclusions offer a checklist of steps to eliminate, or at least minimize, the element of mistrust in your forecasts.
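Several of the abstracts above refer to the "calibration" of interval forecasts: a set of, say, 90% prediction intervals is well calibrated when roughly 90% of the realised outcomes actually fall inside their intervals, so narrowing intervals without cause (as distrustful users did in the first study) hurts calibration. A minimal illustrative sketch of this idea, with entirely made-up numbers and a hypothetical `hit_rate` helper not drawn from any of the papers:

```python
def hit_rate(intervals, outcomes):
    """Fraction of outcomes falling inside their [low, high] interval.

    If forecasts are well calibrated, this should be close to the
    nominal confidence level of the intervals (e.g. 0.9 for 90%).
    """
    hits = sum(low <= y <= high for (low, high), y in zip(intervals, outcomes))
    return hits / len(outcomes)

# Hypothetical 90% prediction intervals and realised values (illustrative only).
intervals = [(95, 105), (90, 110), (100, 120), (80, 100), (105, 125)]
outcomes = [101, 112, 118, 85, 130]

coverage = hit_rate(intervals, outcomes)
print(coverage)  # 3 of 5 outcomes fall inside: 0.6, well below the nominal 0.9
```

Over-narrowed intervals lower this coverage fraction below the nominal level, which is the sense in which the first abstract says trust-driven narrowing "reduced their calibration".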