Abstract
Previous research has shown that people prefer algorithmic to judgmental forecasts in the absence of outcome feedback but judgmental to algorithmic forecasts when feedback is provided. However, all this work has used cue-based forecasting tasks. The opposite pattern of results has been reported for time series forecasting tasks. This reversal could have arisen because cue-based forecasting studies have used preference paradigms, whereas time series forecasting studies have employed advice-taking paradigms. In a first experiment, we show that when a preference paradigm is used in time series forecasting, the difference in the conclusions about the effects of feedback in the two types of forecasting disappears. In a second experiment, we show that providing guidance on the accuracy of algorithmic and judgmental forecasts can eliminate the effects of feedback. Two further experiments reveal how choices between algorithmic and judgmental forecasts are influenced by the way in which those forecasts are labeled.
Original language | English |
---|---|
Number of pages | 22 |
Journal | International Journal of Forecasting |
Publication status | E-pub ahead of print - 16 Sept 2024 |
Keywords
- Algorithm aversion
- Algorithmic forecast
- Feedback effects
- Framing effects
- Guidance
- Judgmental forecast
- Labeling effects