Abstract
Artificial intelligence (AI) algorithms intervene in and contribute to human decision-making processes in an increasing number of contexts. Most of these contexts involve a potential loss. For example, on the battlefield, a person's life is at stake; in financial decision-making, one's savings; and in medical decisions, one's health. We investigated whether framing an outcome as a loss rather than a gain affects people's unwillingness to rely on algorithms' forecasts. We found that algorithm aversion is robust to the framing of the outcome as a gain versus a loss. Even though people seek to avoid losses more than they seek to acquire equivalent gains, this manipulation did not make people more or less algorithm averse. Loss framing did, however, affect people's performance predictions regardless of who made the estimates (i.e., the model or the participants themselves): people in the loss condition predicted lower scores for both the model's estimates and their own. This difference was not significant for post-performance evaluations, suggesting that loss framing may be effective only before the actual performance, while the loss is still a possibility. The findings of this study can inform technology policies that aim to facilitate better interaction with algorithms.