Abstract:
Even if algorithms make better predictions than humans on average, humans may sometimes have “private” information, unavailable to the algorithm, that can improve predictions. How can we help humans effectively use and adjust algorithmic recommendations in such situations? We hypothesize that, when deciding whether and how to override an algorithm’s recommendations, people are biased towards a naïve advice weighting (NAW) heuristic: they take a weighted average of their own prediction and the algorithm’s, with a constant weight across prediction instances, regardless of whether they have valuable private information. As a result, humans over-adhere to the algorithm’s predictions when their private information is valuable and under-adhere when it is not. In a lab experiment in which participants make demand predictions for 20 products with access to an algorithm’s recommendations, we confirm this bias towards NAW and find that it leads to a 20-61% increase in prediction error. In a follow-up experiment, we find that feature transparency, even when the underlying algorithm is a black box, helps users more effectively discriminate when and how to deviate from the algorithm, resulting in a 25% reduction in prediction error.
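As a minimal sketch of the NAW heuristic described above (the notation is illustrative and not defined in the abstract): let \(\hat{y}_{H,i}\) denote the human's own prediction for instance \(i\), \(\hat{y}_{A,i}\) the algorithm's recommendation, and \(w\) a weight assumed constant across instances. The NAW forecast is then

\[
\hat{y}^{\mathrm{NAW}}_{i} \;=\; w\,\hat{y}_{H,i} \;+\; (1-w)\,\hat{y}_{A,i},
\qquad w \in [0,1]\ \text{constant across } i .
\]

Effective deviation from the algorithm would instead require the weight to adapt across instances, placing more weight on the human's own prediction only when valuable private information is available for that instance.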