Project Details
Theory of Individualized Machines: Indirect Algorithmic Self-Advising
Applicant
Dr. Tobias Rebholz
Subject Area
General, Cognitive and Mathematical Psychology
Social Psychology, Industrial and Organisational Psychology
Term
since 2024
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 553942000
Algorithms are becoming increasingly important in everyday life, and so is research on augmented (i.e., algorithmically supported) judgment and decision-making. Pragmatically, algorithms provide efficient means to reduce complexity in the decision-making environment (e.g., product recommendations). Nevertheless, people often prefer humans over algorithms, particularly after seeing an algorithm err, a phenomenon called “algorithm aversion.” However, users also appreciate algorithmic advice when they are more familiar with the underlying decision process and its outcomes, or when they can influence an algorithm’s behavior. Both appreciation mechanisms benefit from a shared interaction history that allows users to improve their inferences about an algorithm’s behavior and, conversely, to train individualized algorithms that adapt to their users’ behavior. This interdependence implies that users can steer the output of such algorithms in the desired direction by deliberately making certain decisions and avoiding others. Put differently, the shared history allows users to act upon their theory of individualized machines (ToIM). Reminiscent of the theory of mind, the original theory of machine framework conceptualizes the idea that users attribute thought processes or mental states not only to other people but also to algorithms.

The aim of this project is to bridge the gap between aversion to and appreciation of individualized algorithms. In the first work package, three experiments will test the behavioral consequences of ToIM-induced indirect algorithmic self-advising. Specifically, we will measure self-esteem (as a moderator of egocentrism) and manipulate task importance (as a moderator of similarity-attraction) and temporal discrepancy (as a moderator of belief updating) to test hypotheses about their moderating effects on participants’ willingness to integrate individualized algorithmic output. Building on these process explanations, two further experiments in the second work package will assess the effects of consider-the-opposite interventions on users’ ToIM and on the output generation mechanisms implemented in large language models (LLMs).

This work program aims to deepen our understanding of the factors that influence users’ willingness to rely on individualized AI systems. Leveraging existing technology for which indirect algorithmic self-advising is inherently relevant (i.e., the general context-dependency of LLMs) ensures both feasibility and relevance to the societal implications of the growing deployment of AI.
DFG Programme
WBP Fellowship
International Connection
USA