Poster in Workshop: AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics
#05: Morality is a Two-Way Street: The Role of Mind Perception and Moral Attribution in AI Safety
Jacy Anthis
Keywords: [ moral agency ] [ artificial intelligence ] [ human-AI interaction ] [ mind perception ] [ moral patiency ] [ moral psychology ] [ AI safety ]
Moral psychology can be directly used to encode human values in AI development, but as AI technology advances, the moral psychology of the humans who interact with AI systems may play an increasingly important role in how AI is developed and used. In this short essay, I argue that if we can better understand how humans attribute minds, moral patiency, and moral agency to machines, then we can better prepare for the complex sociology of how engineers will interact with cutting-edge AI systems (e.g., How easily could they be deceived?), how the public will react to new AIs (e.g., What will be the next 'ChatGPT moment'?), and the risks of catastrophic human-AI conflict (e.g., Can we align the interests of intelligent systems if their relationship is one of dominance or abuse?). I briefly illustrate this research direction with an empirical study in which 1,163 online participants compared the moral patiency of 30,238 AI profiles, presented in pairs, with randomized features (e.g., language, emotion) to estimate the relative effects of different features on moral consideration.
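To make the pairwise design concrete, the sketch below shows one standard way such data are analyzed: estimating average marginal component effects (AMCEs) with a linear probability model, with standard errors clustered by participant. This is a minimal sketch under assumed conventions, not the study's actual analysis code; the file path, column names (`chosen`, `language`, `emotion`, `participant_id`), and feature set are hypothetical placeholders.

```python
# Hypothetical AMCE estimation for a pairwise conjoint design.
# Assumes a long-format dataset with one row per AI profile: a binary
# outcome `chosen` (1 if that profile was judged the more morally
# considerable of its pair), categorical feature columns such as
# `language` and `emotion`, and a `participant_id` column.
# All names and the file path are illustrative, not from the study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("conjoint_profiles.csv")  # hypothetical data file

# Linear probability model: each coefficient estimates the AMCE of a
# feature level relative to that feature's baseline level.
model = smf.ols("chosen ~ C(language) + C(emotion)", data=df)

# Cluster standard errors by participant, since each respondent
# contributes many profile pairs.
result = model.fit(cov_type="cluster",
                   cov_kwds={"groups": df["participant_id"]})
print(result.summary())
```

A linear probability model is conventional here because, with fully randomized features, each coefficient is directly interpretable as the percentage-point shift in the probability that a profile is chosen when a feature takes a given level rather than its baseline.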