Poster in Workshop: AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics
#32: Foundational Moral Values for AI Alignment
Betty Hou · Brian Green
Keywords: [ moral philosophy ] [ alignment ] [ morality ] [ artificial intelligence ] [ moral values ] [ ethics ]
Solving the AI alignment problem requires a defensible set of clear values toward which AI systems can align. Currently, targets for alignment remain underspecified and lack philosophical robustness. In this paper, we argue for the inclusion of five core, foundational values, drawn from moral philosophy and built on the requisites for human existence: survival, sustainable intergenerational existence, society, education, and truth. These values not only provide a clearer direction for technical alignment work; they also highlight the threats and opportunities AI systems present for both obtaining and sustaining these values.