Moral Psychology of AI

Artificial intelligence (AI) is increasingly used to perform tasks with a moral dimension, such as prioritising scarce medical resources. Debates rage about the ethical issues raised by AI, how we should program ethical AI, and which ethical values we should prioritise. But machine morality is as much about human moral psychology as it is about the philosophical and practical issues of building artificial agents. To reap the benefits of AI, stakeholders need to be willing to use, adopt, and rely on these systems: they must trust in AI agents.

My work draws on psychology and philosophy to explore how and when humans trust AI agents that act as ‘moral machines’. Building on classic models of trust and recent theoretical work in moral psychology on the complexity of trust in the moral domain, it examines the characteristics of AI agents that predict trust, the individual differences that make us more or less likely to trust AI agents, and the situations in which we are more or less willing to rely on AI.

This work is currently supported by an ESRC New Investigator Grant and an ERC Starting Grant.

Jim A.C. Everett
Reader (Associate Professor) in Psychology

My research interests focus on moral judgement and perceptions of moral character.
