Two years ago, I wrote a paper called “Stoic Ethics for Artificial Agents” that was presented at Canadian A.I. It is a somewhat unusual paper, arguing for virtue ethics as a framework for A.I. systems. Most work on ethical A.I. has assumed (explicitly or implicitly) either a utilitarian or deontological perspective.
I’d like to do a follow-up in the near future that goes into more detail about what actual A.I. architectures might look like from this perspective. See Section 4.1 in the above paper for a very general sketch.
For a very clear argument that we cannot avoid talking about what it means to live a good life when we talk about ethics and justice, I highly recommend Justice by Michael Sandel. There is an accompanying website with lecture videos and other resources.
I recently came across the book Technology and the Virtues by Shannon Vallor, and would like to read and review it as soon as possible. The OUP website says that it applies the virtue ethics framework “to specific ethical challenges from emerging technologies: military and social robotics, new social media, digital surveillance and self-tracking, and biomedical enhancement.”
What do you think? Can virtue ethics inform the development of A.I. systems?