A modular declarative framework for evaluating actions under different ethical principles

Fiona Berreby, Gauvain Bourgne, Jean-Gabriel Ganascia

CNRS & Sorbonne Universités, LIP6, 4 place Jussieu 75005 Paris, France

31 August 2018. RIA, Volume 32, no 4/2018.

This paper investigates the use of high-level action languages for designing ethical autonomous agents. It proposes a novel and modular logic-based framework for representing and reasoning over a variety of ethical theories, based on a modified version of the Event Calculus and implemented in Answer Set Programming. The ethical decision-making process is conceived of as a multi-step procedure captured by four types of interdependent models which allow the agent to assess its environment, reason over its accountability and make ethically informed choices. The overarching ambition of the presented research is twofold. First, to allow the systematic representation of an unbounded number of ethical reasoning processes, through a framework that is adaptable and extensible. Second, to avoid the common pitfall of too readily embedding moral information within computational engines, thereby feeding agents with atomic answers that fail to truly represent underlying dynamics. We aim instead to comprehensively displace the burden of moral reasoning from the programmer to the program itself.  


computational ethics, answer set programming, event calculus, reasoning about actions and change
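To give a flavour of the kind of encoding the abstract refers to, the sketch below shows a minimal discrete Event Calculus fragment in ASP (clingo syntax). It is not the authors' code: the fluent, action, and predicate names (`alive`, `shoot`, `holds`, `clipped`) are hypothetical placeholders, and the snippet only illustrates the inertia-based style of reasoning about actions and change on which the framework builds.

```prolog
% Minimal illustrative sketch (not the paper's implementation):
% a discrete Event Calculus fragment in ASP.
time(0..3).
fluent(alive).
action(shoot).

% Effect axiom: the action 'shoot' terminates the fluent 'alive'.
terminates(shoot, alive).

% Scenario: 'alive' holds initially; 'shoot' occurs at time 1.
holds(alive, 0).
occurs(shoot, 1).

% Inertia: a fluent persists from T to T+1 unless it is clipped at T.
holds(F, T+1) :- holds(F, T), time(T), time(T+1), not clipped(F, T).
clipped(F, T) :- occurs(A, T), terminates(A, F).
```

Under this program, `holds(alive, 1)` is derived by inertia, while `clipped(alive, 1)` blocks `holds(alive, 2)`: effects are stated once and persistence follows by default, which is what makes such encodings modular.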

1. Introduction
2. Background
3. Structural outline
4. Action model
5. Causal model
6. Model of the Good
7. Model of the Right
8. Comparison of ethical principles
9. Conclusion

Alexander L., Moore M. (2016). Deontological ethics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy.

Anderson M., Anderson S. (2011). Machine ethics. Cambridge University Press.

Anderson M., Anderson S. L. (2014). GenEth: A general ethical dilemma analyzer. In AAAI, p. 253–261. AAAI Press.

Arkin R. (2009). Governing lethal behavior in autonomous robots. CRC Press.

Beauchamp T., Childress J. (2001). Principles of biomedical ethics. Oxford University Press.

Beebee H., Hitchcock C., Menzies P. (2009). The Oxford handbook of causation. Oxford University Press.

Bentham J. (2001). A fragment on government. The Lawbook Exchange, Ltd.

Berreby F., Bourgne G., Ganascia J.-G. (2015). Modelling moral reasoning and ethical responsibility with logic programming. In LPAR, p. 532–548.

Berreby F., Bourgne G., Ganascia J.-G. (2018). Event-based and scenario-based causality for computational ethics. In AAMAS, p. 147–155.

Blass J. A., Forbus K. D. (2015). Moral decision-making by analogy: Generalizations versus exemplars. In B. Bonet, S. Koenig (Eds.), AAAI, p. 501–507. AAAI Press.

Bringsjord S., Taylor J. (2012). The divine-command approach to robot ethics. In Robot ethics: The ethical and social implications of robotics, p. 85–108. MIT Press, Cambridge, MA.

Bryson J. J. (2018). Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics Inf. Technol., vol. 20, no 1, p. 15–26.

Cointe N., Bonnet G., Boissier O. (2016). Ethical judgment of agents’ behaviors in multi-agent systems. In AAMAS, p. 1106–1114. IFAAMAS.

Dennis L., Fisher M., Slavkovik M., Webster M. (2016). Formal verification of ethical choices in autonomous systems. Robotics and Autonomous Systems, vol. 77, p. 1–14.

Dignum V. (2017). Responsible autonomy. In Proceedings of IJCAI, p. 4698–4704.

Dodig Crnkovic G., Çürüklü B. (2012). Robots: ethical by design. Ethics and Information Technology, vol. 14, no 1, p. 61–71.

Feinberg J. (1970). Doing and deserving: Essays in the theory of responsibility.

Foot P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, vol. 5, p. 5–15.

Fox M., Long D. (2003). PDDL2.1: An extension to PDDL for expressing temporal planning domains. J. Artif. Int. Res., vol. 20, no 1, p. 61–124.

Ganascia J.-G. (2007). Modelling ethical rules of lying with Answer Set Programming. Ethics and information technology, vol. 9, no 1, p. 39–47.

Ganascia J.-G. (2015). Non-monotonic resolution of conflicts for ethical reasoning. In A construction manual for robots’ ethical systems, p. 101–118. Springer.

Gelfond M. (2008). Answer sets. Foundations of Artificial Intelligence, vol. 3, p. 285–316.

Gelfond M., Lifschitz V. (1988). The stable model semantics for logic programming. In ICLP/SLP, vol. 88, p. 1070–1080.

Gelfond M., Lifschitz V. (1991). Classical negation in logic programs and disjunctive databases. New generation computing, vol. 9, no 3-4, p. 365–385.

Govindarajulu N. S., Bringsjord S. (2017). On automating the doctrine of double effect. In C. Sierra (Ed.), Proceedings of IJCAI, p. 4722–4730.

Halpern J., Hitchcock C. (2010). Actual causation and the art of modelling. In R. Dechter, H. Geffner, J. Halpern (Eds.), Heuristics, probability, and causality, p. 383–406. London: College Publications.

Halpern J. Y. (2015). A modification of the Halpern-Pearl definition of causality. In Proceedings of IJCAI, p. 3022–3033.

Hopkins M., Pearl J. (2007). Causality and counterfactuals in the situation calculus. Journal of Logic and Computation, vol. 17, no 5, p. 939–953.

Hume D. (2012). A treatise of human nature. Courier Corporation.

Kant I. (1964). Groundwork of the metaphysic of morals. New York: Harper & Row.

Kim T.-W., Lee J., Palla R. (2009). Circumscriptive event calculus as answer set programming. In Proceedings of IJCAI, vol. 9, p. 823–829.

Kluckhohn C. (1951). Values and value-orientations in the theory of action: An exploration in definition and classification.

Kment B. (2014). Modality and Explanatory Reasoning. Oxford University Press.

Lifschitz V. (2008). What is Answer Set Programming? In AAAI, vol. 8, p. 1594–1597.

Lorini E. (2012). On the logical foundations of moral agency. In DEON, vol. 7393, p. 108–122. Springer.

Lorini E., Longin D., Mayor E. (2014). A logical analysis of responsibility attribution: emotions, individuals and collectives. J. Log. Comput., vol. 24, no 6, p. 1313–1339.

McDermott D., Ghallab M., Howe A., Knoblock C., Ram A., Veloso M. et al. (1998). PDDL: The Planning Domain Definition Language.

McLaren B. M. (2006). Computational models of ethical reasoning: Challenges, initial steps, and future directions. IEEE Intelligent Systems, vol. 21, no 4, p. 29–37.

Mueller E. T. (2008). Event calculus. In Handbook of Knowledge Representation, Foundations of Artificial Intelligence, vol. 3, p. 671–708. Elsevier.

Noothigattu R., Gaikwad S. N. S., Awad E., Dsouza S., Rahwan I., Ravikumar P. et al. (2018). A voting-based system for ethical decision making. In S. A. McIlraith, K. Q. Weinberger (Eds.), AAAI. AAAI Press.

Nozick R. (1974). Anarchy, state, and utopia. New York: Basic Books.

Pearl J. (2003). Causality: models, reasoning, and inference. Econometric Theory, vol. 19, p. 675–685.

Pereira L. M., Saptawijaya A. (2007). Modelling morality with prospective logic. In Progress in Artificial Intelligence, p. 99–111. Springer.

Pereira L. M., Saptawijaya A. (2017). Agent morality via counterfactuals in logic programming. In Bridging@cogsci, vol. 1994, p. 39–53.

Schiffel S., Thielscher M. (2006). Reconciling situation calculus and fluent calculus. In AAAI, p. 287–292. AAAI Press.

Serramia M., López-Sánchez M., Rodríguez-Aguilar J. A., Rodríguez M., Wooldridge M., Morales J. et al. (2018). Moral values in norm decision making. In E. André, S. Koenig, M. Dastani, G. Sukthankar (Eds.), AAMAS, p. 1294–1302. IFAAMAS / ACM.

Singer P. (2005). Ethics and intuitions. The Journal of Ethics, vol. 9, no 3-4, p. 331–352.

Sloman S., Barbey A. K., Hotaling J. M. (2009). A causal model theory of the meaning of cause, enable, and prevent. Cognitive Science, vol. 33, no 1, p. 21–50.

Sosa E., Tooley M. (1993). Causation, vol. 27, no 1. Oxford University Press.

Struhl K., Rothenberg P. (1975). Ethics in perspective: a reader. Random House.

Tufiş M., Ganascia J.-G. (2015). Grafting norms onto the BDI agent model. In A construction manual for robots' ethical systems. Springer.

Wallach W., Allen C., Smit I. (2008). Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI Soc., vol. 22, no 4, p. 565–582.

Wu Y., Lin S. (2018). A low-cost ethics shaping approach for designing reinforcement learning agents. In S. A. McIlraith, K. Q. Weinberger (Eds.), AAAI. AAAI Press.