Formal verification of moral values in MAS

Grégory Bonnet, Bruno Mermet, Gaële Simon

Laboratoire GREYC - UMR 6072, Université du Havre, France

Corresponding Author Email: Bruno.Mermet@unicaen.fr

Pages: 449-470

DOI: https://doi.org/10.3166/RIA.31.449-470

OPEN ACCESS

Abstract: 

The increasing use of autonomous artificial agents in hospitals or in transport control systems leads us to consider whether the moral rules shared by many of us are followed by these agents. This is a particularly hard problem because these moral rules are often mutually incompatible. In such cases, humans usually follow ethical rules to promote one moral rule over another. Using formal verification to ensure that an agent follows a given ethical rule could help increase confidence in artificial agents. In this article, we show how a set of formal properties can be obtained from an ethical rule that orders conflicting moral rules with respect to a value system. If the behaviour of an agent verifies these properties (which can be proven using our existing proof framework), then this agent follows this ethical rule.
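To make this concrete, here is a minimal sketch of such a property (our illustration, not notation from the paper; the predicates conflict and obeys and the preference order \succ_V are hypothetical). A value system V induces a strict preference \succ_V over the set M of moral rules, and whenever two rules conflict in a state, the agent must satisfy the preferred one:

\[
\forall s \in S,\ \forall m_i, m_j \in M:\quad
\mathit{conflict}(m_i, m_j, s) \,\land\, m_i \succ_V m_j
\;\Longrightarrow\; \mathit{obeys}(a, m_i, s)
\]

An agent a then follows the ethical rule exactly when this invariant holds in every reachable state, which is the kind of property a proof framework such as the one mentioned in the abstract is meant to discharge.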

Keywords: 

formal specification, multi-agent systems, ethics, computational ethics

1. Introduction
2. State of the art
3. GDT4MAS
4. Proving an ethics
5. Case study
6. Conclusion and perspectives
  References

Abel D., MacGlashan J., Littman M. (2016). Reinforcement learning as a framework for ethical decision making. In AAAI Workshop: AI, Ethics, and Society.

Abramson D., Pike L. (2011). When Formal Systems Kill: Computer Ethics and Formal Methods. APA Newsletter on Philosophy and Computers, vol. 11, no 1.

Abrial J.-R. (1996). The B-Book. Cambridge Univ. Press.

Alechina N., Logan B., Whitsey M. (2004). A complete and decidable logic for resource-bounded agents. In 3rd AAMAS.

Anderson M., Anderson S. (2014). Toward ensuring ethical behavior from autonomous systems: a case-supported principle-based paradigm. Industrial Robot, vol. 42, no 4, p. 324-331.

Arkin R. (2009). Governing lethal behavior in autonomous robots. Chapman and Hall.

Atkinson K., Bench-Capon T. (2016). Value based reasoning and the actions of others. In 22nd European Conference on Artificial Intelligence, p. 680-688.

Baldoni M., Baroglio C., Gungui I., Martelli A., Martelli M., Patti V., et al. (2005). Reasoning About Agents’ Interaction Protocols Inside DCaseLP. In LNCS, vol. 3476, p. 112-131.

Beavers A. (2011). Moral machines and the threat of ethical nihilism. In Robot ethics: the ethical and social implications of robotics, p. 333-386. MIT Press.

Bench-Capon T., Atkinson K. (2009). Abstract argumentation and values. In G. Simari, I. Rahwan (Eds.), Argumentation in Artificial Intelligence, p. 45-64. Springer.

Berreby F., Bourgne G., Ganascia J.-G. (2015). Modelling moral reasoning and ethical responsibility with logic programming. In 20th LPAR, p. 532-548.

Bordini R., Fisher M., Visser W., Wooldridge M. (2003). Verifiable multi-agent programs. In 1st ProMAS.

Brazier F., Eck P. van, Treur J. (1997). Simulating social phenomena. In LNEMS, vol. 456, p. 103-109.

Brewka G., Benferhat S., Le Berre D. (2004). Qualitative choice logic. Artif. Intell., vol. 157, no 1-2, p. 203-237.

Bringsjord S., Ghosh R., Pane-Joyce J. (2016). Deontic counteridenticals and the design of ethically correct intelligent agents: First steps. In 1st EDIA, p. 38-43.

Broersen J., Dastani M., Huang Z., Hulstijn J., van der Torre L. (2001). The BOID architecture: conflicts between beliefs, obligations, intentions and desires. In 5th AA, p. 9-16.

Calardo E., Governatori G., Rotolo A. (2015). Semantics for modelling reason-based preferences. In PRIMA 2015: Principles and Practice of Multi-Agent Systems - 18th International Conference, Bertinoro, Italy, October 26-30, 2015, Proceedings, p. 101-117.

Chisholm R. M. (1963). Contrary-to-duty imperatives and deontic logic. Analysis, vol. 24, no 2, p. 33–36.

Coelho H., Rocha Costa A. da. (2009). On the intelligence of moral agency.

Cointe N., Bonnet G., Boissier O. (2016). Ethical judgment of agents’ behaviors in multi-agent systems. In 15th International Conference on Autonomous Agents & Multiagent Systems, p. 1106-1114.

Comte-Sponville A. (2012). La philosophie. PUF.

Cossentino M., Potts C. (2002). A CASE tool supported methodology for the design of multiagent systems. In SERP.

Damasio A. (2008). Descartes’ error: Emotion, reason and the human brain. Random House.

Dastani M. (2008). 2APL: a practical agent programming language. JAAMAS, vol. 16, p. 214–248.

de Giacomo G., Lespérance Y., Levesque H. J. (2000). ConGolog, a concurrent programming language based on the situation calculus. Artificial Intelligence, vol. 121, no 1-2, p. 109-169.

Dennis L., Fisher M., Winfield A. (2015). Towards Verifiably Ethical Robot Behaviour. In Artificial Intelligence and Ethics AAAI Workshop.

Fisher M. (1994). A survey of Concurrent METATEM – the language and its applications. In 1st ICTL, p. 480-505.

Friedman B. (1996). Value-sensitive design. Interactions, vol. 3, no 6, p. 16-23.

Friedman B., Kahn P., Borning A., Huldtgren A. (2013). Value sensitive design and information systems. In Early engagement and new technologies: Opening up the laboratory, p. 55-95. Springer Netherlands.

Ganascia J. (2007). Modeling ethical rules of lying with answer set programming. Ethics and Information Technology, vol. 9, p. 39-47.

Greene J., Haidt J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, vol. 6, no 12, p. 517-523.

Horty J. F. (1994). Moral dilemmas and nonmonotonic logic. Journal of philosophical logic, vol. 23, no 1, p. 35–65.

Hubner J., Sichman J., Boissier O. (2002). Spécification structurelle, fonctionnelle et déontique d’organisations dans les SMA. In JFIADSMA. Hermès.

Kacprzak M., Lomuscio A., Penczek W. (2004). Verification of multiagent systems via unbounded model checking. In 3rd AAMAS.

Future of Life Institute. (2015). Research priorities for robust and beneficial artificial intelligence.

Martelli M., Mascardi V., Zini F. (n.d.). CaseLP: a Complex Application Specification Environment based on Logic Programming. In 8th ICLP.

McDermott D. (2008). Why ethics is a high hurdle for AI. In North American Conference on Computers and Philosophy.

McIntyre A. (2014). Doctrine of double effect. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy, Winter ed.

McLaren B. (2006). Computational models of ethical reasoning: challenges, initial steps, and future directions. IEEE Intelligent Systems, vol. 21, no 4, p. 29-37.

Mermet B., Simon G. (2011). Specifying recursive agents with GDTs. JAAMAS, vol. 23, no 2, p. 273-301.

Mermet B., Simon G. (2013). A new proof system to verify GDT agents. In IDC, vol. 511, p. 181-187. Springer.

Moor J. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, vol. 21, no 4, p. 18-21.

Nissenbaum H. (2005). Values in technical design. In Encyclopedia of Science, Technology and Ethics.

Nowak A., Radzik T. (1994). A solidarity value for n-person transferable utility games. International Journal of Game Theory, vol. 23, p. 43-48.

CERNA (Commission de réflexion sur l’Éthique de la Recherche en sciences et technologies du Numérique d’Allistene). (2014). Éthique de la recherche en robotique. Technical report.

Owre S., Shankar N., Rushby J. (1992). PVS: A prototype verification system. In 11th CADE.

Raimondi F., Lomuscio A. (2004). Verification of multiagent systems via ordered binary decision diagrams: an algorithm and its implementation. In 3rd AAMAS.

Rao A., Georgeff M. (1995). BDI agents: from theory to practice. Technical Note 56. AAII.

Ricoeur P. (1990). Soi-même comme un autre. Points Essais.

Rokeach M. (1973). The nature of human values. New York: Free Press.

Russo A., Miller R., Nuseibeh B., Kramer J. (2001). An abductive approach for analysing event-based requirements specifications. Technical report. Department of Computing, Imperial College.

Sabas A., Delisle S., Badri M. (2002). A comparative analysis of multiagent system development methodologies: Towards a unified approach. In Cybernetics and systems, p. 599-604. Austrian Society for Cybernetics Studies.

Saptawijaya A., Pereira L. (2014). Towards modeling morality computationally with logic programming. In 16th PADL, p. 104-119.

Schwartz S. (2012). An overview of the Schwartz theory of basic values. Online Readings in Psychology and Culture, vol. 2, no 1.

Shapiro S., Lespérance Y., Levesque H. J. (2002). The Cognitive Agents Specification Language and Verification Environment for Multiagent Systems. In 2nd AAMAS, p. 19-26.

Shilton K., Koepfler J., Fleischmann K. (2014). How to see values in social computing: methods for studying values dimensions. In 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, p. 426-435.

Schwartz S., Bilsky W. (1990). Toward a theory of the universal content and structure of values: Extensions and cross-cultural replications. Journal of Personality and Social Psychology, vol. 58, p. 878-891.

Timmons M. (2012). Moral theory: An introduction. Rowman and Littlefield.

Tufis M., Ganascia J.-G. (2012). Normative rational agents: A BDI approach. In 1st workshop on rights and duties of autonomous agents, p. 37-43. CEUR Proceedings Vol. 885.

van Marrewijk M., Werre M. (2003). Multiple levels of corporate sustainability. Journal of Business Ethics, vol. 44, no 2-3, p. 107-119.

Wiener Y. (1988). Forms of value systems: A focus on organisational effectiveness and cultural change and maintenance. Academy of Management Review, vol. 13, no 4, p. 534-545.

Winfield A., Blum C., Liu W. (2014). Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection. In LNCS, vol. 8717, p. 85-96.