CEMAA: A Preliminary Model Based on the Variability of Ethical Contexts


Léa Guizol, Ardans SAS, France

Ritta Baddoura, Institut Mines-Télécom, France

Corresponding Author Email: lea.guizol@ardans.fr; rittabaddoura@yahoo.fr
Page: 659-682 | DOI: https://doi.org/10.3166/ria.32.659-682
Published: 31 December 2018


Abstract: 

Intelligent agents and robots are gaining autonomy every day, and their deployment in various social contexts is growing. It is therefore essential to enable intelligent machines to integrate an ethical dimension into their decision-making, prior to action. This article presents CEMAA, a Contextual Ethical Model for Artificial Agents, and compares it with other existing models. CEMAA is an original model that focuses on a contextual representation of an ethical problem and of its potential solutions. It makes it possible to take into account various types of ethics, based on virtues, values, deontology, intent, consequences, or acts. By considering the plurality of an agent's potential ethical contexts, i.e. personal, social, cultural, and legal, CEMAA prioritizes a more realistic representation of ethical reasoning and decision-making.

Keywords: 

moral decision making, contextual ethics, ethical model, knowledge representation
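To make the abstract's vocabulary concrete, the following is a minimal sketch of how a contextual ethical model along these lines might be represented in code. It is not from the paper: the `Doctrine` and `Context` enumerations merely restate the ethics types and contexts listed in the abstract, and the `Judgment` scores, context weights, and aggregation rule are purely illustrative assumptions, not CEMAA's actual mechanism.

```python
from dataclasses import dataclass, field
from enum import Enum

class Doctrine(Enum):
    # Types of ethics the abstract says the model can accommodate.
    VIRTUE = "virtue"
    VALUE = "value"
    DEONTOLOGY = "deontology"
    INTENT = "intent"
    CONSEQUENCE = "consequence"
    ACT = "act"

class Context(Enum):
    # Ethical contexts distinguished in the abstract.
    PERSONAL = "personal"
    SOCIAL = "social"
    CULTURAL = "cultural"
    LEGAL = "legal"

@dataclass
class Judgment:
    """One doctrine's verdict on an action, within one context (hypothetical)."""
    doctrine: Doctrine
    context: Context
    score: float  # illustrative scale: -1 (forbidden) .. +1 (required)

@dataclass
class Action:
    name: str
    judgments: list = field(default_factory=list)

    def aggregate(self, weights: dict) -> float:
        # Weight each judgment by the salience of its context in this scenario.
        return sum(j.score * weights.get(j.context, 0.0) for j in self.judgments)

def choose(actions, weights):
    """Pick the candidate action with the highest weighted ethical score."""
    return max(actions, key=lambda a: a.aggregate(weights))

# Hypothetical mini-scenario: two candidate actions judged in the legal context.
brake = Action("brake", [Judgment(Doctrine.CONSEQUENCE, Context.LEGAL, 0.9)])
swerve = Action("swerve", [Judgment(Doctrine.DEONTOLOGY, Context.LEGAL, -0.5)])
best = choose([brake, swerve], {Context.LEGAL: 1.0})
```

The point of the sketch is only the shape of the representation: judgments are indexed both by doctrine and by context, so the same action can be scored differently as the active context varies.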

1. Introduction
2. State of the art: virtues, values and ethical problems
3. Dividing an ethical problem into three parts: the scenario, the doctrines and the results
4. Doctrines and moral elements
5. Comparison of the CEMAA model with others, and its specific features
6. Conclusion and future work
Acknowledgements
References
