A critical perspective on guidelines for responsible and trustworthy artificial intelligence
Ekmekci, Perihan Elif
Artificial intelligence (AI) is among the fastest developing areas of advanced technology in medicine. The most important quality of AI, which distinguishes it from other advanced technology products, is its ability to improve its original program and decision-making algorithms through deep learning. This difference is why AI raises ethical issues beyond those of other advanced technology artifacts. The ethical issues of AI technology range across a wide spectrum, from the privacy and confidentiality of personal data to the ethical status and value of AI entities, depending on their deep learning capabilities and the scope of the domains in which they operate. Developing ethical norms and guidelines for the planning, development, production, and use of AI technology has become an important means of addressing these problems. In this respect, three outstanding documents have been produced:

1. The Montréal Declaration for Responsible Development of Artificial Intelligence
2. Ethics Guidelines for Trustworthy AI
3. Asilomar Artificial Intelligence Principles

In this study, these three documents will be analyzed with respect to the ethical principles and values they involve, their perspectives on ethical issues, and their prospects for ethical reasoning when one or more of these values and principles are in conflict. The sufficiency of these guidelines for addressing current and prospective ethical issues emerging from the presence of AI technology in medicine will then be evaluated. The discussion will be pursued in terms of the ambiguity of the guidelines' interlocutors and their efficiency in working out ethical dilemmas that occur in practical life.