The Vatican and AI: Rome Calls For AI Ethics

March 6, 2020

Image: Gerd Altmann, Artificial Intelligence / Pixabay

The Catholic Church has a reputation for being behind the times. While there is some truth in that generalization, since the Church is often slow to deal with contemporary problems, it is an over-simplification of the Catholic Church’s legacy. The Catholic Church has consistently promoted education and the integration of faith and reason. It has a history of engaging the sciences even as it has cautioned against the misapplication of those sciences, whether in theories that lack sufficient evidence or in fads that die out. It is interested in keeping true to itself and its tradition, which makes it seem as if it is always looking backwards, but it also understands that tradition needs to be a living tradition. The Catholic Church does not expect new wine to be put in old wineskins; it does not expect the needs of the present to lead to practices (or understandings) identical to those found in the past.

Many great scientists and philosophers have found their Catholic faith to be an important foundation for their work, even as some fundamentalistically minded Catholics have resisted every attempt to adapt the faith to the insights scientists and philosophers offer it. As a whole, the Catholic tradition has been supportive of the sciences, for it wants its people to have a reasonable faith that goes beyond mere fideism.

It is in this light that we should not be surprised that the Catholic Church has been interested in exploring the use and application of AI programming. It wants general guidelines to be established that will make sure AI does not disrupt the common good. On February 28, 2020, the Vatican released a document, the Rome Call for AI Ethics, agreed upon by Church officials along with representatives of IBM, Microsoft, and other experts, indicating the principles they all agree should be followed to make sure AI does not lead to harming humanity.

The Catholic Church has put itself at the forefront of the dialogue. As Rebecca Heilweil indicates, the Vatican has shown significant interest in the role AI programming will have for the future of humanity, and has promoted research to learn the possibilities and limitations of AI in general:

The Catholic Church is actually no stranger to artificial intelligence. In recent years, much of its focus on the technology has been in consultation with expert researchers in the field and representatives of major technology companies. The Dominican Order has even supported a priest-led research organization called Optic since 2012, which, among other things, researches AI and its potential to marginalize people.[1]

For the Vatican, as with many tech companies, there is the recognition that AI cannot be halted, and indeed, that it offers many valuable contributions to society. However, its potential dangers mean that regulations will need to be put in place to make sure those dangers are minimized or eliminated. Thus, Taylor Lyles reports, “Vatican officials are calling for stricter ethical standards on the development of artificial intelligence, with tech giants IBM and Microsoft being the first companies to sign its new initiative.”[2] Some think the Rome Call for AI Ethics does not go far enough, especially because there is not yet any way to enforce such regulations; thus, without legal restrictions, Thomas Macaulay suggests, many tech companies can at once pretend to be interested in following ethical guidelines and use their support of them to “deflect criticism and ward off government regulation.”[3]

Granted, such regulations will need to be put in place, but before they are enacted, there needs to be an examination of what those regulations should actually be. Without contemplating the various potential risks of AI, and also the possible consequences behind particular regulations, we risk creating regulations which, far from being helpful, would create new or worse problems of their own. This, then, is why the Vatican is interested in the first stage of the work, which is discerning the principles needed to make such regulations. “The Call for AI Ethics is intended more as an abstract incitement to AI companies to work towards ethical AI,” Simon Chandler explains, “rather than a concrete blueprint for how they might actually do this on the ground.”[4]

So, what did the Rome Call for AI Ethics suggest? Primarily, that all AI development must keep the good of humanity at its center. One way this is to be done is to make sure that those who work on technological developments, including those who work with AI, place their work at the service of humanity:

New technology must be researched and produced in accordance with criteria that ensure it truly serves the entire “human family” (Preamble, Univ. Dec. Human Rights), respecting the inherent dignity of each of its members and all natural environments, and taking into account the needs of those who are most vulnerable. The aim is not only to ensure that no one is excluded, but also to expand those areas of freedom that could be threatened by algorithmic conditioning.[5]

This includes an interest in the environment, recognizing that all technology must be made considering the ecological implications of its use. Likewise, it is said that no one should find such advancements lead to any form of discrimination; rather, the good of humanity as a whole includes the good of each and every person and their particular dignity:

In order for technological advancement to align with true progress for the human race and respect for the planet, it must meet three requirements. It must include every human being, discriminating against no one; it must have the good of humankind and the good of every human being at its heart; finally, it must be mindful of the complex reality of our ecosystem and be characterised by the way in which it cares for and protects the planet (our “common and shared home”) with a highly sustainable approach, which also includes the use of artificial intelligence in ensuring sustainable food systems in the future. Furthermore, each person must be aware when he or she is interacting with a machine.[6]

How AI is to work must also be determined and be presentable to everyone, so that its algorithms do not hide any ethical deficiency:

In order for AI to act as a tool for the good of humanity and the planet, we must put the topic of protecting human rights in the digital era at the heart of public debate. The time has come to question whether new forms of automation and algorithmic activity necessitate the development of stronger responsibilities. In particular, it will be essential to consider some form of “duty of explanation”: we must think about making not only the decision-making criteria of AI-based algorithmic agents understandable, but also their purpose and objectives. These devices must be able to offer individuals information on the logic behind the algorithms used to make decisions.[7]
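The “duty of explanation” the document describes can be pictured with a small sketch: a decision routine that returns not only its verdict but also the criteria it applied, so the person affected can inspect the logic. The rules and thresholds below are entirely hypothetical, invented purely for illustration; they are not drawn from the Rome Call itself.

```python
# Toy illustration of a "duty of explanation": the function returns its
# decision together with the rules that produced it, making the logic
# inspectable. All rules and thresholds here are hypothetical examples.

def decide_loan(income: float, debt: float) -> dict:
    reasons = []
    approved = True
    if income < 20_000:        # hypothetical minimum-income rule
        approved = False
        reasons.append("income below 20,000 threshold")
    if debt > income * 0.5:    # hypothetical debt-to-income rule
        approved = False
        reasons.append("debt exceeds 50% of income")
    if approved:
        reasons.append("all criteria satisfied")
    return {"approved": approved, "reasons": reasons}

print(decide_loan(30_000, 20_000))
# → {'approved': False, 'reasons': ['debt exceeds 50% of income']}
```

The point of the sketch is simply that the explanation is produced alongside the decision, rather than reconstructed after the fact, which is one plausible reading of what the document asks of “AI-based algorithmic agents.”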

The document establishes six general principles which should be followed in the development and production of any form of AI technology:

1. Transparency: in principle, AI systems must be explainable;

2. Inclusion: the needs of all human beings must be taken into consideration so that everyone can benefit and all individuals can be offered the best possible conditions to express themselves and develop;

3. Responsibility: those who design and deploy the use of AI must proceed with responsibility and transparency;

4. Impartiality: do not create or act according to bias, thus safeguarding fairness and human dignity;

5. Reliability: AI systems must be able to work reliably;

6. Security and privacy: AI systems must work securely and respect the privacy of users.[8]

Looking through these principles, it is obvious that they are very general, and each requires consideration as to how it is to be addressed. Transparency in the way AI is developed and acts is good, but how exactly is it to be communicated and explained to people who have difficulty understanding how machines “think” and act? Inclusion, obviously, is important, but how exactly is that to be achieved when not everyone is affected the same way by AI development (automated systems will, for example, affect some forms of work more than others)? How, exactly, would programmers be held responsible for what they develop, especially if they try to create a form of AI which is self-learning and transcends the basic algorithms given to it, much as children learn to think for themselves and transcend the lessons given to them? How are AI systems to be made reliable? Who, likewise, will determine whether security and privacy have been secured, and if they have not, what should the consequences be for those who fail to provide such protections?

A form of AI is already with us, and it will only be further enhanced and used in the future. Whether it will truly be some sort of artificial intelligence with a consciousness of its own, or whether all that will be developed are better, more complex algorithms to help humanity in its labors, we cannot regulate AI out of existence. We must, therefore, come together and establish the principles which will make sure AI promotes the common good. This is something science fiction writers, and other futurists, have long known; Isaac Asimov’s Three Laws of Robotics are a prime example:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
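The precedence built into these laws (the First overriding the Second, the Second overriding the Third) can be sketched as a simple priority-ordered check. The action model below is entirely hypothetical, invented only to show how conflicts between the laws are resolved by rank.

```python
# Hypothetical sketch of Asimov's law precedence: a proposed action is
# vetted against the Three Laws in priority order, so a higher law
# always overrides a lower one. The action fields are invented for
# illustration and are not part of Asimov's formulation.

def permitted(action: dict) -> bool:
    # First Law: may not injure a human, or through inaction allow harm.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: must obey human orders, unless obeying would
    # conflict with the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation is subordinate to the first two laws,
    # so an action that endangers the robot is still permitted here.
    return True

# A robot may disobey an order when obeying it would harm a human:
print(permitted({"disobeys_order": True, "order_would_harm_human": True}))  # → True
```

Even this toy version exposes the problem the paragraph below raises: everything hinges on how fields like `harms_human` are defined, which is precisely what the laws themselves leave open.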

Asimov’s laws represent only an early form of the considerations which need to take place today. Indeed, we really need to consider what any and all such regulations entail. Even in Asimov’s laws, as in the Rome Call for AI Ethics, we find principles which sound good but which become more difficult to realize once we begin to question what they really mean. What, for example, does it mean to cause harm? Is it possible to prevent all kinds of harm? If not, what flexibility should be put in place so that an AI programmed to follow such a requirement does not find itself in an impossible situation? Certainly, these concerns show why it is important first to establish a general blueprint before actually creating and enforcing regulations, because without considering the implications of particular rules, unintended results are likely to follow (such as allowing terrorists to give orders which must be obeyed, orders which might not cause direct harm but which nonetheless disrupt society; Peter Singer had such issues in mind when he warned how Asimov’s laws could have been exploited by Osama bin Laden for his own nefarious purposes).

We are living in a world far different from what our ancestors knew and experienced. Technology advances quickly, and what is possible with such technology likewise advances exponentially. While we are unable to halt scientific progress, we can and should follow along with it, and consider the potential implications involved as well as the threats such applications could present to humanity. Those involved in technological advances must be concerned with the ethical implications of their work and its products. The Vatican is right to think this through, not just internally, but with those who are involved with such technological advances. The Rome Call for AI Ethics is a good start, but we need more, much more, to come out in the near future.


[1]  Rebecca Heilweil, “The Pope’s Plan to Fight Back Against Evil AI” in Vox (Feb. 28, 2020).

[2] Taylor Lyles, “The Catholic Church Proposes AI Regulations That ‘Protect People’” in The Verge (Feb. 28, 2020).

[3] Thomas Macaulay, “Vatican’s AI Ethics Plan Lacks the Legal Restrictions it Needs to be Effective,” in Neural (March 2, 2020).

[4] Simon Chandler, “Vatican AI Ethics Pledge Will Struggle To Be More Than PR Exercise,” in Forbes (Mar. 4, 2020).

[5] Rome Call for AI Ethics (Feb. 28, 2020).

[6] Rome Call for AI Ethics.

[7] Rome Call for AI Ethics.

[8] Rome Call for AI Ethics.

 
