Morality of AI depends on human choices, Vatican says in new document

The ChatGPT app is seen on a phone placed atop a keyboard in this photo taken in Rome March 8, 2024. (CNS photo/Lola Gomez)

VATICAN CITY (CNS) – "Technological progress is part of God's plan for creation," the Vatican said, but people must take responsibility for using technologies like artificial intelligence to help humanity and not harm individuals or groups.

"Like any tool, AI is an extension of human power, and while its future capabilities are unpredictable, humanity's past actions provide clear warnings," said the document signed by Cardinals Víctor Manuel Fernández, prefect of the Dicastery for the Doctrine of the Faith, and José Tolentino de Mendonça, prefect of the Dicastery for Culture and Education.

The document, approved by Pope Francis Jan. 14 and released by the Vatican Jan. 28 – the day after International Holocaust Remembrance Day – said "the atrocities committed throughout history are enough to raise deep concerns about the potential abuses of AI."

Titled "Antiqua et Nova (Ancient and New): Note on the Relationship Between Artificial Intelligence and Human Intelligence," the document focused particularly on the moral use of technology and on the impact artificial intelligence already is having or could have on interpersonal relationships, education, work, art, health care, law, warfare and international relations.

AI technology is used not only in apps like ChatGPT and search engines, but in advertising, self-driving cars, autonomous weapons systems, security and surveillance systems, factory robotics and data analysis, including in health care.

The popes and Vatican institutions, particularly the Pontifical Academy of Sciences, have been monitoring and raising concerns about the development and use of artificial intelligence for more than 40 years.

"Like any product of human creativity, AI can be directed toward positive or negative ends," the Vatican document said. "When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation."

"Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here," the dicasteries said. "Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used."

Human beings, not machines, make moral decisions, the document said. So, "it is important that ultimate responsibility for decisions made using AI rests with the human decision-makers and that there is accountability for the use of AI at each stage of the decision-making process."

The Vatican document insisted that while artificial intelligence can quickly perform some very complex tasks or access vast amounts of information, it is not truly intelligent, at least not in the same way human beings are.

"A proper understanding of human intelligence," it said, "cannot be reduced to the mere acquisition of facts or the ability to perform specific tasks. Instead, it involves the person's openness to the ultimate questions of life and reflects an orientation toward the true and the good."

Human intelligence also involves listening to others, empathizing with them, forming relationships and making moral judgments – actions that even the most sophisticated AI programs cannot perform, it said.

"Between a machine and a human, only the human can be sufficiently self-aware to the point of listening and following the voice of conscience, discerning with prudence, and seeking the good that is possible in every situation," the document said.

The Vatican dicasteries issued several warnings or cautions in the document, calling on individual users, developers and even governments to exercise control over how AI is used and to commit "to ensuring that AI always supports and promotes the supreme value of the dignity of every human being and the fullness of the human vocation."

First, they said, "misrepresenting AI as a person should always be avoided; doing so for fraudulent purposes is a grave ethical violation that could erode social trust. Similarly, using AI to deceive in other contexts – such as in education or in human relationships, including the sphere of sexuality – is also to be considered immoral and requires careful oversight to prevent harm, maintain transparency, and ensure the dignity of all people."

The dicasteries warned that "AI could be used to perpetuate marginalization and discrimination, create new forms of poverty, widen the 'digital divide,' and worsen existing social inequalities."

While AI promises to boost productivity in the workplace "by taking over mundane tasks," the document said, "it frequently forces workers to adapt to the speed and demands of machines rather than machines being designed to support those who work."

Parents, teachers and students also need to be careful with their reliance on AI, it said, and they need to know its limits.

"The extensive use of AI in education could lead to the students' increased reliance on technology, eroding their ability to perform some skills independently and worsening their dependence on screens," it said.

And while AI may provide information, the document said, it does not actually educate, which requires thinking, reasoning and discerning.

Users must also be aware of AI's "serious risk of generating manipulated content and false information, which can easily mislead people due to its resemblance to the truth. Such misinformation might occur unintentionally, as in the case of AI 'hallucination,' where a generative AI system yields results that appear real but are not" because it is programmed to respond to every request for information, regardless of whether it has access to it.

Of course, the document said, AI's falsehood also "can be intentional: individuals or organizations intentionally generate and spread false content with the aim to deceive or cause harm, such as 'deepfake' images, videos and audio – referring to a false depiction of a person, edited or generated by an AI algorithm."

Military applications of AI technology are particularly worrisome, the document said, because of "the ease with which autonomous weapons make war more viable," AI's potential for removing "human oversight" from weapons deployment and the possibility that autonomous weapons will become the object of a new "destabilizing arms race, with catastrophic consequences for human rights."


