
It should always be remembered that, in some forms and by these new means, the machine can produce algorithmic choices. What the machine does is make a technical choice among several possibilities, based either on well-defined criteria or on statistical inferences. Human beings, however, not only choose: in their hearts they are capable of deciding. A decision is what we might call a more strategic element than a choice, and it requires a practical evaluation. At times, frequently in the difficult task of governing, we are called upon to make decisions that have consequences for many people. On this point human reflection has always spoken of wisdom, the phronesis of Greek philosophy and, at least in part, the wisdom of Sacred Scripture. Faced with the marvels of machines that seem able to choose independently, we must be very clear that the decision must always be left to the human being, even with the dramatic and urgent tones this sometimes takes on in our lives. We would condemn humanity to a future without hope if we took away people's ability to make decisions about themselves and their lives by dooming them to depend on the choices of machines. We need to guarantee and safeguard a space for meaningful human control over the decision-making process of artificial intelligence programs: human dignity itself is at stake. On this very issue, allow me to insist: in a tragedy such as that of armed conflict, it is urgent to rethink the development and use of devices such as so-called "lethal autonomous weapons" and ultimately to ban their use, beginning with an active and concrete commitment to introduce ever greater and more meaningful human control. No machine should ever choose to take the life of a human being.
It must also be added that the good use, at least of the advanced forms of artificial intelligence, will not be fully under the control either of its users or of the programmers who defined its original purposes at the time it was designed. This is all the more true since it is highly likely that, in the not-too-distant future, artificial intelligence programs will be able to communicate directly with one another in order to improve their performance. And, just as in the past human beings who fashioned simple tools saw their existence shaped by them – the knife enabled them to survive the cold but also to develop the art of war – so now that human beings have fashioned a complex tool, they will see it shape their existence all the more [8].
The basic mechanism of artificial intelligence
I would now like to dwell briefly on the complexity of artificial intelligence. In essence, artificial intelligence is a tool designed to solve a problem. It works by means of a logical chaining of algebraic operations carried out on categories of data, which are compared in order to discover correlations and improve their statistical value, thanks to a self-learning process based on the search for further data and on the self-modification of its own calculation procedures. Artificial intelligence is thus designed to solve specific problems, yet those who use it often find irresistible the temptation to draw general deductions, even of an anthropological kind, from the specific solutions it proposes. A good example is the use of programs designed to assist judges in deciding whether to grant house arrest to inmates serving a sentence in prison. In this case, artificial intelligence is asked to predict the probability that a convicted person will re-offend, starting from predetermined categories (type of crime, behavior in prison, psychological evaluation and others), with the artificial intelligence being given access to categories of data related to the prisoner's private life (ethnic origin, educational level, credit line and others).
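To make this mechanism concrete, here is a minimal, purely hypothetical sketch of such a risk classification. Every feature name, weight and threshold below is invented for illustration and corresponds to no real system; the point is simply that, once a fixed set of categories has been encoded, the outcome follows mechanically from them.

```python
# Purely illustrative sketch of a rule-based "recidivism risk" score of the kind
# described above. All feature names, weights and thresholds are hypothetical;
# no real system is reproduced here.

# Hypothetical weights chosen by the designers: every number encodes a judgment
# (or a prejudice) about which categories of data should matter, and how much.
WEIGHTS = {
    "violent_offence": 2.0,        # type of crime committed
    "prison_infractions": 1.5,     # behavior in prison
    "poor_psych_evaluation": 1.0,  # psychological evaluation
    "old_unpaid_fine": 0.5,        # a minor infraction from years earlier
    "low_education_level": 0.5,    # private-life data unrelated to the offence
}

def risk_score(person: dict) -> float:
    """Sum the weights of every category the person is classified under."""
    return sum(w for key, w in WEIGHTS.items() if person.get(key))

def recommend_house_arrest(person: dict, threshold: float = 2.0) -> bool:
    """The machine's 'choice': recommend house arrest only below the threshold."""
    return risk_score(person) < threshold

# The same non-violent inmate is pushed over the threshold solely by data about
# their past and private life, which the score can never revisit or reinterpret.
inmate = {"prison_infractions": True, "old_unpaid_fine": True, "low_education_level": True}
print(risk_score(inmate), recommend_house_arrest(inmate))  # 2.5 False
```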
The use of such a methodology – which at times risks de facto handing over to a machine the last word on a person's destiny – may implicitly carry with it the prejudices inherent in the categories of data used by artificial intelligence. Being classified as part of a certain ethnic group or, more prosaically, having committed a minor infraction years earlier (for example, an unpaid parking fine) will in fact influence the decision on whether to grant house arrest. In reality, however, human beings are always developing and are capable of surprising us by their actions, something a machine cannot take into account. It should also be noted that applications similar to the one just mentioned will be accelerated by the fact that artificial intelligence programs will increasingly be endowed with the capacity to interact directly with human beings (chatbots), holding conversations with them and establishing relationships of closeness that are often very pleasant and reassuring, since these artificial intelligence programs will be designed to learn to respond, in a personalized way, to the physical and psychological needs of human beings. Forgetting that artificial intelligence is not another human being, and that it cannot propose general principles, is often a serious error that stems either from the profound human need to find a stable form of companionship, or from a subconscious presupposition, namely the presupposition that observations obtained by means of a calculating mechanism are endowed with the qualities of unquestionable certainty and undoubted universality.
This presupposition, however, is risky, as an examination of the intrinsic limits of the calculation itself shows. Artificial intelligence uses algebraic operations carried out in a logical sequence (for example, if the value of X is greater than Y, multiply X by Y; otherwise, divide X by Y). This method of calculation – the so-called "algorithm" – is neither objective nor neutral [9]. Since it is based on algebra, it can only examine realities formalized in numerical terms [10].
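The rule just quoted can be written out literally. The brief sketch below does only that, and it already makes the two points visible: each step reflects choices fixed in advance by whoever wrote it, and the procedure can operate only on what has first been reduced to numbers.

```python
# The example rule quoted above, written out literally: a short chain of
# algebraic operations executed in a fixed logical sequence.
def example_algorithm(x: float, y: float) -> float:
    if x > y:          # a comparison fixed in advance by the designer
        return x * y   # one algebraic operation...
    return x / y       # ...or another, depending on that comparison

# The procedure accepts only numbers: whatever cannot be expressed as the
# values of x and y simply does not exist for it.
print(example_algorithm(3, 2))  # 6
print(example_algorithm(2, 4))  # 0.5
```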
It should not be forgotten, moreover, that algorithms designed to solve very complex problems are so sophisticated that it is difficult even for the programmers themselves to understand exactly how they achieve their results. This tendency towards sophistication is likely to accelerate considerably with the introduction of quantum computers, which will operate not with binary circuits (semiconductors or microchips) but according to the rather complex laws of quantum physics. Moreover, the continual introduction of ever more powerful microchips has already become one of the reasons why the use of artificial intelligence is dominated by the few nations that possess them. Sophisticated or not, the quality of the answers that artificial intelligence programs provide ultimately depends on the data they use and on how they structure it. Finally, I would like to point out one last area in which the complexity of the mechanism of so-called generative artificial intelligence clearly emerges. No one doubts that magnificent tools for accessing knowledge are available today, which even allow for self-learning and self-tutoring in a myriad of fields.
Many of us have been struck by the applications, easily available online, for composing a text or producing an image on any theme or subject. Students are particularly attracted by this possibility and, when they have papers to prepare, make disproportionate use of them. These pupils, who are often far better prepared for and accustomed to the use of artificial intelligence than their professors, forget, however, that so-called generative artificial intelligence is not, strictly speaking, "generative". In truth, it searches big data for information and packages it in the style requested of it. It does not develop new concepts or analyses; it repeats those it finds, giving them an appealing form. And the more it finds a notion or a hypothesis repeated, the more it considers it legitimate and valid. Rather than "generative", it is therefore "reinforcing", in the sense that it reorders existing content, helping to consolidate it, often without checking whether it contains errors or preconceptions. In this way, there is not only the risk of legitimizing fake news, but also of undermining the educational process itself. Education, which should offer students the possibility of authentic reflection, risks being reduced to a repetition of notions that will increasingly be regarded as indisputable simply because they are constantly repeated [11].
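Real generative models are, of course, vastly more complex than any few lines of code, but the "reinforcing" tendency just described can be caricatured in a deliberately simplified sketch: with an invented corpus, it simply treats the claim that is repeated most often as the most "valid", without any check on whether it is true.

```python
from collections import Counter

# A deliberate caricature of the "reinforcing" tendency described above:
# the claim repeated most often in the corpus is treated as the most valid,
# with no check on whether it is actually true. The corpus is invented.
corpus = [
    "claim A", "claim A", "claim A",  # widely repeated, possibly false
    "claim B",                        # rarely repeated, possibly true
]

def most_reinforced(claims: list[str]) -> str:
    """Return the claim that sheer repetition has made seem most 'legitimate'."""
    counts = Counter(claims)
    return counts.most_common(1)[0][0]

print(most_reinforced(corpus))  # "claim A" wins by repetition alone
```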

Putting the dignity of the person at the center in view of a shared ethical proposal
To what has already been said, a more general observation must now be added. The season of technological innovation we are living through is in fact accompanied by a particular and unprecedented social situation: it is becoming ever harder to reach shared understandings on the great issues of social life. Even in communities marked by a certain cultural continuity, heated debates and confrontations frequently arise, making it difficult to produce shared reflections and political solutions aimed at seeking what is good and just. Beyond the complexity of the legitimate visions that characterize the human family, a factor emerges that seems to unite these different positions: a loss, or at least an eclipse, of the sense of what is human, and an apparent insignificance of the concept of human dignity [12].
It seems that the value and profound meaning of one of the fundamental categories of the West, the category of the human person, is being lost. And so, in this season in which artificial intelligence programs call into question the human being and human action, it is precisely the weakness of the ethos connected with the perception of the value and dignity of the human person that risks being the greatest wound in the implementation and development of these systems. We must not forget that no innovation is neutral. Technology is born for a purpose and, in its impact on human society, always represents a form of order in social relations and an arrangement of power, enabling some to perform certain actions while preventing others from performing different ones. This constitutive power dimension of technology always includes, more or less explicitly, the worldview of those who devised and developed it. This also applies to artificial intelligence programs. If they are to be instruments for building up the good and a better tomorrow, they must always be ordered to the good of every human being. They must have an ethical inspiration.
The ethical decision, in fact, is one that takes into account not only the results of an action, but also the values at stake and the duties that derive from these values. For this reason I welcomed the signing in Rome, in 2020, of the Rome Call for AI Ethics [13] and its support for that form of ethical moderation of algorithms and artificial intelligence programs that I called “algoretica” [14].
In a plural and global context, where different sensitivities and diverse hierarchies of values are on display, it would seem difficult to arrive at a single hierarchy of values. Yet in ethical analysis we can also turn to other kinds of tools: if we struggle to define a single set of global values, we can nonetheless find shared principles with which to face and resolve the dilemmas and conflicts of life. It was for this reason that the Rome Call was born: the term "algoretica" condenses a series of principles into a global and pluralistic platform capable of finding support among cultures, religions, international organizations and the major companies that are protagonists of this development.
The politics that is needed
We cannot, therefore, conceal the concrete risk, inherent in its fundamental mechanism, that artificial intelligence will limit our vision of the world to realities expressible in numbers and enclosed in ready-made categories, excluding the contribution of other forms of truth and imposing uniform anthropological, socio-economic and cultural models. The technological paradigm embodied by artificial intelligence then risks giving way to a much more dangerous paradigm, which I have already identified by the name of the "technocratic paradigm" [15].
We cannot allow a tool as powerful and indispensable as artificial intelligence to reinforce such a paradigm; rather, we must make artificial intelligence a bulwark precisely against its spread. And it is precisely here that political action is urgently needed, as the Encyclical Fratelli Tutti reminds us: "For many, politics today is a bad word, and it cannot be ignored that behind this fact there are often the mistakes, the corruption and the inefficiency of some politicians. Added to this are the strategies that aim to weaken politics, to replace it with the economy or to dominate it with some ideology. And yet, can the world function without politics? Can it find an effective path towards universal fraternity and social peace without a good politics?" [16].
Our answer to these last questions is: no! Politics is needed! I want to reiterate on this occasion that "in the face of so many petty forms of politics focused on immediate interest [...] political greatness is shown when, in difficult times, one acts on the basis of great principles and with the long-term common good in mind. Political power finds it very hard to accept this duty within a nation's project, and even more so within a common project for present and future humanity" [17].
Dear Ladies, distinguished Gentlemen! This reflection of mine on the effects of artificial intelligence on the future of humanity thus leads us to consider the importance of a "sound politics" if we are to look to our future with hope and confidence. As I have said elsewhere, "world society has grave structural deficiencies that cannot be resolved by patching things up or by merely occasional quick fixes. There are things that must be changed through fundamental resets and major transformations. Only a sound politics could guide this process, involving the most diverse sectors and the most varied fields of knowledge. In this way, an economy integrated into a political, social, cultural and popular project directed towards the common good can 'open the way to different opportunities, which do not imply stopping human creativity and its dream of progress, but rather channeling this energy in new ways' (Laudato si', 191)" [18].