September 19, 2020

The Challenges of Ethics and Roboethics in the conduct of hostilities and in everyday technology

By Marco Pizzorno.

New types of conflict are oriented towards autonomous intelligence capable of correcting itself and evolving from previous errors. The current dynamics of warfare are changing the rules of combat, and consequently even the fundamental legal guarantees are forced to chase the future and its challenges.

Technological research currently focuses on new forms of intelligence: AI, or artificial intelligence. The use of drones in recent conflicts raises many questions about how and when a virtual conscience is able to recognize and distinguish what is considered a military target from a civilian or a civilian object. How does a machine interpret the principle of distinction or of proportionality in the theater of war?

How does it work?

The decision-making structure of these new systems is built on a decision tree through which each action is assessed. The choice made at each branch refers to the settings of the software, configured according to the purpose of the program.
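The idea of a decision tree configured by software settings can be sketched as follows. This is a minimal, purely illustrative example: the attributes, thresholds and actions are hypothetical, not drawn from any real system.

```python
# Minimal decision-tree sketch: each internal node tests one attribute of
# the observed situation and routes to a branch; leaves hold the action
# the software has been configured to take.

class Node:
    def __init__(self, attribute=None, branches=None, action=None):
        self.attribute = attribute      # attribute tested at this node
        self.branches = branches or {}  # attribute value -> child Node
        self.action = action            # set only on leaf nodes

def evaluate(node, observation):
    """Walk the tree until a leaf (an action) is reached."""
    while node.action is None:
        node = node.branches[observation[node.attribute]]
    return node.action

# Hypothetical configuration: the "settings of the software"
tree = Node("signal_detected", {
    True: Node("confidence", {
        "high": Node(action="flag_for_human_review"),
        "low": Node(action="continue_surveillance"),
    }),
    False: Node(action="continue_surveillance"),
})

print(evaluate(tree, {"signal_detected": True, "confidence": "low"}))
# -> continue_surveillance
```

The action reached depends entirely on how the tree was configured beforehand, which is exactly why the ethical questions in this article fall on the humans who set those branches.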

Different solutions belong to different plans of action defined by algorithms. These algorithms allow the definition of both basic knowledge and expanded knowledge, that is, knowledge created through experience. The evolution of the machine fueled by experience is the domain of ML, machine learning, through which machines refine their behavior with increasingly precise and detailed language and commands.
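The distinction between basic knowledge and knowledge expanded through experience can be sketched with a toy classifier. Here "basic knowledge" is a fixed set of labelled examples and "expanded knowledge" grows as new experience is recorded; classification uses one-nearest-neighbour over numeric features. All data and labels are hypothetical.

```python
# Sketch: a knowledge base of (features, label) examples. "Basic knowledge"
# is what the machine starts with; "expanded knowledge" is added through
# experience and changes future answers.

import math

knowledge = [            # basic knowledge
    ((0.9, 0.1), "A"),
    ((0.1, 0.9), "B"),
]

def learn(features, label):
    """Expand the knowledge base with a new experience."""
    knowledge.append((features, label))

def classify(features):
    """Label a new case by its closest known example (1-nearest-neighbour)."""
    return min(knowledge, key=lambda kb: math.dist(kb[0], features))[1]

print(classify((0.8, 0.2)))   # closest to (0.9, 0.1) -> A
learn((0.8, 0.2), "C")        # new experience
print(classify((0.8, 0.2)))   # now an exact match -> C
```

The point of the sketch is that the machine's answer for the same input changes after experience is added, which is what distinguishes ML from a fixed program.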

Human knowledge is transferred to machines in different ways, the most important of which are based on the Theory of Formal Languages and on Decision Theory. The first rests on generative, recognitive, denotational, algebraic and transformational dynamics that refer to the theory of strings. The second rests on a decision tree designed to evaluate actions, reactions and possible consequences.
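The Decision Theory side, evaluating actions by their possible consequences, is commonly formalized as expected-utility maximization: each candidate action has outcomes with probabilities and utilities, and the machine picks the action whose expected utility is highest. The numbers below are hypothetical.

```python
# Decision Theory sketch: each action maps to (probability, utility) pairs
# describing its possible consequences; choose the action with the highest
# expected utility.

actions = {
    "act":  [(0.7, 10), (0.3, -50)],  # good outcome likely, bad one costly
    "wait": [(1.0, 2)],               # small but certain benefit
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "act" scores 0.7*10 + 0.3*(-50) = -8, "wait" scores 2 -> wait
```

Note how the choice hinges entirely on the utilities assigned to consequences: in a military context, deciding what numerical "cost" a civilian harm carries is precisely where ethics enters the software.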

Ethics and Roboethics applied to Robotics

To grasp the terminology, it is enough to research the word itself: "ethics" derives from the ancient Greek ἔθος, meaning behavior or custom. It is defined as the branch of philosophy that studies the rational foundations that allow a deontological status to be assigned to human behavior.

Aristotle, in Book I of the Nicomachean Ethics, defines it thus: «Every technique and every research, as well as every action and every choice, tend to some good, as it seems; therefore the good has rightly been defined as that to which everything tends». Roboethics, on the other hand, is ethics applied to robotics. It must be specified that this ethics is administered by humans and, at the moment, not by robots: it concerns the professionals who design and build them.

The elaboration of roboethics involves international commissions composed ad hoc of members from different fields, such as jurisprudence and medicine. The common path towards this new universe, which also includes the military and security sector, must unite the factors of human ethics with those of roboethics, to be applied to a robotics which, through artificial intelligence and machine learning, has its own development and its own consciousness.

Will an artificial conscience succeed in sparing human life death and suffering in the theater of war? Who distinguishes a civilian who is not involved in hostilities, but unable to leave a state targeted by military operations, from a belligerent? How is the principle of proportionality transferred to software? The fears are not to be underestimated, and the doubts are puzzling.

What is the future?

There are many concerns, also due to the technological arms race among military superpowers. Although current roboethics policies require compliance with the fundamental principles and norms sanctioned and universally accepted in the Charter on Human Rights, thorough monitoring by the international community will be necessary.

The world of cyborgs is a reality, and so will be the world of new cybercrimes. A very serious commitment is necessary for the protection and safeguarding of human life. The Asilomar Principles currently buffer a phenomenon that develops at impressive speed, but, as many professionals rightly note, they are not sufficient to guarantee the rights and life of mankind. The policies related to the protection of IHL report early deterrent work on AI and ML.

The work published by the ICRC is very interesting: "Artificial intelligence and machine learning in armed conflict: A human-centred approach", Geneva, 6 June 2019. All the worries on the planet could focus on point 3, "Use of AI and machine learning by conflict parties", which reads: "Many of the ways in which parties to armed conflict – whether States or non-State armed groups – might use AI and machine learning in the conduct of warfare, and their potential implications, are not yet known. However, there are at least three overlapping areas that are relevant from a humanitarian perspective, including for compliance with international humanitarian law."

"Not yet known": is it a starting point, or the beginning of the end? The questions now are: how is human life protected, and from what? Is software that self-develops and commits crimes punishable, and how?
