|To:||European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG)|
|CC:||European Parliament (Secretariat)|
|||Members of the European Parliament|
|||European & international bodies of experts in AI|
|||European & international News Media|
6-Feb-2020, Athens, Greece
Subject: Open letter on Artificial Intelligence and the Prohibition of Autonomous Weapons Systems (Hellenic Informatics Union – General Assembly Resolution, Dec/2018)
The Hellenic Informatics Union (HIU) is the scientific and professional organization representing Informatics graduates in Greece. Since 2001 we have been working to promote Informatics in every aspect of society, the economy and education, upholding the highest academic standards and our own Code of Ethics, the first ever introduced for Informatics in Greece.
In December 2018, the General Assembly of the HIU unanimously voted in favor of the following intervention-proposal regarding the basic principles of Artificial Intelligence (AI), its implementation framework and an international ban on autonomous weapons systems.
Basic definitions and implementation framework
Artificial Intelligence (AI):
As a concept, AI refers to the general ability of an algorithm to produce results that simulate the accuracy, reliability and cognitive value ("understanding") of solving a specialized or generic problem, one that usually cannot be solved by simple mathematical or analytical methods and would otherwise require the assistance of a domain expert. To this end, such an algorithm is often required to be capable of exemplifying, discovering and linking general concepts (abstraction), encoding knowledge and drawing conclusions from it, exploiting past "experience" (adapting to mistakes), and so on. On a practical level, such algorithms make it possible to optimally solve "closed" cognitive problems, such as games like chess, as well as much more complex real-world problems such as voice recognition, human language understanding, face recognition, handwriting recognition, and analysis and automatic diagnosis in medical images.
Autonomous Weapons Systems (AWS):
The AWS category includes weapons systems that incorporate, to a greater or lesser extent, the ability to correct their course or to guide themselves fully autonomously for optimal targeting and target destruction. This capability encompasses a wide range of more or less "intelligent" functions, from the proximity fuzes and depth charges of WWII to fully autonomous cruise missiles with satellite/inertial/ground guidance and detonators with penetration measurement (detonating at a specific point inside buildings).
In view of the above, it is clear that AI and AWS, or "lethal" AWS (LAWS), are increasingly intertwined. Over the last three decades, digital technology and the ever-increasing computing resources available on ultra-small-scale circuits have allowed the implementation of increasingly complex, more demanding and more "intelligent" algorithms in modern weapons systems. At the same time, the complexity and speed of processing required make human intervention not only less necessary but often a "bottleneck" in decision-making processes, e.g. in the guidance system of a missile traveling at 3-5 times the speed of sound towards its target. As a result, in recent years modern weapons systems have become increasingly autonomous, not only after the basic decision to use them, but also before it.
Unfortunately, all indications are that investment in the development of increasingly "smart" autonomous weapons systems will continue to grow in the coming years. In September 2018, US DARPA announced the launch of a new $2 billion AI research and development program for better human-machine collaboration. A similar $2.1 billion program was announced by China in January 2018, while the Chinese government had earlier launched a three-year plan to make the country a world leader in AI. It is worth noting that in its long-term plan named "A.I. Next", DARPA identifies the upgrading of AI from a passive decision-support tool to a real human "partner" as a key objective, which in the case of weapons systems development (DARPA's main investment area over time) means a significant upgrade of the autonomy of these systems, to a level of decision-making without mediation or direct human control.
In addition to the key issues related to the ethical, social and legal dimensions of the use of any weapons system (e.g. weapons of mass destruction), modern technology allows the almost complete transfer of responsibility for the deployment decision to the "machine". Modern Unmanned Combat Aerial Vehicles (UCAVs) have the ability to analyze the battlefield, identify targets, guide or launch their own missiles and destroy those targets, without human intervention at any stage. The problems of ethical and legal liability that arise in other, non-combat areas in cases of malfunction, such as fully autonomous driving systems or even standard braking or airbag control systems, become far more critical in weapons systems, as they relate to the decision of whether or not to use an instrument that is lethal by default. Removing the human factor from the control loop creates the potential for multiple and generally fatal consequences that remain extremely difficult to predict. Already, even very simple "passive" robotic systems, as in the cases of Uber and Amazon, have demonstrated the seriousness of the problem, both at the technical/design level and, mainly, at the legal/moral level concerning the attribution of responsibility (legal liability / moral hazard).
In 2018 there were, among other things, two public calls for an international ban on AWS/LAWS: the first in the form of an open letter signed by some of the largest companies and private organizations (e.g. Google DeepMind), as well as a number of academic and research institutions from around the world; the second in the form of a formal statement/resolution to the UN (August 2018) by the nearly 4,000 scientists who co-signed the aforementioned letter. Unfortunately, Greece did not vote in favor of the corresponding resolution that was put before the European Parliament, and is one of the countries that currently have no formal policy on AI and AWS.
As a scientific and professional union, our core positions and principles are defined by the Code of Conduct for Information Technology, the first to be introduced for this field in Greece. Following the international concern of academic, research, social, political and governmental bodies around the world, and in view of the mobilization taking place on this subject, HIU proposes the following positions as fundamental rules for the proper development and use of Artificial Intelligence, in particular in relation to AWS.
HIU proposals on Artificial Intelligence and AWS
Taking into account:
- The European Parliament resolution of 16/2/2017 on "Civil Law Rules on Robotics".
- The European Parliament resolution of 12/9/2018 on the international ban on autonomous weapons systems ("autonomous weapons ban").
- Ongoing actions by the UN in this direction.
- The open letter of scientists and academics from around the world through the Future of Life Institute (FLI), which focuses on the problem of the "robotic weapons arms race" in general.
- Collective actions of bodies and organizations for the proper use of Artificial Intelligence and Robotics, such as the "Campaign to Stop Killer Robots".
- Generally accepted positions/conditions for the safe development of the relevant technology, such as Isaac Asimov's "Three Laws of Robotics".
We propose the following:
Basic Principles of Artificial Intelligence & Robotics
1. The *purpose* of developing and implementing Artificial Intelligence & Robotics (AI&R) must never be absent: it must always be to serve the common good and protect life.
2. The *investment* in the development and implementation of AI&R must always be accompanied by assurance of transparency and proper direction in matters of ethical, humanitarian, social, legal and economic dimensions.
3. The *access* to the scientific, technological and productive dimension of AI&R must be equal for all, over time, as a human right to knowledge and goods.
4. The *results* of the development and implementation of AI&R should benefit society as a whole equally, with particular attention to issues of security, protection of personal data and respect for personal freedoms.
5. *Compliance* with the law, and especially the protection of life, liability, risk minimization and damage control in case of malfunction (fail-safe operation), are the top priorities.
6. The *control* of AI&R systems must always be retained by, or assigned with priority to, humans, in order to accomplish human-defined objectives.
7. The human *understanding* of the internal operation of an AI&R system, and the ability to audit its decision-making processes, should be ensured to the maximum extent possible.
8. The *implementation* of AI&R systems at large scale in daily life should respect and ensure, to the maximum extent possible, the individual's personal choice as to whether or not to use them.
9. The *integration* of AI&R scientific and technological achievements into practical applications and their use for peaceful purposes is an obligation of all.
10. The *self-improvement* and *self-protection* of AI&R systems, as key aspects of their design, should always be subject to human assessment, and it should be ensured that they are carried out in a beneficial way.
As a scientific and professional organization, but also as ordinary citizens, we the members of HIU are at the disposal of the State, the Hellenic Parliament and the Members of the European Parliament for further contribution to this issue, which is extremely critical for future generations.
Hellenic Informatics Union (A.C. board)
|President||Vice-President||General Secretary||Special Secretary||Financial Manager|
|Dimitris Kiriakos||Marios Papadopoulos||Harris Georgiou||Fotis Alexakos||Lena Kapetanaki|
Hellenic Informatics Union, P.O. 13801, P.O. Box 10310, Athens, Greece
E-mail: info<at>epe.org.gr – Tel/sms: (+30)6981.723690
Notes:
- "CODE Demonstrates Autonomy and Collaboration with Minimal Human Commands" (DARPA, 19/11/2018) – is.gd/owBO5B
- "...contextual reasoning in AI systems to create more trusting, collaborative partnerships between humans and machines..."
- "...DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools..."
- Typically, Unmanned Combat Aerial Vehicles (UCAVs) include both Remotely Piloted Aircraft Systems (RPAS) and fully autonomous flying vehicles capable of locating, recognizing and attacking targets of their choice, without remote human-pilot intervention. The second category of UCAV is based mainly on advanced AI systems and is among the AWS/LAWS systems discussed here. The term "drone" is often identified with UCAV, but in civilian applications drones are usually RPAS (not fully autonomous).
- Code of Conduct for Information Technology (HIU), Jul. 2016 (Greek) – is.gd/Zc16ri