One of the tasks of the expert group will be to draw up a proposal for guidelines on AI ethics.
More than a year after the European Parliament adopted a resolution urging the European Commission to draft rules for robotics to be applied across the European Union, the Commission has started working on this task. The first step will be the creation of an expert group whose main task will be to issue guidelines on AI ethics.
Let’s recall that the debate in the European Parliament demonstrated disagreement over the role of robots in the workplace and the risks they pose to human jobs. Examples from the financial services sector are numerous: virtual assistants have been taking over roles in certain departments, such as customer service at banks, brokers and insurers.
The expert group on AI will have to come forward by the end of the year with draft guidelines for the ethical development and use of artificial intelligence based on the EU’s fundamental rights. In doing so, it will consider (inter alia) today’s statement by the European Group on Ethics in Science and New Technologies (EGE), an independent advisory body to the European Commission.
In its statement, the EGE notes that software agents, such as bots in financial trading and deep-learning systems in medical diagnosis, are among the most prominent examples of AI applications. The AI embedded in these systems can redefine work, improve working conditions for humans and reduce the need for human contribution, input and interference during operation. It can assist or replace humans with smart technology in difficult and dull work.
One of the risks, however, is that the actions of intelligent machines, or so-called autonomous systems, are often no longer intelligible and no longer open to scrutiny by humans, because it is impossible to establish how they accomplish their results beyond the initial algorithms.
Bots used in the financial services sector are among the examples of ‘autonomous’ software, the EGE says. Trade, finance and stock markets are largely run by algorithms and software. Without human intervention or outside control, smart systems today conduct dialogues with customers in online call centres; speech-recognition interfaces and the recommender systems of online platforms, e.g. Siri, Alexa and Cortana, make suggestions to users. Beyond the straightforward questions of data protection and privacy, the EGE asks whether people have a right to know whether they are dealing with a human being or with an AI artefact.
Moreover, the question arises of whether there should be limits to what AI systems may suggest to a person based on a construction of that person’s own conception of their identity.
The EGE stresses the importance of the principle of human dignity. A relational conception of human dignity, characterised by human social relations, requires that we be aware of whether and when we are interacting with a machine or another human being, and that we reserve the right to entrust certain tasks to the human or the machine.
The call for applications for the expert group on artificial intelligence will close on April 9, 2018, and the Commission aims to set the group up by May.