The European Economic and Social Committee (EESC) suggests that the EU should develop a certification for trustworthy AI applications, to be delivered by an independent body after testing the products for key requirements such as resilience, safety, and absence of prejudice, discrimination or bias. The proposal has been put forward in two recent EESC opinions assessing the European Commission's ethical guidelines on AI.
Both EESC opinions – one covering the communication on Building trust in human-centric artificial intelligence as a whole and the other on its specific implications for the automotive sector – stress that such a certification would go a long way towards increasing public trust in artificial intelligence (AI) in Europe.
The issue of how to build confidence in AI is central to the conversation on AI in Europe, which has focused on ethics and a human-in-command approach. While some insist that, for people to trust AI products, algorithms need to be explainable, AI systems and machine learning models are in fact so complex that even the people developing them cannot fully predict their outcomes, and have to develop testing tools to discover where their limits lie.
The EESC proposes entrusting the testing to an independent body – an agency, a consortium or some other entity to be determined – which would test the systems for prejudice, discrimination, bias, resilience, robustness and particularly safety. Companies could use the certificate to prove that they are developing AI systems that are safe, reliable and in line with European values and standards.
AI products can be compared to medicines, says Franca Salis-Madinier, rapporteur for the EESC's general opinion on the European Commission's communication.
"Medicines can be beneficial, but they can also be dangerous, and before they can be put on the market they need to be certified. Manufacturers need to prove that they have carried out enough trials and testing to ensure that their product is beneficial. The same approach should be taken for AI machines."
The EESC believes that such a certification system would give Europe a competitive edge on the international scene, at a time when even the OECD, with the support of some 50 countries, is looking to ensure an ethical use of AI, in line with human rights and democratic values.
The Committee also stresses the need for clear rules on responsibility.
"Responsibility must always be linked to a person, either natural or legal. Machines cannot be held liable in the case of failure," says Ulrich Samm, rapporteur of the opinion on AI in the automotive sector. The insurability of AI systems, the EESC stresses, is a question that needs to be examined as a matter of priority.
In December 2018 the European Commission's high-level expert group on AI published a set of draft ethical guidelines for developing AI in Europe in a way that put people at the centre. The guidelines, revised in March 2019, identify the following seven key requirements that AI applications should respect to be considered trustworthy:
- human agency and oversight
- technical robustness and safety
- privacy and data governance
- transparency
- diversity, non-discrimination and fairness
- societal and environmental well-being
- accountability
The Commission has also drawn up an assessment list that operationalises the key requirements and offers guidance on how to implement them in practice.
As a next step, the Commission has launched a piloting phase where stakeholders are invited to test the assessment list and provide practical feedback on how it can be improved.
In early 2020, the assessment list will be reviewed and, if appropriate, the Commission will propose further measures.