The EESC issues between 160 and 190 opinions and information reports a year.
It also organises several annual initiatives and events with a focus on civil society and citizens' participation, such as the Civil Society Prize, the Civil Society Days, the Your Europe, Your Say youth plenary and the ECI Day.
The EESC brings together representatives from all areas of organised civil society, who give their independent advice on EU policies and legislation. The EESC's 326 members are organised into three groups: Employers, Workers and Various Interests.
The EESC has six sections, each specialising in topics of direct relevance to the citizens of the European Union, ranging from social and economic affairs to energy, the environment, external relations and the internal market.
The EESC suggests that the EU should develop a certification for trustworthy AI, to be delivered by an independent body after testing the products for key requirements such as resilience, safety, and absence of prejudice, discrimination or bias. The proposal has been put forward in two recent EESC opinions assessing the European Commission's ethical guidelines on AI.
The EESC believes that such certification would go a long way towards increasing public trust in AI in Europe. Some insist that, for people to trust AI applications, algorithms need to be explainable. The reality, however, is that AI and machine-learning systems are so complex that even their developers do not really know what their outcome will be, and have to build testing tools to discover where their limits lie.
The EESC proposes entrusting the testing to an independent body – an agency, a consortium or some other entity to be determined – which would test the systems for prejudice, discrimination, bias, resilience, robustness and particularly safety. Companies could use the certificate to prove that they are developing AI systems that are safe, reliable and in line with European values and standards.
"AI products can be compared to medicines", says Franca Salis-Madinier, rapporteur for the EESC's general opinion on the European Commission's communication. "Medicines can be beneficial, but also dangerous, and before they can be put on the market they need to be certified. The manufacturers need to prove that they have done enough trials and testing to ensure that their product is beneficial. The same approach should be taken for AI machines."
The EESC also stresses the need for clear rules on responsibility. "Responsibility must always be linked to a person, either natural or legal. Machines cannot be held liable in the case of failure", says Ulrich Samm, rapporteur of the EESC opinion on the implications of the AI guidelines for the automotive sector. The insurability of AI systems is also a question that needs to be looked into as a matter of priority, the EESC highlights. (dm)