The EESC issues between 160 and 190 opinions and information reports a year.
It also organises several annual initiatives and events with a focus on civil society and citizens’ participation, such as the Civil Society Prize, the Civil Society Days, the Your Europe, Your Say youth plenary and the ECI Day.
The EESC brings together representatives from all areas of organised civil society, who give their independent advice on EU policies and legislation. The EESC's 326 Members are organised into three groups: Employers, Workers and Various Interests.
The EESC has six sections, each specialising in concrete topics of relevance to the citizens of the European Union, ranging from social and economic affairs to energy, the environment, external relations and the internal market.
Speakers at the EESC’s second Stakeholder Summit on Artificial Intelligence have called for more inclusive consultation on fundamental rights and a ban on biometric recognition. They also advocated human supervision of AI in the workplace, support for SMEs to adopt it and universal training to prepare society for AI.
Organised by the European Economic and Social Committee (EESC) and the European Parliament on 8 November, the summit focused on the Artificial Intelligence Act (AIA), the European Commission’s proposed legal framework on AI. This act – a world first – aims to ensure that AI is used in ways that people can trust and that achieve excellence for society.
In the morning session, at the EESC offices in Brussels, representatives from business, civil society, workers, research and the European Commission discussed how legislation could protect fundamental rights while enabling society to benefit from AI.
Include more voices, build confidence
The first panel discussed how the AIA should be adapted to address challenges in Europe. EESC rapporteurs Catelijne Muller and Marie-Françoise Gondard-Argenti outlined worker, small business and legal concerns to Lucilla Sioli, the Commission’s Director for the Digital Industry, while European Parliament rapporteur Brando Benifei argued that the Act must offer stronger protection of fundamental rights.
Ms Muller called for the Act to go beyond its current technology-focused approach: “You need to involve AI scientists in that and talk to legal experts,” she said.
She said that the Act’s listing of permitted high-risk uses of AI, such as biometric identification, is dangerous in sensitive areas such as the judiciary: “AI didn’t go to law school and has no understanding of the facts of a legal case. Putting use in the judiciary on the list runs the risk of normalising all sorts of AI that we are not ready for.”
Ms Gondard-Argenti added that more work is needed for small companies to trust AI, noting that these businesses make up 75 % of the EU’s GDP. She called for awareness-raising, society-wide training and better access to finance and data for businesses to allow Europe to remain an AI pioneer.
She said that all social partners must be included in directing the AI transition: “The people who are living with this day to day know the topic best. It would be a pity to miss this transformation at a social and workplace level.”
High-risk AI under the spotlight
Panel debates followed on the EESC’s two main areas of concern: high-risk uses of AI in the workplace, such as emotional recognition or automated supervision, and biometric identification systems.
Speakers on workplace AI called for algorithms to be more transparent and for the AIA to protect fundamental rights. They agreed that to limit potential risks, such as unfair dismissal, humans should always monitor how AI is used to manage workers. While speakers were sensitive to companies’ liability risks, some argued that employers have a responsibility to understand the technology they acquire.
In the session on biometric identification, speakers argued that the risks of false results in practices such as automatic facial recognition in public spaces outweigh their benefits, especially as the technology would be applied on a much larger scale than human-based profiling.
Some speakers pointed out that biometric identification would still be beneficial for individual authentication or medicine. But in general, the panel advocated caution, noting that the pandemic had exposed AI issues that had previously had only a limited impact.
The morning session included a detailed talk from Professor Luciano Floridi from the Oxford Internet Institute. He emphasised the importance of understanding AI: “We are incurring the opportunity costs of not using AI because we are worried and don’t know what is going to happen.”
He explained that despite its cost and technological advantages, AI is not ready to replace humans. He called for AI to be developed to improve decisions for society and the environment, to mitigate harm such as cyber attacks, and to be backed by reliable certification and auditing: “The new challenge is not digital innovation but the governance of the digital.”
In the afternoon, the summit moved to the European Parliament for the Interparliamentary Committee meeting on AI and the Digital Decade, where EESC President Christa Schweng joined European Parliament Vice-President Dita Charanzová and Executive Vice-President of the European Commission Margrethe Vestager.
Ms Schweng reinforced the call to support small companies. “There is no need to stress how important AI is for European business, especially in the context of recovery from the coronavirus crisis,” she said.
She shared the EESC’s recommendations for human-directed AI, a ban on biometric identification in public spaces, a redress mechanism and the inclusion of social partners in governance, and called for the Act to be a driving force for building human-centric, safe and inclusive AI.
Members of national parliaments then shared their views, with many echoing those of the EESC.