AI in Europe: not all decisions can be reduced to ones and zeros

In two reports on draft EU legislation on AI, the EESC calls for an all-out ban on social scoring and for a complaint and redress mechanism for people who have suffered harm from an AI system.

At its September plenary, the EESC welcomed the proposed Artificial Intelligence Act (AIA) and Coordinated Plan on AI.

The EESC feels that the new legislation genuinely places health, safety and fundamental rights at its centre, and resonates globally by setting a series of requirements with which developers both inside and outside Europe will have to comply if they want to sell their products in the EU.

In the EESC's view, however, the proposals have some weaknesses, including in the area of "social scoring". The Committee flags up the danger of this practice gaining currency in Europe, as it is doing in China, where the government can go so far as to deny people access to public services.

The draft AIA does include a ban on social scoring by public authorities in Europe, but the EESC would like to see it extended to private and semi-private organisations so as to rule out such uses as establishing whether an individual is eligible for a loan or a mortgage.

The EESC also points out the dangers of drawing up a list of "high-risk" AI, warning that this approach risks normalising and mainstreaming a number of AI practices that are still heavily criticised. Biometric recognition, including emotion or affect recognition, where a person's facial expressions, tone of voice, posture and gestures are analysed to predict future behaviour, detect lies or even gauge whether someone is likely to succeed in a job, would be allowed. So would using AI to assess, score or even fire workers, or to assess students in exams.

In addition, the proposed requirements for high-risk AI cannot always mitigate the risks to health, safety and fundamental rights that these practices pose. Hence the need for a complaint and redress mechanism giving people who have suffered harm from an AI system the right to challenge decisions taken solely by an algorithm.

More generally, in the EESC's view, the AIA works on the premise that, once the requirements for medium- and high-risk AI are met, AI can largely replace human decision making.

"We at the EESC have always advocated a human-in-command approach to AI, because not all decisions can be reduced to ones and zeros," says Cateljine Muller, rapporteur for the EESC's opinion on the AIA. "Many have a moral component, serious legal implications and major societal impacts, for instance on law enforcement and the judiciary, social services, housing, financial services, education and labour regulations. Are we really ready to allow AI to replace human decision making even in critical processes like law enforcement and the judiciary?" (dm)