Key points
The EESC is pleased that the proposal puts health, safety and fundamental rights at its centre and is global in scope.
The EESC sees areas for improvement regarding:
- the scope, definition and clarity of the prohibited AI practices;
- the implications of the categorisation choices made in relation to the "risk pyramid";
- the risk-mitigating effect of the requirements for high-risk AI;
- the enforceability of the AIA; and
- the relationship to existing regulation and other recent regulatory proposals.
In addition, the EESC:
- recommends clarifying the prohibitions regarding "subliminal techniques" and "exploitation of vulnerabilities" so as to reflect the prohibition of harmful manipulation;
- welcomes the prohibition of "social scoring" and recommends that the prohibition also apply to private organisations and semi-public authorities;
- calls for a ban on the use of AI for automated biometric recognition in publicly and privately accessible spaces, except for very specific cases;
- welcomes the alignment of the requirements for high-risk AI with elements of the Ethics Guidelines for Trustworthy AI and recommends including all of the requirements from these guidelines;
- recommends making third-party conformity assessments obligatory for all high-risk AI and including a complaints and redress mechanism for organisations and citizens that have suffered harm from any AI system.
In line with its long-advocated "human-in-command" approach to AI, the EESC strongly recommends that the AIA provide for certain decisions to remain the prerogative of humans.
For more information, please contact the INT Section Secretariat.