Building Trust in Human-Centric Artificial Intelligence (Communication)

Key points

AI systems must comply with existing legislation. It is important to identify which challenges can be met through codes of ethics, self-regulation and voluntary commitments, and which require regulation and legislation backed by oversight and, in the event of non-compliance, penalties.

 The EESC:

  • reiterates the need to consult and inform workers when AI systems are introduced that are likely to alter the way work is organised, supervised and overseen. The Commission must promote social dialogue with a view to involving workers in the use of AI systems;
  • calls for the development of a robust certification system based on test procedures that enable companies to state that their AI systems are reliable and safe. It proposes developing a European trusted-AI business certificate based partly on the assessment list put forward by the High-Level Expert Group on AI;
  • recommends that clear rules be drawn up assigning responsibility to natural persons or legal entities in the event of non-compliance;
  • also urges the Commission to regularly review the General Data Protection Regulation (GDPR) and related legislation in the light of technological developments.

For more information please contact the INT Section Secretariat.
