The EESC congratulates the Commission on its strategy to encourage the uptake of AI technologies while also ensuring their compliance with European ethical norms, legal requirements and social values.
Coordinated Plan on the development of Artificial Intelligence in Europe - Related Opinions
The annual Union work programme for European standardisation for 2020 identifies priorities for European standardisation. The EESC agrees with the Commission that standardisation is crucial to the strategy for the single market and that it should be constantly updated. Moreover, the EESC considers that there is an urgent need to modernise the European standardisation system to meet global challenges with an innovative process of cooperation.
AI systems must comply with existing legislation. It is important to identify which challenges can be met by means of codes of ethics, self-regulation and voluntary commitments and which need to be tackled by regulation and legislation supported by oversight and, in the event of non-compliance, penalties.
The EESC flags up the potential of AI and would like to give its input to efforts to lay the groundwork for the social transformations which will go hand in hand with the rise of AI and robotics.
The EESC believes that AI and automation processes have enormous potential to improve European society in terms of innovation and positive transformation, but they also pose significant challenges, risks and concerns.
Artificial Intelligence (AI) technologies offer great potential for creating new and innovative solutions to improve people's lives, grow the economy, and address challenges in health and wellbeing, climate change, safety and security.
Like any disruptive technology, however, AI carries risks and presents complex societal challenges in several areas such as labour, safety, privacy, ethics, skills and so on.
A broad approach towards AI, covering all its effects (good and bad) on society as a whole, is crucial, especially at a time when developments are accelerating.