Workshop 6: Artificial Intelligence as a common good
JDE 63 / Friday, 25 May 2018, 9.30 – 11.30
Organised by Solidar and the EESC section Single Market, Production and Consumption
9.30 – 9.35 Welcome and explanation of the workshop format
- Ms Lenneke Hoedemaker, moderator
9.35 – 9.50 Setting the scene - societal impact domains of AI
- Ms Catelijne Muller, Rapporteur of the Opinion on Artificial Intelligence, EESC
9.50 – 11.20 Panel discussion and exchange with the audience
- How to ensure that AI improves society as a whole?
Mr Paul Lukowicz, Embedded Intelligence Department, German Research Centre for Artificial Intelligence (DFKI)
- AI for good: opportunities of using AI to solve major societal challenges
Ms Chiara Tomasi, Public Policy and Government Relations Analyst, Google
- Characteristics and effects of algorithmic decision making processes
Ms Virginia Dignum, Associate Professor, Delft University of Technology
- Towards an ethical and legal framework for AI – what should be in it for EU civil society?
Mr Jim Dratwa, Head of the European Group on Ethics in Science and New Technologies, European Commission
- Safe, sustainable and responsible AI for better and more efficient jobs?
Ms Anna Byhovskaya, Trade Union Advisory Committee, OECD
11.20 – 11.30 Concluding remarks
Mr Conny Reuter, Solidar, EESC Liaison Group co-chair
Draft Concept Note
Artificial intelligence (AI) is developing rapidly and is increasingly being applied in society and in our daily lives. Important as it is, the discussion on superintelligence currently predominates, overshadowing the debate on the impact of AI applications already in use.
AI can bring significant benefits to society. Positive applications include more sustainable agriculture, safer transport, a more secure financial system, more environmentally friendly production processes, better medicine, safer work, more personalised education, better jurisprudence and a safer society.
As with every disruptive technology, a number of concerns regarding the long-term societal impact of AI have caught the public's attention: the possibility of creating superintelligence, the impact on jobs and the challenges posed by lethal autonomous weapon systems. AI also entails risks and complex policy challenges in areas such as safety and monitoring, socio-economic aspects, ethics and privacy, and reliability.
It is therefore important to manage developments surrounding AI, not only from a technical perspective but also specifically from an ethical, safety and societal perspective. Through a multi-stakeholder approach, civil society organisations could help to achieve safe, responsible, robust, dependable and ethical AI by avoiding and addressing the risks of AI and mitigating its negative effects on society, while at the same time tapping the (hidden) opportunities it offers.
This workshop will address, among others, the following questions and topics:
- How to achieve a joint strategy for the further responsible and ethical development and deployment of AI?
- How to create an ecosystem for a centralised, informed, balanced and solution-driven debate on AI, involving all stakeholders: policy-makers, industry, social partners, consumers, NGOs, (semi-)public organisations, academics from various disciplines, etc.?
- A joint code of ethics for AI, covering the development, application and use of AI, so that throughout their entire operational process AI systems remain compatible with the principles of human dignity, integrity, freedom, privacy and cultural and gender diversity, as well as with fundamental human rights.
- How to increase public knowledge and awareness of AI, in order to build trust in its potential benefits and to avoid and address irrational fears of AI?
- A system for verifying, validating and monitoring AI systems, based on a wide range of standards covering safety, transparency, comprehensibility, accountability and ethical values.
- Which job sectors will be affected the most by AI, to what extent and on what timescale?
- A human-in-command approach, or rather a specific legal status for robots?