Recent initiatives on regulating AI systems and digital services represent a significant turning point and mark a growing recognition that the negative impacts of digitalisation need to be urgently addressed. However, many legislative loopholes and gaps still need to be closed to secure a safe online space that respects and protects human dignity, rather than one that merely serves as a profit-making arena for Big Tech.
This was the main message of a conference held jointly by the European Economic and Social Committee (EESC) and the European Commission's DG Justice and Consumers on 11 February.
Participants included EU lawmakers, representatives of civil society organisations and academics. They welcomed the Commission's recent initiatives to regulate the digital sector at EU level and to ensure that core EU values such as democracy, the rule of law and fundamental rights are respected in the online world as much as in the offline world.
However, stakeholders stressed that the latest legislative proposals in the digital domain – among them the draft AI Act and Digital Services Act – still put too much faith in technical solutions to deter discrimination, disinformation and hate speech in the virtual world. Instead of putting in place strong safeguards and independent oversight measures, too much power is left to interest- and revenue-driven platforms to decide how technology is used.
Opening the conference, EESC president Christa Schweng said:
"We have entered a world where AI will soon be used for many decisions with major impacts on all aspects of our lives. It is therefore essential that fundamental rights fully apply to AI and the digital world in general. We must bring the digital world back into the remit of the law, or we risk letting a parallel world develop that reproduces and amplifies societal biases, leading to unacceptable violations of human rights."
Director-General of the Commission's DG Justice and Consumers Ana Gallego said:
"There are many different benefits and risks brought about by the use of digital technology, and it is a highly political task to decide which benefits we pursue and how to minimise the risks."
Alliance of tech democracies: EU can lead the way
The Commission's efforts to develop a human-in-command approach to digital technologies have been backed by all EU institutions, including the EESC, which has adopted a number of opinions on the subject in recent years.
"The European move to regulate the online space is not just an effort to set norms for the new technologies and their role in our economies and our societies. It is also very much a statement of intent, which is as political as it can be. It is a political drive to affirm that the digital world of tomorrow needs to be a world based on the values we believe in," said MEP Dragoş Tudorache, chair of the Special Committee on Artificial Intelligence in a Digital Age (AIDA).
Mr Tudorache said similar efforts are being undertaken by US lawmakers. He stressed the need to reach out to partners around the world and form "alliances with tech democracies" to find common ground and consolidate democracies around the new digital realities.
Pauline Dubarry, Justice Advisor at the Permanent Representation of France to the European Union, confirmed that the regulation of the digital sector was among the priorities of the current French Presidency of the Council, which supported the emergence of a legal framework fostering respect for fundamental rights in the digital world.
"This is a global challenge, and we think it is essential that the EU stands up to competing models which are at odds with the democratic values that we defend," she stated.
For Michael O'Flaherty, director of the European Union Fundamental Rights Agency (FRA), creating a digital space that is respectful of human dignity can only be achieved by adopting a rights-based approach and by mobilising all forces in society. He encouraged wide engagement of civil society in all its diversity.
However, despite applauding the Commission's work in regulating the digital sphere, participants in the conference also saw much room for improvement.
Max Schrems, honorary chairman of the non-profit organisation NOYB (My Privacy is None of Your Business), said there was a huge challenge ahead in implementing rules like the General Data Protection Regulation (GDPR) on the ground, as regulations often lack clarity, leaving room for interpretation and misuse.
"This can especially hit individuals, consumers and SMEs that would like to comply with regulations but don't know where to start or what it means. On the other hand, it plays into the hands of big companies, which adopt a risk-based approach and use the uncertainty to their own advantage.
"We see in practice that anything that is ambiguous and can be misunderstood will be misunderstood on purpose. So I think it's extremely important to really think about high-quality, clear regulations. This is hard to achieve, but it absolutely pays off," Mr Schrems said.
Discrimination is in the code
The conference held expert panels on two of the topics flagged in the Commission's report, on AI and the right to non-discrimination, and on the fundamental rights impact of online content moderation.
In the first panel, speakers debated the risks of bias in AI code or in the data used to train AI systems. One example of discrimination exacerbated by the deployment of AI systems is facial recognition set up specifically in areas whose inhabitants belong to, for example, particular religious groups or LGBTIQ communities. Another is predictive policing systems, which forecast where crimes will occur and have proved fundamentally biased against people from working-class, ethnic-minority or migrant backgrounds.
Although the Commission in its AI Act recognises that AI systems do perpetuate discrimination, it responds to the problem by narrowly focusing on a technocentric approach and placing the responsibility primarily with AI providers.
"The Commission's approach runs the risk of actually enabling inherently harmful deployments of technology by promoting the flawed assumption that these systems can be fixed for discrimination," said Sarah Chander of European Digital Rights (EDRi).
"If we are to truly tackle the full extent of discriminatory AI, we need governance responses that focus on harm prevention, the accountability of institutions, and the empowerment of the affected people and groups," she maintained. Some organisations, such as Amnesty Tech, call for an outright ban on facial and emotion recognition systems as they are intrusive and enable massive discriminatory surveillance.
Another issue is artificial intelligence in the workplace, where the gap between labour law and AI regulation still looms large, warned Laura Nurski of the think tank Bruegel. With its growing impact on selection, recruitment, promotion and dismissal processes, AI poses risks to equal treatment in employment and equal access to jobs, and threatens to exacerbate existing inequalities and discrimination, for example against women.
According to Ms Nurski, the best tool to regulate workplace AI is social dialogue, as the involvement of workers in the design and implementation of workplace AI decreases the risk of undesirable outcomes such as discrimination. Therefore, in her view, the lack of any mention of social dialogue represents a crucial gap in the AI proposal.
Down the rabbit hole
The second panel focused on the impact of online content moderation on fundamental rights. While acknowledging the need for content moderation to address illegal or harmful content, civil society activists argue that content removal, including automated removal, risks being arbitrary if left to Big Tech without public scrutiny or oversight by national regulators, said Eliska Pirkova from Access Now.
This could have a chilling effect on freedom of expression, and may fuel discrimination and endanger the right to information and privacy or the freedom to conduct a business. To counter this, civil society organisations advocate a more holistic EU approach, relying less on self-assessment and self-regulation by platforms. EU proposals should include safeguards such as yearly audits of Big Tech platforms by independent auditors.
"We are now standing at a crossroads where we have to acknowledge that co-regulatory or self-regulatory measures consisting of voluntary commitments by platforms are not enough anymore," Ms Pirkova said.
Instead of focusing only on stricter rules for content moderation, EU regulations should address the root cause of the mushrooming of illegal or harmful content online: the data- and advertising-driven business model of Big Tech companies, which make almost all of their revenue from surveilling people on a mass scale in order to serve them supposedly relevant ads, said Rasha Abdul Rahim of Amnesty Tech.
It has been shown time and again that Big Tech companies' recommender algorithms are harmful. The algorithms of some major platforms have been found to draw users "down the rabbit hole" by recommending videos with ever more extreme content and conspiracy theories to keep them on the platform as long as possible, Ms Abdul Rahim said.
Facebook and other platforms do not want to fix these problems, as they actually drive their profits. Amnesty Tech is therefore calling for a ban on surveillance advertising, including bans on deceptive design patterns that trick people into giving up their data, on the use of sensitive data, and on the targeting of children, Ms Abdul Rahim concluded.
The 2021 Annual Report on the Application of the EU Charter of Fundamental Rights, issued by the European Commission in December 2021, is entitled Protecting Fundamental Rights in the Digital Age. For the first time, the yearly report adopted a thematic approach, taking stock of digital developments and exploring the challenges of protecting fundamental rights online. It focused on five topics: artificial intelligence; the digital divide; the protection of people working through platforms; the supervision of digital surveillance; and the challenges of online content moderation.