European Economic and Social Committee
ALGORITHMS AREN’T NEUTRAL: WHY EU LAWS NEED TO LISTEN TO PEOPLE’S EXPERIENCES
Anastasia Karagianni from VUB (Vrije Universiteit Brussel) explores how digital technologies increasingly influence how people are judged and treated, from online images to access to jobs and public services. Although these systems are often presented as neutral, they can reinforce existing inequalities and cause real harm to marginalised communities, showing why EU digital regulation must move beyond technical compliance and take people’s lived experiences seriously when addressing algorithmic discrimination.
Algorithmic discrimination refers to automated systems producing outcomes that systematically disadvantage particular groups, not due to technical 'errors' alone but because of how data, design choices, and historical patterns of inequality shape machine decision-making. These effects are especially pressing where gender, race, class, disability, or other identity axes intersect, undermining the rights to equality, privacy, and non-discrimination.
For example, beauty filters encode normative, often Eurocentric and gendered ideals of attractiveness by algorithmically 'correcting' faces toward lighter skin tones or feminised features, disproportionately affecting women and people of colour and reinforcing existing hierarchies of social value. Similarly, smart wearable technologies, such as Ray-Ban Meta AI glasses, raise concerns about surveillance, privacy, and image-based sexual abuse, as biased vision and speech systems can misidentify marginalised groups and expose bystanders to recording without their consent, reinforcing existing power imbalances in public spaces.
In the EU, where digital systems increasingly determine access to public services, employment opportunities, and social support, addressing these harms is central to protecting fundamental rights and democratic accountability.
EU frameworks such as the General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act (AI Act) represent important steps towards a rights-based approach to data and automated systems. The GDPR's emphasis on transparency, human oversight, and mechanisms for individuals to contest automated decisions gives civil society tools to challenge discriminatory practices and to demand accountability from both private and public actors. The AI Act adopts a risk-based approach to regulating AI, with explicit obligations for high-risk systems (AI applications considered likely to significantly affect people's rights, safety, or access to essential services, such as healthcare or employment). This creates avenues for oversight and structured scrutiny of technologies that could produce harmful outcomes.
Civil society organisations have played a key role in bringing these frameworks to life. Forums such as the CPDP (Computers, Privacy and Data Protection Conference), Privacy Camp, and FARI engage activists, researchers, and policy-makers in evaluating algorithmic systems and shaping best practices. Successes achieved by European Digital Rights (EDRi) and the Digital Freedom Fund (DFF) demonstrate how sustained civil society engagement can improve transparency obligations, strengthen enforcement, and widen public understanding of digital harms. These initiatives show that EU regulation can empower civil society, fostering participatory oversight rather than leaving it solely to state or corporate actors.
Despite these positive developments, significant gaps remain that limit the capacity of EU regulation to address structural discrimination and algorithmic harm in a comprehensive way. At the heart of this critique is the nature of the AI Act’s risk classification system. The Act’s reliance on a top‑down model, where regulators pre‑define categories of high‑risk systems, leaves little space for bottom‑up identification of emerging harms discovered through lived experience or civil society monitoring. Once systems are deployed, there are limited mechanisms for affected communities to trigger risk reassessments or demand remediation outside predefined categories.
The Digital Omnibus Proposal illustrates another worrying trend. By allowing providers of AI systems to self‑register and determine whether their technology qualifies as high‑risk, the proposal risks delegating critical regulatory judgments to the very actors whose commercial interests may conflict with public safety and rights protection.
Even where bias-mitigation obligations (efforts designed to reduce discrimination in AI systems) exist, they often require the processing of sensitive data. Yet gender and LGBTQIA+ characteristics, such as non-binary, transgender, or intersex identities, are frequently not recognised as protected categories and therefore remain insufficiently safeguarded. This creates blind spots in understanding how AI systems can reinforce overlapping forms of discrimination.
These gaps become most apparent with emerging harms, such as sexualised deepfakes. While such technologies may well fall under the practices prohibited by Article 5 of the AI Act, the regulatory text leaves ambiguity around their classification and enforcement. In the absence of clear obligations on platforms to prevent or remediate image-based abuse and deepfake dissemination, victims may find limited legal recourse, despite substantial harms to privacy, dignity, and safety.
Another limitation lies in standardisation obligations, which apply only to high-risk AI systems. This leaves vast swathes of widely deployed technologies, including generative AI and content moderation applications, without systematic safeguards for safety, fairness, and non-discrimination. For civil society, this means that many discriminatory or harmful systems may never be subject to robust conformity assessments or accountability pathways.
Finally, the way EU law handles intersectionality (the idea that people can face overlapping forms of discrimination) shows that current regulations do not always reflect people's lived realities. While the Directive on Combating Violence Against Women and Domestic Violence (GBV Directive) introduces the concept of 'intersectional discrimination', its practical scope within the text of the Directive remains limited. Nor are the concerns of LGBTQIA+ communities fully reflected across EU equality policy. Academic analysis of the AI Act shows that references to 'gender equality' are sparse and that inclusive terminology for diverse gender identities is largely missing. As a result, the regulatory framework remains rooted in binary understandings of gender.
These critiques point to a broader issue: simply following procedural safeguards is not enough to tackle algorithmic discrimination in society. What is needed are approaches that start from people's experiences and identify harms early, assessments that consider how different forms of discrimination overlap, and participatory oversight that meaningfully includes civil society in decision-making. Tools such as gender-responsive impact assessments and community-driven evaluation frameworks, which involve testing systems for bias and listening to affected users, can help make sure that regulation actually protects those most vulnerable to algorithmic harms. Without such mechanisms, EU digital regulation risks enshrining a 'neutral' approach that obscures the inequalities people face in everyday life, instead of confronting them.
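To give a flavour of what 'testing systems for bias' can mean in practice, the sketch below shows one very simple form such a check might take: comparing favourable-outcome rates across groups and flagging large gaps for human review. It is purely illustrative and not drawn from the article or any specific assessment framework; the group labels, sample data, and disparity threshold are hypothetical, and a real gender-responsive or community-driven evaluation would rely on far richer, participatory methods.

```python
# Minimal, purely illustrative bias check: compare favourable-outcome
# rates across groups and flag large gaps. Data and threshold are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """Share of favourable outcomes per group.

    `decisions` is a list of (group_label, outcome) pairs, where outcome
    is 1 for a favourable decision (e.g. shortlisted) and 0 otherwise.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flag(rates, threshold=0.2):
    """Flag the audit if the gap between the highest and lowest
    selection rate exceeds the (hypothetical) threshold."""
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > threshold

# Hypothetical audit sample; in a real assessment, group labels would come
# from voluntarily disclosed and properly safeguarded data.
sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
          ("group_b", 1), ("group_b", 0), ("group_b", 0)]

rates = selection_rates(sample)
gap, flagged = disparity_flag(rates)
print(rates, f"gap={gap:.2f}", "review needed" if flagged else "within threshold")
```

Even a crude check like this only surfaces a disparity; deciding whether it amounts to discrimination, and what remediation is owed, is exactly where the participatory oversight described above comes in.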
Anastasia Karagianni is a doctoral researcher at the Law, Science, Technology and Society (LSTS) research group of the Law and Criminology Faculty at Vrije Universiteit Brussel (VUB) and a former FARI scholar. Her thesis focuses on the 'Divergencies of Gender Discrimination in the EU AI Act Through Feminist Epistemologies and Epistemic Controversies'. She has been a visiting researcher at the iCourts research centre of the University of Copenhagen and at the Joint Research Centre of the European Commission in Seville, as well as a visiting lecturer at the ITACA Institute of the Universitat Politècnica de València (UPV).