With more than 1.6 billion users globally, including over 200 million across Europe, TikTok has become a major space for political expression and information-sharing and a primary source of news for a significant share of young people. As a result, it has come under growing scrutiny from regulators and civil society. The European Commission opened formal proceedings in 2024 to examine whether TikTok is adequately assessing and mitigating systemic risks related to election integrity and civic discourse. We asked Francesca Scapolo, TikTok's Election Integrity Expert for Public Policy in Europe, how TikTok understands its responsibility for these risks in practice, how it cooperates with authorities, and what safeguards it has in place to protect democratic processes.

Given TikTok’s scale and its growing role as a source of political information for millions of users across the EU, how does the company approach accountability for systemic risks on the platform more broadly, such as the spread of disinformation, coordinated behaviour, or fake and inauthentic accounts? How do these efforts translate into cooperation with national authorities and EU institutions, particularly during sensitive moments like elections?

TikTok is a discovery platform where more than 200 million Europeans come to connect, share their passions, and find inspiration. We recognise that with scale comes responsibility, and we work continually to protect our platform and maintain a civil place for people to express themselves and build community, including during elections. We've invested significantly in systems, specialised teams and partnerships to address systemic risks such as harmful misinformation, fake and inauthentic accounts, coordinated inauthentic behaviour and other deceptive practices.

Across the EU, our work includes proactive enforcement of our Community Guidelines, investment in features, tools and resources to empower our community, including media literacy initiatives, and partnerships with external experts. In fact, through TikTok's global fact-checking programme, we work closely with more than 20 IFCN-accredited fact-checking organisations, including AFP in France, DPA in Germany and Newtral in Spain.

Our technical and enforcement work is complemented by ongoing cooperation with national authorities and EU regulators. Under the Digital Services Act (DSA) and the Code of Conduct on Disinformation, we engage with Digital Service Coordinators and the European Commission. We also provide regular updates on our content moderation efforts through our transparency reports.

During high‑stakes periods, such as elections, we also collaborate with national authorities and electoral commissions, and participate in the Code’s rapid response system, which enables swift, coordinated information-sharing between civil society organisations, fact‑checkers and platforms to address urgent or emerging threats, a critical capability during elections.

Taken together, these efforts demonstrate how we blend proactive risk mitigation, user‑empowering tools, and regulatory cooperation to help safeguard democratic discourse across the EU, especially during sensitive electoral moments.

From your perspective, are the measures TikTok currently has in place sufficient to address systemic risks to democratic processes during elections, particularly those linked to recommendation algorithms, visibility dynamics, and coordinated campaigns? Or do you see a need for stronger or more proactive safeguards?

During elections, we work continually to protect our platform and maintain a civil place for people to express themselves and build community. Thousands of trust and safety and security professionals have safeguarded TikTok through over 200 elections around the world over the last five years. Our comprehensive strategy is based on three key pillars:

  • Protecting election integrity: Removing harmful misinformation, disrupting attempts to influence our community, including covert influence operations, collaborating with fact-checkers to assess content accuracy, and labelling unverifiable claims.

  • Empowering users: Providing access to reliable information through Election Centres, enabling users to separate fact from fiction.

  • Collaborating with experts: Partnering with electoral commissions and fact-checking organisations to counter emerging threats.

Through these efforts, in 2025, we disrupted more than 75 covert influence networks, and removed tens of thousands of accounts for violating our covert influence policies. We stay accountable to our community with regular updates on how we protect election integrity and frequent reports on the covert influence operations we have disrupted.

Looking ahead, we remain committed to strengthening these pillars and to evolving our safeguards as risks change.

Project Clover has been presented as a key pillar of TikTok’s European data-governance strategy, including a long-term investment of around EUR 12 billion, yet it remains relatively unknown to the public. How does this initiative concretely change how TikTok handles European user data, and what relevance does it have for election integrity and democratic safeguards in the EU?

Project Clover is one of the most advanced and comprehensive data protection programmes to be found anywhere. Its core tenets include storing European user data in a dedicated European enclave by default and putting additional safeguards and restrictions around that data, building on our existing controls on who can access data.

We've also engaged a respected European cybersecurity firm, NCC Group, to independently monitor and verify these safeguards. NCC Group's oversight provides third-party accountability over our work to protect European user data. We've also deployed tools to further protect European user privacy called 'privacy enhancing technologies'.

These measures go beyond regulatory requirements, while remaining aligned with GDPR principles and with our broader efforts to safeguard our platform and users through robust processes, policies and procedures.

Francesca Scapolo oversees TikTok’s Europe-wide election integrity public policy efforts, coordinating among product, trust and safety, and policy teams. Collaborating with internal and external stakeholders, she implements regional public policy strategies that reinforce civic trust and safeguard electoral integrity. Before joining TikTok, she worked at the Meta Oversight Board and the European Commission. 

The EESC is calling for a larger EU budget than proposed in the Commission’s draft 2028-2034 multiannual financial framework (MFF), which totals EUR 1.816 trillion.

The EESC discussed the draft during its plenary session in December as part of the preparations for an opinion that is due in January 2026 and builds on the EESC’s April 2025 mid-term revision assessment. The debate saw the participation of Commissioner for Budget, Anti-Fraud and Public Administration Piotr Serafin, MEP Carla Tavares, and Enrico Giovannini, former Italian minister in the Draghi government and Scientific Director of the Italian Alliance for Sustainable Development (ASviS).

‘Our Union can only remain resilient if those closest to the grassroots level – regional and local actors, social partners and organised civil society – remain fully involved in shaping where and how funds are spent,’ EESC President Séamus Boland said.

During the debate, EESC members warned that merging cohesion, agricultural and fisheries funding into new national and regional partnership plans (NRPPs) could risk centralising fund management. They also highlighted the need to avoid repeating the consultation shortcomings seen with the recovery and resilience plans. Concerns were raised about linking NRPPs to European Semester priorities, which could impose undue macroeconomic conditionality.

The Committee supported using revenue from the emissions trading system and the carbon border adjustment mechanism, but opposed a new corporate levy, recommending a digital services tax instead. It called for increased funding for the European Social Fund Plus, the Just Transition Fund, Horizon Europe and the Connecting Europe Facility. The new AgoraEU programme was welcomed as a boost for culture, media pluralism, democratic participation and civil society.

Clearer targets, transparency and greater local involvement would bolster democratic governance and improve the MFF proposal.

The first EU Water Resilience Forum, co‑organised by the EESC, the Committee of the Regions and the European Commission, gathered policymakers and stakeholders to chart solutions for Europe’s growing water challenges. 

Commissioner Jessika Roswall warned that ‘water is no longer an infinite resource’ and called for urgent collective action, while Executive Vice‑President Teresa Ribera underlined that ‘water connects everything we care about… water is life, a shared responsibility.’ The Forum also launched the new Water Resilience Stakeholder Platform, designed to turn shared ideas into coordinated implementation.

Water resilience at the heart of EU priorities

For the EESC, the Forum reinforced momentum behind its EU Blue Deal, which has helped push water security up the EU’s political agenda and inspired the creation of a dedicated Commissioner portfolio. The updated Blue Deal Declaration now includes 31 specific actions, including an EU Water Test to assess the impact of new legislation on water resources and pollution. EESC President Séamus Boland stressed the social dimension of water: ‘Fair access to water is a matter of justice… Europe’s water future is ultimately about protecting people, livelihoods and future generations.’

Local action, shared responsibility

Cities and regions play a pivotal role. The President of the European Committee of the Regions, Kata Tüttő, reminded participants that ‘water is everywhere in our lives… and we feel the anxiety of water every day.’ She stressed that cross‑border collaboration is essential, noting how pollution in one city affects communities far downstream. Forum participants exchanged concrete solutions on restoring the water cycle, improving water efficiency, deploying digital tools and ensuring equitable access, especially for vulnerable groups.

From commitment to action

The Forum concluded with a shared determination to translate political ambition into practical measures and investments in order to achieve water resilience by 2050. With the launch of the Water Resilience Stakeholder Platform, the EESC reaffirmed its readiness to help connect policymakers with workers, businesses, farmers and communities. ‘This platform is a chance to turn ideas into practical, people‑centred solutions and ensure that no one is left behind’, the EESC President concluded. (gb)

At its December 2025 plenary, the EESC adopted an own‑initiative opinion urging the EU to formally recognise permanent materials – steel, aluminium and glass – as key to a truly circular economy. 

These materials retain their properties through endless recycling, delivering major climate and resource savings: recycling aluminium cuts energy use by 95% and reduces emissions from 15.1 tonnes of CO₂ per tonne of primary aluminium to just 0.52 tonnes. Rapporteur Andrea Mone highlighted the social dimension of the transition, stating ‘We need access to up‑skilling and re‑skilling to facilitate smooth job transitions and enable workers to benefit from the circular economy.’ Co‑rapporteur Michal Pintér called for stronger policy action, saying ‘We need concrete legislation to move from slogans to practical and viable models.’
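Taken at face value, the two CO₂ figures quoted above imply an emissions cut of roughly 96.6% per tonne of aluminium, slightly larger than the 95% energy saving. A quick back-of-the-envelope check, using only the values given in the text:

```python
# Worked check of the aluminium recycling figures quoted above.
# The two CO2 intensities come directly from the article text.
primary_t_co2 = 15.1    # tonnes of CO2 per tonne of primary aluminium
recycled_t_co2 = 0.52   # tonnes of CO2 per tonne of recycled aluminium

# Relative emissions reduction from switching to recycled aluminium
emissions_cut = 1 - recycled_t_co2 / primary_t_co2
print(f"Emissions reduction: {emissions_cut:.1%}")  # roughly 96.6%
```

Note that the 95% figure in the text refers to energy use, which is why it differs slightly from the emissions reduction computed here.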

Why permanent materials matter

Permanent materials allow circular, closed‑loop recycling without quality loss, unlike materials that degrade with each cycle. High recycling rates already show their potential: tinplate packaging exceeds 80% recycling in several Member States, and every 10% rise in recycled glass content cuts energy use by 3% and CO₂ emissions by 5%. These gains make permanent materials key to meeting EU climate‑neutrality goals while reducing dependence on virgin raw materials.

What must change

The EESC stresses that the EU needs clearer legislation to distinguish permanent from non‑permanent materials and set ambitious recycling and collection targets. Achieving 90% separate collection of packaging waste by 2030, harmonising extended producer responsibility systems, investing in modern recycling infrastructure and improving consumer participation are key priorities. The Committee also emphasises that the circular transition must be socially fair, ensuring access to training, job‑to‑job support and strong social dialogue as new circular business models emerge. (gb)

Anastasia Karagianni from VUB (Vrije Universiteit Brussel) explores how digital technologies increasingly influence how people are judged and treated, from online images to access to jobs and public services. Although these systems are often presented as neutral, they can reinforce existing inequalities and cause real harm to marginalised communities, showing why EU digital regulation must move beyond technical compliance and take people’s lived experiences seriously when addressing algorithmic discrimination.

Algorithmic discrimination refers to automated systems producing outcomes that systematically disadvantage particular groups, not due to technical 'errors' alone but because of how data, design choices, and historical patterns of inequality shape machine decision‑making. These effects are especially pressing where gender, race, class, disability, or other identity axes intersect, undermining equality, privacy, and non‑discrimination.

For example, beauty filters encode normative, often Eurocentric and gendered ideals of attractiveness by algorithmically 'correcting' faces toward lighter skin tones or feminised features, disproportionately affecting women and people of colour and reinforcing existing hierarchies of social value. Similarly, smart wearable technologies, such as Ray-Ban Meta AI glasses, raise concerns about surveillance, privacy, and image-based sexual abuse, as biased vision and speech systems can misidentify marginalised groups and expose bystanders to recording without their consent, reinforcing existing power imbalances in public spaces.

In the EU, where digital systems increasingly determine access to public services, employment opportunities, and social support, addressing these harms is central to protecting fundamental rights and democratic accountability.

EU frameworks such as the General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act (AI Act) represent important steps towards a rights‑based approach to data and automated systems. The GDPR’s emphasis on transparency, human oversight, and mechanisms for individuals to contest automated decisions gives civil society tools to challenge discriminatory practices and to demand accountability from both private and public actors. The AI Act adopts a risk-based approach to regulating AI, with explicit obligations for high-risk systems (AI applications considered likely to significantly affect people’s rights, safety, or access to essential services, such as healthcare or employment). This creates avenues for oversight and structured scrutiny of technologies that could produce harmful outcomes.

Civil society organisations have played a key role in bringing these frameworks to life. Forums such as the CPDP (Computers, Privacy and Data Protection Conference), Privacy Camp, and FARI engage activists, researchers, and policy-makers in evaluating algorithmic systems and shaping best practices. Successes achieved by European Digital Rights (EDRi) and the Digital Freedom Fund (DFF) demonstrate how sustained civil engagement can improve transparency obligations, strengthen enforcement, and widen public understanding of digital harms. These initiatives show that EU regulation can empower civil society, fostering participatory approaches to regulation rather than leaving oversight solely to state or corporate actors.

Despite these positive developments, significant gaps remain that limit the capacity of EU regulation to address structural discrimination and algorithmic harm in a comprehensive way. At the heart of this critique is the nature of the AI Act’s risk classification system. The Act’s reliance on a top‑down model, where regulators pre‑define categories of high‑risk systems, leaves little space for bottom‑up identification of emerging harms discovered through lived experience or civil society monitoring. Once systems are deployed, there are limited mechanisms for affected communities to trigger risk reassessments or demand remediation outside predefined categories.

The Digital Omnibus Proposal illustrates another worrying trend. By allowing providers of AI systems to self‑register and determine whether their technology qualifies as high‑risk, the proposal risks delegating critical regulatory judgments to the very actors whose commercial interests may conflict with public safety and rights protection.

Even where bias-mitigation obligations (efforts designed to reduce discrimination in AI systems) exist, they often require the processing of sensitive data. Yet gender and LGBTQIA+ characteristics, such as non-binary, transgender, or intersex identities, are frequently not recognised as protected categories and therefore remain insufficiently safeguarded. This creates blind spots in understanding how AI systems can reinforce overlapping forms of discrimination.

These gaps become most apparent with emerging harms, such as sexualised deepfakes. While such technologies could plausibly fall under Article 5’s prohibited practices, the regulatory text leaves ambiguity around classification and enforcement. In the absence of clear obligations on platforms to prevent or remediate image‑based abuse and deepfake dissemination, victims may find limited legal recourse, despite substantive harms to privacy, dignity, and safety.

Another limitation lies in standardisation obligations, which apply only to high‑risk AI systems. This leaves vast swathes of widely deployed technologies, including generative AI and content moderation applications, without systematic safety, fairness, and discrimination safeguards. For civil society, this means that many discriminatory or harmful systems may never be subject to robust conformity assessments or accountability pathways.

Finally, the way EU law handles intersectionality (the idea that people can face overlapping forms of discrimination) shows that current regulations do not always reflect people’s lived realities. While the Directive on Combating Violence Against Women and Domestic Violence (GBV Directive) introduces the concept of 'intersectional discrimination', its practical scope within the Directive’s text remains limited, and EU equality policy more broadly does not fully account for the concerns of LGBTQIA+ communities. Academic analysis of the AI Act shows that references to 'gender equality' are sparse, and inclusive terminology for diverse gender identities is largely missing. As a result, the regulatory framework remains rooted in binary understandings of gender.

These critiques point to a broader issue: simply following procedural safeguards is not enough to tackle algorithmic discrimination in society. What is needed are approaches that start from people's experiences and identify harms early, assessments that consider how different forms of discrimination overlap, and participatory oversight that meaningfully includes civil society in decision-making. Tools such as gender‑responsive impact assessments and community‑driven evaluation frameworks (which involve testing systems for bias and listening to affected users) can help make sure that regulation actually protects those most vulnerable to algorithmic harms. Without such mechanisms, EU digital regulation risks enshrining a 'neutral' approach that obscures the inequalities people face in everyday life, instead of confronting them.

Anastasia Karagianni is a doctoral researcher at the Law, Science, Technology and Society (LSTS) research group of the Law and Criminology Faculty at Vrije Universiteit Brussel (VUB) and a former FARI scholar. Her thesis focuses on the 'Divergencies of Gender Discrimination in the EU AI Act Through Feminist Epistemologies and Epistemic Controversies'. She has been a visiting researcher at the iCourts research team of the University of Copenhagen and the Joint Research Centre of the European Commission in Seville, as well as a visiting lecturer at the ITACA Institute of the Universitat Politècnica de València (UPV).