
By the EESC Workers’ Group

The outlook for digital rights in the European Union had, until a few years ago, given grounds for optimism. Moving away from the wild west of data harvesting, the Digital Services Act and the Digital Markets Act, along with further regulation on AI and data protection, set world-leading standards for a ‘human-centred approach’ to technological development, despite all their shortcomings, particularly in enforcement.

However, regulation soon became the source of all the Union’s woes, real or imagined. A defective and biased reading of the Letta and Draghi reports on the one hand, and a generous dose of magical thinking on the other, framed Europe’s productivity gap and its lack of an adequate number of unicorn start-ups as the result of overregulation. Never mind the fact that in related fields, such as AI, the relevant regulation was not even in force at the time.

Now, in the hope that this will somehow magically spark a world-leading, energy-consuming and water-guzzling word-salad generator with some statistical accuracy (namely, large AI models that leave an enormous environmental footprint and use vast amounts of water to cool data centres), the Commission has put forward two omnibus proposals that undermine the foundations of personal data protection (the GDPR and ePrivacy) by enabling broader data use for AI training and dismantling protections and safeguards in the AI Act.

Given that the emergence of tech unicorns appears, at the very least, uncorrelated with the relevant regulation, and setting aside strong ideological assumptions about the supposed evils of consumer protection, civil society must reflect on the dangers of the digital fitness check before we become a data farm for US companies. Copilot, which invasively suggests summaries of this text, seems to agree.

Designers and developers of digital interfaces should be more aware of people’s varying information needs. Bart Simons of the European Blind Union (EBU) urges them to step into the shoes of people with disabilities ─ even if only for a moment ─ to make the digital revolution benefit us all.

Access to information is essential. Blind and partially sighted people have long been pioneers in developing technical tools to consult printed information, as this is crucial for an independent life. We were among the first to use scanners to read text from paper, and even ten years ago we already had AI tools on our smartphones to describe our surroundings.

We are generally grateful that information and many processes are becoming available digitally. However, digitalisation must be implemented in a smart and inclusive way. There is great potential for designing and developing websites, apps, banking services, books and shopping platforms that are accessible to users with widely diverse needs. Legislation such as the European Accessibility Act is in place, standards have been developed and smartphones and computers can be personalised and equipped with assistive technologies so that everyone can use them.

However, we need more awareness among designers and developers of digital interfaces. They need to be trained so that the potential of digitalisation to meet our information-access needs becomes a reality.

We also want to check the amount we are paying before pressing the OK button on the payment terminal. We cannot drive cars yet, so we rely on home delivery of groceries, but that only works if the shop's website can be used without a mouse. We want to read books released yesterday and find accurate information on the internet, but that requires sufficient colour contrast. In addition, information needs to be provided in text rather than just images.
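"Sufficient colour contrast" is not a matter of taste: WCAG 2.x defines it as a ratio computed from the relative luminance of the foreground and background colours, with 4.5:1 the minimum for body text at the AA level. A minimal Python sketch of that computation (the function names and example colours here are illustrative, not taken from the article):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB colour given as 0-255 channels."""
    def linearise(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio (L1 + 0.05) / (L2 + 0.05), with L1 the lighter colour."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible contrast, exactly 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Mid-grey text on white falls below the 4.5:1 AA threshold for body text.
print(contrast_ratio((150, 150, 150), (255, 255, 255)))
```

Because the ratio is symmetric in foreground and background, the same check covers light-on-dark and dark-on-light designs alike.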

Everyone involved in creating products and services with a digital interface can help unlock this potential by looking at things from different perspectives: how do I find the double espresso button on a touch screen when I close my eyes or forget my glasses? Can users order food from this kiosk when they are short, tall or seated? On an e-learning platform, can users answer questions without using a mouse? Is there an alternative to drag and drop? Are exercises designed not to rely solely on colour codes, image recognition or other sensory characteristics?

When products and services are designed and developed inclusively, more customers are reached, and those customers will feel more independent and recommend them to others. Let us make the digital revolution a reality for all.

Bart Simons is an accessibility expert and the representative of the European Blind Union (EBU) at the European Consumer Voice in Standardisation (ANEC).

Presentation by
Constantinos MASONOS - Co-chair of the MFF Ad Hoc Working Party
Organisation
Cyprus Presidency of the Council
  • Presentation of Cyprus Presidency of the Council of the EU - Priorities relating to the Multiannual Financial Framework
Presentation by
Catherine LION / Catherine PAJARES Y SANCHEZ
Organisation
Economic, Social and Environmental Council of France
  • Public debate on Latest twists and turns on the road to the next Multiannual Financial Framework

The European Economic and Social Committee (EESC) has called for urgent action to strengthen labour rights for journalists and media professionals across Europe, emphasising that decent working conditions are vital to protect the independence of journalism and ensure the general public has access to reliable, pluralist information. 

In an opinion based on extensive research and stakeholder input, adopted at its December plenary, the EESC recommended improving working conditions, supporting media pluralism, and protecting journalists from economic and physical threats. The opinion has since been welcomed by the European Federation of Journalists (EFJ).

'Today, the working environment for journalists is increasingly hostile: lies and rumours - as well as job insecurity and poor working conditions for information workers - undermine not only the quality of information but freedom itself', said rapporteur José Antonio Moreno Díaz in a video message.

In the same message, co-rapporteur Christian Moos emphasised that 'Europe is at a crossroads: either we take decisive action to protect journalists, or we risk weakening one of the pillars of our democracy'.

The EESC called for full application of the European Media Freedom Act (EMFA) and urged the European Commission to ensure that Member States complied with it. Independent support for media outlets, including VAT reductions, was needed to counteract the dominance of large online platforms and sustain the European media sector.

The Committee stressed the importance of social dialogue and collective bargaining for all journalists, including freelancers, and called on governments to implement minimum wage directives and guidelines for collective agreements. It also called for action against bogus self-employment and full application of EU occupational safety and health directives, alongside increased funding for quality jobs in the media sector.

Journalists face insecurity, stress, burnout and harassment, with freelancers particularly vulnerable due to declining collective agreements and inadequate social protection. The EESC called for deeper engagement with journalists’ organisations to build structures that safeguard safety and well-being, and supported the adoption of a directive on psychosocial risks in the workplace.

EU AI legislation should be monitored to balance innovation with protection for journalists, and AI literacy should be encouraged, the EESC said, highlighting the threat of disinformation and challenges to work-life balance. The Committee expressed concern about media ownership concentration and the vulnerability of public service media, calling for strict enforcement of the EMFA and sustainable support for independent journalism initiatives. (lm)

In an online world where generative AI can fabricate a headline, an image and a source in seconds, 'breaking news' may soon give way to 'fact-checked news'. At a time when lies travel faster than facts, fact-checking is quickly becoming one of journalism's most powerful tools. The European Digital Media Observatory (EDMO) tracks Europe's most persistent false narratives through its monthly disinformation briefs. We spoke with EDMO coordinator Tommaso Canetta about how fact-checking is evolving — and what it takes to push back against disinformation in the age of AI.

 

Could you tell us a little bit more about the monthly briefs of the EDMO fact-checking network? How do you collect information and decide what to include in the briefs? Who are your fact-checkers?

Every month, we send a questionnaire to the fact-checking organisations that are members of the EDMO fact-checking network (55 organisations covering all EU Member States plus Norway). The questionnaire includes both quantitative and qualitative questions about the disinformation detected during the previous month. We then analyse all the responses and include the most relevant information emerging from this analysis in the briefs.

 

Your October brief stated that AI-generated disinformation hit a new record amid the crumbling of information integrity. What is an 'AI slop' and how is it used to produce fake news or political discreditation? Can you give us some recent and blatant examples?

'AI slop' can be defined as low- to mid-quality content created using AI tools. The deluge of AI-generated content circulating on social media platforms during crises, before, during or after elections, and more generally around sensitive topics, can significantly distort public perception.

Recent examples include the many false videos and images allegedly showing Venezuelans celebrating in the streets following the abduction of Maduro by the United States. Another example is the circulation, in November, of AI-generated videos depicting Ukrainian soldiers surrendering. In the political sphere more broadly, deepfakes of politicians saying things they never said are increasingly being created and disseminated to discredit them (for example, this one in Hungary).

 

Are there some recurrent topics or issues where disinformation and false narratives have thrived lately? Could you name a few based on your research for the briefs?

The war in Ukraine, migration, climate change, the EU, the Israel–Hamas war in Gaza and its consequences, pandemics and vaccines, and LGBTQ+ communities have all been recurring targets of disinformation narratives and campaigns in recent months (as reflected in the briefs monitoring these topics).

Moreover, national politics are frequently targeted by disinformation, although the specific dynamics naturally vary from country to country. In addition, virtually all newsworthy crises tend to become disinformation targets, at least for as long as traditional media coverage gives prominence to them (e.g. Hurricane Melissa, the theft at the Louvre, Charlie Kirk's death, the 12-day Israel–Iran war, the presidential elections in Romania, etc.).

 

In a recent report by the Reuters Institute for the Study of Journalism, experts forecast that verification will take centre stage in the years to come, with 'breaking verification' replacing 'breaking news'. What is your take on the evolution of fact-checking journalism and its importance in the future?

My view is that its importance will only continue to grow. We are rapidly moving toward a situation in which the main source of information for entire generations - social media - is being flooded with unreliable content, while many users are increasingly unable to distinguish what is real from what is AI-generated.

Disinformation, FIMI (foreign information manipulation and interference), scams, non-consensual AI-generated pornography (including of minors), and other illegal or harmful content and operations will thrive. These are fuelled by platforms' algorithms and business models, by unscrupulous actors exploiting the system for profit, and by adversarial and extremist forces (domestic or foreign) that benefit from polarisation and the societal crises of European states.

If democracies want to survive, they will need to address this issue decisively, and fact-checking is a fundamental tool in this effort. Even if it becomes impossible to verify all false content in the future, there should at least be a strong effort to verify what is true. Traditional sources of information could even benefit from such a shift.

 

Can people be taught how to detect disinformation? How can we spot a fake when we read, see or hear one? Will this even be possible given the rapid rise of AI technology, or will we again need AI to detect fakes created by AI?

A great deal can be taught. Awareness of disinformation and its main characteristics is a powerful first line of defence, and media literacy is of the utmost importance. However, education alone is not sufficient. We certainly need tools, including AI-powered ones, but currently these tools are not always reliable. Their development requires effort and investment, as bad actors are usually one step ahead. Moreover, beyond the identification of disinformation narratives, it is vital to detect and track their dissemination dynamics, including the actors, the targets and cross-platform distribution. For this type of analysis, improved AI tools can provide valuable insights for timely responses. We also need stronger regulation of the digital space and of AI. EU initiatives such as the Digital Services Act and the AI Act are a good starting point, but much more is needed, notably in terms of enforcement.

In addition, we need a strong traditional media sector capable of providing reliable, high-quality information, and more fact-checking at all levels. Above all, however, democratic governments must step up politically, boldly addressing these challenges and ensuring that the public is properly informed.

 

Where can people read your briefs?

You can find all of our briefs published here.

 

Tommaso Canetta is a journalist and a fact-checker, deputy director of Pagella Politica/Facta news, coordinator of the fact-checking activities of EDMO and Italian Digital Media Observatory (IDMO), and member of the Governance Body of the European Fact-Checking Standards Network (EFCSN) as well as of the Taskforce of the Code of Practice on Disinformation.

EDMO is an EU-funded network that brings together researchers, fact-checkers and media literacy experts to detect, analyse and counter disinformation across Europe. Its fact-checking network is made up of 15 hubs across the EU and the EEA.

Europe’s hospitals faced nearly 300 cybersecurity incidents in 2024, making healthcare the most targeted essential sector. Widely attributed to Russian-linked groups, major incidents cost around EUR 300 000 each — but the damage goes far beyond financial losses. The European Commission’s 2025 Action Plan on the Cybersecurity of Hospitals and Healthcare Providers is a critical step towards protecting EU healthcare from hybrid threats. Samuel Goodger and Elizabeth Kuiper of the European Policy Centre outline the priorities for ensuring the plan’s successful implementation.

The growing number of cyberattacks against the EU’s health infrastructure forms part of a broader campaign of hybrid warfare, chiefly led by Russia, intended to intimidate, destabilise and test European resolve. As digital health and artificial intelligence reshape healthcare provision, the cyberattack surface expands in tandem. Since 2023, pro-Russia hacker groups – such as Killnet and Anonymous Sudan – have launched coordinated attacks on hospitals and health authorities in Denmark, the Netherlands, Spain and Sweden. In 2024 alone, at least 289 cybersecurity incidents affected EU healthcare providers – more than in any other essential sector.

[Graph: reported cybersecurity incidents in critical sectors]

The cost of inaction is staggering. Major incidents cost an average of EUR 300 000 each, meaning the cumulative burden on health systems may reach billions.

Disinformation, for instance shared on social media, can also multiply attacks’ impact. When hospitals are targeted, false claims about patient data breaches can amplify public anxiety, erode trust in healthcare institutions and compound the already-concerning effects of low health literacy.

Why healthcare?

Several factors make health systems attractive targets. Personal health records enable identity theft or extortion. Fragmented IT environments – legacy systems alongside modern infrastructure – contribute to vulnerabilities. Supply-chain dependencies create additional entry points, as one system’s breach can cascade into others.

Cybersecurity preparedness in healthcare varies dramatically across the EU. While some Member States have mature ecosystems – such as the Dutch Z-CERT, which provides sector-specific threat intelligence and incident response – others lack health-specific expertise. This fragmentation creates vulnerabilities that hostile actors can exploit. Limited cross-border threat-intelligence sharing allows attackers to reuse the same vulnerabilities across countries.

Workforce shortages also exacerbate such gaps: in 2024, the EU lacked an estimated 300 000 cybersecurity professionals. The problem is particularly acute in healthcare, where roughly two-thirds of cybersecurity roles are filled by non-specialist IT professionals.

AI - A pivotal opportunity

In these circumstances, the Commission’s January 2025 Action Plan on the Cybersecurity of Hospitals and Healthcare Providers is a critical step forward. Building on substantial existing legislation – such as NIS2, GDPR and the European Health Data Space Regulation – the Plan charts a path to protect EU health systems through four pillars: Prevent, Detect, Respond and Recover, and Deter.

Today’s AI-based tools offer significant defensive potential: continuous surveillance, subtle compromise detection, alert prioritisation and automated early threat containment. However, adversaries also benefit from such evolutions – for instance by manipulating AI models with adversarial inputs or data poisoning. Ensuring system integrity therefore requires continuous monitoring and secure development pipelines. Validation by human analysts remains crucial for accountability.
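The alert prioritisation mentioned above can, at its simplest, mean flagging activity that deviates sharply from a behavioural baseline. A deliberately simplified Python sketch of that idea – the hostnames, event counts and threshold below are hypothetical, and real hospital security operations use far richer features than a single counter:

```python
import statistics

def anomaly_score(history, current):
    """Z-score of the current event count against a historical baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against a zero-variance baseline
    return (current - mean) / stdev

def prioritise(alerts, history, threshold=3.0):
    """Keep only alerts whose event volume deviates strongly from the baseline."""
    return [a for a in alerts if anomaly_score(history, a["events"]) >= threshold]

# Hypothetical baseline: failed logins per hour during a normal week.
baseline = [40, 42, 38, 41, 39, 40, 43]
alerts = [{"host": "pacs-01", "events": 41},   # within normal range
          {"host": "hr-db", "events": 420}]    # sudden tenfold spike
print(prioritise(alerts, baseline))  # only the hr-db spike survives triage
```

Even this toy version shows why human validation remains crucial: a statistical outlier is a prompt for an analyst to investigate, not proof of compromise.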

AI also significantly strengthens disinformation actors. By analysing stolen data, attackers can generate phishing emails tailored to individuals’ specific roles. During incidents, coordinated disinformation can erode public confidence precisely when trust is most fragile.

Recommendations

As the Commission moves to implement the action plan, we identify six priorities:

First, leverage AI for threat detection and response. Health systems should pilot specialised AI for automated vulnerability management and behavioural analysis. Closed but explainable AI systems are preferable, to reduce data leakage risks.

Second, enhance cross-border threat intelligence. The Commission must establish vulnerability watch systems contextualised in clinical workflows. International cooperation should be strengthened through the International Counter Ransomware Initiative and G7.

Third, strengthen joint procurement for supply-chain security. Establishing common procurement mechanisms at EU level would aggregate demand and facilitate oversight of secure-by-design requirements.

Fourth, address the workforce capacity crisis. Healthcare workers themselves are both the first line of defence and a key vulnerability. Cyber hygiene training must include counter-disinformation skills and recognition of AI-enhanced social engineering attacks.

Fifth, target disinformation risks. The Commission should develop healthcare-specific AI literacy initiatives explaining decision-making processes and privacy implications. Citizens must be empowered to distinguish genuine communications from manipulated content.

Sixth, ensure adequate funding is available. In addition to redirecting existing resources, public investments should qualify under the Stability and Growth Pact’s escape clause. The EU should explore creating a dedicated EUR 10 billion Resilience Fund for sectors most exposed to cyber threats.

Ensuring the cyber resilience of EU health systems requires a shift towards a collaborative, proactive approach. This means moving beyond fragmentation and favouring integrated, innovative collective action. By capitalising on AI, deepening cross-border cooperation, investing in workforce development and empowering patients, the EU can transform the healthcare sector from a vulnerable target to resilient infrastructure.

Samuel Goodger is Policy Analyst, and Elizabeth Kuiper is Associate Director at the European Policy Centre. This article draws on their November 2025 Policy Brief 'From ransomware to statecraft: Protecting EU healthcare in the new threat landscape'.

The European Policy Centre (EPC) is an independent, not-for-profit think tank dedicated to fostering European integration through analysis and debate, supporting and challenging decision makers at all levels to make informed decisions based on evidence and analysis, and providing a platform to engage partners, stakeholders and individuals in EU policy making and in the debate about the future of Europe.

Location
Charlemagne Building, Brussels
-
Location
European Parliament, Brussels