EESC demands ethical, inclusive AI and Big Data in rare disease care

At its September 2025 plenary session, the European Economic and Social Committee (EESC) adopted a pioneering opinion on the use of Artificial Intelligence (AI) and Big Data in rare disease diagnosis and treatment. It sets out a comprehensive vision for harnessing digital innovation to improve the lives of rare disease patients while safeguarding rights, equity, and transparency.

In this opinion, requested by the Danish Presidency of the Council of the EU, the EESC welcomes the transformative potential of AI and Big Data to optimise patient pathways, accelerate diagnosis, and support the development of personalised medicines for rare diseases. With over 7,000 rare diseases affecting 300 million people globally (including 30 million in the EU), AI-driven tools can reduce diagnostic errors, shorten diagnostic journeys, and enable innovative treatments. However, the Committee stresses that these advances must be balanced with robust ethical and legal safeguards.

Universal data standards and patient protection

The Committee calls for all EU Member States to digitise health data and adopt high-quality registration standards, including the use of ORPHA codes, to facilitate cross-border research and the optimal functioning of the European Health Data Space (EHDS). AI healthcare models should only access anonymised and encrypted patient data, with strict sanctions for misuse. Clear patient consent frameworks and independent monitoring bodies are essential to ensure transparency and protect patient rights.

Recognising persistent gender and ethnic disparities in rare disease diagnosis and treatment, the EESC urges the EU AI Office to promote gender-diverse training data, bias audits, and pre-market gender testing for medical AI. Targeted funding and mentorship programmes should increase female participation in AI-driven healthcare, while research clusters should focus on conditions disproportionately affecting women and underrepresented groups.

Empowering patients and ensuring data sovereignty

Patients must have control over their health data, including the right to withdraw consent and understand how their data is used. The EESC highlights models such as patient-led registries and data cooperatives as alternatives to corporate or state monopolies. Consent should be ongoing, especially before any sale of AI-processed health data to third parties.

To prevent market dominance by large corporations, the EU should dedicate funding to startups and SMEs developing AI for rare disease diagnostics. Public-private partnerships and fair data access are vital to ensure that AI-driven health innovations remain affordable and accessible to all.

The EESC demands EU-wide ethical guidelines for AI in healthcare, ensuring equal access and patient safety. Physician oversight must remain central to medical decision-making, with AI serving as a support tool rather than a replacement. Professional training and AI literacy campaigns should empower healthcare workers and patients to use AI responsibly.

Addressing key challenges: bias, privacy, and governance

The opinion identifies serious risks, including algorithmic bias, threats to data privacy, affordability barriers, and unequal access. Women and minorities are often underdiagnosed due to biased datasets, and rural-urban divides persist in digital health infrastructure. The Committee insists that AI models must be explainable and auditable, with strong public oversight to prevent data commodification and ensure accountability.

AI and Big Data will reshape healthcare roles, automating some tasks while augmenting others. Tailored training and upskilling programmes are needed, with trade unions and worker representatives involved in AI governance. The EESC continues to monitor the impact of AI on the labour market, advocating for fair collaboration and updated skillsets.