Posts with non-consensual sexualised images are mushrooming across the internet, targeting women in more than 90% of cases. Although EU law clearly defines the sharing of such images as violence, the implementation of the AI Act and other rules is stalling amid a broader deregulatory mood in Europe, Oliver Marsh of digital rights watchdog AlgorithmWatch tells EESC Info.


# The rise of generative AI has made it easier than ever to create non-consensual sexualised images. How significant is this problem today in Europe, and who tends to be most affected?

It is hard to estimate the exact scale of the problem, but it is clearly significant. For example, research projects have found thousands of ads on Meta platforms (Indicator Media) and tens of thousands of sexualising posts produced by the Grok chatbot (AI Forensics) ─ potentially even millions (CCDH) ─ appearing in a matter of months. And this is based only on publicly accessible material ─ we do not know how much is being shared privately, for example schoolboys sharing modified images of their classmates (AlgorithmWatch). As for who is most affected, different research projects find that the subjects are usually women, in 80% or even more than 90% of cases (AI Forensics, French Foreign Ministry).


# These images are often described as a new form of digital gender-based violence. What are the real-world consequences for victims, particularly women and girls?

The EU has explicitly included the non-consensual sharing of intimate images as a form of violence in its Directive of 14 May 2024 on combating violence against women and domestic violence. This matters for using the Digital Services Act to address the problem (see below), as the rules there are explicitly supposed to address gender-based violence as an example of systemic risk ─ and the EU is clear that non-consensual sharing of intimate images is violence under its terms.

I may not have the expertise to add much beyond what is already available online about how this affects victims, but one commonly noted point is that the fact that the images aren't real does not make them any less traumatic. Our Journalism Fellow Ana Ornelas has worked on this topic, I believe, including in discussions such as this one for Media Diversity. We should also consider how this is another threat against women whose activities require a public profile, such as running for elected office ─ though people are subjected to this even without a public profile, for example by people they know, as mentioned in the AlgorithmWatch research above.


# From your research, what role do online platforms and algorithmic systems play in amplifying or enabling this type of abuse?

A lot of these images, and the apps or websites that create them, are shared via platforms such as Discord and Telegram (Wired, Graphika). There are also Reddit forums in which people share tips, such as how to 'jailbreak' popular AI tools to get them to make sexualised images (Guardian). Very large platforms such as X, Meta, and app stores can spread these to very wide audiences, including via ads. Such platforms can and should use moderation ─ both algorithmic and human ─ to find and remove the accounts doing this (Indicator). Some platforms do this better than others. From our research, we see that X, for example, does not even remove some clear and easy-to-find examples of accounts that help people make non-consensual nude images. Grok is an extreme example of how bad the problem is on X, but the problem goes well beyond Grok (AlgorithmWatch).


# The EU has recently adopted major digital regulations such as the Digital Services Act and the AI Act. In your view, will these frameworks be sufficient to address this issue, or do important gaps remain?

In theory, they provide a series of tools to (i) make companies conduct risk assessments and (ii) provide data and reporting options that let external parties identify when platforms fail to mitigate risks properly. The DSA could help ensure that large platforms and search engines take measures to mitigate the spread of such images (and, where the imagery is illegal, other platforms too). The AI Act could potentially help address the creation and deployment of the tools themselves. In practice, however, enforcement is lagging. So far, the EU Commission's response to the Grok case earlier this year has been to say it is 'looking very seriously into this matter' and to announce further investigations. This is completely insufficient for such a serious failure by X. Moreover, together with others, we had been highlighting the issue of non-consensual nudity on X and other platforms for months before the Grok case. Implementation of the AI Act is also being snarled up in a general deregulatory mood in Europe, exemplified in the debates around the Digital Omnibus.


# What role can civil society organisations play in protecting victims and pushing for stronger accountability from platforms and technology developers?

Despite how serious this problem is, there are still forces slowing regulation. Companies argue that forcing them to put safeguards in place holds back the development of their technologies. Many politicians and administrations in Europe are swayed by these arguments and worry that Europe will 'fall behind' in AI if it tries to regulate too much. Civil society can be a counterweight to these arguments ─ albeit a far less well-resourced one than the technology companies ─ and can speak up against the harms when companies are not held accountable, ensuring that politicians and regulators actually stand by their statements that non-consensual sexualisation is a horror that must be strongly addressed.

Dr Oliver Marsh is head of Tech Research at AlgorithmWatch, where he leads research work and partnerships on policy areas including the Digital Services Act and the AI Act. He previously worked on platform and data governance as an official in Downing Street in the UK, and as an analyst of online harms for CASM Technology, The Institute for Strategic Dialogue and Demos. 

AlgorithmWatch is a Berlin- and Zurich-based NGO whose mission is to ensure that algorithms and AI strengthen justice, human rights, democracy and sustainability instead of authoritarianism.