“They’re basically attacking our entire digital existence – if we don’t like it, then we shouldn’t be posting it at all.” – Dr Daisy Dixon, Cardiff University [1]
Image-based sexual abuse (IBSA) is the non-consensual creation and/or sharing of intimate images, encompassing practices such as upskirting, hidden cameras, sextortion, cyber-flashing, semen images, and sexualised deepfakes. The Revenge Porn Helpline (RPH) – “a UK service supporting adults…experiencing intimate image abuse” [2] – saw a 400% increase in cases of non-consensual “synthetic” (AI-generated) intimate images (NSII) between 2017 and 2024 [3].
While the term “revenge porn” is perhaps more widely known, it carries problematic implications of victim blame and obscures the reality that this is a form of abuse, often perpetrated by complete strangers. Such violations also introduce differential vulnerabilities: even in similar circumstances, different populations face different types and degrees of security, privacy, and safety risks, with serious consequences for their lives.
Of the cases reported to the RPH, 72% of victims were women. Among these, 44% reported that the perpetrator was “a known male”, while 53% reported that the perpetrators were “completely unknown” [3]. Research consistently shows that women are overwhelmingly the targets of NSII. On the notorious deepfake video sharing site ‘Mr Deepfakes’, 95.3% of all targeted individuals were women, and videos depicting them made up 91% of all videos on the platform [4]. As early as 2019, a report by Deeptrace/Sensity AI found that 96% of the deepfake videos it identified were pornographic, and 100% of those depicted women. Indeed, its case study of a computer app called ‘DeepNude’ further illustrates this imbalance and suggests a contributing factor: the model was trained only on images of women, and so was unable to generate comparable nude images of men [5].
More recently, the Grok AI chatbot (built into the social media platform X) enabled the mass creation of non-consensual sexualised images. User requests soared after Elon Musk posted a Grok-edited photo of himself in a bikini, showing how platform leadership and design decisions can enable the normalisation of image abuse. The New York Times [6] estimates that 41% (approximately 1.8 million) of the images generated and posted by Grok in response to user requests over a nine-day period were sexualised images of women. The Center for Countering Digital Hate found that 65% of images in its random sample were sexualised, with 101 showing children – suggesting that over 23,000 sexualised images of children may have been posted on X as a result of user requests to Grok AI.
With all this in mind, the evidence clearly demonstrates that sexualised deepfakes are a deeply gendered harm, and they are therefore understood as a form of technology-facilitated gender-based violence (TF-GBV). Yet a stark imbalance exists not only in who is targeted, but also in how the harms are perceived and responded to. Research shows that men are more likely to find the creation and sharing of synthetic intimate images acceptable [7], and tend to place less responsibility on perpetrators [8]. Men also perceive sexualised deepfakes of themselves as more acceptable than non-male participants do, with some participants (mostly men) responding that a partner creating sexual deepfakes of them would be “flattering” or “a compliment” [7]. Overall, men generally perceive less harm to victims of NSII [8].
These perception gaps may help to contextualise why “four times as many” victims report a negative experience of reporting to the police as report a positive one [3], given the overrepresentation of men in policing: women made up only 27.1% of the Metropolitan Police Service in 2021, for example [9].
By contrast, people of marginalised genders are more likely to judge the creation and sharing of sexual deepfakes as unacceptable, relative to non-sexual deepfakes [7], and women are perceived to experience greater harm from such abuse [8]. This dynamic is shaped by, and reinforces, wider societal structures that shame and suppress female sexuality. In this way, sexual deepfakes are wielded as a disciplinary force to silence women and people of marginalised genders who stand up against image abuse, as happened to Dr Daisy Dixon (quoted above) [1]. The vicarious trauma of watching other women be targeted can act as a chilling mechanism, encouraging self-censorship, withdrawal from public platforms, and reduced trust in and engagement with AI, reinforcing gendered patterns of digital exclusion and unequal technological participation.
A further dimension is reflected in racialised perceptions of harm. Evidence of ‘misogynoir’ (a term coined by Moya Bailey to describe gendered anti-Black racism) emerges in findings [8] that US participants uniquely (compared to UK and Australian participants) judged Black female victims as less harmed by the creation of sexual deepfakes than white or Asian women. This reflects enduring harmful stereotypes, such as the “Strong Black Woman” [8], and highlights the need for an intersectional approach: when harm to Black women is systematically minimised and their suffering normalised, adequate recognition, response and support are less likely to follow.
Racialised patterns of deepfake sexual abuse were also evident in the creation and consumption of content on ‘Mr Deepfakes’, where four of the top ten video categories (by number of videos) were explicitly racial (Asian, Korean, Indian/Bollywood, and Interracial) [4], suggesting that race is an important factor shaping activity. Notably, the second most commonly targeted nationality (after American), both in the Sensity AI report [5] and on ‘Mr Deepfakes’ [4], was (South) Korean, with K-pop singers a core target. These trends may reflect not only global popularity, but also a racialised sexualisation and exoticisation within Western platform cultures.
It is also important to consider cultural differences, especially around the meaning of “intimate images”, since narrow legal and societal definitions of intimacy may fail to capture real-world harm, for example using AI to remove a woman’s hijab. The RPH reported that 1% of its cases involved “culturally sensitive content”, and that in 7.5% of cases cultural sensitivity was reported as an additional impact [3]. In certain contexts, the retaliatory threat of honour-based violence may pose a severe and immediate danger to victims. LGBTQ+ individuals may also face additional risks of exposure and retaliation from images that fall outside what is traditionally considered “intimate”.
Another often overlooked group of victims is those whose intimate media are non-consensually used as the ‘body’ onto which another individual’s face is edited, primarily sex workers. A study [10] showed that participants attributed more victim blame to the ‘body victim’ than to the ‘face victim’, and that ‘face victims’ were considered to experience greater harm, especially when the ‘body victim’ was labelled as a sex worker. This reinforces hierarchies of respectability in which some bodies are treated as disposable inputs rather than as victims of abuse.
Finally, recent work [11] has brought to light longstanding poor ethical practice in the non-consensual use and distribution of nude images in datasets for academic research, e.g. nudity detection. Of 150 computer science papers using real nude images, none mentioned the consent or safety of the image subjects, or plans for data deletion, and only two had received institutional review board (IRB) review and approval. Some nude datasets knowingly contained non-consensual images, for example upskirting and hidden-camera images, and one scraped images from subreddits dedicated to sexual violence and borderline child sexual imagery. Serious concerns were also flagged around the 813 example images published in the papers, of which 9 were completely uncensored and 28 were still identifiable.
On 6th February 2026, it will become illegal to create, or request the creation of, non-consensual sexual deepfakes, after provisions passed in the Data (Use and Access) Act 2025 were finally signed into force on 15th January [12]. This follows years of tireless activism to end image abuse by many, from organisations like End Violence Against Women, to survivor-led campaigns like #NotYourPorn and Jodie Campaigns, to Glamour UK Magazine and academics like Clare McGlynn, Professor of Law at Durham University [13].
In a statement [14], Jodie said, “My hope is that this marks a genuine turning point”, but expressed frustration that swift action had been taken only in response to the public outcry against X’s Grok AI in recent weeks. “It should never have taken days of outrage and new victims being created for action to be taken, when this legislation has been sitting ready, with Royal Assent, for months. Survivors and campaigners warned, again and again, that delaying this law would cause real harm. We were right.”
References [Accessed: 26-Jan-2026].
[1] J. Davies, “Quicker action would have stopped Grok AI deepfakes, victim says,” BBC News, Jan. 2026. [Online]. Available: https://www.bbc.co.uk/news/articles/c98p4214577o.
[2] The Revenge Porn Helpline, “Intimate Image Abuse,” 2026. [Online]. Available: https://www.revengepornhelpline.org.uk.
[3] Women and Equalities Committee, “Oral evidence: Tackling non-consensual intimate image abuse, HC 336,” House of Commons, Nov. 6, 2024. [Online]. Available: https://committees.parliament.uk/oralevidence/14982/pdf/.
[4] C. Han, A. Li, D. Kumar, and Z. Durumeric, “Characterizing the MrDeepFakes Sexual Deepfake Marketplace,” in Proceedings of the 34th USENIX Security Symposium, Seattle, WA, USA, Aug. 2025, pp. 5169–5188. Available: https://www.usenix.org/conference/usenixsecurity25/presentation/han.
[5] H. Ajder, G. Patrini, F. Cavalli, and L. Cullen, “The State of Deepfakes: Landscape, Threats, and Impact,” Deeptrace Labs, Amsterdam, Netherlands, Sep. 2019. [Online]. Available: https://regmedia.co.uk/2019/10/08/deepfake_report.pdf.
[6] K. Conger, D. Freedman, and S. A. Thompson, “Musk’s Chatbot Flooded X With Millions of Sexualized Images in Days, New Estimates Show,” The New York Times, Jan. 2026. [Online]. Available: https://www.nytimes.com/2026/01/22/technology/grok-x-ai-elon-musk-deepfakes.html.
[7] N. G. Brigham, M. Wei, T. Kohno, and E. M. Redmiles, “Violation of my body: Perceptions of AI-generated non-consensual (intimate) imagery,” in Proceedings of the Twentieth Symposium on Usable Privacy and Security, Philadelphia, PA, USA, 2024, pp. 373–392. Available: https://www.usenix.org/conference/soups2024/presentation/brigham.
[8] A. A. Eaton, A. J. Scott, A. Flynn, and A. Powell, “Perceptions of sexualized deepfake abuse across three nations: An exploration of how victim gender and race shape attitudes towards deepfake abuse in the United States, the United Kingdom, and Australia,” Computers in Human Behavior, vol. 177, p. 108899, 2026. Available: https://doi.org/10.1016/j.chb.2025.108899.
[9] Metropolitan Police Service, “Workforce diversity in Metropolitan Police Service,” 2021. [Online]. Available: https://www.police.uk/pu/your-area/metropolitan-police-service/performance/workforce-diversity/.
[10] D. Fido, H. Goldfinch, D. Ruddy, and C. A. Harper, “Judgements of Deepfake Sexual Abuse Victims Differ as a Function of Facial Versus Body Likenesses,” SSRN, Apr. 2025. [Online]. Available: https://ssrn.com/abstract=5191739 or http://dx.doi.org/10.2139/ssrn.5191739.
[11] P. Cintaqia, A. Arya, E. M. Redmiles, D. Kumar, A. McDonald, and L. Qin, “Stop the Nonconsensual Use of Nude Images in Research,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, vol. 8, no. 1, pp. 628–629. Available: https://doi.org/10.1609/aies.v8i1.36576.
[12] S. Wingate, “Legislation to ban non-consensual sexual images signed amid Grok AI backlash,” The Independent, Jan. 2026. [Online]. Available: https://www.independent.co.uk/news/uk/politics/keir-starmer-government-liz-kendall-david-lammy-deputy-prime-minister-b2901344.html.
[13] A. Moore, “I don’t take no for an answer: how a small group of women changed the law on deepfake porn,” The Guardian, Dec. 2025. [Online]. Available: https://www.theguardian.com/society/ng-interactive/2025/dec/04/i-dont-take-no-for-an-answer-how-a-small-group-of-women-changed-the-law-on-deepfake-porn.
[14] L. Morgan, “Campaign win! It will finally be illegal to create AI sexualised images using Grok,” Glamour UK, Jan. 2026. [Online]. Available: https://www.glamourmagazine.co.uk/article/glamour-grok-campaign-win
