
First Study of Victims and Perpetrators Sheds Light on Sexualised Deepfakes in Australia

By Karyee Lee

A new study from Monash University has, for the first time, gathered perspectives not only from survivors but also from people who admitted to creating sexualised deepfake content. The findings, published 4 December 2025, reveal disturbing patterns: normalisation of abuse, peer-driven motivations, weak deterrents, and widespread under-reporting.

The Monash team interviewed 25 individuals: 15 who identified as victims and 10 who admitted to creating or sharing deepfake sexual content. Among the key findings:

  • Many perpetrators described creating fake nude or sexual images not for financial gain, but as a way to bond with peers, show off technical skill, or elevate their status within a social group.

  • Some rationalised their actions, claiming that because “AI tools make it so easy”, it doesn’t feel like wrongdoing. Others treated it as a “prank” or dismissed it as harmless, echoing victim-blaming attitudes seen in broader sexual violence.

  • On the victims’ side, the harm was real and severe: interviewees described the emotional and psychological impact of seeing their likeness misused, often with no meaningful recourse. In many cases, reports to police led to no legal consequences.

According to the report, women are overwhelmingly the main targets, especially in cases involving sexualised or controlling deepfakes, although men were also victimised in scenarios tied to sextortion, humiliation or blackmail.

Security & Societal Implications

As generative‑AI tools proliferate and become easier to use, the barrier to creating convincing, damaging fake intimate content is falling rapidly. The study warns that normalisation among certain peer groups (especially younger males) may lead to an increase in both creation and distribution of non-consensual deepfake content.

Moreover, current legal frameworks and enforcement in Australia remain limited. A separate statistical review by the Australian Institute of Criminology (AIC) of image-based sexual abuse (IBSA) offences across several jurisdictions found that the majority of cases in 2022–23 involved distribution of explicit material, but did not distinguish between digitally manipulated and real-image offences.

Because deepfake creation itself is often not criminalised separately (or is difficult to prove), many perpetrators escape legal liability even when victims come forward. This reflects a gap in both legislation and the structures for victim support.

 


Karyee Lee

Karyee Lee is a Content Executive for the Safety & Security Event Series, contributing to the digital content strategy and audience engagement across a diverse range of online platforms through The Security Briefing, Workplace Unplugged, and Pro Integration Insider. Passionate about bringing industry professionals together, Karyee develops engaging digital content and supports initiatives that keep industry audiences informed and connected.
