The inevitable rise of deepfakes: NSW's criminalisation of AI-generated harm
By Najat Malulein
Topic: Legal Commentary
Disclaimer: Views expressed herein are solely those of the author and do not necessarily reflect the views of other writers or the Law Student Review

I The inevitable rise of deepfakes: NSW’s criminalisation of AI-generated harm
Highly realistic “deepfake” images and audio have proliferated with recent advances in artificial intelligence. In this context, deepfakes are digitally fabricated sexually explicit materials that depict real individuals without their consent. New South Wales law has long criminalised image-based abuse, but prior legislative frameworks did not acknowledge content created entirely through AI generation. This gap created legal challenges for victims seeking protection and for law enforcement agencies attempting to control emerging forms of digital harm. In response, the New South Wales Government introduced amendments to the Crimes Act 1900 (NSW) to criminalise the creation, distribution, and threatened distribution of sexually explicit deepfake material, as well as non-consensual intimate audio. This article critically examines the reform, assessing its benefits, limitations, and practical effectiveness, and considers whether the amendments adequately respond to the complexities of AI-enabled abuse or require further refinement.
II The advancement of image-based abuse
The amendments to the Crimes Act were contained in the Crimes Amendment (Deepfake Sexual Material) Bill 2025.[1] The Bill was an extension of the existing image-based abuse offences rather than a stand-alone reform, building on the changes made by the Crimes Amendment (Intimate Images) Act 2017.[2] It extended offences relating to the non-consensual recording or distribution of intimate images to cover the production and/or distribution of deepfake intimate images and sexually explicit deepfake audio and text. Section 91N of the Crimes Act was central to the amendments: several of its definitions were modified to accommodate the newly criminalised deepfake offences.[3]
The restructuring of s 91N introduced new and amended definitions, notably: the change of ‘intimate image’ to ‘intimate material’; the change of ‘recording’ to ‘production’; the inclusion of ‘digitally creating the material’ by generating, altering, or manipulating another image; and the addition of ‘sexually explicit deepfake audio and text’.[4] The changes respond to the law’s previous failure to recognise deepfakes within the abuse offences, a gap that exposed affected victims to unfair judicial outcomes. It should be noted that the definition of consent in s 91O of the Crimes Act[5] is unaffected and applies equally to AI-generated material and deepfakes; the amendment is concerned primarily with the definitions of the material itself.
III The benefits: shifting focus to harm through fabrication
The reform closes a legislative gap: the express introduction of AI-generated deepfakes into the Crimes Act ensures that criminality no longer turns solely on the factual accuracy or authenticity of an image or depiction. This shift reflects a victim-centred approach to the rise of deepfakes across New South Wales, with offenders facing a maximum penalty of three years’ imprisonment and an $11,000 fine, ensuring that the law reflects the scale of harm that such digital content causes.
Furthermore, aligning deepfake abuse with existing image-based abuse principles promotes doctrinal consistency. Because the reform extends an existing criminal law framework rather than standing alone, it maintains coherence within the Crimes Act. This approach reduces the risk of interpretive inconsistency, preserves judicial discretion, and avoids fragmenting the law in a way that could disadvantage victims.
The reform also captures the threat to distribute deepfake material, extending criminal liability into the preparatory stages of harm and reinforcing the preventive function of the criminal law, rather than limiting intervention to content that has already been distributed.
IV The struggles: anonymity, scope, and enforcement
A central difficulty with the reform lies in the challenge of anonymity and attribution. The offence criminalises the creation, distribution, or threatened distribution of intimate deepfake material. However, proving beyond reasonable doubt that a specific person generated or distributed the content is likely to be difficult.[6] Common features of the AI landscape, such as overseas servers and encrypted platforms, help maintain anonymous accounts. This may limit the enforceability of the reform and blunt its intended effect where criminal liability cannot be pursued because of practical impediments to satisfying evidentiary burdens. It may also hinder victims attempting to sue in defamation, as an offender cannot always be located.[7]
A further difficulty is the ambiguity of the statutory language. What is deemed ‘sexual’? What is deemed ‘intimate’? While courts may draw on precedent concerning sexually explicit images under the previous law, deepfakes and AI-generated media may prove more complex, falling into a grey area where harm is caused but criminal responsibility is unclear. Deepfake material may cause sexualised harm without meeting traditional thresholds of explicitness, leaving significant interpretive discretion to the judiciary. Minor edits, face swapping, filters, and enhancements are increasingly common in AI-generated content, compounding the uncertainty surrounding the technological scope of the offence.
V Conclusion: Evaluation and Reform
Overall, the amendments were a necessary reform to combat emerging digital harms, closing a gap in the Crimes Act and better reflecting contemporary forms of digital abuse in New South Wales. However, given the continued rapid growth of AI-generated content, it is difficult to anticipate what further challenges the law will face in protecting society from fabricated content. With regard to deepfakes, clearer statutory guidance on the scope of intimate material is needed first. Because the age of artificial intelligence is still young, the Crimes Act should expressly define deepfakes and related AI-generated harms to avoid further legal gaps. Procedure around evidentiary standards in digital contexts must also be clarified to ensure the efficient handling of digital harm. Ultimately, the amendments provide a sound doctrinal foundation, but their long-term efficacy will depend on the law’s ability to adapt rapidly to the continued development of AI-generated content.
VI Footnotes
[1] Crimes Amendment (Deepfake Sexual Material) Bill 2025 (NSW) <www.parliament.nsw.gov.au/bills/Pages/bill-details.aspx?pk=18782>.
[2] Crimes Amendment (Intimate Images) Act 2017 (NSW) No 29.
[3] Crimes Act 1900 (NSW) s 91N.
[4] Crimes Amendment (Deepfake Sexual Material) Bill 2025 (NSW).
[5] Crimes Act 1900 (NSW) s 91O.
[6] NSW Government, ‘Minns Labor Government Strengthens Protections Against Deepfakes and Image-Based Abuse’ (Media Release, 19 September 2025) <www.nsw.gov.au/ministerial-releases/minns-labor-government-strengthens-protections-against-deepfakes-and-image-based-abuse>.
[7] Raymond Sun et al, ‘Facing the Facade – Legal Challenges in the Age of Deepfake’ (Herbert Smith Freehills Kramer, 6 June 2024) <www.hsfkramer.com/insights/2024-06/facing-the-facade-legal-challenges-in-the-age-of-deepfakes>.