Swift action needed on AI and deepfakes

13 February 2024

The deepfake campaign against Taylor Swift on X (formerly Twitter) in recent weeks has again exposed how regulators and lawmakers are struggling to keep pace with fast-developing AI technologies.

Deepfakes – manipulated or synthetic visual and/or audio content – are of particular concern, not least because Sensity AI, an Amsterdam-based verification platform that has been tracking deepfake videos since December 2018, has consistently found that 90% to 95% of them are pornographic.

Other uses can be equally damaging. These include:

  • Endorsing scams using deepfake images of famous people – e.g., AI-generated videos of Wendy Petrie promoting a gambling scheme and Professor Michael Baker promoting a ‘blood balance’ capsule, and
  • Political manipulation – e.g., the faked 2021 video of Jacinda Ardern smoking crack and the faked robocalls purporting to be from US President Joe Biden urging New Hampshire voters to stay home for the primary election on 23 January 2024.

Responses globally

Many countries are moving quickly to regulate AI and deepfakes.

The position in New Zealand

New Zealand law is not well positioned to deal with AI in general or deepfakes in particular.

Explicit deepfakes

Parliament turned down the opportunity to expressly include deepfakes within the definition of “intimate visual recording” in 2021 when introducing section 22A into the Harmful Digital Communications Act 2015.¹

The effect of this decision is that victims must rely on section 22 of the Act, which requires proof that (1) there was an intention to cause harm, (2) the post did cause harm, and (3) the post was of a type that would ordinarily cause harm.

While these tests might be met where the deepfake is sexually explicit, they may prove difficult in other, less clear-cut cases. Nor is it clear that other torts would easily apply. A deepfake may give rise to defamation, but that will not be the case in every instance. And the protections of the Privacy Act won’t necessarily be available either, as some court decisions have indicated that for a person’s privacy to be invaded, the disclosure needs to be of true facts.

Scam deepfakes

Deepfakes of public figures endorsing products or services are covered by sections 9 and 13 of the Fair Trading Act 1986: section 9 prohibits misleading or deceptive conduct and section 13 prohibits the unauthorised use of someone’s image or identity to imply sponsorship, approval, endorsement or affiliation. The Fair Trading Act may be enforced through private civil actions or actions brought by the Commerce Commission but, as noted below, the enforcement of removal orders may be problematic.

Political deepfakes

In contrast, political deepfakes are largely unregulated. While section 197 of the Electoral Act 1993 (interfering with or influencing voters) and section 199A (publishing false statements to influence voters) would seem relevant, section 197 applies only on polling day and section 199A only in the two days before polling day.

On all other occasions, the only remedy available would be through defamation law.

Removing deepfakes

And even where a deepfake is identified and a New Zealand court order is obtained to have it removed, removal isn’t straightforward where the platform is based overseas – particularly in the United States, where the Securing the Protection of our Enduring and Established Constitutional Heritage (SPEECH) Act provides extensive protections against foreign orders that impinge on freedom of expression.

A targeted regulatory approach is needed, one that addresses both the misuse of deepfakes and the wider concerns raised by AI technology.


1. Section 22A provides that it is an offence to post an intimate digital communication of another person without reasonable excuse, knowing that the victim had not consented to the publication or being reckless as to whether consent had been given. Persons under 16 cannot give consent. Penalties are up to two years’ imprisonment or a fine of up to $50,000 for individuals, and fines of up to $200,000 for corporates.


Thanks to Jana Stokes, who prepared this publication.
