A Hong Kong company recently lost HK$200 million (A$40 million) to a deepfake scam. The scammers used generative AI tools to create live replicas of senior company officials on a video conference call, tricking an employee into transferring the funds. The rise of these tools has also raised concerns about intimate image abuse and the disruption of democratic processes. The law, however, is still catching up.
When a deepfake scam succeeds, it is unclear who must compensate the victim for their losses. There are four possible targets: the fraudster (who typically disappears), the social media platform that hosted the fake, the bank that paid out the money, and the provider of the AI tool.
Seeking damages from a social media platform is difficult because platforms have long framed themselves as mere conduits of content rather than publishers. In the United States, platforms are shielded from liability by Section 230 of the Communications Decency Act, but that protection does not exist in most other common law countries.
The Australian Competition and Consumer Commission (ACCC) is testing whether digital platforms can be made directly liable for deepfake scams. It argues that platforms should promptly remove deepfake content used for fraudulent purposes.
In Australia, banks' legal obligations to reimburse victims of deepfake scams are not settled. The UK Supreme Court has suggested that banks have no duty to refuse a customer's payment instruction merely because the payee is a suspected fraudster, although they must act promptly to try to recover the money once the scam is discovered. The UK is also introducing a mandatory scheme requiring banks to reimburse victims of authorised push payment (APP) fraud, and similar proposals have been put forward in Australia.
Providers of generative AI tools are currently under no legal obligation to prevent their tools from being used for fraud or deception. Under the EU's AI Act, however, they may soon be required to design their tools so that synthetic content can be detected, for example by marking outputs in a machine-readable format.
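What "detectable by design" might look like in practice is still open. As a minimal sketch, assuming a provider opts for the simplest approach of embedding provenance metadata in each generated file (the `ai_generated` tag, file names and model identifier below are illustrative assumptions, not any mandated standard), the provider-side marking and a platform-side check could look like this:

```python
# Minimal sketch of provenance tagging for AI-generated images.
# Assumption: the provider controls the file format (PNG) and chooses
# simple metadata tagging; real schemes (e.g. C2PA credentials or robust
# watermarks) must survive re-encoding and cropping, which this does not.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_synthetic(in_path: str, out_path: str, model_id: str) -> None:
    """Provider side: embed a machine-readable 'synthetic content' marker."""
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # illustrative key, not a standard
    meta.add_text("generator", model_id)
    img.save(out_path, pnginfo=meta)

def looks_synthetic(path: str) -> bool:
    """Platform side: check for the marker before letting an upload through."""
    img = Image.open(path)
    text_chunks = getattr(img, "text", {}) or {}
    return text_chunks.get("ai_generated") == "true"

if __name__ == "__main__":
    tag_as_synthetic("output.png", "tagged.png", model_id="example-model-v1")
    print(looks_synthetic("tagged.png"))  # True
```

The obvious weakness of metadata tagging is that a screenshot or re-encoding strips it out, which is why regulators also discuss watermarks embedded in the content itself; any detection duty will only bite if the marking survives ordinary handling.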
Legal and technical measures will not entirely prevent deepfake fraud, but they can slow its spread and reduce the harm it causes. Platforms, banks and technology providers should be pressed to stay vigilant against the risks.
In conclusion, deepfake scams may never be completely preventable, but developing legal and technical remedies gives victims a realistic path to compensation. Addressing the growing threat of deepfakes will require a multi-layered strategy of prevention, education and redress.