The San Francisco City Attorney’s office recently filed a groundbreaking lawsuit against 16 “nudify” websites, accusing them of violating US laws related to non-consensual intimate images and child abuse material.
“Nudify” sites and apps are user-friendly platforms that let anyone upload a photo of a real person and, within seconds, generate a realistic explicit image of what that person might look like undressed.
In the first half of 2024, the 16 websites named in the lawsuit received over 200 million visits. One site even promotes its platform as a way to obtain explicit images of someone instead of going on dates.
These sites are also advertised on social media, with a 2,400% increase in advertising for nudify apps or sites since the beginning of the year.
Victims of deepfake abuse can suffer significant harm even when the images are clearly fake. The abuse can damage their reputation and career prospects and take a toll on their mental and physical health, leading to social isolation, self-harm, and a loss of trust in others.
Many victims are unaware that their images have been created or shared. Even if they do discover it, they may struggle to have the content removed from private devices or from “rogue” websites with minimal protections.
Victims can report the non-consensual sharing of fake intimate images to digital platforms. In Australia, they can report to the eSafety Commissioner, who can assist in having the content taken down.
Digital platforms have policies against non-consensual sharing of sexualized deepfakes, but enforcement is inconsistent. While most nudify apps have been removed from app stores, some still exist, allowing users to create near-nude images.
Tech companies can take several actions to combat the spread of deepfakes. Social media, video-sharing platforms, and porn sites can ban or remove nudify ads, block specific keywords, and issue warnings to users searching for such content.
Technology companies can also develop tools that detect fake images and build safeguards into their products to prevent the creation of harmful or illegal content in the first place. Watermarking, labeling, and digital hashing of known abusive images can help stop non-consensual content from being re-shared.
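To illustrate the hashing idea, here is a minimal sketch of hash-based blocking: once an image is reported, its fingerprint is stored, and any later upload with a matching fingerprint is rejected. All names here are hypothetical, and a cryptographic hash like SHA-256 only catches byte-identical copies; deployed systems generally rely on perceptual hashes, which tolerate resizing and re-encoding, for exactly this reason.

```python
import hashlib

# Hypothetical blocklist of SHA-256 fingerprints of images already
# reported as non-consensual. Real platforms typically query shared
# industry hash databases rather than a local set like this.
KNOWN_ABUSE_HASHES: set[str] = set()

def image_fingerprint(image_bytes: bytes) -> str:
    """Return a hex SHA-256 fingerprint of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_reported_image(image_bytes: bytes) -> None:
    """Add a reported image's fingerprint to the blocklist."""
    KNOWN_ABUSE_HASHES.add(image_fingerprint(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    """Reject an upload whose fingerprint matches a reported image."""
    return image_fingerprint(image_bytes) in KNOWN_ABUSE_HASHES
```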
Search engines can reduce the visibility of nudify and non-consensual deepfake sites. Google, for example, has implemented measures to address deepfake abuse, such as removing reported explicit deepfakes from search results.
Governments can introduce laws and regulatory frameworks to address deepfake abuse, including blocking access to nudify sites. However, users can bypass such blocks with VPNs.
In Australia, criminal laws exist for the non-consensual sharing of intimate images and possessing child abuse material. State and territory laws broadly define “intimate image” to include digitally altered or manipulated images. Efforts are being made to amend federal laws to create a standalone offense for the non-consensual sharing of private sexual material.
While laws are helpful, they cannot fully solve the problem. Law enforcement often has limited resources, and international cooperation can be challenging. Pursuing the criminal justice path can also be emotionally taxing for victims.
Civil remedies are available under the federal Online Safety Act, administered by the eSafety Commissioner, which provides for penalties against users and tech companies that share, or threaten to share, non-consensual intimate images.
Improving digital literacy is crucial for helping people distinguish real images from fake ones; it means fostering the critical thinking skills needed to assess and challenge misinformation. Raising awareness of the harms of deepfake abuse, promoting education on respectful relationships and sexuality, and improving porn literacy are also important measures.
Perpetrators of deepfake abuse should be held accountable, along with the tech developers who build the enabling tools and the tech companies that allow them to spread. Addressing this issue will require creative solutions from a range of stakeholders.
If you or someone you know is affected by these issues, you can contact 1800RESPECT on 1800 737 732 or visit the eSafety Commissioner’s website for online safety resources. In case of immediate danger, call 000.