Australia launches world-first crackdown on ‘deepfake’ porn


Tech giants including Apple, Google and Meta will be forced to do more to tackle online child sexual abuse material and pro-terror content, including “deepfake” child pornography created using generative AI, in world-first industry standards laid out by Australia’s eSafety Commissioner.

Following more than two years of work, and after rejecting draft codes created by the tech industry, eSafety Commissioner Julie Inman Grant will release draft standards on Monday covering cloud-based storage services such as Apple iCloud, Google Drive and Microsoft OneDrive, as well as messaging services such as WhatsApp, requiring them to do more to rid their platforms of unlawful content.

Australia’s eSafety Commissioner Julie Inman Grant. Credit: Rhett Wyman

Inman Grant, a former Twitter executive, said she hoped Australia’s industry standards would be the “first domino”, prompting similar regulations globally to help tackle harmful content.

She said the requirements would not force tech companies to break their own end-to-end encryption, which is turned on by default on some services, including WhatsApp.

All major tech platforms have policies banning child sexual abuse material from their public services, but Inman Grant said they have not done enough to police their own platforms.

“We understand issues around technical feasibility, and we’re not asking them to do anything that is technically infeasible,” she said.

“But we’re also saying that you’re not absolved of the moral and legal responsibility to just turn off the lights or shut the door and pretend this horrific abuse isn’t happening on your platforms.

“What we’ve found working with WhatsApp, it’s an end-to-end encrypted service, but they pick up on a range of behavioural signals that they’ve developed over time, and they can scan non-encrypted parts of the services, including profile and group chat names, and things like cheese pizza emojis, which is known to stand for child pornography.”

“These and other interventions enable WhatsApp to make 1.3 million reports of child sexual exploitation and abuse each year,” she added.

The standards will also cover child sexual abuse material and terrorist propaganda created using open-source software and generative AI. A growing number of Australian students, for example, are creating so-called “deepfake porn” of their classmates and sharing it in classrooms.

“We’re seeing synthetic child sexual abuse material being reported through our hotlines, and that’s particularly concerning to our colleagues in law enforcement, because they spend a lot of time doing victim identification so that they can actually save children who are being abused,” she said.

“I think the regulatory scrutiny has to be at the design phase. If we’re not building in and testing the efficacy and robustness of these guardrails at the design phase, once they’re out in the wild, and they’re replicating, then we’re just playing probably an endless and somewhat hopeless game of whack-a-mole.”

Inman Grant’s office has commenced public consultation on the draft standards, a process that will run for 31 days. She said the final versions of the standards will be tabled in federal parliament and come into effect six months after they’re registered.

“The standards also require these companies to have sufficient trust and safety, resourcing and personnel. You can’t do content moderation if you’re not investing in those personnel, policies, processes and technologies,” she said.

Elon Musk, chief executive officer of X, which has refused to pay a $610,500 fine from the eSafety Commissioner for allegedly failing to adequately tackle child exploitation material on its platform. Credit: Bloomberg

“And you can’t have your cake and eat it too. And what I mean by that is, if you’re not scanning for child sexual abuse, but then you provide no way for the public to report to you when they come across it on your services, then you are effectively turning a blind eye to live crime scenes happening on your platform.”

The introduction of the standards comes after social media giant X – formerly known as Twitter – refused to pay a $610,500 fine from the eSafety Commissioner for allegedly failing to adequately tackle child exploitation material on its platform.

X has filed an application for a judicial review in the Federal Court.

“eSafety continues to consider its options in relation to X Corp’s non-compliance with the reporting notice but cannot comment on legal proceedings,” a spokesman for the commissioner said.

