
AI advances fuel rise in child abuse deepfakes, UK safety watchdog warns

Predators are exploiting recent advances in artificial intelligence (AI) to create AI-generated videos of child sexual abuse, raising concerns that such content could increase as the technology progresses, The Guardian reports, citing a UK-based safety watchdog.


The Internet Watch Foundation (IWF) reports that most of these instances involve the manipulation of existing child sexual abuse material (CSAM) or adult pornography, where a child’s face is superimposed onto the footage. A smaller number of cases feature entirely AI-generated videos lasting about 20 seconds.


The IWF, which tracks CSAM globally, warned that AI-generated CSAM videos could proliferate as AI tools become more accessible and user-friendly.


Dan Sexton, the IWF’s chief technology officer, noted a worrying trend: “If AI video tools follow the same pattern as AI-generated still images, we can expect a rise in CSAM videos.” He said that future videos could be of “higher quality and realism”.


IWF analysts also observed that most videos found on a dark web forum used by paedophiles are partial deepfakes. These involve using freely available AI models to superimpose a child’s face, including images of known CSAM victims, onto existing CSAM videos or adult pornography. The IWF identified nine such videos.


While fewer in number, the wholly AI-generated videos are of more basic quality. Analysts warn, however, that this likely represents the ‘worst’ that fully synthetic video production will be, as the technology continues to improve.


The IWF highlighted that AI-generated CSAM images have become more photorealistic this year than in 2023, when it first detected such content. A snapshot study of a single dark web forum revealed 12,000 new AI-generated images posted over a one-month period. The IWF found that nine out of ten of these images were so realistic they could be prosecuted under UK laws governing real CSAM.


The organisation, which operates a public hotline for reporting abuse, found examples of offenders selling AI-generated CSAM images online in place of non-AI-generated CSAM.

 


Susie Hargreaves, IWF’s chief executive, issued a stark warning: “Without proper controls, generative AI tools provide a playground for online predators to realise their most perverse and sickening fantasies. Even now, the IWF is starting to see more of this type of material being shared and sold on commercial child sexual abuse websites on the internet.”


The IWF is advocating for legal reforms to criminalise the creation of guides for generating AI-made CSAM and the development of ‘fine-tuned’ AI models capable of producing such material.

 


In a related move, Baroness Kidron, a crossbench peer and child safety campaigner, proposed an amendment to the Data Protection and Digital Information Bill this year to criminalise the creation and distribution of these AI models. However, the bill was shelved after Rishi Sunak called a general election in May.

 

First Published: Jul 22 2024 | 5:40 PM IST
