Friday, December 12, 2025

Clamp down on AI child-abuse images at the source, say experts

They call for tougher safeguards, real age checks and enforceable platform obligations as AI-generated child-abuse cases spike.

MCMC data showed that AI pornographic content removals jumped from under 200 in 2022 to over 1,200 in 2024. (Envato Elements pic)
PETALING JAYA: Cybersecurity experts have warned that the fight against AI-generated child sexual abuse material must begin before content is created, rather than trying to remove it once it has spread online.

Chasseur Group executive director Munira Mustaffa said AI tools must be built with “safety by design”, with the addition of real age checks, strong content filters and technical limits that stop photos of minors from being processed for sexualised content.

“The standard should be: if a tool can be weaponised against children, access controls must be substantive, not performative,” she told FMT.

Munira said rapid-response systems to detect and remove confirmed abuse material should also be mandatory, noting that victims of explicit AI deepfakes often suffer severe psychological harm.

“Children often do not understand informed consent or the permanence of digital content, and some have already been groomed or blackmailed into sharing personal images,” she said.

“The psychological impact can be devastating. There have been cases where the distress caused by having these images weaponised against them has led to suicide.”

Statistics released by the Malaysian Communications and Multimedia Commission show AI pornographic content removals jumping from under 200 in 2022 to over 1,200 in 2024.

There have also been reports of students generating fake explicit images of classmates, as was the case at a secondary school in Johor earlier this year.

Regulatory responses include the upcoming Online Safety Act 2025, which imposes legal duties on platforms to remove harmful content quickly and holds them accountable when such material remains online.

LGMS Bhd founder and CEO Fong Choong Fook said many AI tools already block obscene images, but a clear legal framework is needed to compel providers to ban explicit content generation, whether involving minors or adults.

He said the European Union already regulates AI use, and that he expects similar frameworks to emerge elsewhere.

SL Rajesh, head of the computer forensics unit at the International Association for Counterterrorism and Security Professionals Centre, said rapid-response obligations must be enforceable, not merely encouraged.

He said websites and apps should provide simple reporting tools, maintain 24/7 response centres, remove verified deepfake images of minors immediately, use hash-matching to block reuploads, and work with a specialised law enforcement unit focused on AI-generated abuse involving children.
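The hash-matching idea mentioned above can be illustrated with a minimal sketch: once an image is verified as abuse material, its fingerprint is stored, and any later upload with the same fingerprint is blocked. This is a simplified, hypothetical example using plain SHA-256; production systems typically use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, which ordinary cryptographic hashes do not.

```python
import hashlib

# Hypothetical blocklist of fingerprints of verified abuse images.
# Plain SHA-256 only catches byte-identical reuploads; real platforms
# use perceptual hashing to match visually similar copies.
known_hashes: set[str] = set()

def register_confirmed_image(data: bytes) -> None:
    """Record a verified image's hash so future reuploads are blocked."""
    known_hashes.add(hashlib.sha256(data).hexdigest())

def should_block_upload(data: bytes) -> bool:
    """Return True if the uploaded bytes match a known fingerprint."""
    return hashlib.sha256(data).hexdigest() in known_hashes

# Usage sketch with placeholder bytes standing in for image files:
register_confirmed_image(b"confirmed-image-bytes")
print(should_block_upload(b"confirmed-image-bytes"))  # True
print(should_block_upload(b"unrelated-bytes"))        # False
```

The design choice to match on stored hashes, rather than the images themselves, means platforms can share blocklists without redistributing the underlying material.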

“With critical response centres in every major platform and a dedicated law enforcement department, harmful content can be removed quickly, and children can be safeguarded,” he said. - FMT
