On April 8, a private secondary school in Kulai, Johor released a statement acknowledging complaints regarding a student who was reported to have produced, sold, and distributed artificial intelligence (AI)-generated deepfake images of his female schoolmates and school alumni.
Kulai MP and Deputy Communications Minister Teo Nie Ching confirmed that 38 individuals have been identified as victims, while the Johor police chief has recorded 29 reports related to this case as of April 15.
Alarmingly, the youngest survivor is reportedly only 12 to 13 years old, underscoring the urgent need to address digital safety and online gender-based violence, especially among minors.
What are deepfakes?
According to the United Nations Population Fund, non-consensual sexual imagery created using AI tools, commonly known as deepfakes, is a form of image-based abuse.
Applications such as DeepNude and DeepFake Telegram bots are adaptable and eerily easy to use, enabling anyone to manipulate photos or videos to falsely portray individuals saying or doing things they never actually did.
Lax regulation of social media platforms such as X, as well as instant messaging platforms such as Telegram, allows individuals to share and reshare deepfakes widely.

Such was the case in the Johor school incident: One of the deepfake images, which was shared on X, was viewed over 10,000 times.
This is in addition to the more than 200 individuals reportedly in the Telegram group, who could view, screenshot, save, and further share the deepfakes.
This multiplier effect exacerbates the harmful and long-lasting impact of tech-facilitated gender-based violence (TFGBV) on survivors - primarily girls and women.
Impact of non-consensual deepfake sharing
From the survivors’ statements, it is clear how significantly the non-consensual use and sharing of their images have impacted their wellbeing.
Some reported having nightmares, while others noted feeling anxious and extremely uncomfortable having to share spaces such as classrooms with those who actively created, commented on, or shared deepfakes.

Additionally, one survivor claimed that a disciplinary teacher had victim-blamed and brushed off the gravity of the situation, noting that the 16-year-old perpetrator is “just a kid” and that “he needs counselling.”
Although the school’s board noted that the allegation against the disciplinary teacher was unfounded, the school’s delays in recognising and addressing the urgency of this matter deepen the negative impacts on survivors.
We commend the school for offering counselling services to the survivors and affected family members.
However, educators, parents, and policymakers must adopt a “survivor-centric approach” when it comes to counselling and advocating for justice for the survivors, in order to avoid victim-blaming and re-traumatisation.
What can be done
1. Improve digital laws, platform oversight
Malaysia has made some progress in updating its laws regarding personal data and digital safety, such as the Online Safety Act 2024.
The Malaysian Communications and Multimedia Commission (MCMC) has assured that measures such as platform licensing would improve safety, enhance users’ experiences, and protect against harmful content - a promise we have yet to see fulfilled.

It is time for Malaysia to adopt clearer laws that target AI-generated deepfakes, as this is not an isolated case where women and girls are put at risk in digital spaces.
As Asean chair, it is time for the government to step up its initiatives to expand the scope of the Asean Guide on AI Governance & Ethics, especially pertaining to the accountability and integrity of developers and users.
Jurisdictions such as South Korea, Australia, and the European Union can provide models for this:
South Korea’s 2024 amendment to the Act on Special Cases Concerning the Punishment of Sexual Crimes criminalises the possession and viewing of deepfake materials, clarifies the government’s responsibility in removing explicit content and supporting victims, and expands maximum sentences for producing and distributing illicit deepfakes.
Australia’s Criminal Code Amendment (Deepfake Sexual Material) Act 2024 bans the sharing of AI-generated sexually explicit material, with penalties of up to seven years’ imprisonment, although it currently applies only to adults and its scope should be broadened to include children.
The European Union’s AI Act 2024 provides a framework for how AI is used, prioritising the safeguarding of health, safety, human rights, democracy, and the rule of law.
2. Policies on sexual misconduct in schools
In our previous press statement, we emphasised that the creation, distribution, and monetisation of AI-generated obscene images is a criminal offence under Section 293 of the Penal Code and Section 5 of the Sexual Offences Against Children Act 2017.
However, the perpetrator’s expulsion and arrest in this case do not sufficiently address deeply rooted harmful online behaviours disproportionately affecting girls and women.
All educational institutions need to implement policies on sexual misconduct that enable them to actively prevent it and to act appropriately when it occurs - something the Education Ministry has reiterated without clear follow-up on how institutions are being monitored.
This could include offering adequate counselling and rehabilitation services, as well as support systems, to both survivors and perpetrators.

Policies can be created by consulting educators, counsellors, and NGOs advocating for children’s rights, digital safety, and/or gender equity.
Whether public or private, institutions need to be periodically monitored to ensure these policies are effective, survivor-centric, and appropriately enacted.
3. Comprehensive sexuality education for all
The 16-year-old perpetrator did not act alone. His actions point to a larger problem of young, impressionable boys being influenced by harmful online content and communities, often referred to as the manosphere.
To tackle this, Comprehensive Sexuality Education (CSE) can be implemented across educational institutions, both public and private, so that children are raised to embody a culture of respect.
Specifically, the Education Ministry, alongside educators, can reference the International Technical Guidance on Sexuality Education (ITGSE), which highlights the “safe use of Information and Communication Technologies (ICTs)” while emphasising the importance of consent, privacy, and bodily autonomy and integrity.
CSE should not be limited to students. Parents, teachers, and staff should also be trained and engaged in CSE to ensure conversations on respect and appropriate digital behaviour continue at home as well.
Enough is enough
It is critical for society and the government to recognise deepfakes as a form of violence against women and girls, and to take immediate action to address this issue.
Failing to do so jeopardises our future, as children will continue to be made vulnerable online and offline, leading to long-lasting or irreversible consequences, including anxiety, depression, and suicide.

We urge the public and policymakers to stay vigilant regarding the increasing misuse of AI-generated content and its impact on victims and survivors.
Available reporting mechanisms, policies, and laws need to be further improved to make Malaysia safer for everyone at home, in schools, and online.
If you or anyone you know requires information, counselling, or support, you may access Kryss Network’s OGBV Toolkit, Awam’s Telenita Helpline 016-237 4221/016-228 4221, or WAO’s Tina Helpline 018-988 8058. - Mkini
KRYSS NETWORK, ALL WOMEN’S ACTION SOCIETY (AWAM) and WOMEN’S AID ORGANIZATION (WAO) are NGOs working with women and youth.
They issued this statement which is also endorsed by Perak Women for Women Society, Family Frontiers, Association of Women Lawyers, Autism Inclusiveness Direct Action Group, Justice For Sisters, Women's Centre for Change, Penang, Persatuan Kesedaran Komuniti Selangor, Kolektif Feminis Malaysia, Sisters in Islam, Sabah Women's Action Resource Group, Sarawak Women for Women Society and Persatuan Sahabat Wanita Selangor.
The views expressed here are those of the author/contributor and do not necessarily represent the views of MMKtT.