Sunday, January 4, 2026

AI threat to human decency shows need for safety-by-design

When AI is used to strip a person of digital agency, it reduces people to mere objects for consumption rather than human beings with rights.


From Sean Thum

Over the past few days, the AI chatbot Grok, integrated into the social media platform X (formerly Twitter), has crossed a line that should never have existed.

By allowing users to take ordinary photographs of real people and digitally alter the clothing into swimwear, the technology has effectively weaponised generative AI for image-based sexual abuse.

This constitutes a fundamental assault on human dignity that demands a unified national and global response.

Social media platforms like X should act as custodians of the digital spaces they monetise.

If they fail to uphold standards on issues such as non-consensual intimate imagery and child sexual abuse material, international organisations and civil society must step in to enforce safety standards.

When platforms fail to act as the “last line of defence,” they cease to be neutral tools and instead become facilitators of systemic harm.

Philosophically, this phenomenon violates the core principle of bodily autonomy.

In the 21st century, an individual’s digital likeness is an inseparable part of their personhood. When AI is used to strip a person of digital agency, it reduces them to a mere object for consumption rather than a human being with rights.

The normalisation of such tools is catastrophic — it fosters a culture where consent is optional and erodes safety across the internet, blurring the line between reality and fabricated degradation.

The psychological impact on victims is as real as any physical violation.

Legally, we are witnessing a shift from reactive to proactive governance. International frameworks, such as the UN Convention against Cybercrime (Article 16), have begun to criminalise the non-consensual dissemination of intimate images.

However, the pace of technological innovation requires more than local statutes; a global consensus on “red lines” for generative AI is necessary. These “red lines” must include the absolute prohibition of AI-generated non-consensual intimate imagery and human impersonation.

Developers must demonstrate that their systems are intrinsically constrained by design – technically incapable of generating such content – before releasing them to the public.

Closer to home, Malaysia has taken decisive action. As of Jan 1, the Online Safety Act 2025 has officially come into force.

While the deeming provision of Section 46A of the Communications and Multimedia Act automatically licenses platforms with over eight million users, Malaysian law remains fully applicable to all platforms operating within the country, regardless of user count.

For platforms like X, which fall below this threshold, Section 233 of the Communications and Multimedia Act ensures that the transmission of obscene, indecent, or grossly offensive content remains a punishable offence.

Importantly, the 2025 amendments to the act clarify what constitutes “obscene,” “indecent,” and “grossly offensive” content, explicitly encompassing depictions of sexual degradation and acts that demean human dignity.

This ensures the law can address hyper-realistic AI-generated abuse that previously existed in a legal grey area.

The Online Safety Act mandates eight duties for platforms, including robust reporting mechanisms and proactive risk-reduction tools.

While child sexual abuse material and financial scams are prioritised for immediate takedown, the act also provides a framework for addressing indecent and obscene content that causes harassment or distress.

Platforms that fail to act on reports of AI-generated nudes causing alarm to Malaysian users face penalties of up to RM10 million.

Furthermore, the Malaysian Communications and Multimedia Content Code has been modernised.

The 2025 Content Code Review introduces dedicated AI provisions, emphasising transparency and accountability. Platforms are now required to clearly label AI-generated content, ensuring it can no longer “hide in plain sight.”

The code also adopts an objective “reasonable adult” test for obscenity, moving away from subjective community standards to better protect individuals.

What can be done? For individuals, vigilance and use of the law are key. Victims of AI-generated non-consensual intimate imagery should use the new reporting channels under the Online Safety Act to hold platforms accountable.

On a broader scale, we must demand “safety-by-design” AI. Just as cars are not allowed on roads without brakes, AI models should not be released without hard-coded ethical safeguards.

As Malaysia demonstrates in 2026, digital sovereignty is achievable. A safer ecosystem for children and families is being built.

Yet the fight against algorithmic violations of human dignity is global.

An international treaty on AI red lines is essential to ensure technology serves as a tool for progress, not abuse. Consent is not a parameter to be programmed – it is a fundamental human right that no algorithm should override. - FMT

Sean Thum is the special functions officer to deputy communications minister Teo Nie Ching.

The views expressed are those of the writer and do not necessarily reflect those of MMKtT.
