Lawyer Kitson Foong said enacting a dedicated AI law was the most effective way forward, given the rapidly evolving digital landscape and the integration of AI tools across nearly all industries and sectors.
He said the new law should define what constitutes “AI-generated content” and impose disclosure requirements, including the watermarking of artificially produced images and videos.
“Malaysia needs its own AI act to form a legal shield against AI-generated deception. Clearly, there exist liability gaps, which raise questions such as: ‘Who pays when AI content ruins a life: the developer, user or platform?’
“Such legislation will be able to criminalise malicious use of AI, such as deepfake nudes and fake news,” he told FMT.
AI-generated content has been at the centre of several recent controversies, including multiple incidents involving the publication of erroneous versions of the Jalur Gemilang.
Last month, Iskandar Puteri police arrested two teenagers to assist in investigations into the sale of AI-generated nude photos. One of the suspects, a 16-year-old boy, is alleged to have used AI tools and photos from social media to create fake nude images.
Earlier this month, science, technology and innovation minister Chang Lih Kang said his ministry currently has no plans to introduce specific legislation to regulate the misuse of AI technology.
However, he said one of his ministry’s goals is to eventually turn the National Guidelines on Artificial Intelligence Governance and Ethics (AIGE) into enforceable legislation.
Lawyer Sarah Yong, chairperson of the Malaysian Bar’s cyber and privacy laws committee, said new AI laws must be accompanied by comprehensive amendments to existing legislation, regulations and standards.
“The AI law should provide remedies for both criminal and civil liabilities, but simply enacting a new law may not be sufficient on its own,” she said.
Yong said lawmakers must also consider whether the damage caused by the misuse of AI can be addressed under existing law, or whether there is a loophole that needs to be filled.
Foong suggested that the Communications and Multimedia Act 1998 (CMA) and Penal Code be updated to address AI-generated offences such as fraud and defamation, and for the Computer Crimes Act 1997 to be amended to cover AI-facilitated crimes.
For now, Yong said it may be sufficient to apply Section 233 of the CMA to AI content that is considered “obscene, indecent, false, menacing or grossly offensive”.
She also said any content of a sexual nature may fall within offences under the Penal Code, the Anti-Sexual Harassment Act 2022 and the Sexual Offences Against Children Act 2017.
Another lawyer, Philip Koh, said the AIGE and the Artificial Intelligence Roadmap 2021–2025 were “soft laws” that laid the groundwork for future legislation on AI regulation.
Noting that AI technology spans multiple sectors and requires cross-ministerial collaboration, Koh said the law should not stifle innovation but must instead clearly outline the consequences of AI misuse.
“The AI roadmap and AIGE are set to expire this year, so they must be refreshed and updated.
“The government should also seriously consider whether a set of guidelines or legislation should regulate this space. In my opinion, we need to establish specific civil and criminal laws regarding the design, deployment, and use of AI tools,” he said. - FMT