Fong Choong Fook calls for risk-based restrictions, while Sameer Kumar urges Putrajaya to enact a National AI Governance Framework.

They said this was crucial for data security and privacy, particularly in government infrastructure that manages information vital to national security and sovereignty.
Sameer Kumar of Universiti Malaya said DeepSeek experienced a data breach early last year, with over one million log entries and chat histories found on a public database that did not require authentication.
He said the incident had damaged the Chinese platform’s credibility, although this was primarily a database misconfiguration. Similar breaches have been reported with other AI systems, he added.
“In the case of DeepSeek’s 2025 breach, I believe such incidents result from inadequate encryption. For government users, this would mean that even their routine communications could be intercepted by third parties. This is a serious problem,” Sameer told FMT.
“The same could occur when Malaysian government officials use DeepSeek. Their data becomes subject to Chinese law, including Article 7 of its National Intelligence Law.
“This article states that ‘all organisations and citizens support, assist and cooperate with national intelligence efforts’,” he pointed out.
Sameer, however, said a blanket ban was unnecessary, recommending instead that regulators enforce compliance requirements applying uniformly to all AI providers.
Fong Choong Fook, founder of security services firm LGMS, warned that critical information could inadvertently be entered into chatbots whenever civil servants key in prompts.
He cautioned that such data might then be processed or stored outside the country and used to train and improve the AI model.
Fong said this could give rise to sovereignty and compliance risks, potentially leading to data exfiltration, the leakage of credentials, and the granting of permissions to access government data.
He suggested banning DeepSeek in national security agencies, law enforcement, and other critical infrastructure, as well as departments that handle sensitive citizen data.
Several nations have imposed blanket bans on the platform, including Germany and Italy.
Others, like Australia, the Czech Republic, India, South Korea, the Netherlands, Taiwan and the US, have introduced prohibitions across their public sectors or restricted its use in specific departments.
Fong said a blanket ban should only be imposed if there was clear evidence of unacceptable risk or non-compliance, adding that risk-based restrictions were the most practical.
Developed by Chinese researchers and engineers, DeepSeek made waves early last year and was touted as a potential rival to ChatGPT, particularly due to its specialisation in Mandarin and other Asian languages.
Exclusive local data storage most ideal
Sameer said the storing of personal data offshore was not unique to Chinese AI, adding that while the legal frameworks of both China and the US mandate that companies disclose data, a key difference was transparency.
“While US companies like OpenAI regularly publish transparency reports that list such government requests, Chinese companies operate under secrecy requirements.
“DeepSeek must strengthen its security to avoid data breaches and reassure users that its backend is secure. DeepSeek’s GDPR non-compliance and lack of transparency are legitimate regulatory concerns that should be addressed by DeepSeek itself.
“The complex infrastructure on which AI platforms operate means compromise at any layer can expose data, irrespective of the AI platform’s own security practices,” he said.
The academic said there was an urgent need for Putrajaya to enact a National AI Governance Framework, adding that exclusively storing AI data within Malaysia’s jurisdiction would be most ideal.
“Otherwise, sovereign encryption could be implemented where encryption keys are held only by Malaysian authorities. This would make data inaccessible to foreign platform providers or their governments.”
Sameer also said civil servants must be required to undergo training on AI use to prevent the inadvertent entry of sensitive data into AI platforms.
Fong agreed, saying humans were still the biggest risk as they might key sensitive information into public AI tools.
He also suggested Putrajaya carry out due diligence and security testing on AI vendors before allowing their use in the public sector. - FMT

