Following recent incidents in which users of X’s AI tool, Grok, created explicit non-consensual sexual content of women and children, the Centre for Independent Journalism (CIJ) finds MCMC’s move to temporarily block access to the chatbot timely.
Coming after mass reporting by victims, the block serves to protect human rights, and in particular, individuals and communities at disproportionate risk of harm as they see their images and identities violated.
A clear international trend of condemnation has emerged: Indonesia has likewise banned Grok, and the British media regulator, Ofcom, has launched an investigation into X, to name only a few of the first responders. Against this backdrop, Grok’s quick turnaround to limit sexually explicit AI image generation is long overdue.
Though international pressure has prompted X and xAI to take swift action on Grok, these measures should never have required pressure at all; they should have been prioritised and implemented from the outset to ensure a safe online environment as AI quickly moves to the forefront.
We view the explicit non-consensual sexual content of women and children generated using Grok as a form of technology-facilitated gender-based violence (TFGBV): such manipulated content exploits technology to violate the dignity, privacy, bodily autonomy, and safety of women, children, and gender-diverse people.
The generation and dissemination of such content reinforces discrimination, amplifies violence, which is also likely to manifest offline, and undermines the rights, safety, and participation of women and children.
Its root causes lie in structural gender inequality, harmful social norms, misogyny, and biased technological systems.
When these root causes go unaddressed, online spaces increasingly become environments where harmful ideologies, such as the incel movement and the manosphere, can take root and spread.
The reach, anonymity, and connectivity of digital platforms continue to be essential enablers of the right to freedom of expression. At the same time, the use of these platforms has also contributed to human rights risks.
In recent years, new technological advancements, including the increasingly prevalent use of artificial intelligence (AI), have made it easier for misogynistic, harmful, and even violent content to be amplified and promoted, reinforcing attitudes that devalue and endanger women and girls.
X and xAI must be held accountable
It is clear that, beyond structural inequalities and patriarchal and harmful social norms, the content and images generated using Grok reflect fundamental design and systemic failures by X and xAI.
The onus is therefore on X, as the platform owner, to adopt measures to prevent the creation and spread of non-consensual and sexually exploitative content.
X’s response so far, directing people to its existing reporting channels, has clearly been ineffective.
Though action has been taken to limit explicit AI images, X must put in place clear ethical content and moderation guidelines, drafted in collaboration with legal and institutional channels representative of the public, that respect and protect human rights without indiscriminately suppressing freedom of expression.
As such, while nations around the world have pressured X into this reactive action, it remains incumbent on X and xAI to act as socially responsible actors that prevent harm by complying with the following:
Independent urgent audits and impact assessments - Conduct and publish an urgent audit and impact assessment of how Grok’s outputs and the algorithms of X may be contributing to the amplification of harmful content and content related to TFGBV against women, children, and gender diverse persons.
Embed safety-by-design - As a starting point, block the AI generation of non-consensual explicit sexual content depicting real or identifiable individuals, and include strong filters and content safeguards until high-risk content is independently audited and validated as safe.
We hope this will be a stepping stone for all social platforms to create similar filters and safeguards for all critical issues, ensuring the safety of everyone.
Halt algorithmic amplification - Deprioritise and demonetise sexualised, abusive, and misogynistic content that does not meet the threshold of consent; adjust recommender systems to prioritise safety and human rights rather than engagement and virality.
Monetisation incentives and structures such as ads, visibility boosts, and engagement rewards must be removed from TFGBV-associated accounts, in line with the consent safeguards and guidelines the platform should put in place.
Improve moderation, transparency, and accountability - While X continues to use its grievance mechanism for detection, it must shift from reactive to proactive moderation by investing in both automated detection and trained human oversight of TFGBV; implement fast-track takedown procedures for non-consensual sexual content; strictly enforce moderation policies through measures such as suspending accounts and banning repeat offenders; and publish regular transparency reports on TFGBV.
X should provide updated data on the responsiveness of their reporting mechanisms, including the number of reports received from Malaysia on TFGBV and the types and number of responses and remedial actions to combat these.
Audit and address AI bias in training and deployment - Audit and clean training datasets that contain misogynistic, sexualised, or exploitative content; and continuously monitor and correct discriminatory outputs.
Ensure survivor redress - Provide accessible reporting, rapid removal of the content, the prevention of re-uploads (through hashing and detection technologies), and meaningful remedies for affected women and children.
The harms associated with Grok’s misuse are foreseeable, preventable, and systemic.
Immediate action by X and xAI is a matter of corporate accountability, respect for human rights, and protection of at-risk communities such as women and children. A continued focus on maximising profit without acting decisively will only enable TFGBV and deepen the harm.
Regardless of which government compels institutional change, X and xAI must remain accountable as a platform used to disseminate ideas of public interest, so that it does not cause harm and can benefit the public.
Proportional, rights-based regulation
Under the Convention on the Elimination of All Forms of Discrimination Against Women (Cedaw) (General Recommendation No 35) and the Convention on the Rights of the Child (CRC), states are obligated to act with due diligence to prevent TFGBV, regulate digital and AI systems, hold perpetrators and facilitators accountable, dismantle discriminatory norms, and ensure effective protection, justice, and remedies for women and children in both offline and digital spaces.
In taking these actions, we recognise that while restricting access to Grok is legitimate, necessary, and proportionate in this instance, it should be neither indefinite nor the default mode of action, as it carries a long-term risk of overbroad censorship and unjustified restriction of lawful expression.
As such, MCMC and the government must:
Set a clear timeline for the ban, and for X to adopt the immediate measures outlined above, with a clearly defined threshold for lifting it.
Indefinite blocks can restrict lawful access to information and expression, and cut off public access to emerging tools used for research, debate, and knowledge sharing.
Move beyond reactive enforcement and shift regulatory focus from purely content blocking to addressing the root causes of TFGBV, including the systemic and social contributors such as misogyny, discrimination, and inequalities, and the culpability of algorithms and monetisation incentives that generate and amplify TFGBV content.
Continue being transparent by providing clear public updates on the legal basis, thresholds, and necessity of the block’s continued use against X and xAI.
MCMC should share information on the clear legal justifications and thresholds for restricting expression, as well as the number of takedowns issued under its powers.
The use of both the Online Safety Act and the Communications and Multimedia Act to compel X and xAI is highly problematic, given their lack of clarity, absence of public interest safeguards, and potential for misuse.
We call for legal clarity in both the law and its usage.
This will create a more effective environment for combating these online harms, as it also empowers others to become champions against TFGBV, while increasing trust in MCMC as the regulator of online safety in Malaysia.
Invest in education and public awareness initiatives on digital literacy, in initiatives that promote gender equality and ethical, responsible digital engagement, and in human rights norms on data protection and privacy.
In a rapidly developing society, some will soon find ways to circumvent safety measures. We must therefore not only build educational systems that foster healthy discourse and community-level protective mechanisms, instilling a culture that shuns these behaviours, but also eliminate the economic incentives for monetising non-consensual sexual content on all online platforms.
Guarantee independent oversight of its actions against X through a multi-stakeholder body that represents the public interest.
Foster collaboration with civil society, media, academia and other experts to become more effective agents of change in creating a safe digital environment.
We, as civil society actors, believe Malaysia can lead the way for international powers in compelling X and xAI, and by extension all potentially harmful generative AI features, to become socially responsible tools that meet their legal obligations to protect both our rights and our safety.
Malaysia has an opportunity to lead by example as we demonstrate it is possible to promote and protect the rights of women and children to their privacy and to live life free of violence, while upholding international human rights obligations and freedom of expression through transparent and accountable measures. - Mkini
The CENTRE FOR INDEPENDENT JOURNALISM is a non-profit organisation that aspires for a society that is democratic, just, and free, where all people enjoy free media and the freedom to express, seek, and impart information