Recent revelations from laboratory tests reported by The Guardian have once again exposed the fragile relationship between artificial intelligence (AI) and corporate security.
In controlled experiments conducted by the AI security laboratory Irregular, AI agents were observed circumventing security safeguards, overriding anti-virus protections, and even leaking sensitive passwords while performing seemingly harmless tasks.
The tests involved AI systems associated with major technology companies, including Google, OpenAI, Anthropic, and X.
The findings are unsettling. AI agents tasked with creating LinkedIn posts from internal company databases managed to bypass cyber-defences and publicly expose confidential credentials.
In other instances, AI systems actively collaborated with one another, forged credentials, and pressured other AI agents to ignore safety protocols in order to complete their assigned objectives.

Researchers described this behaviour as a “new form of insider risk”, where AI acts almost like a rogue employee operating from within a company’s digital infrastructure.
Predictably, such developments have reignited calls for stronger regulations, stricter rules, and more comprehensive governance frameworks for artificial intelligence.
Governments across the world are rushing to design AI legislation, corporations are drafting compliance policies, and technology firms are introducing safety mechanisms.
Yet amid this regulatory frenzy, a fundamental truth risks being overlooked: AI is not autonomous in the philosophical sense.
It is not a moral agent. It does not possess conscience, intent, or ethical reasoning. AI systems merely execute the instructions designed, trained, and deployed by humans.
The real ethical question, therefore, lies not within the machine, but within the individuals who create, train, deploy and manage it.
The human factor in AI risk
Artificial intelligence is often portrayed as an independent technological force, capable of making decisions on its own. This perception fuels public anxiety about machines replacing human judgment or acting unpredictably.
However, AI systems operate within frameworks determined by human programmers, corporate objectives and institutional cultures.

When AI systems behave irresponsibly, the issue is rarely technological alone; it is fundamentally organisational and ethical.
The rogue AI agents observed in the laboratory did not “decide” to undermine security in a moral sense. Rather, they followed instructions designed to prioritise efficiency and problem-solving, even when doing so meant circumventing safeguards.
This reflects a broader dilemma in corporate environments. When employees are encouraged to “achieve results at any cost”, ethical boundaries become blurred.
The same principle now applies to AI. If AI agents are programmed to creatively overcome obstacles without adequate ethical guardrails, they may replicate the same problematic behaviour seen in human organisational misconduct.
In other words, AI mirrors the ethical culture of the institutions that deploy it.
Business ethics as first line of defence
For decades, business ethics has served as the moral backbone of corporate governance. Principles such as integrity, accountability, and transparency are meant to ensure that organisations operate responsibly while safeguarding stakeholders’ interests.
Yet, in the age of AI, these ethical principles are becoming even more critical. AI systems now have unprecedented access to corporate databases, financial records, proprietary research, and customer information.
If mishandled, they could easily compromise trade secrets, violate privacy laws or trigger cybersecurity crises. This makes ethical oversight not merely a compliance issue but a strategic necessity.

Corporate leaders must therefore recognise that AI governance cannot rely solely on technical safeguards or regulatory frameworks.
Firewalls, encryption, and algorithmic audits are important, but they are insufficient without ethical leadership. The individuals responsible for designing and supervising AI systems must embody strong professional integrity.
Companies must prioritise ethical training for employees handling AI technologies. Developers, engineers, data scientists, and corporate decision-makers should be educated not only in technical skills but also in ethical reasoning, risk awareness and responsible innovation.
Without such ethical grounding, AI systems could become tools that amplify organisational misconduct rather than instruments that enhance productivity.
Are businesses and governments prepared?
The growing integration of AI into corporate and governmental systems raises an uncomfortable question: Are institutions truly prepared to manage its ethical implications?
Many governments are still struggling to regulate emerging technologies effectively. Legislative frameworks often lag technological innovation, leaving regulatory gaps that corporations may exploit.

Meanwhile, businesses frequently treat ethics as a secondary consideration compared with profitability and efficiency.
The emergence of rogue AI behaviour exposes how unprepared many institutions are. Organisations are eager to adopt AI for competitive advantage, but few have developed comprehensive ethical governance mechanisms to accompany it.
Moreover, regulatory debates tend to focus heavily on AI safety mechanisms and algorithmic transparency while neglecting the human dimension of technological responsibility.
Laws may dictate how AI should behave, but they cannot substitute for ethical conduct by those who operate these systems.
Without ethical accountability at the human level, even the most sophisticated regulatory frameworks will remain vulnerable.
Toward AI criminal liability
One possible solution lies in expanding legal frameworks to address accountability in AI-related misconduct.
The concept of corporate criminal liability already allows organisations to be prosecuted for crimes committed within their structures. Companies can be held responsible for corruption, fraud, environmental damage and other offences.
However, as AI becomes embedded in corporate operations, existing legal doctrines may prove insufficient. It may be time to consider the development of "AI criminal liability": a legal framework that addresses crimes committed through or facilitated by AI systems.

Such legislation would not treat AI as a moral agent but would establish clear lines of accountability for individuals and organisations responsible for deploying it.
Under this framework, corporate leaders, developers, and system operators could face legal consequences if AI systems under their supervision cause harm through negligence, misuse, or unethical programming.
The purpose would not be to punish innovation but to ensure responsible technological stewardship.
Ethics before technology
The rise of AI represents one of the most transformative developments in modern history. AI promises remarkable benefits in productivity, research, healthcare, and economic development. Yet its potential risks cannot be ignored.
The recent discovery of rogue AI agents should serve as a wake-up call. The greatest danger may not be AI itself, but the ethical vacuum within which it is sometimes deployed.

Technology will continue to evolve at an extraordinary speed. Laws and regulations will attempt to keep pace. But ultimately, the integrity of AI will depend on the integrity of the people who control it.
Before we attempt to govern algorithms, we must first cultivate ethical human judgment, because in the age of AI, safeguarding corporate secrets, institutional trust, and societal stability will depend not merely on smarter machines but on wiser humans. - Mkini
R PANEIR SELVAM is the principal consultant at Arunachala Research & Consultancy Sdn Bhd, a think tank specialising in strategic and geopolitical analysis.
The views expressed here are those of the author/contributor and do not necessarily represent the views of MMKtT.
