I gave a talk about ‘writing and publishing online’ to a group of aspiring poets at a retirement village. As a prelude, I read them a limerick about life:
There once was a thing called life,
Full of joy, love, and strife,
With ups and downs, it’s true,
But it’s a gift through and through,
So, cherish each moment, rife!
They were quite impressed. Only, I didn’t compose it. ChatGPT did. (GPT stands for Generative Pre-trained Transformer.) I had fun interacting with ChatGPT, but it has its limitations - for now.
I asked ChatGPT to write my profile in 200 words. It generated an embellished character reference. Convincing but not completely correct.
For the second test, I queried: “What are the obstacles to good governance in Malaysia?” (It also responds in Bahasa Malaysia).
ChatGPT gave me a 475-word report with these sub-headings: “ethnic and religious diversity, corruption, economic disparities, political fragmentation, balancing Islam and secularism, environmental concerns, brain drain and talent retention.”
Its conclusion? “Addressing these challenges requires effective governance, sound policies, and the ability to engage and collaborate with different stakeholders to ensure sustainable development, social cohesion, and economic progress for the country.”
Commonly cited factors - not what I would consider an in-depth assessment. It took several queries to obtain a reasonable answer.
I rephrased my query: “What is the level of public trust and confidence in the government and political institutions in Malaysia?”
It responded:
“… to have the most accurate and up-to-date understanding of public sentiment towards the government and political institutions in Malaysia, I recommend referring to recent news sources and surveys that provide insights into the current state of public opinion.”
Concepts and contexts
Not much use there for a serious researcher. Its “training data” is a vast body of text (a “corpus”) drawn from publicly available sources up to 2021. (It says Ismail Sabri Yaakob is still our prime minister.) Hence, its conversational responses carry a propensity for misinformation.
So, I turned to Google to cast a wider net for corroborating sources and to fact-check ChatGPT’s responses for contextual accuracy.
An expert view from TechCrunch notes: “… when ChatGPT or some other (large language) model gives you an answer, it’s always worth asking yourself (not the model) where that answer came from and if the data it is based on is itself trustworthy”.
ChatGPT advises users to “… (only) use it for generating ideas, getting inspiration, or as a starting point, but always complement its outputs with your own critical thinking, research, and editing skills.”
ChatGPT does not yet clarify concepts and contexts. Nor can it correctly differentiate falsehoods from facts. And it does not replicate individual writing styles or natural expressions - at least for now.
Human values and judgment are still indispensable in discerning meaning in the nuances of human relationships and communication.
However, ChatGPT will learn what it needs to learn from the vast corpus of text to become more sophisticated and human-like. The possibilities are boundless, the benefits far-reaching, and the “existential risks to humanity” could become real if left unregulated.
AI apps have already recreated human-like conversations, for instance, between Bill Gates and Socrates - weird but stimulating, nonetheless. An AI-generated conversation between Jesus and Satan is conceivable.
Apps such as Hello History and Historical Figures already provide users with a channel for textual chats with figures of the past. The unfiltered conversations could lead to a rewrite of history from the perspectives of, for instance, Adolf Hitler and Pol Pot - consequently reinforcing the ideologies of extremist groups lurking in their information silos.
Not infallible
I asked ChatGPT if AI-generative technology could polarise societies and lead to a breakdown in authentic human communication.
Yes, ChatGPT said, for these reasons: “social isolation, depersonalisation, loss of trust and authenticity, reinforcing biases and prejudices, impact on employment and socioeconomic dynamics, privacy and data concerns.”
However, it qualified its response: “… AI complements, rather than replaces or hampers, meaningful human relationships. Striking a balance between technological advancements and the preservation of genuine human connections requires thoughtful consideration, ethical guidelines, and responsible implementation of AI technologies.”
ChatGPT admits that it is not infallible, and there may be times when the information it provides is “incorrect or outdated… therefore, it's always a good idea to verify important information from reliable sources or consult with experts in the respective fields to ensure accuracy”.
ChatGPT has saved me considerable time in generating basic pointers to a research question. Soon, it will be able to write my articles in my voice. Will I use it as my ghostwriter? Not if I stay true to the tenets of professional journalism.
In using ChatGPT, these principles still matter - transparency in sourcing, responsibility in sifting the facts from falsehoods, honesty in my intentions, and accountability in writing for public readership. - Mkini
ERIC LOO is a former journalist and educator in Australia, as well as a journalism trainer in parts of Asia.
The views expressed here are those of the author/contributor and do not necessarily represent the views of MMKtT.