
In today’s academic world, Artificial Intelligence (AI) has become an indispensable tool. From drafting reports and checking grammar to generating creative ideas, it has revolutionised how educators, researchers, and students work. Yet behind this technological convenience lies a critical question: are we thinking with AI—or are we letting AI think for us?
I have been using ChatGPT since its early release, sometimes through a free version and sometimes through a subscription, and over countless interactions one truth has stood out to me: AI is only as intelligent as the human guiding it. The quality of its output depends entirely on the quality of the input. It thrives under clear direction, context, and intent. Without that, it becomes a digital mirror reflecting confusion.
For me, AI has never replaced intellect, intuition, or experience. It is an extension of my thinking: a tool that accelerates, organises, and refines ideas already formed in my mind. Whether I am preparing an emcee script, editing a research proposal, or crafting a translation, I rely on my academic judgment to decide what suits the context. AI may provide options, but discernment remains human.
A Wake-Up Call
One moment that reshaped my view of technology came during a talk by Prof. Emeritus Muhammad Haji Salleh, Malaysia’s sixth Sasterawan Negara. He said something that has never left me:
“Manusia sekarang malas untuk berfikir. Mereka lebih rela pajak otak pada teknologi.” (“People today are lazy to think. They would rather pawn their brains to technology.”)
He reminded us that people today no longer think deeply. They have surrendered their intellects to machines. That line struck me hard. It was a wake-up call. A reminder that the greatest danger of convenience is complacency. We no longer think first and act later. Even in replying to emails, many rush to let AI do it. This may make us faster, but it dulls what makes us human: our ability to think critically, empathetically, and originally.
Responsible Use, Ethical Mind
As an academic, editor, and educator, I believe AI can be a powerful partner—but only when used responsibly. Responsible use means maintaining intellectual control, respecting ethical boundaries, and never outsourcing judgment or creativity to a machine. It means using AI to enhance our thinking, not replace it.
As an English lecturer, I can often instinctively tell whether a piece of writing is human-written or AI-generated. There is something subtle yet unmistakable in the rhythm, phrasing, and emotional cadence of words that reveals the presence—or absence—of a human touch. It’s not about perfection; it’s about sincerity. Machines can imitate tone, but they cannot feel it. They cannot sense irony, cultural subtext, or the delicate weight of words born of human experience.
Every time I refine a text or write a speech, I remind myself that technology should follow our lead, not define it. Each task carries its own soul. A research proposal demands precision; an emcee script needs warmth and rhythm; an editorial review calls for sensitivity to tone and audience. No algorithm can replicate these nuances because they emerge from human experience and empathy.
The Human Element
Working at Universiti Teknologi Malaysia (UTM), I am constantly reminded of our motto, “Innovating Sustainable Solutions”. Innovation, to me, is not just about adopting technology. It is about integrating it meaningfully and ethically into our human pursuits. Sustainability in innovation means ensuring that progress does not come at the expense of intellect, empathy, or identity.
When I work with AI, I see it as a collaborative space where human and machine thinking meet—a form of cognitive partnership. I provide the reasoning, ethical frame, and creativity; AI provides speed, structure, and suggestions. The harmony lies in this equilibrium.
A Message to Fellow Academics
In academia, we pride ourselves on originality and integrity. But with AI now embedded in research, writing, and teaching, integrity extends to how we use these tools. We must teach our students that AI is not a shortcut to intelligence—it is a bridge connecting human thought to digital precision.
The danger lies not in AI itself, but in how we engage with it. When we stop questioning, editing, and reflecting, we risk intellectual stagnation. But when we use AI critically—asking, verifying, and contextualising—it becomes a genuine instrument of learning.
Responsible AI use also means being transparent. When I use ChatGPT for drafting, I treat it like a colleague who brainstorms ideas, not an author. I decide the tone, check the facts, and shape the final message. The machine may suggest the words, but the voice remains mine.
The Balance We Must Keep
The rise of AI marks one of the most profound shifts in how we create and communicate. Yet the essence of knowledge has not changed. Thinking still requires curiosity, humility, and effort. AI can offer scaffolding, but we must build the structure ourselves.
As educators, we owe it to our students to model what responsible innovation looks like—how to question, verify, and create with integrity. AI can inspire us to be more efficient, but it should also remind us to remain human.
In the end, the question is no longer how AI will change the way we work—it already has. Its pace is so fast that we can barely see what lies ahead. The real question is whether we will still recognise ourselves in the process. The goal is not to let technology think for us, but to think better because of it. The future of AI will always depend on the wisdom of those who use it. We must remain intellectually curious, ethically grounded, and ever mindful that technology advances only because the human mind dares to lead it.
Dr Wan Farah Wani Wan Fakhruddin teaches at the Faculty of Social Sciences and Humanities, Kuala Lumpur, Universiti Teknologi Malaysia. Her academic interests include studying the science of human communication through the lens of functional linguistics.
Source: UTM NewsHub