By Manjit Singh Sidhu
We are living in a time of rapid technological transformation, and few developments have made as big a splash in education as generative AI. Tools like ChatGPT, DeepSeek, Google Gemini, and image generators like Midjourney are becoming as familiar to students as calculators and search engines once were. But this shift is not just about new tools; it is changing the very fabric of how we teach, learn, and assess. And nowhere is that more apparent than in how we evaluate students.
Historically, the traditional model of assessment has been about measuring what students know and how well they can apply that knowledge independently. Essays, quizzes, exams, and presentations have been our go-to tools. They are familiar, easy to implement at scale, and designed to gauge individual understanding.
But here is the catch: these systems were created for a world where knowledge was scarce and locked in textbooks or libraries. Today, knowledge is everywhere, and AI can write a five-paragraph essay, solve complex equations, or summarize entire textbooks in seconds. So, what happens when the "doing" of learning is something a machine can replicate? Are we still measuring the right things?
Is generative AI a disruptor or an assistant? At first, many educators saw generative AI as a threat. Students could use it to write papers, solve problems, or generate code. The fear was clear: if a student can outsource their work to AI, how do we know what they actually know?
But that framing misses a bigger point. Generative AI is not just a workaround; it is becoming an essential part of how people work and think. In the real world, professionals are increasingly expected to know how to use AI tools to be more effective. The skill is no longer just in producing something from scratch; it is in prompting, refining, curating, and building on what AI can provide.
This shift forces us to ask a deeper question: Are we assessing what matters for the future?
Let us rethink what we measure. If students are going to enter a world where collaboration with AI is the norm, our assessments need to evolve to reflect that reality. Instead of focusing purely on rote memorization or isolated performance, we should be thinking about how to evaluate:
- Critical Thinking: Can students evaluate the quality of AI-generated content? Can they spot bias, fact-check, or improve upon what AI provides?
- Prompt Engineering: Do they know how to interact with AI tools effectively? Can they give clear, targeted instructions to get the results they need?
- Creativity and Originality: Are students able to generate unique ideas, perspectives, or projects that go beyond what AI could produce alone?
- Ethical Use of AI: Do students understand when and how it is appropriate to use AI? Are they aware of issues like plagiarism, data privacy, and responsible sourcing?
- Process over Product: Can students reflect on how they approached a task, what decisions they made, and why, rather than just showing the final outcome?
These are not easy things to measure, but they are increasingly important. The goal of assessment should be to support learning, not just to sort or rank students. And that means shifting from a model of surveillance and gatekeeping to one of guidance and growth.
So, what does this look like in practice?
One approach is process-based assessment. Rather than grading just the final essay or code, educators can ask students to document their journey: what prompts they tried with AI, what worked and what did not, and how they revised and improved the output. This turns assessment into a learning experience itself.
Another strategy is collaborative assessment. Since real-world work is often team-based (and now, AI-assisted), we can create tasks where students work together with each other and with AI tools to tackle complex, open-ended problems. Think design challenges, simulations, or multimedia storytelling.
Then there are oral and reflective assessments. Having students explain their work, their thinking, and how they used AI in the process can be incredibly revealing. It also reduces the temptation to cheat, because it is hard to fake genuine understanding in a conversation.
Some institutions are even exploring AI-inclusive assessments, where students are encouraged (or required) to use AI tools as part of the task. The focus then becomes not whether they used AI, but how skillfully and ethically they did so.
It is easy to feel overwhelmed by these changes, but the human role in education is not going away; it is becoming more vital. Teachers are no longer just content experts; they are coaches, facilitators, and curators of meaningful learning experiences.
That means designing assessments that are hard to cheat on, not because they are surveillance-heavy, but because they are deeply authentic. It means building trust with students and encouraging a culture of curiosity, integrity, and reflection. And it means modeling thoughtful, ethical use of AI ourselves.
Yes, generative AI is changing education, but it does not have to undermine it. If we lean into the opportunity, we can create richer, more relevant ways to understand what students are learning and how they are growing.
We are at a crossroads. We can cling to outdated methods and try to police our way through the AI era, or we can reimagine assessment to align with the world students are living in. That means embracing complexity, encouraging creativity, and recognizing that learning is more than just producing answers; it is about asking better questions, making thoughtful choices, and becoming adaptable, ethical thinkers. And that is something no AI can do for us.
The author is a Professor at the College of Computing and Informatics, Universiti Tenaga Nasional (UNITEN), Fellow of the British Computer Society, Chartered IT Professional, Fellow of the Malaysian Scientific Association, Senior IEEE member and Professional Technologist MBOT Malaysia. He may be reached at manjit@uniten.edu.my