What teachers can do to tackle AI-driven examination cheating

Published 7 hours ago · 4-minute read

The role of Artificial Intelligence (AI) in higher education is the subject of ongoing debate—particularly regarding how students use it to complete assignments. While AI offers immense opportunities for enhancing learning, concerns arise when students use tools like ChatGPT to generate term papers or assessments and then submit the output as their own work for grading.

As an instructor, I have encountered cases where students submit assignments that are technically correct but suspiciously flawless, especially when contrasted with their previous work. Even after designing highly contextual questions, some students still relied on Generative Pre-trained Transformers (GPTs), producing generic responses that lacked relevance or depth.

This unethical use of AI in education has become a global concern. Many educators are struggling to guide students toward using AI responsibly, rather than as a shortcut that bypasses critical thinking and problem-solving. There is a pressing need to help learners develop the discipline to think independently and seek authentic solutions rather than rely on automated quick fixes.

AI cannot replace the human brain. Its real value lies in assisting with tasks—organising data, conducting routine analyses, and improving efficiency. Tools like ChatGPT do not possess original knowledge. They are built on language models trained on vast amounts of existing online text, and they “learn” patterns from that material to generate plausible responses to prompts. Therefore, when presented with obscure or context-specific questions, these tools may offer irrelevant or even inaccurate answers.

This is particularly dangerous for novice learners. In the age of misinformation, AI systems may unwittingly reproduce falsehoods due to the replication of unreliable data—a phenomenon akin to the “woozle effect.” A false claim published online and repeated across multiple sources can be treated as fact by AI models. Students unfamiliar with a topic may not detect such errors, potentially mistaking fabricated content for truth.

AI misuse isn’t confined to academia. In politics and media, we’ve seen deepfakes, manipulated images, and altered videos used to deceive and damage reputations. We recently witnessed images of a neighbouring country’s president being manipulated. The untrained eye cannot always detect such fabrications—much like students unable to recognise AI-generated academic falsehoods. This underscores the importance of educating students on how to use AI tools constructively and ethically.

For instance, AI can be a powerful aid in academic writing if applied correctly. Asking students to reflect on AI-generated content encourages them to analyse, critique, and improve their own work. Rather than simply retrieving ready-made answers, students should engage with the content, consider its relevance, and refine it to meet academic standards.

However, educators must be strategic. Assignments should go beyond prompts that AI can easily respond to. Questions like “Assess the impact of the Industrial Revolution on medicine” are readily answerable by ChatGPT. Without critical engagement, students simply copy and paste these responses and learn nothing in the process. Educators must redesign assessment methods to encourage original thought and deep understanding.

Globally, institutions are experimenting with strategies to curb AI misuse. Some advocate for supervised, in-person exams or group discussions that promote collaboration, argumentation, and reflection. These methods develop not only critical thinking but also communication skills in both spoken and written forms. Students learn to structure arguments, articulate ideas, and refine them through interaction.

While these approaches have merit, they alone cannot counter the influence of AI. Rather than avoiding AI, we must incorporate it into learning in meaningful ways. Educators can teach students to use AI tools for feedback and editing, thereby enhancing their understanding of language and content quality. For example, students might write a short essay, then input it into an AI tool with the instruction to refine grammar or improve clarity. They would then compare both versions, identify differences, and learn from the AI suggestions.
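To make the exercise concrete, here is a minimal sketch in Python, assuming the OpenAI client library and an API key in the environment; the model name, prompt wording, and sample draft are illustrative assumptions, not a prescribed setup. The script asks the model to edit a draft for grammar and clarity only, then prints a line-by-line comparison for the student to study.

```python
# A minimal sketch of the refine-and-compare exercise, assuming the
# OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# environment variable. Model name and prompts are illustrative.
import difflib

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

student_draft = (
    "The industrial revolution change medicine in many ways, because "
    "new machines and factory work made new illness appear."
)

# Ask the model to edit for grammar and clarity only, not to rewrite.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[{
        "role": "user",
        "content": (
            "Improve the grammar and clarity of the following paragraph. "
            "Keep the author's ideas and structure; change only the "
            "wording:\n\n" + student_draft
        ),
    }],
)
revised = response.choices[0].message.content or ""

# Show the student exactly what changed, line by line.
diff = difflib.unified_diff(
    student_draft.splitlines(), revised.splitlines(),
    fromfile="student_draft", tofile="ai_revision", lineterm="",
)
print("\n".join(diff))
```

The tooling matters less than the habit it builds: the student sees every change the model made and must decide which edits to accept, and why.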

Such practices introduce students to prompt engineering—the art of crafting precise and actionable AI instructions. The quality of AI responses is directly tied to the clarity of the prompt. Learning how to structure effective prompts is a valuable skill, especially in today’s data-driven, AI-enhanced world.
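As a hypothetical illustration, compare two ways of posing the same task; the wording below is an assumption chosen for demonstration, not a recommended template, but the contrast shows why structure matters.

```python
# A vague prompt leaves the model to guess the scope, audience, and depth.
vague_prompt = "Tell me about the Industrial Revolution and medicine."

# A structured prompt states the role, scope, length limit, and what to
# do when unsure; each constraint narrows the space of possible answers.
structured_prompt = (
    "You are tutoring a first-year history student. In at most 200 words, "
    "explain how factory working conditions during the Industrial "
    "Revolution changed public-health practice. Give one concrete "
    "example, and if you are unsure of a fact, say so rather than guessing."
)
```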

For educators worried about students misusing AI for assignments, one solution is to design tasks grounded in specific contexts—local experiences, culture, or classroom discussions. Such questions are more difficult for AI tools to answer accurately, as they require insider knowledge or first-hand classroom interaction. This strategy helps ensure only actively engaged students can respond effectively.

We can also guide students toward using academic AI tools for research. Platforms like Research Rabbit, Connected Papers, Elicit, and Litmaps assist in organising and exploring scholarly literature. These tools can help students identify credible sources, build literature reviews, and construct well-researched arguments—skills that AI can support but not replace.

Ultimately, educators must be proactive. Avoiding AI is not an option. Instead, we should aim to equip students with the skills to use it responsibly, critically, and creatively. This begins with instructors understanding the tools themselves. Without this knowledge, we risk being misled by students, ultimately producing graduates who are ill-prepared for the demands of the modern world. 


Origin: The Standard