How might we teach with artificial intelligence in higher education?
Today I’m starting a new series on this Substack to explore that question. Each post will take up a different part of it, from curriculum to institutional support to opposition, from AI literacy to assessment. With each one I invite readers to share their thoughts and experiences.
Let’s kick things off with perhaps the biggest challenge within that topic. How can we in higher education reduce the AI cheating problem this fall?
To recap: right now this is an immense problem. It's clear that students have ready access to a wide range of generative AI tools. With little effort they can have bots produce class content, from papers and reports to slideshows and code.
It's not an entirely new problem. Cheating is as old as assessment. There are traditions of people writing papers for money, of Greek houses maintaining archives of papers, etc. There’s even at least one science fiction story about it. Now AI offers a new dimension of cheating.
In this post I’d like to share what I'm seeing of academic thinking and practice about AI-enabled cheating, based on numerous conversations over the past two years with a wide range of academics from a variety of institutions and disciplines. I’ll identify different types of responses, each with its advantages and problems.
I’m also going to focus on practical strategies and tactics in this post, not theory or big picture dimensions, for reasons of space and focus.
Some caveats: first, data about student use of AI isn’t terrific, since surveys rely on self-reporting. Ditto faculty and staff use. Respondents have many reasons to get… creative about their experience. This makes it hard to generalize about student AI work. Second, I don't want to volunteer my own teaching practice in an exemplary way here, because all of my fall classes are about education and technology, so their meta level makes them unusual. Third: the field of postsecondary teaching is vast and complex. I may well have missed some strategies, which is why I hope folks will volunteer them in the comments.
Fourth: what follows is focused mainly on in-person classes. Online and especially asynchronous classes require their own treatment. And fifth, I’m trying to discuss the problem of students cheating while making stuff for class. In most discussions of the topic that “stuff” is writing, but I want to make sure we also address other forms of student work: exams, code, art, reports. (This is part of the Two Cultures divide in AI discussions, which I should address in another post.)
Here’s what we’re doing and considering right now:
The first strategy is detection. Turnitin and GPTZero are the leaders in this field. They scan student writing and then provide a determination of how much of the text is AI-generated.
Problems: they are awful at this, issuing plenty of false positives and negatives. They may also penalize writing styles which the academy otherwise accepts, including work by students for whom English isn’t their first language.
Oral examinations and presentations require the student to speak from their own mental work. Students also get to work on presentation skills. The presentation can be of various lengths. It can also involve discussion with the faculty member and fellow students. One variant: oral presentations for which students can’t use AI to prepare. Kelley Skillin on Facebook recommends doing these as in-class surprises, along the lines of pop quizzes or classic legal pedagogy.
Problems: this doesn't scale well beyond small classes or brief checks. It can also penalize students who have social anxiety.
Called "blue books" in the American tradition, students write in these sheafs of paper by hand, without computing devices nearby. They work from memory and their own thoughts. These kind of assessments and in-class work can take many forms: quick tests, longer writing, students responding to each other, etc. Here is one story about blue books’ appeal.
Problems: student handwriting is worse than before. Students have no time for revision, which is often a key part of writing pedagogy. They can make no use of digital technologies we normally see as beneficial, like spellcheck, formatting, digital documents, or class materials in a learning management system/virtual learning environment.
(There is also some serious nostalgia for blue books from older faculty.)
Offline or locked-down computers are a related option. These machines cannot connect to AI, such as when they are air gapped or constrained by software. The advantages include the benefits of digital text (no handwriting; yes to spellcheck) as well as the possibility of including approved digital documents as part of the assignment - for example, a PDF of the novel a student is analyzing.
Related to this are applications which strictly control the student’s digital environment for the duration of an assessment. ExamSoft is one example. NB: I haven’t used any of these apps recently, so I welcome any corrections and feedback.
Problems: not every instructor knows how to set up the hardware or software, or even has access to them. A department or school might need to establish a lab for this purpose, which means scale limitations. Additionally, some might object to the software solutions as too intrusive.
Another strategy emphasizes class discussion. Students know that participation will count for a lot. The instructor shapes discussions accordingly and records them to some extent, using them as prompts for assessment. The reasoning is that sometimes AI applications do not have class discussions in their training sets.
Problems: students still have to show they know the class basics, which will likely be available to AI. Heavy participation requirements also burden introverted students and those with social anxiety, for whom class participation is draining.
Assignments grounded in very recent or local material offer another angle. The idea is that AI might not have access to something which occurred this week, or which involves an offline situation, such as one in the community. (I’m reminded of the final exam in an Irish history seminar, where we had to respond to two letters to the editor about Irish politics, drawing on our knowledge of their historical contexts.)
Problems: AIs are getting more current. And students can use AI to generalize from a specific real world case to a more abstract one.
Banning AI outright cuts off the threat cold. This can take place at various levels: individual class sessions, entire courses, departments, a campus. Academics may decide to do this to prevent AI cheating or as a response to a broader critique of the technology (Maha Bali offers a good summary).
Problems: this doesn't work. If a campus blocks some AI at the network level, students can access it through phone networks. If a classroom blocks all digital devices through policy or a Faraday cage, students get access to machines and AI the minute they leave class. A ban also raises a different challenge if a campus or unit considers familiarity with AI to be a desired outcome for graduates.
Process-based writing, also referred to as "scaffolding," is another response. Instead of students turning in one document at one time, they produce multiple parts and versions over the course of the semester. For example, they turn in a paper proposal, then an outline, then a bibliography, then one draft, then another, and at each point the instructor gives feedback. This is not a new pedagogy, but one stretching back decades, developed partly in response to pre-AI forms of cheating.
Problem: students can easily use AI at every step, especially when they aren’t in the physical classroom.
Much of this discussion concerns individual students, but there are options to consider involving small groups. For one example, Inside Higher Ed columnist John Warner recommended that students use AI in pairs during a recent Future Trends Forum (recording coming!). That would reduce the chances of emotional identification and add some discussion about the process. For another, Megan B. McNamara, a University of California-Santa Cruz lecturer, recommends small group AI work for accountability (students record their participation) and time management (the group context gives more structure for progressing on long-term assignments). McNamara also advises “In-class flipped-exam activities”, wherein “students create exam questions with rationales for correct/incorrect answers, or collaborate on detailed answers to fake exam questions without using computers.” (I found her ideas in a recent issue of Beth McMurtrie’s newsletter.)
There are many versions of this, such as students having AI generate writing which they then critique, or students using AI to work through big datasets with each other and the instructor.
Problems: this doesn't stop students from using AI outside of class to make stuff. And doing this work takes some significant research and experimentation; see below.
Historian Niall Ferguson coined the term “cloister and starship” to describe a two-phase sequence. Students alternate between time spent studying fully offline (the cloister) and time immersed in digital technology, including AI (the starship). Each phase offers its particular strengths: the cloister’s mental and interpersonal focus, the starship’s many affordances.
Problems: this feels very much like a primarily humanities practice; a science lab, which requires a great deal of technology, might occupy a third category. And students might rebel in the cloister.
Finally, there is doing nothing: keep teaching the way one has done and use the same assessment practices as before.
This obviously opens the door to extensive AI-enabled cheating. It also doesn’t help students prepare for a world beyond class which AI has transformed. But it is, from what I’ve seen, an option some faculty members are taking.
Taken together, all of these approaches have additional problems. For one, faculty aren't getting a lot of help on this score. Many campuses offer faculty and staff some form of professional development and support through teaching centers, libraries, educational technologists, etc., but these often don't scale up very far. Moreover, the largest group of faculty are adjuncts, who often can't access such support or who have to race off to another gig. And some (it’s hard to say how many) instructors haven't yet worked on this problem - heck, a serious number tell me even in 2025 that they haven't used LLMs.
Second, a good number of campuses are experiencing financial stress, as I’ve written elsewhere. Cutting professional development funding and support is often a first step in a crisis. It's hard to argue for, say, expanding a teaching center’s staff or hiring an AI and teaching officer when an institution is laying off people.
My gut tells me that we need to rethink assessment from top to bottom, but that nearly all colleges lack the resources to do this. Maybe some elite liberal arts colleges not currently targeted by Trump can attempt such an effort, from which everyone else can benefit.
Otherwise I think we’re in serious trouble. It’s possible that no combination of these practices will significantly dent the AI-enabled cheating problem, which will mean more students will fail to learn, which in turn depresses the value of post-secondary education.
One further recommendation: we academics need to develop post-AI classroom practices openly, involving the entire post-secondary community, especially students. Our policies should be clear. Sharing these practices, even pilots and experiments, can help academics everywhere develop their own. We should build out more of the scholarship of teaching, in Boyer’s famous formulation, to describe this work.
So now I turn things over to you. Am I being too bleak? What do you think of these strategies? What are you seeing in your parts of the academic ecosystem? What has this account missed?
I hope we can have a good and useful conversation about the problem.
(thanks to many friends for reviewing and contributing, like Jeanne Eicks and Peter Shea)