You're grading assignments, reading them one by one, until one catches your eye. You can't put your finger on exactly what it is, but it doesn't sound like your student. It sounds like AI. So you run it through Pangram and get a result: 99% AI. What do you do with that?
An AI detector like Pangram is trained to pick up on signs that text was written by AI. If a piece of text gets a 99% AI score, that doesn't mean we think the entire text was AI-generated. Rather, it means we are 99% confident that AI was used to generate some portion of it.
For longer documents, we split the text into segments, so you can browse them to see whether every segment has high AI confidence or whether only one section of the text does.
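To make that segment-level reading concrete, here is a toy sketch of how per-segment scores could roll up into a document-level call. The scoring heuristic and function names below are stand-ins for illustration, not Pangram's actual model or API; the point is only that one very high-scoring segment is enough to flag the whole document.

```python
# Toy illustration of segment-level scoring. Everything here is a
# stand-in for demonstration -- NOT Pangram's model or API.

def score_segment(segment: str) -> float:
    """Hypothetical classifier returning P(AI-generated) for one segment."""
    return 0.99 if "delve" in segment.lower() else 0.02  # placeholder heuristic

def split_into_segments(text: str, max_chars: int = 400) -> list[str]:
    """Naive splitter: group paragraphs into roughly fixed-size segments."""
    segments, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            segments.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}".strip()
    if current:
        segments.append(current)
    return segments

def document_score(text: str) -> float:
    """Confidence that AI wrote *some portion*: driven by the worst segment."""
    return max(score_segment(s) for s in split_into_segments(text))
```

Under a rule like this, a ten-page paper with one heavily AI-edited section can score 99% even though nine pages are entirely human, which is why browsing the individual segments matters.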
I always recommend starting with the simplest step: talk to your student.
You could ask about their writing process to get a sense of how well they know their own submission. Or you could simply ask if they used AI. They may admit it: they were swamped and had to pick an assignment to take a shortcut on. Or they wrote a first draft, weren't happy with the result, and asked ChatGPT to improve it.
This is a great opportunity to discuss what is and isn't a violation of academic integrity. You can remind your student how they should handle an issue like this in the future. Should they ask for an extension? Or just turn in that bad pre-AI first draft?
Sometimes there's a mismatch between what a teacher considers cheating, what a student considers cheating, and what triggers an AI detector. Asking ChatGPT to rewrite a rough draft, leaning on Grammarly's AI features, or borrowing AI-generated phrasing are all common uses that may trigger AI detection, whether or not anyone involved thinks of them as cheating.
We recommend adopting an AI policy like this tier system to ensure that students and teachers are on the same page about which assistive tools are allowed. That prevents misunderstandings like a teacher who allows Grammarly, not realizing that Grammarly is now a full AI writing assistant, while also running an AI detector that will flag any student who uses Grammarly's AI features.
Say your student admitted to using some phrasing from ChatGPT. Or perhaps they claim that their case is a rare false positive. The best next step to clear their name and confirm that they did the work is to look at writing process artifacts. What research did they do for this assignment, and did they take notes? Do they have early drafts saved?
If they worked in Google Docs, select File -> Version history -> See version history to review their full writing process. It will be clear whether they pasted the text in from ChatGPT, typed it in a single pass from top to bottom (a sign that they had AI assistance and wanted to fake a writing process), or built the document up gradually. If they have a robust, multi-hour writing history, that's very compelling evidence that they wrote the work themselves.
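If you'd rather pull those timestamps programmatically than click through the UI, the Google Drive API's revisions.list endpoint returns the saved revision history of a file. A rough sketch follows; it assumes you've already set up google-api-python-client and OAuth credentials (shown here as a `creds` placeholder), and note that the API exposes coarse revision snapshots with timestamps, not a keystroke-level replay.

```python
# Sketch: list when each saved revision of a Google Doc was made, using
# the Drive API v3. Assumes `creds` holds OAuth credentials with a Drive
# read-only scope; obtaining them is out of scope for this sketch.
from googleapiclient.discovery import build

def revision_timestamps(creds, file_id: str) -> list[str]:
    """Return the modifiedTime of every saved revision of a file."""
    service = build("drive", "v3", credentials=creds)
    resp = service.revisions().list(
        fileId=file_id,
        fields="revisions(id,modifiedTime)",
    ).execute()
    return [rev["modifiedTime"] for rev in resp.get("revisions", [])]
```

A document that grew over several sittings will show a spread of timestamps; one that appeared in a single revision minutes before the deadline tells a different story.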
Derek Newton, author of the academic integrity newsletter The Cheat Sheet, often compares AI detectors to metal detectors. When you walk through a metal detector and it goes off, you don't immediately get arrested and sent to prison. Instead, security investigates further. Did you actually try to bring a gun through, or is your belt buckle just made of metal? Similarly, we believe that AI detection is a great way to flag assignments, but a detection warrants further investigation before any punitive measures are taken. A nonzero false positive rate means that any positive detection could be real, or it could be the statistically anomalous one-in-ten-thousand case where Pangram gets it wrong.
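To put that one-in-ten-thousand figure in perspective, here's a rough back-of-envelope calculation. The false positive rate comes from the paragraph above; the essay volumes are assumptions chosen for illustration, not real data.

```python
# Expected false positives at different scales, assuming (hypothetically)
# a 1-in-10,000 false positive rate on fully human-written essays.
fpr = 1 / 10_000

essays_per_semester = 150      # assumed: one teacher's grading load
essays_per_district = 200_000  # assumed: a large district over a year

print(fpr * essays_per_semester)  # 0.015 -> rare for any one teacher
print(fpr * essays_per_district)  # 20.0  -> effectively guaranteed at scale
```

Any individual teacher may never see a false positive, but across thousands of classrooms a handful are statistically inevitable, which is exactly why a flag should start an investigation rather than end one.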
If the student has evidence of their writing process, I would be inclined to believe them. In the worst case, they learn their lesson not to use AI assistance, even lightly.
If the student has a history of their work being flagged as AI, that should also be considered. They may get the benefit of the doubt once, but the more often it happens, the clearer it becomes that there is a real issue.
Hopefully this is a helpful guide to anyone navigating the nuances of AI plagiarism. It's a difficult situation to be in, which is why it's important to have the tools and information to handle a case like this when it comes up.