What to do when a student submission is flagged as AI

Feb 14, 2025

You're grading assignments, reading them one by one, until one catches your eye. You can't quite put your finger on why, but it doesn't sound like your student. It sounds like AI. So you put it into Pangram and you get a result: 99% AI. What do you do with that?

Understand what it means

An AI detector like Pangram is trained to pick up on signs that text was written by AI. If a segment of text gets a 99% AI score, that doesn't mean we necessarily think the entire text was AI-generated. Rather, we are 99% confident that AI was used to generate some portion of the text.

In longer documents, we split the text up into segments. So you can browse the segments to see whether they all have a high AI confidence, or if it's just one section of the text.
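To make the segment-level reading concrete, here is an illustrative sketch of how per-segment confidences can be interpreted. Pangram's actual segmentation and scoring are internal; `split_into_segments`, `summarize`, and the word-chunk sizing below are hypothetical stand-ins, not the real implementation.

```python
# Illustrative only: shows how a document might be split into segments
# and how per-segment AI confidences combine into a document-level flag.
# The segmentation scheme and threshold here are assumptions.

def split_into_segments(text, max_words=150):
    """Split a long document into roughly fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize(segment_scores, threshold=0.9):
    """Report which segments look AI-generated vs. human-written."""
    flagged = [i for i, s in enumerate(segment_scores) if s >= threshold]
    return {
        "segments": len(segment_scores),
        "flagged": flagged,
        # One high-confidence segment is enough to flag the document,
        # even if the rest of the text scores as human-written.
        "document_flagged": bool(flagged),
    }

scores = [0.02, 0.05, 0.99, 0.03]  # e.g. one AI-assisted paragraph
print(summarize(scores))
```

The point of browsing segments is visible in the example: a 99% document score can come from a single AI-sounding section sitting inside otherwise human writing.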

Talk to your student

I always recommend the simple action of talking to your student.

You could ask about their writing process to try to get a sense of how well they know their own submission. Or you could simply ask if they used AI. They may admit it – they were swamped and had to choose an assignment to take a shortcut on. Or they wrote a first draft and weren't happy with the result so they asked ChatGPT to improve it.

This is a great opportunity to discuss what is and isn't a violation of academic integrity. You can remind your student how they should handle an issue like this in the future. Should they ask for an extension? Or just turn in that bad pre-AI first draft?

You could also direct the student to AI tools that are appropriate for school and that promote learning instead of shortcuts.

Check for misunderstandings

Sometimes there's a mismatch between what a teacher considers cheating, what a student considers cheating, and what triggers an AI detector. Here are some common ways to use AI in a way that may trigger AI detection.

  • Grammar checkers like Grammarly that incorporate AI assistance into the writing process
  • Translation tools, which are often built on LLMs
  • Google Docs AI features like "Help me write"
  • Brainstorming and researching with ChatGPT, then reusing phrases the AI wrote
  • Asking ChatGPT for wording advice

We recommend using an AI policy like this tier system to ensure that students and teachers are on the same page about which assistive tools are allowed. This prevents misunderstandings such as a teacher allowing Grammarly, not realizing that Grammarly is now a full AI writing assistant, while also using an AI detector that would flag any student who uses Grammarly's AI features.

Look at writing process artifacts

Say your student admitted to using some phrasing from ChatGPT. Or perhaps they claim that their case is a rare false positive. The best next step to clear their name and confirm that they did the work is to look at writing process artifacts. What research did they do for this assignment, and did they take notes? Do they have early drafts saved?

If they worked in Google Docs, select File -> Version history -> See version history to see a full history of their writing process. It will be clear if they just copied from ChatGPT and pasted into the file or if they typed it in one go from top to bottom (a sign that they had AI assistance but wanted to fake the writing process). If they have a robust multi-hour writing history, then that's some very compelling evidence that they wrote the work themselves.

Consider the stakes

Derek Newton, author of the academic integrity newsletter The Cheat Sheet, often compares AI detectors to metal detectors. When you walk through a metal detector and it goes off, you don't immediately get arrested and sent to prison. Instead, they investigate further. Did you actually try to bring a gun through security, or is your belt buckle just made of metal? Similarly, we believe that AI detection is a great way to flag assignments, but a detection warrants further investigation before any punitive measures. A nonzero false positive rate means that any positive detection could be real, or it could be the statistically anomalous one-in-ten-thousand case where Pangram gets it wrong.
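A quick back-of-envelope calculation shows why "one in ten thousand" still warrants investigation rather than automatic punishment. The false positive rate comes from the paragraph above; the class sizes below are made-up numbers for illustration.

```python
# Expected false alarms at scale, given a small false positive rate.
# The 1-in-10,000 rate is from the article; the workload is hypothetical.

false_positive_rate = 1 / 10_000
submissions = 150 * 4  # assume 150 students, 4 essays each, all human-written

expected_false_alarms = false_positive_rate * submissions
print(f"Expected false positives: {expected_false_alarms:.2f}")
```

Across 600 honest submissions, you'd expect roughly 0.06 false flags: rare, but not impossible, which is exactly why a flag should start an investigation rather than end one.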

If the student has evidence of their writing process, I would be inclined to believe them. In the worst case, they learn their lesson not to use AI assistance, even lightly.

If the student has a history of their work being flagged as AI, that should also be considered. They may get the benefit of the doubt once, but the more times it happens, the clearer it becomes that there is an issue.

Hopefully this guide is helpful to anyone navigating the nuances of AI plagiarism. It's a difficult situation to be in, which is why it's important for teachers to have the tools and information to handle a case like this when it comes up.


Max Spero
CEO, Co-founder

Max is a seasoned machine learning engineer. He most recently worked on autonomous vehicles at Nuro, leading their active learning effort. He has a long history of deploying successful machine learning products at Google, Two Sigma, and Yelp.

Max holds a B.S. in theoretical computer science and an M.S. in artificial intelligence from Stanford University. In addition to his passion for building, he is also an active member of the Magic: the Gathering cube community.
