The Latest in AI Detection Research

Elyas Masrour
March 4th, 2025

Pangram continues to emerge as an authority in detecting AI-generated content. Our industry-leading approach and models consistently appear in the latest work in the field of AI detection, so today we want to highlight some recent studies and their findings!

Study 1: People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text

Study Overview

In this paper, researchers from the University of Maryland study human detection of AI-generated text. They hire annotators with varying levels of familiarity with LLMs to read 300 non-fiction articles and classify each one as human-written or AI-generated. They find that people who use LLMs frequently for writing tasks excel at detecting AI-generated text, even without training.

What it says about Pangram

The study benchmarks human ability against "automatic detectors" (i.e., Pangram). Take a look at the results:

Pangram's Humanizer model (more on that below) and Pangram were far and away the best detectors, catching 100% of AI-generated text. Both of our models also remained robust to paraphrasing and humanizing, maintaining a 90% detection rate.

For More Information:

Check out the published study here

Study 2: Cross-Domain Machine-Generated Text Detection Challenge

Study Overview

In this study, researchers from the University of Pennsylvania wanted to benchmark whether detectors could generalize across a fixed set of AI models, document types, and "adversarial attacks" (attempts to make AI-generated text harder to detect). They find that "detectors are able to robustly detect text from many domains and models simultaneously". If anyone tells you that AI detectors don't work, just point them to this study!

What it says about Pangram

There's Pangram, at the top! We came in first place, tied with a detector from a research team at Leidos that was designed and trained specifically for this study.

For More Information:

Find our entire blog post about this topic here and check out the published study here!

Study 3: ESPERANTO: Evaluating Synthesized Phrases to Enhance Robustness in AI Detection for Text Origination

Study Overview

This study takes a look at an attack called "back-translation", where bad actors translate text through a number of languages before translating it back to English in order to evade AI detection. The researchers find that this retains the semantic meaning of the text while significantly reducing the detectability of AI-generated text (on most detectors 😄). A minimal sketch of how such an attack works is shown below.
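To make the attack concrete, here is a minimal illustrative sketch of a back-translation pipeline. The `translate` helper and the pivot-language chain are hypothetical placeholders, not the study's actual setup; any machine-translation system could stand in.

```python
# Illustrative sketch of a back-translation attack (not the study's exact pipeline).
# `translate(text, source, target)` is a hypothetical stand-in for a real
# machine-translation call; the pivot-language chain below is arbitrary.

def translate(text: str, source: str, target: str) -> str:
    """Placeholder for a machine-translation API call."""
    raise NotImplementedError("plug in your MT system of choice")

def back_translate(text: str, pivot_langs=("fr", "de", "ja")) -> str:
    """Round-trip English text through several pivot languages and back."""
    current, source = text, "en"
    for lang in pivot_langs:
        current = translate(current, source=source, target=lang)
        source = lang
    # Final hop back to English: meaning is largely preserved, but the
    # surface wording shifts, which is what degrades most detectors.
    return translate(current, source=source, target="en")
```

The attack works because each translation hop rewrites word choice and sentence structure, washing out the stylistic signals weaker detectors rely on.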

What it says about Pangram

As you can see, Pangram exhibits the best robustness in every category. While back-translation can sometimes halve, or even quarter, the detection rate of competitors, Pangram remains robust.

For More Information:

Check out our initial blog post here and the published study here!

Bonus: Pangram's Own Research

If you're interested in learning more about the research Pangram conducts internally to make our model better, you can read more about those studies here:

DAMAGE: Detecting Adversarially Modified AI Generated Text

Technical Report on the Pangram AI-Generated Text Classifier

Commitment to Research

Here at Pangram, we are committed to supporting research in this field, and as such we provide free, unlimited access to academics interested in studying AI detection with Pangram. Interested in learning more? Get in touch at info@pangram.com.
