Morning Overview

DraftMarks tool from Georgia Tech and Stanford makes AI use visible in student writing

A college instructor opens a student essay and notices something unusual: faint digital eraser crumbs hover over a deleted paragraph, a strip of masking tape marks a sentence pasted from somewhere else, and a smear of glue residue highlights a passage inserted by an AI chatbot. None of these marks existed when the student submitted the paper. They were generated automatically by DraftMarks, a new tool built by researchers at Georgia Tech and Stanford that makes the writing process itself visible to teachers. This scenario is hypothetical, but it illustrates the core idea behind the project. No real student or faculty member outside the research team has publicly described using the tool.

Rather than scanning a finished essay and guessing whether AI wrote it, DraftMarks records what happens as a student drafts, tracking deletions, revisions, pastes, and AI-assisted insertions in real time. It then renders those actions as skeuomorphic marks: digital imitations of physical editing artifacts. The result is a document that carries visible evidence of how it was made, giving instructors a way to evaluate AI collaboration without reducing the conversation to a binary cheating-or-not verdict.

How the tool works

DraftMarks is described in a preprint paper titled “DraftMarks: Enhancing Transparency in Human-AI Co-Writing Through Interactive Skeuomorphic Process Traces,” published on arXiv in September 2025. The research team includes Momin N. Siddiqui, Nikki Nasseri, and Adam Coscia from Georgia Tech, along with Roy Pea and Hari Subramonyam from Stanford’s Graduate School of Education and computer science programs. The Stanford SCALE Initiative repository lists the paper and categorizes it within its education research framework.

Six types of marks appear on a finished document. Eraser crumbs show where text was deleted. Smudges indicate revisions. Masking tape signals content pasted from an outside source. Glue residue highlights AI-generated inserts. Ghost text reveals hidden changes, and font shifts flag style alterations. Together, these marks create a layered portrait of the writing process, one that distinguishes between a student who used ChatGPT to brainstorm an outline and then wrote independently, and a student who pasted entire AI-generated paragraphs with minimal editing.
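To make the taxonomy concrete, here is a minimal sketch of how editing events might be mapped to the six mark types. The event names, data structures, and API below are illustrative assumptions for this article, not the researchers' actual implementation.

```python
# Hypothetical sketch of DraftMarks-style process tracing.
# Event names, span format, and the annotate() API are assumptions
# made for illustration, not the published tool's internals.
from dataclasses import dataclass

# The six mark types described in the paper, keyed by the editing
# action that would produce each one (mapping assumed).
MARKS = {
    "delete": "eraser crumbs",
    "revise": "smudge",
    "external_paste": "masking tape",
    "ai_insert": "glue residue",
    "hidden_change": "ghost text",
    "style_change": "font shift",
}

@dataclass
class EditEvent:
    action: str       # one of the MARKS keys
    span: tuple       # (start, end) character offsets in the draft
    timestamp: float  # seconds since the drafting session began

def annotate(events):
    """Turn a recorded event stream into per-span mark annotations."""
    return [
        {"span": e.span, "mark": MARKS[e.action], "at": e.timestamp}
        for e in events
        if e.action in MARKS
    ]

# A toy drafting session: paste an AI paragraph, edit it, delete a sentence.
session = [
    EditEvent("ai_insert", (120, 480), 95.0),
    EditEvent("revise", (130, 150), 160.5),
    EditEvent("delete", (480, 520), 210.0),
]
for mark in annotate(session):
    print(mark["span"], "->", mark["mark"])
```

The key design idea this sketch captures is that annotations are derived from a timestamped event log rather than from the finished text, which is what lets the rendered marks reconstruct how the document came to be.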

“The goal is not whether students are using AI, but how,” lead author Momin Siddiqui said in a Georgia Tech research announcement. That framing is central to the project: DraftMarks is designed as a transparency tool, not a policing mechanism.

Early results and open questions

The research team conducted a preliminary instructor study in which participants reviewed student essays annotated with DraftMarks. According to the Georgia Tech announcement, participants reported a clearer understanding of AI’s role in the writing process after using the tool. The announcement does not disclose the study’s sample size, detailed methodology, or statistical findings, so these results should be treated as preliminary rather than conclusive. Hari Subramonyam presented the work at a Johns Hopkins University computer science seminar, describing DraftMarks as providing “real-time instrumentation of student writing behavior.”

That said, the evidence base remains narrow as of spring 2026. The initial study focused on instructor perceptions; no published results yet show how students behave when they know their process traces will be visible. Whether that visibility encourages more thoughtful AI use or simply pushes students to find workarounds is an unanswered question.

Evasion is a real concern. DetectGPT, a separate Stanford-developed tool that analyzes finished text to identify AI-generated passages, operates on a different principle, and coverage of its limitations has noted that post-editing techniques can undermine text-based detection, though no single published source from the DetectGPT team makes the claim in those exact words. DraftMarks takes a fundamentally different approach by tracking the process rather than the product, but no adversarial testing has been published. Until researchers document whether students can manipulate their process traces to obscure AI involvement, the tool's resilience against deliberate gaming remains unproven.

No university has publicly committed to integrating DraftMarks into grading workflows or academic integrity policies. Georgia Tech maintains institutional guidance on generative AI in academics, but that guidance does not mention DraftMarks by name. The gap between a research prototype and an adopted classroom tool is significant, and no deployment timeline has been announced.

Why process tracking differs from AI detection

Most AI detection tools, including Turnitin’s AI writing indicator and standalone systems like DetectGPT, work by analyzing a completed document and estimating the probability that portions were machine-generated. These tools have faced persistent criticism over false positives, particularly for non-native English speakers, and over their vulnerability to paraphrasing.

DraftMarks sidesteps that entire problem by never asking “Did AI write this?” Instead, it asks “What did the student do with AI during writing?” The distinction matters for instructors who want to allow AI assistance but need a way to evaluate the depth of a student’s engagement with the material. A paper covered in glue residue and masking tape tells a different story than one marked primarily by eraser crumbs and smudges, even if both received some AI input.
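That kind of reading could in principle be automated by summarizing a document's marks. The sketch below is a hypothetical illustration of the idea: the mark groupings and the comparison heuristic are assumptions made for this article, not part of the published tool.

```python
# Hypothetical sketch: summarizing a document's marks into a rough
# profile of AI involvement. The groupings and the heuristic below
# are illustrative assumptions, not the researchers' method.
from collections import Counter

AI_MARKS = {"glue residue", "masking tape"}    # externally sourced text
REVISION_MARKS = {"eraser crumbs", "smudge"}   # the student's own editing

def involvement_profile(marks):
    """Count marks by group and flag heavy reliance on inserted text."""
    counts = Counter(marks)
    ai = sum(counts[m] for m in AI_MARKS)
    revision = sum(counts[m] for m in REVISION_MARKS)
    return {
        "ai_marks": ai,
        "revision_marks": revision,
        # Crude illustrative heuristic, not a real grading rule.
        "heavy_ai": ai > revision,
    }

print(involvement_profile(
    ["glue residue", "glue residue", "masking tape", "smudge"]
))
```

A real deployment would need far more nuance than a raw count, but the sketch shows why process marks support graded judgments where a binary detector cannot.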

That shift in framing, from detection to transparency, could influence campus AI policies regardless of whether DraftMarks itself gains wide adoption. Several universities have already moved away from blanket AI bans toward assignment-level policies that specify acceptable uses. A tool that makes the nature of AI collaboration legible gives those policies something concrete to point to.

What student-facing trials will need to show

DraftMarks sits at a clearly defined stage: a working prototype backed by a preprint paper, a preliminary instructor study, and cross-institutional support from two major research universities. The concept is compelling, and the technical approach is distinct from anything else in the academic integrity space. But the tool has not been tested at scale with students, no institution has moved to adopt it, and no student or non-researcher faculty voice has been part of the public record so far.

For instructors and administrators planning for the 2026-2027 academic year, the practical step is to watch for two things: published results from student-facing trials, and any formal adoption announcements from Georgia Tech, Stanford, or other universities. The strongest signal will come when students, not just instructors, are part of the equation, and when the data shows whether making the writing process visible actually changes how students engage with AI.


*This article was researched with the help of AI, with human editors creating the final content.*