Morning Overview

AI reshapes college applications as schools tighten rules on essays

When Virginia Tech’s admissions office opened its application portal for the 2025-2026 cycle, it did so with a tool that no applicant could see: an AI-trained essay reader capable of scoring thousands of personal statements against a rubric built from years of past submissions. At the same time, the University of California was telling applicants that submitting an AI-written essay could get them disqualified. Across American higher education, the same technology is being welcomed behind the admissions desk and policed on the applicant’s side, creating a double standard that students and families are only beginning to understand.

Universities are adopting AI faster than they are disclosing it

Virginia Tech’s system, first detailed in AP News reporting, pairs an algorithmic score with a human reviewer’s score for each essay. When the two diverge beyond a set threshold, the application is routed to additional staff for a closer read. The university has framed this as a safeguard, not a replacement, but it has not published the system’s error rate, the size or composition of its training data, or how often the machine’s judgment is overruled.
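In rough terms, the routing logic described above works like a tolerance check between two graders. The sketch below is a hypothetical illustration only: the function names, the 1-10 scale, and the threshold value are assumptions, since Virginia Tech has not published its rubric, scoring scale, or routing rules.

```python
# Hypothetical sketch of a dual-score routing workflow.
# Scale and threshold are illustrative assumptions, not published values.

DIVERGENCE_THRESHOLD = 2  # assumed: rubric points of allowed disagreement

def route_application(ai_score: int, human_score: int) -> str:
    """Route an essay based on how far the AI and human scores diverge."""
    if abs(ai_score - human_score) > DIVERGENCE_THRESHOLD:
        return "additional_review"  # flagged for a closer read by more staff
    return "standard_pipeline"      # scores agree within tolerance

# Example: a 3-point gap on an assumed 1-10 rubric triggers extra review.
print(route_application(ai_score=4, human_score=7))  # additional_review
print(route_application(ai_score=6, human_score=7))  # standard_pipeline
```

The unpublished details are exactly the ones that matter here: how the threshold is set, and how often the human or the machine prevails when they disagree.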

Virginia Tech is not an outlier. A January 2026 Bloomberg investigation found that admissions offices at multiple institutions have begun using AI to evaluate essays, transcripts, and other application components. The driver is straightforward: application volumes have surged while admissions staffing and budgets have not kept pace. Automated tools offer a way to handle early-stage screening that would otherwise require hiring readers many departments cannot afford.

What remains unclear is how widespread the practice has become. No central body tracks which schools use AI in review, and most institutions have not voluntarily disclosed it. The result is an information gap where applicants may not know whether a human, an algorithm, or some combination of both is reading their work.

The rules for applicants are getting stricter

While some schools quietly integrate AI into their own workflows, the message to students has been blunt: do not let AI write your essay.

The University of California states on its admissions page that it does not use AI in its own application review. It also warns that AI-authored responses to its personal insight questions can be treated as academic dishonesty and may lead to disqualification. UC’s rationale is that a machine-generated response cannot reveal who the student actually is, which defeats the essay’s purpose. The system runs plagiarism checks on submissions, though UC has not published data on how many applicants, if any, have been disqualified under this policy.

The University of North Carolina has updated its admissions website to address AI-generated content, signaling that generative tools cross into misconduct when they substitute for a student’s own voice. However, no UNC admissions official has publicly described the enforcement mechanism, the evidence threshold for an investigation, or the number of flagged applications to date.

UC does acknowledge that applicants may use AI in limited ways, such as brainstorming or grammar checks, a concession that highlights how difficult it is to draw a bright line. The difference between “I asked ChatGPT for feedback on my draft” and “ChatGPT wrote my draft” is real but hard to detect from the outside.

Detection tools exist but have clear limits

A peer-reviewed study published in Scientific Reports, part of the Nature Portfolio, tested whether automated classifiers can reliably tell human-written admissions materials apart from AI-generated ones. Researchers used real admissions documents, including recommendation letters and statements of intent, alongside AI-generated counterparts produced with the GPT-3.5 Turbo API. They found measurable stylistic differences between the two sets but also significant limits in classification accuracy, particularly when AI outputs were polished or edited.
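To give a sense of what "stylistic differences" means in practice, the toy sketch below extracts two surface features that detection classifiers commonly rely on. It is an illustrative assumption, not the study's actual method or feature set.

```python
# Toy illustration of stylistic feature extraction for AI-text detection.
# The features chosen here are illustrative; real classifiers combine many
# more signals with a trained model.

import re

def stylistic_features(text: str) -> dict:
    """Extract simple surface features of a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        # Human essays tend to vary sentence length more than AI output.
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Vocabulary diversity: unique words divided by total words.
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

sample = "I failed. I tried again. The lab flooded twice, and I still loved it."
print(stylistic_features(sample))
```

The study's finding that accuracy drops once AI output is "polished or edited" follows directly from this design: light human editing shifts exactly these surface statistics, eroding the signal the classifier depends on.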

The study’s biggest constraint is timing. GPT-3.5 Turbo is now several generations behind the models students actually use. More recent systems, including GPT-4o and its competitors, produce text that is harder to distinguish from human writing. Independent benchmarks from detection vendors like Turnitin and GPTZero show improving accuracy, but none have published large-scale results specific to college admissions essays. No multi-institution study has yet tested how well any detection tool performs against the latest AI outputs in a real admissions pipeline.

That gap matters. If schools are making consequential decisions, including disqualification, based on detection flags, applicants deserve to know how reliable those flags are. So far, no university has published that data.

The equity question no one has answered

Underlying every policy choice is a concern that AI could widen existing disparities in college access. Wealthier applicants are more likely to have private college counselors who understand exactly how much AI assistance stays within a school’s stated bounds. They may also have access to premium AI tools that produce less detectable output. Students without those resources face a different calculus: they may avoid helpful tools entirely out of fear, or use free versions in ways that are more easily flagged.

This is not a hypothetical concern. Research from the National Association for College Admission Counseling has documented for years that access to private counseling correlates with socioeconomic status and, in turn, with admissions outcomes. AI adds a new variable to that equation, but no longitudinal data yet shows whether its integration into admissions has widened or narrowed demographic gaps in acceptance rates. Until that research exists, claims that AI will “democratize” or “further stratify” admissions remain unsupported by evidence.

What applicants are actually navigating

For the roughly 1.1 million students who submitted Common App applications for fall 2026 admission, the practical landscape looks like this: some schools may be using AI to score their essays, most schools have not said whether they do, and nearly all schools warn against AI-written submissions without specifying how they enforce that rule.

The Common Application itself has not issued a blanket policy on AI use, leaving enforcement to individual member institutions. That means a student applying to ten schools could face ten different standards, most of them vaguely worded.

The safest guidance, based on what institutions have actually published, is narrow. Use AI for brainstorming or proofreading if a school permits it. Write the essay yourself. Do not assume detection tools will miss AI-generated text, and do not assume they will catch it fairly, either. The system is still being built, and the people inside it are still deciding what the rules should be.

What would change the picture is transparency: universities publishing how their AI tools are trained, how often algorithmic and human reviewers disagree, how detection flags are adjudicated, and whether any demographic patterns emerge in who gets flagged. None of that data is publicly available as of spring 2026. Until it is, students are making high-stakes decisions with incomplete information, and the institutions asking for their trust have not yet earned it on this front.

*This article was researched with the help of AI, with human editors creating the final content.