
Study finds even frequent AI users often miss AI-written messages

Imagine getting a thoughtful text from a friend, a warm reply from your doctor’s office, or a charming opening line on a dating app. Now consider the possibility that none of those words were written by a human. According to a peer-reviewed study published in early 2026, you probably would not notice the difference.

A research team led by Chiara Longoni tested exactly that scenario across two experiments, asking participants to evaluate messages in everyday formats: emails, text conversations, dating profiles, and social media posts. The results, published in Computers in Human Behavior, showed that people consistently rated AI-assisted messages as authentic and formed positive impressions of the supposed senders. Participants grew suspicious only when they were explicitly told AI had been involved.

The “AI penalty” kicks in only after disclosure

The study introduces a concept the authors call the “AI penalty.” AI-generated messages performed well on their own terms. Readers found them natural, even likable. But the moment participants learned AI had played a role, their impressions of the sender dropped. “Recipients did not exhibit skepticism about AI use unless that use was explicitly disclosed to them,” the researchers wrote, a pattern that held across every communication format they tested, from a casual text to a first message on a dating app.

The gap between those two reactions is where the real tension sits: people are forming judgments about others based on words that may not belong to them, and nothing in the message itself tips them off. The finding suggests that generative AI has already crossed a threshold of fluency where its output blends seamlessly into the kind of writing people encounter every day.

Healthcare portal messages tell a similar story

A separate study reinforces the pattern in a higher-stakes setting. A survey of roughly 1,400 respondents, published in JAMA Network Open, examined how patients reacted to AI-drafted replies delivered through patient portal messages. Using a disclosure versus non-disclosure design and measuring satisfaction on a 5-point Likert scale, the researchers found that patients showed a mild preference for the AI-drafted replies when they did not know AI was involved. Once told, satisfaction dipped slightly.

Despite that dip, the study’s authors argued that disclosure should be maintained to protect patient autonomy. Their reasoning was direct: even when the AI version is preferred, patients deserve to know what generated the medical guidance they are reading. That recommendation aligns with a broader policy conversation about whether platforms and institutions should require AI-use labels on outgoing messages. As of spring 2026, that conversation has produced no binding standards, but it has drawn attention from healthcare regulators and from major communication platforms exploring voluntary disclosure frameworks.

Frequent AI users are not necessarily better detectors

One finding complicates the picture in a useful way. A separate preprint, posted on arXiv and not yet peer-reviewed, found that people who frequently use ChatGPT for writing tasks can accurately identify AI-generated text under controlled laboratory conditions. That sounds like a natural safeguard, but the real-world implications are narrower than they appear.

In the lab, participants knew they were being tested on detection. In a normal conversation, that vigilance does not activate. The distinction between a detection task and an actual exchange with a friend or colleague turns out to matter enormously. Heavy AI users may know what AI writing looks like when they are hunting for it, but they are not hunting for it when they open a text message.

Significant gaps remain in the research

The available studies leave several important questions unanswered. The primary experiments in Computers in Human Behavior focused on interpersonal scenarios like texting and dating, and did not specifically test AI detection in workplace messaging. Adjacent research on AI-assisted professional writing does exist, but none of the studies covered here directly examined whether the same invisibility applies when a colleague sends a polished project update or a manager drafts a performance review with AI help.

Demographic breakdowns are thin as well. The published research does not offer detailed data on how age, cultural background, or digital literacy influence detection rates. Given that younger users and older adults likely interact with AI-saturated communication in very different ways, this is a notable blind spot.

All of the studies are also cross-sectional, capturing a single moment rather than tracking the same people over time. No published research yet follows participants across months or years to measure whether repeated exposure to AI-written messages gradually sharpens suspicion or deepens complacency. As generative AI tools become more deeply embedded in everyday apps, that question will only grow more urgent.

There is also no experimental data on partial disclosure. It is plausible that labeling a message as “co-written with AI assistance” would land differently than saying nothing or revealing full AI authorship. But that specific framing has not been tested in any of the published studies, leaving a gap between what transparency advocates recommend and what researchers have actually measured.

Disclosure responsibility falls on the sender, not the reader

The strongest evidence here comes from peer-reviewed research with specific experimental designs and measurable outcomes. Both the Computers in Human Behavior study from Longoni’s team and the JAMA Network Open survey cleared formal review. The arXiv preprint on frequent ChatGPT users as detectors has not, so its findings carry slightly less weight, though it offers a valuable boundary condition.

For the millions of people sending and receiving messages every day in spring 2026, the practical reality is blunt. If someone is using AI to draft texts, emails, or social media posts, the people reading those messages are very unlikely to notice on their own. That shifts responsibility squarely onto the sender.

The healthcare researchers’ recommendation that disclosure be maintained for patient autonomy applies well beyond medicine. When impressions, relationships, or professional decisions hinge on written communication, the person on the other end deserves to know who, or what, actually wrote the words they are reading.


*This article was researched with the help of AI, with human editors creating the final content.