
Artificial intelligence has become the default coauthor of modern science, quietly drafting manuscripts, generating images, and even fabricating citations at a scale that traditional safeguards were never built to handle. What began as a set of helpful tools is now flooding journals, conferences, and preprint servers with work that looks like research but often lacks the labor, transparency, and accountability that real science demands. The result is a growing sense among editors and reviewers that the system is being swamped by AI-generated slop faster than it can be filtered.

At stake is not just the credibility of individual papers but the reliability of the entire research record that policymakers, clinicians, and other scientists depend on. As automated systems accelerate both legitimate discovery and low-quality output, the line between rigorous work and synthetic noise is blurring in ways that are already forcing journals, funders, and conferences to rethink how they judge what counts as knowledge.

From niche tool to default coauthor

In only a few years, generative models have shifted from experimental curiosities to routine infrastructure in labs and universities. Analysts tracking publishing trends describe AI as central to how manuscripts are drafted, edited, and screened, bundling it with open science and peer review reform as defining forces in how research is produced. What used to require weeks of writing and data wrangling can now be compressed into hours, which means the bottleneck is no longer how fast scientists can type but how quickly the system can distinguish robust work from polished nonsense.

That acceleration is not hypothetical. Economists studying generative tools report that when new systems lower the cost of writing and analysis, they do more than speed up existing teams; they expand who can participate in research at all. One recent analysis argues that when barriers fall, output rises and talent from new regions and institutions can enter the conversation. That is the optimistic version of the story, and it is real. But the same dynamics that empower under-resourced researchers also make it trivial for paper mills, fake journals, and opportunistic authors to flood the zone with plausible-looking but unreliable work.

What “AI slop” looks like in a lab coat

The phrase “AI slop” has migrated from internet culture into scientific circles for a reason. It describes digital content made with generative systems that is low in effort and quality, often produced at scale with little human oversight. As one definition puts it, AI slop is content that may be grammatically smooth yet shallow, misleading, or outright fabricated, a category that now includes research-style prose, synthetic figures, and even fake datasets. In science, that slop often takes the form of papers that read like generic literature reviews, recycle the same phrasings across multiple manuscripts, or cite references that do not exist.

The problem is not only text. Technical blogs on research integrity now warn that AI can fabricate convincing microscopy slides, experiment charts, and even MRI scans that are difficult to spot, even for experienced reviewers. When those images are paired with fluent but generic text, the result is a new genre of paper that looks like legitimate work yet is built on synthetic artifacts. I have seen reviewers describe reading such manuscripts as walking through a stage set: everything appears to be in the right place until you push on a wall and realize it is cardboard.

Peer review is buckling under the volume

Editors are already sounding the alarm that traditional peer review cannot keep up with the surge in AI-assisted submissions. One analysis of the retraction crisis links a sharp rise in withdrawn papers to flaws in peer review, the growth of paper mills, and the spread of automatically generated manuscripts that slip through initial checks. Retractions are a lagging indicator, surfacing only after flawed work has already been indexed, cited, and sometimes used to justify policy or clinical decisions.

Even elite conferences are struggling. A recent report on a major AI meeting found that even with thousands of volunteer reviewers, the sheer volume of submissions made it impossible to scrutinize every reference list closely, which allowed more than one hundred hallucinated citations to appear in accepted papers. If a flagship venue in machine learning cannot reliably detect fabricated references in its own field, it is hard to imagine smaller journals in medicine or materials science faring better as they confront similar tactics.

Editors, ethicists, and insiders push back

Some of the strongest warnings are coming from inside the system. In an early 2026 editorial, a small family of journals described how they use select AI tools while insisting that their core judgments must remain grounded in human scientific experience and expertise. The editors explained that over the past year they had collaborated with automated services like DataSeer to check whether authors were sharing data and code as promised, but they framed those systems as aids to, not replacements for, human judgment. Their message was blunt: resisting low-effort AI content is now part of editorial responsibility.

Holden Thorp, who writes as editor in chief of a major journal, has described the current wave of AI vendors with a mix of skepticism and pragmatism. In one essay he opens with the line “Here they come again,” before cataloging the “tools for everything under the sun” that salespeople now pitch to editors and researchers. I read his stance as emblematic of a broader mood among scientific leaders, who see clear benefits in automating routine checks but worry that outsourcing too much judgment to opaque systems will only deepen existing problems of bias, error, and inequity.

Fake journals, paper mills, and the “Land of Make Believe”

AI slop is not confined to individual manuscripts; it is reshaping the publishing landscape itself. Watchdogs tracking predatory outlets have documented the rise of entire fake journals that exist primarily to ingest AI-generated submissions and collect fees. One detailed account describes a cluster of such outlets as a “Land of Make Believe,” where editorial boards are fabricated, peer review is illusory, and in some cases no trace of the supposed author could be found. These venues exploit the fact that automated writing tools can churn out endless variations of plausible-sounding studies that are difficult to distinguish from legitimate work at a glance.

Retraction databases and investigative reports now link a growing share of withdrawn papers to such operations, often tied to organized paper mills that sell authorship slots and guarantee acceptance. The same analysis that flagged the retraction surge noted that some of these manuscripts were explicitly labeled as “Generated with Meta AI,” a reminder that the tools themselves are neutral but their deployment is not. I have spoken with reviewers who now treat unfamiliar journal names with immediate suspicion, a defensive posture that unfortunately also risks stigmatizing newer, legitimate titles from underrepresented regions.
