Generative AI was sold as a creativity engine, but the systems now embedded in search, social feeds, and office tools are starting to behave more like cultural refrigerators. Instead of pushing music, images, and language into new territory, they are quietly looping back the recent past, smoothing away edges and surprises. Researchers are warning that this feedback loop is already visible in the data, and that the longer we lean on these tools, the harder it may become for culture to move forward at all.

The risk is not a single catastrophic failure, but a slow drift into what one team of computer scientists has described as “visual elevator music,” a background hum of content that feels familiar, frictionless, and strangely timeless. That aesthetic is creeping into everything from advertising to book covers, while the same dynamics are also shaping how people work, how they switch jobs, and how institutions reward conformity over experimentation.

How AI turns culture into “visual elevator music”

The core problem starts with how large models are trained. They ingest vast archives of existing images and text, then learn to predict the most statistically likely next word or pixel. When those outputs are fed back into the next generation of training data, the system begins to chase its own tail, reinforcing what it has already seen instead of discovering anything new. In one experiment, researchers watched a model that had been primed with a specific scene gradually lose track of its own starting point, drifting into generic, placeless imagery that they described as visual elevator music.
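
To make that dynamic concrete, the toy simulation below treats a "model" as nothing more than a frequency table over styles, retrains each generation only on a finite sample of the previous generation's output, and counts how many styles survive. It is a rough sketch of the feedback loop described above, not a reconstruction of the researchers' experiment, and every style name and sample size in it is invented.

```python
# A toy sketch of the feedback loop described above, not the researchers'
# actual experiment. Here a "model" is just a frequency table over styles,
# and each generation is trained only on a finite sample of the previous
# generation's output. Style names and sample sizes are invented.
import random
from collections import Counter

random.seed(7)

# Generation 0: a few popular styles plus a long tail of rarer ones.
styles = [f"common_{i}" for i in range(10)] + [f"rare_{i}" for i in range(40)]
weights = [10.0] * 10 + [1.0] * 40

for generation in range(10):
    surviving = sum(1 for w in weights if w > 0)
    print(f"generation {generation}: {surviving} of {len(styles)} styles survive")
    # The next "model" only ever sees 300 outputs from the current one.
    sample = random.choices(styles, weights=weights, k=300)
    counts = Counter(sample)
    # A style that was never sampled gets weight 0 and can never return.
    weights = [float(counts.get(s, 0)) for s in styles]
```

Because a style that misses a single round of sampling can never come back, the spread of the toy distribution only ever shrinks, a crude stand-in for the drift toward generic, placeless output described above.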

That phrase captures how these systems flatten context. The same model that can render a photorealistic “prime minister” or a “city street” often strips away any concrete sense of era, politics, or geography once it is left to iterate on its own. In a related study, a prompt that began with a prime minister under a specific flag gradually morphed into a generic leader in a vague office, then into a faceless figure in a corridor, as the model forgot the details it had been given and converged on a bland average. The researchers, including the computer scientist Jan, who led the work, argue that this tendency to erase time and place is not a bug but a default behavior of systems that are rewarded for staying close to the center of their training distribution.

Feedback loops, forgotten prompts, and a frozen present

Once AI-generated content starts to dominate the web, the feedback loop becomes hard to escape. New models are trained on a mix of human and synthetic data, but the synthetic share grows as companies flood social platforms, stock photo libraries, and even news sites with machine-made material. Over a few training cycles, the model’s world narrows to its own past outputs, which is why the same researchers saw their system quickly “forget” its starting prompt and drift toward that featureless aesthetic. The more the model samples from itself, the more it collapses into a narrow band of styles and ideas that feel safe and familiar.
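
The role of the synthetic share can be sketched in the same toy terms. In the illustration below, each training cycle blends fresh "human" data with the previous model's own outputs, and the synthetic fraction grows cycle by cycle; the growth schedule and the entropy measure used as a diversity proxy are assumptions made for the demo, not figures from the research.

```python
# A companion to the sketch above, again purely illustrative: each training
# cycle blends fresh "human" data with the previous model's own outputs,
# and the synthetic share grows every cycle. The growth schedule and the
# entropy measure are assumptions made for the demo, not measured values.
import math
import random
from collections import Counter

random.seed(7)

STYLES = [f"style_{i}" for i in range(50)]
HUMAN_WEIGHTS = [10.0] * 10 + [1.0] * 40  # a broad human distribution

def entropy_bits(weights):
    """Shannon entropy of the fitted style table, a rough diversity proxy."""
    total = sum(weights)
    return -sum((w / total) * math.log2(w / total) for w in weights if w > 0)

model_weights = list(HUMAN_WEIGHTS)  # the first model sees only human data
for cycle in range(10):
    synthetic_share = min(0.9, 0.1 * cycle)   # assumed to rise each cycle
    k_synthetic = int(300 * synthetic_share)  # machine-made training items
    k_human = 300 - k_synthetic               # fresh human training items
    sample = []
    if k_synthetic:
        sample += random.choices(STYLES, weights=model_weights, k=k_synthetic)
    if k_human:
        sample += random.choices(STYLES, weights=HUMAN_WEIGHTS, k=k_human)
    counts = Counter(sample)
    model_weights = [float(counts.get(s, 0)) for s in STYLES]
    print(f"cycle {cycle}: synthetic share {synthetic_share:.0%}, "
          f"diversity {entropy_bits(model_weights):.2f} bits")
```

The knob worth watching is the blend itself: the more the training mix tilts toward the model's own output, the less fresh variation the next generation ever sees.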

This is where cultural time can start to feel stuck. If the images that illustrate news stories, the music that fills playlists, and the copy that populates websites are all optimized for what has already performed well, then the system is effectively locking in the recent past. A separate analysis framed this as a choice between cultural stagnation and acceleration, and warned that current generative tools are tilting toward the former. The authors argued that if generative AI is to avoid becoming a brake on novelty, its designers will have to change how they handle training data, evaluation metrics, and incentives. Right now, they cautioned, the technology is nudging culture into the narrow, repetitive groove that one report simply called cultural stagnation.

Workplaces caught between job hugging and quiet cracking

The same dynamics that keep cultural products looping in place are also shaping how people behave at work. As AI tools automate more tasks and executives talk openly about restructuring, many employees are responding not by leaping into new roles, but by clinging to the ones they have. A recent workplace survey described a pattern of “job hugging,” where workers stay put even when they feel underused or misaligned, because they fear that any move could expose them to automation or layoffs. The same research introduced the term “quiet cracking,” coined by Talent LMS, to describe a persistent feeling of work-related anxiety that never quite rises to the level of open conflict but steadily erodes engagement.

In that study, respondents described a cycle in which AI anxiety made them less likely to experiment, which in turn bred more disengagement. People who worried that a model could replace them were less inclined to volunteer for creative projects or propose unconventional ideas, because they believed the safest strategy was to look indispensable in their current niche. The authors, writing in September, argued that this combination of job hugging and quiet cracking is already reshaping office culture, and that organizations that rely heavily on AI to monitor performance risk deepening the problem. When dashboards reward consistency and volume over originality, workers quickly learn that the path of least resistance is to keep doing what the system already recognizes, a pattern that one analysis at Built In linked directly to rising disengagement.

Institutions, incentives, and the pull toward the average

Behind these individual choices sit powerful institutional incentives. Cultural gatekeepers, from streaming platforms to book publishers, are increasingly leaning on AI to predict what will sell. Recommendation engines trained on past hits are more likely to surface work that resembles those hits, which nudges artists to conform if they want to be discovered. One analysis of this trend argued that institutions, subcultures, and artists still have room to resist, but only if they deliberately value what is unique or creative over what is merely optimized. The author stressed that the current trajectory is not inevitable, but that without conscious intervention, the default settings of these systems will favor safe, familiar content.

That warning is echoed in a separate piece that framed the issue as a contest between institutional convenience and cultural vitality. The writer pointed out that institutions are already using AI to filter grant applications, scout talent, and even generate early drafts of creative work, because it is cheaper and faster than relying solely on human judgment. Yet the same tools tend to privilege patterns that have already been rewarded, which can marginalize experimental scenes and minority voices. In that context, the role of institutions becomes critical, because they can either double down on algorithmic averages or use AI to free up human curators to take more risks, a choice that one report at Life and News framed as a test of whether human creativity remains central.

Why stagnation is not destiny

Despite the bleak phrase “cultural stagnation,” the researchers closest to these systems are careful to say that the outcome is not fixed. The computer scientist Jan, who helped document the visual elevator music effect, has argued that human creativity is resilient and that the same tools that now flatten culture could be reconfigured to amplify difference. That would mean training models on more diverse datasets, penalizing them for collapsing into clichés, and giving users more control over how much risk or novelty they want in an output. It would also mean treating AI as a collaborator that can be pushed and prodded, rather than as an oracle whose first answer is always accepted.
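
One concrete form that kind of user control often takes is a sampling temperature, a standard knob in generative systems rather than anything specific to the work discussed here. The sketch below applies it to made-up scores for four kinds of output, showing how raising the temperature gives less "safe" options a realistic chance of being chosen.

```python
# A generic illustration of a user-facing "novelty" control, not tied to any
# specific product: a sampling temperature applied to made-up model scores.
import math
import random
from collections import Counter

def sample_with_temperature(options, scores, temperature, rng=random):
    """Pick one option; higher temperature flattens the score distribution,
    giving lower-scoring (less 'safe') options a better chance."""
    scaled = [s / temperature for s in scores]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return rng.choices(options, weights=[e / total for e in exps], k=1)[0]

# Hypothetical options and scores: a model trained on past hits will score
# the familiar choice highest.
options = ["familiar cliché", "mild variation", "unusual idea", "wild experiment"]
scores = [4.0, 2.5, 1.0, 0.2]

random.seed(7)
for t in (0.3, 1.0, 2.0):
    picks = Counter(sample_with_temperature(options, scores, t) for _ in range(1000))
    print(f"temperature {t}: {dict(picks)}")
```

At a low temperature the familiar cliché wins almost every time; at a higher one the unusual options start to surface, which is the sort of dial that would let users ask for more risk or novelty rather than accepting a model's first, safest answer.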
