
In a nondescript building on the edge of a research campus, a small team of scientists is training machines to do something unnervingly specific: reach under our skin and spark a physical shiver. Their work is not about generic engagement or productivity, but about engineering the precise mix of surprise, dread, and delight that makes hair stand on end. The result is a new kind of AI lab, one that treats goosebumps as both a design target and a measurable signal of success.
What happens inside this lab sits at the intersection of horror storytelling, affective computing, and robotics, and it is already reshaping how technologists think about emotion. Instead of asking whether machines can think, these researchers are asking whether machines can reliably make us feel, and then using that answer to build tools for film, games, and even physical robots that literally bristle with feeling.
The lab that chases shivers
The group at the center of this work operates more like a film production unit than a traditional engineering department, with storyboards on the walls and mood charts pinned beside code snippets. Their mandate is narrow and audacious: build systems that can predict, and then provoke, the moment a viewer’s skin tingles. In reporting on this project, one filmmaker embedded with the team, watching researchers treat fear and suspense as variables to be tuned on what they described as a quest for goosebumps inside a deliberately secretive AI environment.
Inside that space, the usual AI benchmarks, such as accuracy or latency, are secondary to a more visceral metric: did the subject’s body react? The lab’s experiments pair biometric sensors with generated scenes, tracking heart rate, micro-sweats, and piloerection while algorithms adjust lighting, pacing, and sound. The project has been described publicly as “Deep Inside, Secretive AI Lab With Just One Goal, Make Goosebumps,” a phrase that has circulated through business news coverage and hints at how tightly the team has framed its mission. By treating the human nervous system as a feedback loop for machine learning, they are turning the body itself into a training set.
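That sense-and-adjust loop is easier to picture in code. The sketch below is a minimal, hypothetical rendering in Python of the feedback cycle the lab describes; the sensor readings, the `read_biometrics` helper, the composite score, and the tunable scene parameters are all illustrative assumptions, not the lab’s actual system.

```python
import random  # stands in for real sensor and model I/O in this sketch

# Hypothetical scene parameters the generator can adjust on each pass.
scene = {"brightness": 0.5, "cut_rate": 0.3, "bass_level": 0.4}

def read_biometrics():
    """Placeholder for real sensor reads: heart rate (bpm),
    skin conductance (microsiemens), and piloerection (yes/no)."""
    return {
        "heart_rate": random.gauss(75, 8),
        "skin_conductance": random.gauss(4.0, 1.0),
        "piloerection": random.random() < 0.1,
    }

def arousal_score(sample):
    """Crude composite score; the weights are assumptions, with the
    involuntary piloerection signal weighted most heavily."""
    return (
        0.3 * (sample["heart_rate"] - 70) / 30
        + 0.3 * (sample["skin_conductance"] - 3.0) / 5
        + 0.4 * float(sample["piloerection"])
    )

TARGET = 0.5  # assumed arousal level that tends to precede a shiver

for step in range(10):
    sample = read_biometrics()
    error = TARGET - arousal_score(sample)
    # Nudge every scene parameter toward the target, clamped to [0, 1].
    for key in scene:
        scene[key] = min(1.0, max(0.0, scene[key] + 0.1 * error))
    print(f"step {step}: arousal error {error:+.2f}, scene {scene}")
```

In a real system the adjustment step would presumably be a learned policy rather than a uniform nudge, but the loop structure, measure the body, update the scene, measure again, is the part the lab treats as its core method.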
Teaching machines the language of fear
To make goosebumps predictable, the lab first has to make fear legible to code. That work builds on earlier experiments in computational horror, including projects where researchers trained models on thousands of short, crowd-sourced scary stories and then asked the system to generate its own unsettling narratives. One such effort, known as Shelley, used a collaborative interface to let people co-write horror with an AI that had been steeped in classic and contemporary tales of dread, an approach documented by a team at the MIT Media Lab that set out to see whether AI can learn to scare us in ways that consistently trigger goosebumps.
In the current lab, those narrative techniques are fused with physiological data. Scripts are not just evaluated for plot coherence, but for their ability to line up with spikes in arousal recorded during test screenings. The researchers tag story beats with emotional labels, then correlate those with sensor readings to identify patterns that precede a shiver. Over time, the models learn that certain combinations of ambiguity, pacing, and sensory detail are more likely to produce a physical reaction. Instead of relying on a director’s intuition alone, the team can point to specific sequences where the AI’s predictions about fear align with the audience’s bodies, tightening the loop between storytelling and somatic response.
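To make that correlation step concrete, here is a small hypothetical sketch: it counts how often story beats carrying a given emotional label are followed by a recorded shiver within a short window. The beat annotations, shiver timestamps, and five-second window are all invented for illustration, not drawn from the lab’s pipeline.

```python
from statistics import mean

# Hypothetical annotations: each story beat has a start time (s) and labels.
beats = [
    {"t": 12.0, "labels": {"ambiguity"}},
    {"t": 47.5, "labels": {"ambiguity", "sudden_sound"}},
    {"t": 83.0, "labels": {"reveal"}},
]

# Hypothetical shiver events detected from sensors (timestamps in seconds).
shivers = [49.1, 84.2]

WINDOW = 5.0  # assume a shiver within 5 s of a beat counts as triggered by it

def hit_rate(label):
    """Fraction of beats carrying `label` that precede a shiver."""
    tagged = [b for b in beats if label in b["labels"]]
    hits = [any(0 <= s - b["t"] <= WINDOW for s in shivers) for b in tagged]
    return mean(float(h) for h in hits) if hits else 0.0

for label in ("ambiguity", "sudden_sound", "reveal"):
    print(f"{label}: {hit_rate(label):.0%} of beats preceded a shiver")
```

Labels whose hit rates stay high across many test screenings would be the “patterns that precede a shiver” the models then learn to reproduce.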
From virtual chills to physical skin
What makes this lab distinctive is that it does not stop at screen-based experiences. The researchers are also fascinated by how machines might display their own emotional states in ways that humans instinctively understand. That curiosity has led them to study work at Cornell, where engineers built a small, expressive robot that literally gets goosebumps. One widely cited prototype, described by writer John Biggs, was a jolly-looking device that used a flexible outer layer and internal actuators to raise tiny bumps across its surface, letting observers feel its “mood” through touch as well as sight, a concept detailed in coverage of the robot.
That same line of research evolved into texture-changing skins that can shift from smooth to spiky, giving machines a way to signal excitement, fear, or alertness without words. Engineers at Cornell University have demonstrated a robot whose outer layer can sprout bumps or ridges on command, effectively mimicking the piloerection humans experience when they are cold or afraid. For the goosebump-focused AI lab, these advances are more than curiosities. They offer a physical canvas on which the lab’s emotional models can be displayed, turning abstract predictions about fear into tangible, touchable signals that a person can see and feel on a robot’s “skin.”
Why goosebumps became a design metric
Goosebumps are an oddly specific target, but they solve a real problem for anyone trying to measure emotion. Self-reported feelings are noisy and subjective, while clicks and watch time only hint at what people actually experience. Piloerection, by contrast, is a clear, involuntary reaction that often accompanies intense awe, fear, or nostalgia. The lab’s scientists treat it as a kind of ground truth for emotional impact, pairing it with heart rate variability and skin conductance to build a richer picture of how a scene lands. When a particular combination of images and sounds repeatedly produces that response across test subjects, the team treats it as evidence that the underlying pattern is worth encoding into their models.
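The decision rule implied here, keep a pattern only when it reliably raises hair across subjects, can be sketched in a few lines. The sequence names, per-subject records, and 70 percent threshold below are assumptions chosen for illustration, not the lab’s stated criteria.

```python
# Hypothetical per-subject piloerection records for candidate sequences:
# sequence id -> did each test subject shiver during that sequence?
trials = {
    "seq_corridor": [True, True, False, True, True],
    "seq_whisper": [False, True, False, False, True],
}

THRESHOLD = 0.7  # assumed bar for "consistent across subjects"

def keep_for_training(results):
    """Flag a sequence when enough subjects showed piloerection."""
    return sum(results) / len(results) >= THRESHOLD

for seq, results in trials.items():
    verdict = "encode" if keep_for_training(results) else "discard"
    print(f"{seq}: {sum(results)}/{len(results)} subjects -> {verdict}")
```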
That approach has obvious appeal for storytellers and marketers who want more than guesswork when they talk about “goosebump moments.” By quantifying those spikes, the lab can offer directors and game designers tools that highlight where an audience is likely to feel the strongest jolt, or where a sequence falls flat. In the film project that followed the lab’s work, the director leaned on these insights to refine jump scares and slow-burn reveals, using the AI’s predictions as a second opinion alongside human editors. The result is a feedback system in which creative judgment and biometric data inform each other, with goosebumps serving as the shared currency between art and analytics.
The uneasy future of engineered emotion
As I look at this research, I see both a breakthrough in understanding human affect and a preview of a more manipulative media ecosystem. A lab that can reliably trigger shivers can also, in principle, optimize for outrage, anxiety, or compulsive engagement. The same techniques that help a horror director fine-tune a scare could help a political campaign or a platform designer push users toward more extreme content, all while claiming to be simply “responding to what works.” The fact that the lab operates as a tightly controlled, secretive AI project only heightens those concerns, since the details of its datasets and objectives are not easily scrutinized outside curated glimpses like the filmmaker’s guided tour of the goosebump expedition.
At the same time, there is a case to be made that confronting these capabilities in a focused lab is safer than letting them emerge haphazardly in commercial products. By explicitly targeting goosebumps, the researchers are forced to grapple with consent, transparency, and limits in a way that many recommendation systems still avoid. Their collaborations with projects like Shelley and their attention to physical embodiments, from John Biggs’s jolly robot to the texture-changing skins at Cornell University, suggest a willingness to think about emotion as something embodied and relational, not just a metric on a dashboard. Whether that mindset will hold as the techniques spread into entertainment and advertising is an open question. For now, the lab stands as a vivid example of where affective AI is headed: not just toward smarter machines, but toward systems that can reach into the oldest parts of our nervous system and, quite literally, raise our hair.