Researchers have built AI systems capable of steering targeted brain circuits in real time, moving neuroscience closer to precise, closed-loop control of neural activity. These tools pair machine learning with brain stimulation hardware to read neural signals, interpret them, and fire back corrective pulses within milliseconds. The therapeutic promise is significant, but so is an uncomfortable question: what happens when the same technology is turned against the person it is supposed to help?
Closed-Loop AI Systems That Steer Neural Activity
The core idea behind these systems is straightforward in principle and staggering in execution. An AI model monitors brain signals continuously, compares them against a desired target state, and delivers stimulation to nudge activity toward that target. A methods paper in STAR Protocols details a framework called SpikerNet, which uses deep reinforcement learning paired with infrared neural stimulation to define explicit action and observation spaces and a reward function for controlling neural firing patterns. Once deployed on compatible recording and stimulation hardware, the controller can operate as an autonomous feedback loop, adjusting its actions in milliseconds based on how closely the observed activity matches the goal state.
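For readers who want to see the shape of such a loop, here is a minimal sketch of the observe-stimulate-score cycle, written in Python against hypothetical hardware hooks and a generic reinforcement-learning agent. It is not the published SpikerNet code, only an illustration of how an observation space (firing rates), an action space (stimulation parameters), and a reward function fit together.

```python
import numpy as np

# Minimal sketch of the closed-loop structure described above, not the
# published SpikerNet implementation. read_firing_rates() and
# apply_stimulation() are hypothetical placeholders for the recording and
# stimulation hardware, and `agent` stands in for a deep RL learner.

TARGET_RATES = np.array([12.0, 30.0, 8.0])  # illustrative target firing rates (Hz)

def reward(observed, target=TARGET_RATES):
    """Reward grows as observed activity approaches the target pattern."""
    return -float(np.linalg.norm(observed - target))

def closed_loop_step(agent, read_firing_rates, apply_stimulation):
    """One loop cycle: observe, stimulate, observe again, score, store."""
    obs = read_firing_rates()                     # observation space: current firing rates
    action = agent.act(obs)                       # action space: stimulation parameters
    apply_stimulation(np.clip(action, 0.0, 1.0))  # hard clip as a crude safety bound
    next_obs = read_firing_rates()
    r = reward(next_obs)
    agent.remember(obs, action, r, next_obs)      # transition for the learner to train on
    return r
```

The hard clip on the stimulation command is a placeholder for the kind of explicit safety bound discussed later in this piece.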
A separate line of work has demonstrated similarly fine-grained control in living primates. In Nature Biomedical Engineering, researchers showed that closed-loop optogenetic stimulation can manipulate neural dynamics in anesthetized non-human primates, generating, enhancing, or suppressing oscillations and modulating seizure-like activity with phase-specific precision. The underlying recordings and analysis pipelines from Zaaimi and colleagues are available through a public dataset at Newcastle, allowing others to probe exactly how the algorithms track ongoing rhythms and decide when to stimulate. Together, these results show that AI-guided stimulation can already override natural brain patterns in a primate nervous system with a level of accuracy that would have been difficult to imagine a decade ago.
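Phase-specific control of the sort reported in that study hinges on estimating where an ongoing oscillation sits in its cycle before deciding whether to fire. The fragment below is an offline, illustrative version of that step using standard signal-processing tools; the sampling rate, frequency band, and target phase are assumptions, and a real implant would need a causal, low-latency phase estimator rather than the whole-window filtering used here.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Illustrative phase-targeted trigger over an offline window of local field
# potential samples. The constants below are assumptions, not values from the
# published work.

FS = 1000.0           # sampling rate in Hz (assumed)
BAND = (4.0, 8.0)     # oscillation band of interest, e.g. theta (assumed)
TARGET_PHASE = np.pi  # aim for the trough of the oscillation (illustrative)
TOLERANCE = 0.2       # acceptable error around the target phase, in radians

def instantaneous_phase(lfp, fs=FS, band=BAND):
    """Band-pass the signal and read out its instantaneous phase."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, lfp)))

def should_stimulate(lfp):
    """True if the most recent sample falls inside the target phase window."""
    phase = instantaneous_phase(lfp)[-1]
    return abs(np.angle(np.exp(1j * (phase - TARGET_PHASE)))) < TOLERANCE
```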
From Primates to People: Early Human Evidence
Translating such systems into humans adds layers of complexity, from safety constraints to questions about consent and agency. A recent arXiv preprint describes what its authors call a first demonstration of reinforcement-learning-based closed-loop EEG-TMS in humans, where an algorithm identifies the mu-rhythm phase linked to high versus low corticospinal excitability and triggers transcranial magnetic stimulation at the desired phase. By tracking motor evoked potentials and connectivity changes, the researchers argue that the controller is not only sensing brain state but actively reshaping how motor circuits communicate. Because the work has not yet been peer-reviewed, its findings should be treated as provisional, yet it illustrates how quickly closed-loop AI control is moving from animal models toward human neuromodulation.
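The preprint's algorithm is not reproduced here, but the flavor of a reinforcement-learning loop that searches for the most excitable stimulation phase can be sketched as a simple bandit over phase bins, with motor evoked potential amplitude serving as the reward. The functions trigger_tms_at_phase and measure_mep_amplitude are hypothetical stand-ins for the stimulation and measurement hardware.

```python
import numpy as np

# Toy epsilon-greedy bandit over mu-rhythm phase bins: the loop learns which
# stimulation phase tends to produce the largest motor evoked potentials
# (MEPs). This is a hedged illustration, not the preprint's method.

N_BINS = 8                    # discretize the mu-rhythm cycle into 8 phase bins
EPSILON = 0.1                 # fraction of trials spent exploring
q_values = np.zeros(N_BINS)   # running estimate of mean MEP per phase bin
counts = np.zeros(N_BINS)

def pick_phase_bin(rng):
    if rng.random() < EPSILON:
        return int(rng.integers(N_BINS))   # explore a random phase
    return int(np.argmax(q_values))        # exploit the best phase so far

def run_trial(rng, trigger_tms_at_phase, measure_mep_amplitude):
    bin_idx = pick_phase_bin(rng)
    trigger_tms_at_phase(2 * np.pi * bin_idx / N_BINS)   # fire TMS at that phase
    mep = measure_mep_amplitude()
    counts[bin_idx] += 1
    q_values[bin_idx] += (mep - q_values[bin_idx]) / counts[bin_idx]
    return bin_idx, mep
```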
Other proposals focus less on direct electrical or magnetic stimulation and more on using AI to design the sensory inputs that drive neural activity. A generative framework dubbed DecNefGAN envisions creating visual stimuli that can induce targeted mental states via fMRI feedback, with the system continuously measuring brain activity and adjusting images until the desired pattern emerges. Instead of injecting current into the brain, this approach leverages the brain’s own perceptual machinery, using imaging to keep the loop tightly calibrated. Notably, the preprint explicitly raises the risk that malicious actors or external systems could interfere with the feedback loop, underscoring that dual-use concerns are not just the domain of ethicists but are being flagged by technical teams building the tools.
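A stylized version of such a perception-driven loop, which makes no claim to mirror the DecNefGAN architecture, might nudge the latent code of an image generator at random and keep only the nudges that move the decoded brain state toward a target. Here generate_image, present_to_subject, and decode_brain_state are hypothetical placeholders for a trained generator, the stimulus display, and an fMRI decoder; because the brain itself is not differentiable, the sketch relies on trial-and-error rather than gradients.

```python
import numpy as np

# Stylized closed loop in the spirit of the idea described above, not the
# authors' system: perturb a generator's latent code, show the resulting image,
# and keep the perturbation if the decoded fMRI pattern gets closer to the
# target mental state.

LATENT_DIM = 64
TARGET_STATE = np.zeros(8)   # illustrative target vector in decoder space

def induce_target_state(generate_image, present_to_subject, decode_brain_state,
                        n_iterations=200, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(size=LATENT_DIM)      # current latent code for the generator
    best_dist = np.inf
    for _ in range(n_iterations):
        candidate = z + step * rng.normal(size=LATENT_DIM)
        present_to_subject(generate_image(candidate))
        state = decode_brain_state()                     # measured response
        dist = float(np.linalg.norm(state - TARGET_STATE))
        if dist < best_dist:                             # keep moves that help
            z, best_dist = candidate, dist
    return z, best_dist
```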
The Brainjacking Problem
The security vulnerabilities raised by these technologies are not merely speculative. An article in Ethics, Medicine and Public Health defines “brainjacking” as unauthorized control of an electronic brain implant, emphasizing the possibility of altering cognitive, emotional, and motivational states. Deep brain stimulation is already used clinically for Parkinson’s disease, essential tremor, and treatment-resistant depression, usually with relatively simple control schemes. As these implants become more networked and AI-driven, the potential attack surface expands from basic parameter changes to the reward functions and policy structures that determine what the controller is trying to optimize. In a worst-case scenario, an intruder who can rewrite those objectives could subtly shift a patient’s emotional baseline or risk tolerance while the device continues to appear medically functional.
Ethical analysis is beginning to catch up with these technical capabilities. A paper in PLOS Biology warns that advanced brain-computer interfaces could enable precision implantation of specific intentions, moving beyond coarse modulation of brain regions toward influencing the content of thought. Much of the current debate focuses on overt abuses, such as hackers coercing someone’s decisions or governments weaponizing neurotechnology. Yet a subtler and perhaps more likely risk lies in mis-specified objectives within legitimate clinical systems: a controller tuned to minimize anxiety might slowly extinguish adaptive fear responses, or one optimized for mood stability could flatten the full range of emotion. Because closed-loop systems adapt over time, small initial biases in their reward functions could accumulate into large behavioral shifts, and there is almost no long-term human data to illuminate how such dynamics play out in everyday life.
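The accumulation worry is easy to see in a toy simulation that models no real device: if a controller nudges a scalar "mood index" toward a target each session and then re-anchors that target on the shifted state with a small systematic bias, the bias compounds, and an offset of a few hundredths per session grows into a shift many times larger over a hundred sessions.

```python
import numpy as np

# Toy illustration of drift from a slightly mis-specified, adaptive objective.
# None of the numbers correspond to any real system.

def simulate_drift(sessions=100, bias=0.02, gain=0.5, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    state, target = 0.0, 0.0          # start at the person's natural baseline
    history = []
    for _ in range(sessions):
        state += gain * (target - state) + rng.normal(scale=noise)  # closed-loop nudge
        target = state + bias         # controller re-anchors on the shifted state
        history.append(state)
    return np.array(history)

print(f"index after 100 sessions: {simulate_drift()[-1]:.2f} (baseline was 0.00)")
```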
AI Is Also Sharpening the Instruments of Observation
Closed-loop control depends on the quality of the neural signals feeding into the algorithm, and AI is rapidly improving what those signals can reveal. In one recent study highlighted by EurekAlert, researchers used machine learning to decode how specific neuronal populations drive particular behaviors, showing that activating or silencing targeted cells could switch actions on and off in animal models. Although this work is not itself a closed-loop therapeutic system, it illustrates how AI-enhanced analysis can map brain circuits with enough granularity that future controllers might act on highly specific cell types or patterns rather than broad regions. As decoding models become more accurate, the line between "reading out" a brain state and "writing one in" may blur, because the same features that best predict behavior are natural candidates for intervention.
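As a hedged illustration of that entanglement, the snippet below fits a simple decoder to synthetic firing-rate data and then asks which units carry the most predictive weight; in a real pipeline, those same units would be the obvious candidates for targeted stimulation. The data are fabricated for the example and stand in for no published recordings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic example only: build fake population firing rates in which three
# "driver" units determine a binary behavior, fit a decoder, and recover which
# units matter most.

rng = np.random.default_rng(0)
n_trials, n_units = 400, 50
rates = rng.poisson(lam=5.0, size=(n_trials, n_units)).astype(float)
driver_units = [3, 17, 42]                                  # ground truth, by construction
behavior = (rates[:, driver_units].sum(axis=1) > 15).astype(int)

decoder = LogisticRegression(max_iter=1000).fit(rates, behavior)
importance = np.abs(decoder.coef_[0])
top_units = np.argsort(importance)[::-1][:5]
print("units most predictive of the behavior:", top_units)   # should include 3, 17, 42
```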
These advances in observation have privacy implications that extend beyond clinical implants. High-resolution decoders trained on imaging or electrophysiology could, in principle, infer preferences, emotional reactions, or even covert intentions from patterns that appear opaque to human observers. When paired with adaptive stimulation, the result is a powerful bidirectional interface: the system learns how to interpret a person’s internal state and simultaneously learns how to change it. This combination magnifies traditional cybersecurity concerns, since an attacker would not need to understand a victim’s psychology to influence it; they would only need access to a model that has already learned how certain patterns of stimulation shift the decoded state in desired directions.
Designing Safeguards for Neural Autonomy
Because the same properties that make AI-controlled neurostimulation therapeutically promising also make it dangerous, governance will need to be built into the architecture of these systems from the outset. One obvious step is to constrain what closed-loop controllers are allowed to optimize, using hard-coded safety bounds or multi-objective reward functions that explicitly protect aspects of agency such as variability in choice or the capacity to experience negative emotion when appropriate. Another is to require transparent logging of stimulation decisions and underlying model states, so that clinicians and patients can retrospectively audit how the system behaved over days or months. Such logs would not eliminate the risk of brainjacking, but they could make covert manipulation easier to detect and investigate.
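Two of those safeguards, hard-coded bounds and auditable logs, are straightforward to express in code. The sketch below wraps a hypothetical learned controller so that its stimulation requests are clamped to a clinician-set ceiling and every decision is appended to a hash-chained log, making silent edits to the record detectable after the fact; the parameter names and units are illustrative.

```python
import hashlib
import json
import time

# Sketch of two safeguards: a hard stimulation ceiling the learner cannot
# override, and an append-only, hash-chained audit log of every decision.
# The wrapped `controller` and its units are hypothetical.

MAX_AMPLITUDE_MA = 3.0   # hard safety ceiling set by clinicians, never learned

class AuditedController:
    def __init__(self, controller, log_path="stim_audit.log"):
        self.controller = controller
        self.log_path = log_path
        self.prev_hash = "0" * 64            # genesis entry of the hash chain

    def _log(self, record):
        record["prev_hash"] = self.prev_hash
        line = json.dumps(record, sort_keys=True)
        self.prev_hash = hashlib.sha256(line.encode()).hexdigest()
        with open(self.log_path, "a") as f:
            f.write(line + "\n")

    def step(self, observation):
        requested = float(self.controller.act(observation))     # learned policy output
        delivered = min(max(requested, 0.0), MAX_AMPLITUDE_MA)  # clamp to the hard bound
        self._log({
            "time": time.time(),
            "observation": [float(x) for x in observation],
            "requested_mA": requested,
            "delivered_mA": delivered,
        })
        return delivered
```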
At the same time, safeguards must account for the fact that many patients seek out neuromodulation precisely because their own brains feel untrustworthy, whether due to severe depression, obsessive-compulsive disorder, or movement disorders. A rigid insistence on preserving every facet of existing neural function could undermine the very benefits these tools are meant to deliver. The challenge, then, is to define “neural autonomy” in a way that respects a person’s long-term values rather than their momentary states, and to encode that definition into systems that learn and adapt. That will require close collaboration between engineers, clinicians, ethicists, and patients themselves, as well as regulatory frameworks that treat AI-driven neurotechnology not simply as a more precise version of existing devices but as a qualitatively new kind of interface between mind and machine. If that work is done well, closed-loop AI could become a powerful ally in restoring mental health and function; if it is neglected, the same capabilities could erode the boundaries of selfhood they were meant to protect.
*This article was researched with the help of AI, with human editors creating the final content.*