
Inside American prisons, the most human part of incarceration is often the phone call home. Now those conversations are being quietly fed into artificial intelligence systems, turning intimate check-ins with family into raw material for predictive policing tools. Prisoners are only just discovering that a private startup has been training a large language model on years of their recorded calls, and the sense of betrayal is reshaping the debate over surveillance, consent, and the future of punishment.
What looks, on paper, like a clever way to detect contraband and planned crimes feels very different to the people whose voices are being mined. They already live under cameras, searches, and constant monitoring, yet the idea that an algorithm is learning from their jokes, arguments, and fears has left many feeling as if the last sliver of privacy has been taken without a straight answer or a real choice.
The hidden AI listening behind every prison phone line
The core revelation is stark. A United States telecommunications company that runs prison phone systems has trained an artificial intelligence model on years of incarcerated people’s calls, then turned that system loose to scan for signs of future crimes. Reporting describes how an AI system now combs through conversations to flag what it interprets as planning, coordination, or coded references to illegal activity, effectively transforming routine calls into a continuous stream of data for law enforcement analysis.
Industry voices have been surprisingly open about the ambition. In one public post, James O’Donnell framed the project as a breakthrough, describing how an AI model trained on prison phone calls can now look for planned crimes in those calls. That pitch, aimed at technologists and potential customers, sits in sharp contrast to the way incarcerated people learned about the system, often through secondhand reports and online discussion rather than any direct, detailed explanation from the company or corrections officials.
From routine recording to a “treasure trove” for machine learning
Prison phone calls have long been recorded, but the shift from storage to large-scale machine learning marks a profound change in how that data is used. People in custody and their families are told that calls are monitored and recorded, yet they are rarely told that those recordings will be repurposed to train a large language model that can be pointed at what one executive described as an entire “treasure trove” of data. Reporting on the system notes that people in prison, and those they call, are notified that their conversations are recorded, but that notice does not mean they understand that a sophisticated model is being trained on their data.
The startup behind the system has been explicit, at least in investor and industry circles, about how it sees this dataset. One executive boasted that “We can point that large language model at an entire treasure trove [of data] to detect and understand when crimes are being planned,” describing how the same tool could be used to guide targeted searches and inspections of the general population. That vision, reported in coverage of prisoners’ reactions, shows how the company sees incarcerated people’s voices as a rich resource to be mined, a framing that has left many of those same prisoners feeling more like test subjects than citizens with rights.
“Alarmed” on the inside: how incarcerated people found out
The people whose calls power this system did not learn about it through a transparent rollout or a clear consent process. Instead, they began to hear that a startup had been training an AI on their conversations through news coverage and online chatter, then pieced together that their own calls were part of the experiment. Reporting describes how incarcerated people were “alarmed” to discover that a private company had quietly built a model on their voices, with some learning only after family members on the outside sent them clippings about the new surveillance tool that was already in use on their housing units.
That sense of shock is not just about the existence of monitoring, which most people in custody already assume, but about the scale and purpose of the new system. Incarcerated callers describe feeling as if the last space where they could speak somewhat freely has been converted into a training ground for predictive policing, with no meaningful way to opt out. One account notes that prisoners only realized the scope of the project when they read that the company could now use the model to guide targeted searches and inspections, a detail that crystallized how their everyday calls had been turned into operational intelligence for the very institution that confines them, and that has left coverage of the revelation filled with accounts of fear and anger.
A long arc of prison surveillance meets a new generation of AI
To understand why this AI project feels like a breaking point, it helps to see it as the latest step in a long expansion of prison surveillance. For years, corrections agencies have layered on new tools, from automated call monitoring to video analytics, often sold as add-ons to existing services. In California, for example, tools like Verus were initially marketed as optional features for prison phone systems, yet reporting shows how many prison telecommunications providers have since folded such AI-driven surveillance into broader contracts, including a major deal with LeoTech in 2023 that expanded machine monitoring across facilities.
Phone calls are only one piece of a much larger apparatus. Earlier reporting on prison technology documented how US prisons and jails already use AI to mass monitor millions of inmate calls, with prisoners’ rights advocates warning that this added layer of surveillance can chill speech and undermine rehabilitation. One analysis noted that more than 39 states show decreases in prison populations, yet the technology footprint inside facilities continues to grow, a contradiction that fuels concerns that AI is entrenching a culture of suspicion even as incarceration itself slowly recedes.
Consent, notice, and the legal gray zone around prison data
Legally, prisons have long asserted the right to record and monitor calls, but using those recordings to train a large language model pushes into a murkier space. The standard recorded warning that a call “may be monitored or recorded” does not spell out that the audio will be stored indefinitely, fed into an AI system, and used to build a predictive model that can be repurposed for future products. Reporting on the new system stresses that people in prison and their loved ones are notified that calls are recorded, but that notice falls far short of meaningful consent for AI training, especially when refusing the call is not a realistic option for maintaining family ties.
Privacy advocates argue that this gap between minimal notice and expansive data use would be unacceptable in almost any other context. If a consumer messaging app tried to train a model on private conversations without an explicit opt-in, regulators and users would likely revolt. Yet incarcerated people, who have fewer legal protections and limited access to counsel, are being treated as a captive dataset. Online discussions of the project, including a thread titled “Startup Uses Prisoners’ Phone Calls for Training, Raising Privacy Concerns,” highlight how technologists and civil liberties advocates see this as a test case for whether AI companies can quietly harvest sensitive data from the most powerless people in society.
Supporters say safety, critics see “Minority Report”
Supporters of the AI system frame it as a pragmatic tool for safety. They argue that if a model can sift through thousands of hours of calls to spot patterns that human monitors would miss, it could help intercept drug smuggling, gang coordination, or planned assaults before anyone gets hurt. In that telling, the system is simply a smarter extension of monitoring that already happens, one that uses pattern recognition to surface the most concerning snippets for human review rather than leaving staff to drown in audio they cannot realistically process.
Critics, including many people inside, hear something very different. To them, the idea of an algorithm scanning for “planned crimes” sounds less like targeted safety work and more like a pre-crime fantasy. In one comments section reacting to the news, a user joked, “I’ve seen this movie. #MinorityReport,” while another worried about how easily ambiguous phrases could be misread as threats. Those reactions capture a broader fear that AI will not just detect real plots but will also misinterpret slang, sarcasm, or emotional venting as evidence of danger, with little transparency about how those judgments are made or challenged.
How the AI actually works, and what it is looking for
Although the company has not open sourced its model, public descriptions and reporting offer clues about how the system functions. The AI is described as a large language model trained on years of recorded calls, which means it has ingested countless examples of how incarcerated people talk about daily life, conflict, and plans. Once trained, it can be “pointed” at new calls to look for patterns that resemble past incidents, such as coded references to contraband, mentions of specific locations or times, or conversational structures that historically preceded fights or smuggling attempts.
Industry posts suggest that the model is not just doing keyword spotting but trying to understand context, tone, and relationships between speakers. In his public note, O’Donnell described how the system can now look for planned crimes in those calls, implying that it is trained to distinguish between hypothetical talk and concrete planning. Yet the same sophistication that makes the tool powerful also makes it opaque. People flagged by the system may never know which phrase or pattern triggered scrutiny, and there is no clear process for them to contest the AI’s interpretation of their words.
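None of this code is public, so the sketch below is purely illustrative: it shows, in heavily simplified form, the general shape of a system that scores transcript segments against patterns linked to past incidents and surfaces anything above a threshold for human review. Every pattern, weight, and threshold here is hypothetical, and the deployed system reportedly relies on a large language model rather than fixed rules.

```python
import re
from dataclasses import dataclass

# Hypothetical patterns standing in for what a trained model might learn.
# A real LLM would weigh context, tone, and speaker relationships,
# not a handful of fixed regular expressions.
SUSPICIOUS_PATTERNS = {
    r"\bdrop (it|them) off\b": 0.4,                 # possible hand-off
    r"\byard\b.*\b(tomorrow|tonight)\b": 0.3,       # a place plus a time
    r"\bdon't say (it|that) on the phone\b": 0.5,   # awareness of monitoring
}

@dataclass
class Flag:
    segment: str
    score: float

def score_segment(segment: str) -> float:
    """Sum the weights of every pattern found in one transcript segment."""
    text = segment.lower()
    return sum(w for pattern, w in SUSPICIOUS_PATTERNS.items()
               if re.search(pattern, text))

def flag_call(transcript: list[str], threshold: float = 0.5) -> list[Flag]:
    """Return the segments whose score crosses the human-review threshold."""
    flags = [Flag(s, score_segment(s)) for s in transcript]
    return [f for f in flags if f.score >= threshold]

if __name__ == "__main__":
    call = [
        "Tell him to drop them off by the yard tomorrow.",
        "How are the kids doing in school?",
    ]
    for f in flag_call(call):
        print(f"review: {f.segment!r} (score {f.score:.1f})")
```

In the deployed system, that scoring step is replaced by a model trained on years of real calls, so neither the patterns nor the weights can be read off and challenged the way they can in a toy example like this.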
Public backlash and the emerging ethics debate
Once the existence of the AI model became public, reaction outside prison walls was swift and polarized. On technology forums, some users praised the system as an innovative way to keep facilities safer, while others saw it as a chilling example of how AI can be deployed against people with little power to resist. In one widely shared thread, a commenter opened with “Hey, thanks for sharing our story,” then added, “Here’s some context from the article,” underscoring how even tech enthusiasts felt the need to spell out the stakes for readers who might otherwise see the project as just another clever use of data.
Ethics discussions have focused on three main questions. First, whether incarcerated people can ever meaningfully consent to data use when refusing means losing contact with loved ones. Second, whether training AI on such sensitive conversations crosses a line even if it is technically legal. And third, whether the benefits claimed by the company, such as reduced contraband or violence, can be independently verified. Threads raising privacy concerns repeatedly call for transparency in AI training practices, arguing that if a system is powerful enough to shape searches, discipline, or parole decisions, it must be open to outside scrutiny rather than locked behind proprietary claims.
What this experiment means for the future of punishment
The prison phone AI is not an isolated project. It sits at the intersection of two powerful trends: the rapid commercialization of large language models and the steady outsourcing of core prison functions to private vendors. Tools like Verus and other analytics platforms show how companies pitch AI as a way to manage shrinking budgets and staff by automating surveillance, even as reported figures show prison populations declining in more than 39 states. The result is a system where the physical footprint of incarceration may shrink, but the digital footprint of control grows denser and more automated.
For people inside, the AI trained on their calls is a warning about where that trajectory leads. If their most personal conversations can be quietly converted into training data for a predictive model, it is not hard to imagine similar tools being turned on prison emails, video visits, or even in cell audio. The backlash from incarcerated people and outside advocates suggests that the public is not ready to accept that future as inevitable. Whether that resistance translates into new rules, lawsuits, or policy reforms will determine if the voices captured on those calls remain a “treasure trove” for machine learning or become the catalyst for drawing a hard line around the limits of surveillance in a system that already takes so much.