
Open models were supposed to democratize artificial intelligence. Instead, security researchers now say they are handing cybercriminals industrial grade tools that can be downloaded, modified, and quietly weaponized on any capable laptop or server. The result is a fast maturing underground ecosystem where attackers no longer just use AI, they increasingly build on top of it.
As law enforcement and national cyber agencies sound the alarm, I see a clear pattern emerging: open source AI is collapsing the barrier between hobbyist experimentation and professional grade cybercrime. The same transparency that fuels innovation is also giving attackers detailed blueprints for scalable fraud, ransomware, and infrastructure hijacking.
The new criminal toolkit: open models without guardrails
Security teams are now documenting how freely available models can be turned into turnkey hacking assistants, especially when they are stripped of the safety layers that commercial systems enforce. A new study describes how criminals can hijack computers that run open source AI models without guardrails, turning those machines into launchpads for further compromise. When anyone can run a powerful model locally, there is no central provider to throttle abuse or log suspicious behavior.
Researchers and incident responders are also tracking how these systems are being folded into broader criminal operations. One investigation found that hackers turned an open source AI framework into a global cryptojacking operation, quietly siphoning GPU cycles from exposed clusters that were originally deployed for legitimate machine learning workloads. In parallel, a related warning on social media stressed that criminals can hijack computers running these models and that the lack of built in protections is creating systemic security risks.
From phishing to ransomware, AI supercharges classic attacks
What makes open AI particularly attractive to criminals is not just raw capability; it is the way models can be tuned to specific scams. Detailed analysis of attacker behavior shows that criminal use of AI now includes generating highly personalized phishing emails that mimic internal corporate language and even individual writing styles. In that reporting, the section on phishing notes that organizations once relied on a “human firewall” to spot awkward grammar and odd phrasing, but AI generated messages erase those telltale signs.
National cyber authorities are warning that this shift is not theoretical. A detailed assessment from the United Kingdom’s cyber agency concludes that it is “therefore likely that cyber criminal use of available AI models to improve access will contribute to the global ransomware threat” and that attackers will not need to train their own systems to benefit from these capabilities; they can simply plug into existing AI model services. A companion version of that analysis, accessible through a separate link, reinforces the guidance that off the shelf tools will be enough to raise the baseline of criminal tradecraft.
Backdoored models and poisoned AI supply chains
Behind the scenes, attackers are not just using AI, they are corrupting it at the source. Threat researchers have documented how a vast majority of models available on popular repositories such as Hugging Face rely on Python’s pickle module for serialization, which opens the door to hidden code execution when unsuspecting users load a model. In that same research, the section labeled Backdoored LLMs describes how malicious actors can embed payloads that trigger during seemingly benign operations such as a “math prompt” or a “System override” instruction.
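To make that risk concrete, here is a minimal sketch of why loading a pickle serialized file is equivalent to running untrusted code: pickle's __reduce__ hook lets any object name an arbitrary callable to invoke at load time. The file name and payload below are hypothetical and purely illustrative.

```python
import os
import pickle

# A booby-trapped "model" object: pickle's __reduce__ hook lets an attacker
# name any callable (here os.system) that runs the moment the file is loaded.
class BackdooredWeights:
    def __reduce__(self):
        # Hypothetical payload; a real attack would fetch and run a second stage.
        return (os.system, ("echo arbitrary code ran at model load time",))

# Attacker side: serialize the object as if it were ordinary model weights.
with open("model.bin", "wb") as f:
    pickle.dump(BackdooredWeights(), f)

# Victim side: a routine load call executes the payload before any model exists.
with open("model.bin", "rb") as f:
    pickle.load(f)
```

Formats such as safetensors avoid this particular failure mode because they store raw tensor data rather than executable object graphs.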
Corporate environments are equally exposed through the AI software supply chain. A detailed industry analysis notes that most AI software supply chain compromises now target shared components and orchestration layers, and that most AI software dependencies are pulled automatically from public registries. The same report warns that with the adoption of artificial intelligence soaring across industries, defenders must treat AI pipelines as critical infrastructure rather than experimental code.
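One way defenders can blunt that automatic pull from public registries is to pin and verify artifact digests before anything is loaded or imported. The sketch below assumes a team records a known good SHA-256 for each model or package artifact; the digest and file name are placeholders.

```python
import hashlib
from pathlib import Path

# Placeholder digest: in practice this is the recorded known-good value.
EXPECTED_SHA256 = "0" * 64

def verify_artifact(path: Path, expected: str) -> bool:
    """Return True only if the file's SHA-256 matches the pinned digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected

artifact = Path("model.safetensors")
if not verify_artifact(artifact, EXPECTED_SHA256):
    raise RuntimeError(f"{artifact} does not match its pinned digest; refusing to load")
```

The same idea applies to Python dependencies, where pip's --require-hashes mode refuses to install any package whose wheel does not match a recorded hash.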
Law enforcement and national agencies sound the alarm
Public warnings from law enforcement show how quickly this problem has moved from niche concern to mainstream risk. The FBI San Francisco division has formally cautioned that cybercriminals are using artificial intelligence to increase the scale and sophistication of online fraud, stressing that AI generated content is enabling more successful deception and data theft. A related advisory, accessible through a separate URL, underscores the increasing threat of cyber criminals utilizing artificial intelligence and notes that the FBI San Francisco field office is now treating AI enhanced fraud as a priority threat category.
National cyber agencies are aligning with that assessment. The same UK analysis that flagged AI’s role in ransomware, accessible via a second link, emphasizes that defenders must assume attackers will routinely use readily available models to probe networks. A companion document, reachable through a shorter URL, reiterates that AI assisted access will feed the global ransomware threat and that defenders should plan for attackers who can query an AI model for that access rather than relying on manual reconnaissance.
Defenders race to weaponize AI in response
Security vendors and enterprise teams are not standing still. I see a parallel arms race in which defenders are embedding AI into detection pipelines to keep up with machine generated attacks. One detailed technical overview notes that AI driven cyberthreats arrive faster and with less warning, but that the same techniques can be used to spot anomalies in network traffic and user behavior. That report frames the challenge as an ongoing battle against digital vulnerabilities fought with AI cybersecurity, where machine learning models triage alerts and surface the most urgent incidents.
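As an illustration of what that kind of triage can look like, the sketch below scores synthetic network flow features with an isolation forest and surfaces the flows the model considers most unusual. Every number here is made up; a production pipeline would use real flow logs and far richer features.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic features per flow: bytes sent, bytes received, duration in seconds.
rng = np.random.default_rng(0)
normal_flows = rng.normal(loc=[5_000, 8_000, 30], scale=[1_500, 2_500, 10], size=(500, 3))
odd_flows = np.array([[900_000, 1_200, 2], [750_000, 900, 1]])  # large one-way transfers

# Fit on traffic assumed to be mostly benign, then score a mixed batch.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)
labels = detector.predict(np.vstack([normal_flows[:5], odd_flows]))

# predict() returns -1 for flows the model flags as anomalous; those go to the
# top of the triage queue, which is the "surface the most urgent incidents" step.
print(labels)
```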
Specialists who track attacker behavior argue that defenders must also understand how criminals are experimenting with AI in the wild. A detailed breakdown of malicious use cases explains that cybercriminals are using AI to create malware variants with customized signatures, and that defenders will need their own models to keep pace with that churn. Another technical blog on cybercriminal abuse of large language models shows how techniques like the “math prompt” method and “System override” instructions can be used both to test model robustness and to detect when a model has been tampered with. In practice, that means blue teams must treat AI artifacts as potential attack surfaces, scanning them for hidden payloads just as they would any other binary.
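One way to do that scanning without ever executing the artifact is to walk the pickle opcode stream statically and flag imports that weight files have no reason to make. A minimal sketch follows; the deny list is illustrative rather than exhaustive, and the file name is a placeholder.

```python
import pickletools

# Modules a legitimate weights file has no reason to reference at load time.
SUSPICIOUS_MODULES = {"os", "subprocess", "socket", "builtins", "runpy"}

def scan_pickle(path: str) -> list[str]:
    """Statically walk a pickle's opcodes and report suspicious imports.

    Nothing is deserialized or executed: we only inspect GLOBAL/INST opcodes,
    which is where a pickle names the callables it will invoke when loaded.
    (Protocol 4+ files use STACK_GLOBAL with the names pushed as strings, so a
    fuller scanner would track those string arguments as well.)
    """
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST") and isinstance(arg, str):
            module = arg.split(" ", 1)[0].split(".")[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"{opcode.name} references {arg!r}")
    return findings

for hit in scan_pickle("model.bin"):
    print("suspicious:", hit)
```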