Artificial intelligence deepfakes have gone from fringe novelty to front-page threat, warping politics, finance, and everyday trust in what we see and hear. Alex Bores, a former Palantir engineer turned New York state assemblymember, argues that this crisis is not an unsolvable sci‑fi nightmare but a practical security problem that can be contained with a decades‑old, essentially free technique. His pitch is simple: instead of trying to perfectly spot every fake, we should make it easy to prove what is real.

That shift in mindset, from chasing forgeries to authenticating originals, is quietly reshaping how technologists, lawmakers, and companies think about AI risk. I see Bores’s proposal as part of a broader movement to harden our information infrastructure, using tools that already exist but have rarely been deployed at scale for everyday speech and video.

Alex Bores’s journey from Palantir engineer to deepfake watchdog

Alex Bores did not arrive in Albany as a typical backbencher learning technology on the fly. He built his early career at Palantir, working with large data systems and security‑sensitive analytics before running for the New York State Assembly, where he now represents a Manhattan district and has made AI policy a central focus of his agenda. That mix of hands‑on engineering and legislative authority gives him an unusual vantage point on how fast synthetic media is advancing and how slowly public safeguards are catching up, a tension he has described in detail in profiles of his shift from ex‑Palantir technologist to politician.

In that role, Bores has zeroed in on deepfakes as a test case for whether democratic institutions can adapt to AI without either overreacting or surrendering. He has warned that manipulated audio and video are already eroding trust in elections, public hearings, and even routine constituent communication, and he has pushed for practical guardrails that do not require banning generative tools outright. His argument is that the pattern that once treated cybersecurity as an afterthought is repeating itself with AI, and that lawmakers need to treat authenticity as critical infrastructure rather than a niche technical concern, a theme he has expanded on in interviews about his legislative priorities and his experience bridging the worlds of code and statute in New York.

The solvable problem: reframing deepfakes as an authentication gap

Where many experts describe deepfakes as an arms race that defenders are destined to lose, Bores frames them as a more mundane failure of authentication. In his view, the core problem is not that AI can generate convincing forgeries, but that our everyday communications rarely carry any cryptographic proof of origin, so a realistic fake can circulate with the same apparent legitimacy as a genuine recording. He argues that if we treat audio and video like sensitive documents, attaching verifiable signatures at the point of capture, the space for plausible fakes shrinks dramatically, because recipients can quickly check whether a clip truly came from the claimed source.

That perspective aligns with how security engineers have long approached email spoofing and website phishing, where protocols like SPF and DKIM for email, and TLS certificates for the web, did not eliminate malicious messages but made it far easier to verify legitimate senders and sites. Bores has said that deepfakes become a “solvable problem” once we stop expecting AI detectors to be perfect lie detectors and instead build a default expectation that important communications, from campaign ads to corporate earnings calls, arrive with a cryptographic trail. His focus on solvability is not techno‑optimism so much as a call to apply the same discipline that already protects online banking and software updates to the videos and voice notes that now shape public opinion.

The “free, decades‑old” trick: digital signatures for everyday speech

The old technique Bores wants to revive is digital signing, a cryptographic method that has existed for decades in tools like PGP, code‑signing certificates, and secure messaging apps. In his proposal, the camera or microphone that records a message would generate a hash of the raw data and sign it with a private key controlled by the speaker or institution, creating a tamper‑evident seal that can be checked by anyone with the corresponding public key. Because the math and software behind this are already widely deployed and often open source, he describes it as essentially free, apart from the engineering work to integrate it into phones, conferencing platforms, and editing tools, a case he lays out in detail when he argues that deepfakes can be contained if we bring back this decades‑old technique.
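To make the mechanics concrete, here is a minimal sketch of capture‑time signing in Python, assuming the open‑source cryptography package and an Ed25519 key; the function names and key handling are illustrative, not a description of any specific product or of Bores’s exact proposal.

```python
# Illustrative sketch only: hash the raw recording and sign the digest.
# Assumes the "cryptography" package (pip install cryptography).
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real device the private key would live in a secure enclave or HSM;
# generating a throwaway key here is purely for demonstration.
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

def sign_recording(raw_media: bytes) -> tuple[bytes, bytes]:
    """Return (digest, signature) to ship alongside the media file."""
    digest = hashlib.sha256(raw_media).digest()
    signature = device_key.sign(digest)
    return digest, signature
```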

In practice, that could look like a “verified origin” badge on a livestreamed town hall, a signed transcript attached to a CEO’s video message, or a provenance mark that travels with a clip through basic edits and breaks the moment the underlying content is synthetically altered. Bores emphasizes that the goal is not to sign every meme or casual TikTok, but to create a clear, widely understood lane for high‑stakes content where authenticity matters most. Once that lane exists, he argues, the burden shifts: instead of asking viewers to spot subtle artifacts in a candidate’s voice, we can ask why a supposedly official message lacks the same cryptographic proof that already protects a smartphone operating system update.
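On the receiving side, the “verified origin” check reduces to recomputing the hash and validating the signature against the claimed speaker’s public key. The snippet below continues the hypothetical sketch above; any edit to the media, synthetic or otherwise, breaks the seal.

```python
import hashlib
from cryptography.exceptions import InvalidSignature

def verify_recording(raw_media: bytes, signature: bytes, claimed_public_key) -> bool:
    """True only if the media is byte-for-byte what the claimed key signed."""
    digest = hashlib.sha256(raw_media).digest()
    try:
        claimed_public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# A platform, newsroom, or finance team could refuse to act on any clip
# where no signature is attached or verify_recording(...) returns False.
```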

Why detection alone is losing to AI impersonation scams

The urgency behind Bores’s push is visible in the corporate world, where AI‑driven impersonation scams are already costing real money. Companies have reported cases where fraudsters used cloned executive voices on phone calls and realistic video avatars in video conferences to trick staff into wiring funds or sharing sensitive data, a pattern that has spawned a new wave of startups promising to stop deepfakes in real time. One such firm recently raised $28 million to analyze live audio and video for signs of manipulation, underscoring how quickly the threat has moved from theoretical to operational.

These tools can help, but they also highlight the limits of a detection‑only strategy. As generative models improve, the artifacts that current detectors rely on, from unnatural blinking to audio compression quirks, become less reliable, and attackers can test their forgeries against public detection APIs before deploying them. Bores’s argument is that this dynamic looks uncomfortably like spam filtering before email authentication protocols matured: defenders are stuck playing catch‑up while attackers iterate. By contrast, a signed‑content approach would let a finance team ignore any “urgent” video request from a CFO that lacks the expected cryptographic seal, reducing the need to guess whether a voice sounds slightly off in a noisy conference call.

AI anxiety, Gen Z, and the politics of authenticity

The deepfake debate is unfolding alongside a broader wave of AI anxiety, especially among younger workers who worry that automation will make their skills obsolete. Some Gen Z founders have pushed back on that narrative, arguing that the biggest misconception is that AI is a lazy shortcut rather than a tool that still demands human judgment and creativity, a point one entrepreneur made in a recent reflection on AI and Gen Z, describing how peers misread AI as a path to instant success instead of a demanding craft. Bores’s focus on authentication fits into that conversation by treating AI not as an unstoppable replacement for human trust, but as a technology that can be governed and shaped by policy choices.

At the same time, the politics of AI are already volatile, with lawmakers, regulators, and campaigns clashing over how aggressively to regulate generative tools and how to police their use in elections. One widely shared analysis noted that the politics of AI are already exploding, from content moderation fights to national security debates about model access. In that environment, a proposal like Bores’s, which leans on existing cryptographic infrastructure rather than sweeping bans, offers a politically palatable middle path: it promises concrete protection against some of the worst abuses without trying to halt AI research or criminalize every instance of synthetic media.

Human rights, harassment, and the cost of doing nothing

Deepfakes are not just a corporate or electoral problem; they are also a human rights issue that disproportionately harms women, activists, and marginalized communities. Researchers have documented how manipulated videos and “shallowfakes” have been used to discredit journalists, fabricate compromising footage of women, and muddy evidence of abuses, making it harder for victims to prove what really happened. One early analysis warned that these techniques could be weaponized to undermine documentation of war crimes and police violence, turning the very idea of video evidence into a contested battleground, a concern detailed in reporting on deepfakes and human rights.

Bores’s emphasis on authentication speaks directly to that risk. If activists, human rights monitors, and local journalists can capture footage with built‑in cryptographic signatures, they gain a stronger foundation to rebut claims that their videos are fabricated, while platforms and courts gain a clearer standard for weighing authenticity. It would not stop bad actors from circulating fake clips, but it would give targeted individuals a more robust way to prove that a particular recording is genuine, and it would make it harder for governments or powerful figures to dismiss inconvenient evidence as AI trickery. In a world where “it is fake” has become a reflexive defense, the ability to say “here is the signed original” could be a quiet but powerful counterweight.

How scammers exploit trust gaps as deepfakes get smarter

Financial institutions and consumers are already feeling the impact of more convincing synthetic media, as scammers blend AI‑generated voices and faces with old‑fashioned social engineering. Banks and credit unions have begun warning customers that fraudsters can now mimic a loved one’s voice to request emergency transfers or impersonate a bank representative on a video call, urging people to verify unexpected requests through independent channels. One such advisory bluntly notes that deepfakes are getting smarter, and that traditional cues like odd phrasing or low‑quality audio are no longer reliable red flags.

In that context, Bores’s call for signed content is not just about high‑profile political videos, but also about everyday financial safety. If a bank could mark its official video messages and support calls with a cryptographic seal that customers learn to expect, it would be easier to tell a legitimate fraud alert from a synthetic impostor. Similarly, families could agree on simple verification rituals, like shared passphrases or secure messaging channels, to supplement any voice or video plea for help. The broader point is that as AI makes impersonation cheaper and more scalable, societies need to upgrade their trust protocols, and digital signatures offer a way to do that without asking every consumer to become an expert in audio forensics.

Online culture, Reddit debates, and the demand for proof

Outside formal institutions, the internet’s own culture is already grappling with the authenticity crisis in more chaotic ways. On large forums, users trade examples of uncanny AI‑generated clips, argue over whether a viral video is real, and share tools for spotting telltale glitches, turning deepfake detection into a kind of participatory sport. Threads on communities like r/technology routinely feature heated debates about whether a given clip is synthetic, with some users insisting that skepticism should be the default and others warning that constant doubt can be weaponized to dismiss genuine evidence.

That grassroots skepticism is healthy up to a point, but it also illustrates why Bores’s authentication‑first approach matters. If every contentious clip becomes a referendum on individual users’ ability to spot artifacts, the loudest or most technically confident voices will often win, regardless of who is actually correct. A simple, widely adopted standard for signed content would not end those arguments, but it would give communities a more objective reference point: instead of endless zoom‑ins on pixelated frames, moderators could ask whether a video claiming to show a public official or corporate leader carries the expected cryptographic proof. In effect, it would shift some of the burden from amateur sleuthing to infrastructure, which is where security problems tend to be solved most reliably.

Where Bores’s proposal fits in the wider AI economy

The stakes of getting deepfake policy right are rising alongside the broader AI boom, as companies pour capital into generative models and related tools. Market trackers show a steady stream of funding rounds, product launches, and regulatory updates in the artificial intelligence sector, with investors betting that everything from customer service to drug discovery will be reshaped by machine learning. In that environment, the temptation is strong to treat deepfakes as an unfortunate side effect of progress, something to be managed with piecemeal moderation rather than structural changes.

Bores’s insistence on a low‑cost, infrastructure‑level fix challenges that complacency. By pointing to a “free, decades‑old” technique, he is effectively arguing that the AI economy has no excuse for leaving authenticity as an afterthought, especially when the same cryptographic primitives already secure software supply chains and financial transactions. His proposal also dovetails with emerging industry efforts to standardize content provenance, such as embedding metadata about how an image or video was created, but it pushes further by tying authenticity to the identity of the speaker rather than just the tool. In a market where trust is becoming a competitive advantage, companies that adopt such standards early may find that verifiable communication is not just a compliance burden but a selling point.
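One way to tie authenticity to the identity of the speaker rather than just the tool, sketched under the same illustrative assumptions as the earlier snippets, is to sign a small provenance manifest that names both. The field names below are hypothetical and do not follow any real standard such as C2PA.

```python
import base64
import hashlib
import json

def build_manifest(raw_media: bytes, speaker_id: str, capture_tool: str, private_key) -> str:
    """Bundle a hash of the media with who is vouching for it and how it was
    captured, then sign the whole manifest so no field can be swapped out."""
    manifest = {
        "sha256": hashlib.sha256(raw_media).hexdigest(),
        "speaker": speaker_id,          # identity doing the vouching
        "capture_tool": capture_tool,   # provenance of the recording itself
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = base64.b64encode(private_key.sign(payload)).decode()
    return json.dumps(manifest)
```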

Limits, trade‑offs, and what comes next

No security measure is perfect, and Bores’s approach has real trade‑offs that policymakers and platforms will need to confront. Digital signatures can be compromised if private keys are stolen, and any system that ties identity to cryptographic keys raises questions about who controls those keys and how revocation works when a device is lost or an account is hacked. There is also a risk that authoritarian governments could co‑opt authenticity standards to delegitimize anonymous speech or require all political content to pass through state‑approved signing authorities, a concern that human rights advocates have raised in broader debates about content provenance.

Still, the alternative is to continue relying on a patchwork of AI detectors, platform policies, and individual skepticism, even as synthetic media becomes more convincing and more accessible. Bores’s core insight is that we already know how to build systems where authenticity is the default rather than the exception, and that the cost of deploying those systems for speech and video is low compared with the social and political damage that deepfakes can inflict. As public awareness grows, fueled by viral clips, financial scams, and high‑profile political controversies, the pressure to move from ad hoc responses to structural fixes will only increase, a trend reflected in the growing volume of AI‑related stories that dominate short video explainers and in the way AI topics now routinely top mainstream political coverage. Whether or not Bores’s exact blueprint becomes law, the idea that authenticity must be engineered, not assumed, is likely to define the next phase of the AI era.