Image Credit: Daniel Torok - Public domain/Wiki Commons

Artificial intelligence was always going to collide with politics, but the way it is now being weaponized from inside the White House is forcing a reckoning. Hyperreal images and videos that once belonged to fringe corners of the internet are now being pushed through official channels, blurring the line between satire, propaganda, and outright fabrication. The result is a mounting crisis of confidence in what voters can believe about their leaders and their opponents.

President Donald Trump has not only accepted that blurring, he has turned it into a governing style. His administration’s embrace of AI generated imagery, from flattering fantasies to inflammatory depictions of critics, is testing whether a democracy can function when even the pictures coming out of the Oval Office are up for debate.

The White House turns AI into a governing tool

Trump’s second term has been defined by a willingness to experiment with artificial intelligence in public messaging, treating AI tools as just another extension of his political brand. From the start of this term, his aides have steadily ramped up AI generated posts that target opponents and glorify the president. Fact checkers who have traced the pattern note that the posts have increasingly focused on adversaries who cross him, and that they include images of buildings and monuments bearing his name in gold that never existed in reality but circulate as if they might. That escalation is documented in detailed reviews of Trump and his staff’s online output.

Inside the administration, aides have framed this as innovation rather than deception, arguing that AI images are simply a more vivid way to communicate political points. Yet experts who track disinformation say the White House is “testing the limits” of AI driven political messaging, normalizing a style in which audiences are expected to decode what is real, what is exaggerated, and what is entirely fabricated. Analyses of the president’s social feeds describe how these AI generated posts fulfill a political purpose, reinforcing Trump’s preferred narratives while leaving critics scrambling to debunk content that can be created faster than it can be fact checked, a trend chronicled in multiple examinations of Trump’s evolving strategy.

From memes to deepfakes: a new level of political unreality

What began as meme style images has hardened into something more serious, with the Trump administration openly sharing AI generated imagery that looks indistinguishable from documentary photography. Over the past year, official accounts have posted AI created scenes of Trump in heroic or glamorous settings, including one widely discussed image that placed Trump and Israeli Prime Minister Benjamin Netanyahu at a lavish resort with “Trump Gaza” emblazoned in the background, as well as another that showed the president behind bars in an orange jumpsuit, a fantasy of persecution that his team used to rally supporters. These examples have been cataloged in reporting on how the administration has turned deepfake aesthetics into a campaign tool.

The White House has also leaned into AI video, commissioning clips that show Trump speaking in stylized environments or confronting dramatized crises that never occurred, a practice that technologists warn can desensitize viewers to the difference between authentic footage and synthetic storytelling. One investigation into these AI videos describes how the administration has used artificial intelligence to dramatize immigration enforcement and other hot button issues, creating content that is emotionally charged but only loosely tethered to real events. When such material is pushed through official channels, it carries the implicit seal of government, even when it is closer to fiction than fact.

The Nekima Levy Armstrong photo and the line that was crossed

The breaking point for many critics came when the White House circulated an altered image of civil rights attorney Nekima Levy Armstrong that appeared to show her in tears after an arrest. The picture, which was shared by allies including Homeland Security Secretary Kristi Noem, was realistic enough that it required forensic analysis to confirm it had been manipulated, with The New York Times running the image through the detection service Resemble.AI to show that the version used by Noem and the White House differed from the original and misrepresented the circumstances of the prosecution of Ms. Levy Armstrong. That sequence of events is laid out in detail in reporting on the doctored photo.

Experts who study misinformation say that image marked a shift from self referential propaganda to AI being used to distort the reputation of a private citizen in the context of a live legal case. One analysis described the picture as “edited and realistic,” warning that such content “distorts the truth and sows distrust” when it is presented without clear labeling, and noting that the controversy erupted precisely because the image was so plausible. Coverage of the uproar over the edited image underscores how quickly AI can be turned against individuals who lack the megaphone of the presidency.

Trust, grandparents, and the “your eyes are lying” problem

The deeper danger is not any single image but the cumulative effect of a White House that treats reality as optional. Researchers who monitor public opinion say Trump’s use of AI images is “pushing new boundaries” and “further eroding public trust,” especially among people who already struggle to keep up with the pace of online information. In one widely cited warning, an expert noted that “Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids,” a dynamic that can spread confusion across generations when official accounts share content that looks like news photography but is actually synthetic. That quote appears in coverage of how Trump’s use of AI is landing with ordinary viewers.

Analysts who spoke to federal technology reporters argue that this strategy is not accidental but part of a broader pattern in which Trump’s team uses AI to inflame existing doubts about institutions. They warn that once citizens internalize the idea that any image might be fake, it becomes easier for leaders to dismiss authentic evidence as fabricated too. That concern is central to assessments of how Trump is reshaping expectations for official communication, as well as to broader examinations of the administration’s impact on public trust in government messaging.

AI politics is bigger than Trump, but he is setting the tone

Trump’s aggressive use of AI imagery is unfolding in a wider environment where synthetic media is already seeping into campaigns and voter outreach. Earlier in the election cycle, voters in New Hampshire received a robocall that mimicked President Joe Biden’s voice and falsely told them that if they cast a ballot in the primary they would not be able to vote in the general election, a deepfake that investigators traced back to political operatives experimenting with voice cloning. That incident, which unfolded in January among New Hampshire voters, showed how AI can be used not just to shape perceptions but to actively suppress participation.

At the same time, watchdogs have documented AI generated images that falsely depict Trump as enjoying support from Black voters, part of a broader wave of deepfakes that researchers say threatens democracy while widening political polarization. Analyses of these AI generated depictions of Trump’s supposed backing among Black voters underscore how synthetic media can be tailored to exploit racial and partisan divides. In that context, the president’s own embrace of AI, chronicled in detailed accounts by MPR and other outlets, is not just another campaign tactic. It is a signal to the rest of the political system that reality itself is negotiable, and that the most powerful office in the country is comfortable operating in a permanent state of visual doubt.
