Why So Many Users Refuse AI on Their Devices for One Blunt Reason

One-third of consumers flat-out reject AI features on their devices, and the top reason is not confusion or technophobia. It is a blunt verdict: they do not need it. That finding, drawn from Circana’s Connected Intelligence research covering more than 10,000 U.S. households, signals a growing gap between the AI features tech companies are racing to ship and the practical value ordinary people actually experience. As AI becomes a default checkbox on new phones, laptops, and smart home gear, the data suggests a sizable slice of the public is not just hesitant but actively uninterested in the pitch.

That disconnect matters because it challenges the assumption that AI adoption will mirror earlier waves of consumer tech, where skepticism faded as people saw the benefits. Here, many consumers already see the demos, understand the marketing claims, and still decide that these tools do not earn a place in their daily routines. For device makers betting their product roadmaps on “AI everywhere,” the Circana numbers read less like a temporary hurdle and more like a structural demand problem. People are not asking for what is being built.

Most People Know About AI and Still Say No

The assumption that consumer resistance stems from ignorance does not hold up. Circana’s research finds that 86% of U.S. consumers are already aware of AI capabilities built into smartphones, smart speakers, and other gadgets. Awareness is not the bottleneck. Yet 35% of those same consumers say they are not interested in having AI on their devices at all, and nearly two-thirds of that resistant group say they simply do not need it. In other words, people are making an informed choice to pass on AI, not avoiding it because they have never heard of it.

That response carries a specific weight. It is not a complaint about glitches or a misunderstanding of what generative models can do; it is a cost-benefit judgment that the features on offer do not solve problems these users care about. Coverage of the Circana findings in the tech press notes that many respondents see AI as a solution in search of a problem, with consumers describing AI assistants as unnecessary extras rather than must-have upgrades. The pattern echoes what executives themselves are discovering: according to reporting on the same survey, more than half of CEOs say they have yet to see tangible business benefits from AI deployment, and Microsoft’s chief executive has publicly acknowledged that AI still needs to prove its broader impact. When both buyers and builders question the payoff, consumer skepticism looks less like stubbornness and more like a rational read of the current landscape.

Privacy Fears Backed by Enforcement Actions

Beyond the “do not need it” verdict, privacy ranks as a persistent secondary concern among consumers who resist AI on their devices. That worry is not abstract. In May 2023, the Federal Trade Commission and Justice Department charged Amazon with violating children’s privacy law by retaining kids’ Alexa voice recordings indefinitely and undermining parents’ deletion requests. As part of the settlement, regulators restricted how the company could use certain collected data to train its algorithms, a direct acknowledgment that voice data harvested through consumer devices was being repurposed in ways users never explicitly agreed to. For consumers already uneasy about invisible data flows, the case reads like confirmation that their suspicions are justified.

Broader survey data reinforces that distrust. According to recent polling by Pew, half of Americans say they are more concerned than excited about the increased use of AI in daily life, up sharply from earlier in the decade. Majorities also report low confidence that companies deploying AI will handle personal information responsibly, and many expect that data collected by AI-powered products will be used in ways they would not approve of. Separate privacy research has found similarly high levels of anxiety, with large majorities believing that information gathered by AI systems will be repurposed or shared beyond the original context. These numbers describe a public that understands AI well enough to distrust the business model behind it, especially when it hinges on continuous data collection.

Americans Want Safety Rules, Not Faster Rollouts

Consumer resistance also shows up in clear policy preferences. A national Gallup survey found that 80% of U.S. adults want to maintain or strengthen rules for AI safety and data security even if doing so slows the pace of AI development. Only 9% say accelerating innovation should take priority over safeguards. That is not a close call; it is an overwhelming mandate for caution. For device makers touting rapid AI integration as a competitive advantage, the message is that most people would rather see fewer, safer features than a ceaseless stream of experimental ones.

The federal government has started to formalize that instinct. In January 2023, the National Institute of Standards and Technology launched the AI Risk Management Framework, a voluntary guidance document that sets out a common vocabulary for assessing harms related to privacy, security, transparency, and accountability. NIST’s broader work on trustworthy technologies, cataloged through its computer security programs, gives policymakers and companies a technical foundation for talking about AI risks in concrete terms. But the gap between these high-level frameworks and what consumers actually encounter on their phones and smart speakers helps explain why so many people default to refusal. When official risk categories read like a checklist of unresolved problems (data leakage, opaque decision-making, weak recourse), opting out starts to look like the most sensible consumer choice available.

Can Privacy-First Design Change Minds?

Some companies are betting that stronger privacy architecture can convert skeptics into users. Apple’s recently announced Private Cloud Compute system, for example, is designed so that only the minimum data necessary is sent to remote servers for AI processing, and so that the data is used solely to fulfill the immediate request before being deleted. In its technical description of the system, Apple says it does not retain the contents of those requests or use them to build long-term profiles, and it emphasizes that code running in the cloud is subject to outside inspection. If those claims hold up under independent scrutiny, the approach directly targets the fears that Pew and Gallup surveys have documented: persistent tracking, opaque storage, and secondary uses of personal information.

But a single vendor’s privacy model does not fix a market-wide credibility problem. Consumers do not experience AI as a monolithic technology; they encounter it in a patchwork of apps, platforms, and devices, many of which are funded by advertising or data brokerage. As long as people have to guess which products behave like Private Cloud Compute and which behave like the Alexa case that drew FTC action, distrust is likely to persist. To shift that perception, companies would need not just better engineering but also plain-language disclosures, consistent enforcement, and visible consequences when promises are broken. Without that, even well-designed privacy features risk being dismissed as marketing gloss.

From “Nice-to-Have” to “Need-to-Have”

Underneath the privacy debate lies a simpler question: what problems does on-device or cloud AI actually solve for ordinary people? For many, the current offerings (chatbots in search bars, auto-summarized emails, generative wallpapers) feel like conveniences rather than necessities. Circana’s findings suggest that when users say they do not need AI, they are often comparing it to past innovations that clearly earned their keep, like mobile internet access or high-quality cameras on phones. Until AI features cross that threshold of obvious utility, resistance will remain a rational stance, especially when risks around data use and security are still being worked out.

That does not mean consumer attitudes are frozen. History suggests that perceptions can change quickly when a technology delivers unmistakable value: navigation apps that prevent people from getting lost, real-time translation that makes travel easier, or accessibility features that open up devices to people with disabilities. For AI, the path to that kind of acceptance likely runs through focused, trustworthy applications rather than vague promises of “smarter” everything. If companies can pair demonstrably useful capabilities with verifiable privacy protections and clear accountability, the one-third of consumers now rejecting AI outright may eventually revisit their verdict. Until then, the message from the data is straightforward: people are not asking for more AI on their devices. They are asking for better reasons to say yes.

*This article was researched with the help of AI, with human editors creating the final content.