
Artificial intelligence is slipping out of phones and laptops and into a new generation of hardware, from smart glasses to “AI PCs” and voice-driven assistants that feel less like apps and more like colleagues. As these devices arrive, the real question for users is not whether the models are impressive, but whether the services they already rely on will still be front and center or quietly sidelined.
The answer depends on how quickly app makers adapt to on-device AI, new interaction patterns, and a shifting power balance between platforms and developers. I see three big forces at work: the rise of ambient AI hardware, the redesign of apps around agents and voice, and a scramble among platforms to control the new operating system layer that sits between users and their favorite software.
AI hardware is becoming the new default, not a niche
AI is no longer confined to cloud servers and smartphone chips; it is being woven into the physical fabric of everyday devices. Analysts expect the Internet of Things to evolve from simple sensor networks into something closer to an autonomous nervous system, with connected objects that can sense, decide, and act with minimal human input, a shift one forecast describes as the Internet of Things becoming an "AIoT" built on pervasive sensor intelligence. That backdrop matters because AI devices are not a single product category; they are a layer that will sit inside cars, earbuds, laptops, and home appliances, each with its own rules for which apps get access.
Chipmakers are racing to supply the silicon for this shift, and their roadmaps hint at how deeply AI will be baked into future app experiences. At CES, Intel introduced its Core Ultra Series 3 processors as the first platform built on Intel 18A, explicitly pitched as the foundation for PCs that can run complex AI workloads locally rather than offloading everything to the cloud. Rival efforts to enable "AI PC" experiences everywhere point in the same direction: toward billions of users expecting their laptops to summarize documents, generate media, and run agents without spinning up a browser tab. For app developers, that means the "device" is no longer a neutral container; it is an opinionated AI environment that can either surface or sideline their services.
Apps are being rebuilt around agents, not taps
On these new devices, the core interaction is shifting from tapping icons to delegating tasks to autonomous agents. Gartner predicts that 40% of enterprise applications will use task-specific AI agents by 2026, up from less than 5% in 2025, a jump that effectively turns line-of-business software into a swarm of bots acting on behalf of employees. Similar expectations are emerging in legal technology, where reports predict that eDiscovery teams will see the game change as AI agents sift through vast digital records while still playing by the rules of compliance and procedure.
Consumer apps are on the same trajectory, just with different branding. Analyses of mobile development trends argue that apps will shift from reactive to predictive behavior, moving away from waiting for a user to tap a button and toward anticipating needs based on context and history. In that model, a travel app does not simply show flight options; it quietly monitors prices, predicts disruptions, and proposes rebookings before the user asks. The apps that survive on AI-first devices will be the ones that expose their capabilities as agents that can be invoked, combined, and orchestrated, not just as screens that can be opened.
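In practice, "exposing capabilities as agents" usually means the function-calling pattern that assistant platforms already use: the app publishes a machine-readable description of an action, the model decides when to invoke it, and the app runs a handler. A minimal sketch in Python, where every name (the rebook_flight tool, the schema shape, the dispatcher) is hypothetical rather than any specific platform's API:

```python
# Hypothetical sketch: a travel app exposing one capability as an invocable
# "tool" in the JSON-schema style common to assistant function calling.
# All names here are illustrative, not a real platform API.

REBOOK_FLIGHT_TOOL = {
    "name": "rebook_flight",
    "description": "Rebook a disrupted flight to the best alternative.",
    "parameters": {
        "type": "object",
        "properties": {
            "booking_id": {"type": "string"},
            "max_price_increase": {"type": "number"},
        },
        "required": ["booking_id"],
    },
}

def rebook_flight(booking_id: str, max_price_increase: float = 0.0) -> dict:
    """Handler the app runs when the assistant invokes the tool."""
    # Real logic would query flight inventory; this stub just caps the
    # price delta at whatever the user authorized.
    return {"booking_id": booking_id, "status": "rebooked",
            "price_delta": min(25.0, max_price_increase)}

# Tiny dispatcher: the assistant emits a tool call, the app routes it.
HANDLERS = {"rebook_flight": rebook_flight}

def dispatch(tool_call: dict) -> dict:
    handler = HANDLERS[tool_call["name"]]
    return handler(**tool_call["arguments"])

result = dispatch({"name": "rebook_flight",
                   "arguments": {"booking_id": "AB123",
                                 "max_price_increase": 50.0}})
print(result)
```

The point of the pattern is that the schema, not a screen, is the app's public surface: an assistant can invoke, combine, and orchestrate these tools without the user ever seeing the app's own interface.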
Voice and conversational AI are becoming the new app launcher
As interfaces become more conversational, voice is emerging as a primary way to summon those agents and, by extension, the apps behind them. One influential commerce forecast argues that voice will finally pull agentic commerce onto the mobile phone, turning complex, desktop-only "go do this for me" processes into spoken requests that a smart assistant can execute end to end. That is a very different funnel from today's app store model, where users consciously choose a retailer's app, install it, and then navigate through menus to complete a purchase.
Behind the scenes, conversational systems are being industrialized across sectors, which will further normalize this behavior. Predictions for customer and employee experience suggest that by 2026, conversational AI will handle a growing share of routine interactions in banking, retail, and support, with examples like Emirates NBD in the Middle East introducing AI-driven services that blend natural language with complex problem solving. For app makers, the implication is stark: if users increasingly say “book me a flight” or “dispute this charge” to a device, the assistant will decide which airline, bank, or fintech app gets the business unless those brands have negotiated or engineered their way into the conversational flow.
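The routing problem described above is worth making concrete: once a request is spoken, some matching logic inside the assistant, not the user, decides which provider's skill runs. A toy sketch, assuming a naive keyword matcher and invented provider names (AcmeAir, FinBank); real assistants use learned intent models, but the gatekeeping structure is the same:

```python
# Hypothetical sketch of assistant-side skill routing. The providers,
# intents, and keyword-overlap matcher are all illustrative assumptions.

SKILLS = [
    {"provider": "AcmeAir", "intent": "book_flight",
     "keywords": {"flight", "fly", "book"}},
    {"provider": "FinBank", "intent": "dispute_charge",
     "keywords": {"dispute", "charge", "refund"}},
]

def route(utterance: str):
    """Pick the registered skill whose keywords best overlap the request."""
    words = set(utterance.lower().split())
    best = max(SKILLS, key=lambda s: len(s["keywords"] & words))
    if not best["keywords"] & words:
        return None  # no provider matched; the assistant falls back
    return best

skill = route("book me a flight to Berlin")
print(skill["provider"], skill["intent"])
```

Note what is absent from this picture: the user never chose AcmeAir. Whoever controls the routing table controls the funnel, which is exactly why brands will need to negotiate or engineer their way into it.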
On-device AI is rewriting the rules for mobile developers
Mobile platforms are not standing still; they are baking generative models directly into operating systems, which changes both what apps can do and how they are discovered. One analysis of Apple's on-device AI notes that over 80% of enterprises will have used generative AI APIs or deployed generative tools in production, and treats Apple's shift to local processing as a foundational change for iOS developers. If Siri or system-level models can summarize emails, rewrite messages, and generate images without opening a third-party app, then productivity and creative tools will need to offer deeper, more specialized value to justify a separate icon on the home screen.
Developers are already responding by infusing AI into the core of their products rather than treating it as a bolt-on feature. One developer-focused reality check argues that its projection that 73% of apps will need AI by 2026 is not just speculation, pointing to AI integration that has grown faster than any previous wave of mobile tooling. Complementary research on how AI will transform mobile app development suggests that apps will become connected across industries, with shared data and models enabling features like real-time anomaly detection in finance or predictive maintenance in logistics. In practice, that means a banking app might quietly collaborate with a budgeting tool and a travel service through system-level AI, even if the user never explicitly opens all three.
Platforms are fighting to own the AI “operating system” layer
All of this is unfolding against a strategic battle over who controls the AI layer that sits between users and their software. A widely discussed analysis of AI platforms notes that major players like OpenAI and Amazon are experimenting with assistants that behave like operating systems, routing user requests to different services and apps while inserting their own layers of monetization and control, effectively closing the book on the old app-centric model. Leaving aside the nontrivial problem that today's AI agents can be fairly unreliable, the direction of travel is clear: the assistant, not the app grid, is becoming the first thing users see.
Design trends are already adapting to this reality, with interface experts highlighting adaptive layouts, multimodal input, and AI-generated personalization as core to engagement. At the same time, the underlying infrastructure is moving toward a world where, by 2026, AI is treated as a colleague embedded in workflows across the Internet of Things, traditional incumbents like Emirates NBD, and the emerging class of AI PCs. In that environment, I expect your favorite apps to survive only if they can plug into assistants as callable skills, expose their data and actions to system-level models, and embrace a future where the most important user interface might be a sentence spoken into the air rather than a beautifully crafted home screen.