Every week, Dave McCann used to block out 30 minutes before each of his roughly 10 client meetings to get up to speed: reviewing account histories, scanning recent emails, pulling together talking points. That added up to about five hours, well over half a standard workday, spent just getting ready to walk into a room.
So McCann, a Managing Partner at IBM Consulting, built himself an AI agent. He calls it “Digital Dave.” It scans his calendar, cross-references internal data sources, and generates a briefing he describes as “10 things I need to know” for every upcoming session. Those 30-minute prep sessions? Gone. McCann shared the details in an interview published by Business Insider, calling the tool one of the most practical changes he has made to his workflow in years.
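IBM has not published how Digital Dave is built, but the workflow McCann describes, scan the calendar, pull related records, distill a short briefing, maps to a simple pipeline. A purely illustrative sketch in Python, where every function, client name, and data source is a hypothetical stand-in, not anything from IBM's actual system:

```python
from dataclasses import dataclass

@dataclass
class Meeting:
    client: str
    topic: str

# Hypothetical stand-in for a calendar connector.
def upcoming_meetings():
    return [Meeting("Acme Corp", "Q3 renewal"), Meeting("Globex", "Pilot review")]

# Hypothetical stand-in for CRM/email/internal data lookups.
def client_history(client):
    return [f"{client}: last meeting notes", f"{client}: open action items"]

def briefing(meeting, max_items=10):
    """Assemble a '10 things I need to know' list for one meeting."""
    items = [f"Topic: {meeting.topic}"] + client_history(meeting.client)
    return items[:max_items]

for m in upcoming_meetings():
    print(f"--- {m.client} ---")
    for line in briefing(m):
        print(" *", line)
```

In a real agent, the lookup functions would query live systems and a language model would rank and summarize the results; the structure of the loop, gather, condense, cap at a fixed item count, is the part that generalizes.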
The story is a small one, a single executive and a single calendar tool, but it sits inside a much larger experiment happening across IBM. And it raises a question worth taking seriously: can a self-built AI assistant actually reshape how a knowledge worker spends a week?
Inside IBM’s push for employee-built AI
McCann’s agent did not emerge in a vacuum. IBM has been actively encouraging employees to build their own AI tools rather than waiting for centralized IT teams to hand them finished products.
The company launched IBM Consulting Advantage in January 2024, a watsonx-powered platform designed to give consultants a library of AI assistants for tasks ranging from proposal drafting to project delivery. The platform provides guardrails, governance, and shared infrastructure, essentially a launchpad for tools like Digital Dave, even if McCann’s particular agent appears to be a personal creation rather than a standard-issue product.
IBM also runs an internal Consulting Assistant Contest, listed on its TechXchange Community site, which invites employees to compete by building their own AI assistants. The contest signals that bottom-up experimentation is not just tolerated but incentivized. Details about participation rates and winning entries have not been made public, so it is hard to gauge how widespread the tinkering actually is.
At a company-wide level, IBM says the results are already significant. A Bloomberg-published feature produced in partnership with IBM reported that internal AI agents collectively saved employees more than 3.9 million hours in 2024. The same piece highlighted tools called AskIT and AskHR, which handle IT support tickets and human-resources queries, and noted that at least one was built in a 100-day sprint.
What the numbers actually tell us
McCann’s math is straightforward: 10 meetings a week, 30 minutes of prep eliminated per meeting, five hours reclaimed. It is specific enough to be challenged by anyone with access to his calendar, which makes it more useful than a vague claim about “increased efficiency.”
The 3.9 million hours figure is harder to evaluate. It appeared in sponsored content, not in an earnings filing or an independent audit. IBM has not publicly disclosed how it defines a “saved” hour, whether the total accounts for time spent building and maintaining these tools, or whether it reflects gross time reclaimed versus net gains after implementation costs. For context, IBM employed roughly 270,000 people at the end of 2024. If the figure is spread evenly, it works out to about 14 hours saved per employee over the entire year, or roughly 15 to 20 minutes per person per week. That is meaningful at scale but modest at the individual level, and it underscores how much of the headline impact depends on aggregation.
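The per-employee breakdown above is easy to reproduce. A quick back-of-the-envelope check using the figures reported in the sources:

```python
total_hours_saved = 3_900_000   # IBM's reported 2024 figure (sponsored content)
employees = 270_000             # approximate IBM headcount at end of 2024
weeks_per_year = 52

hours_per_employee = total_hours_saved / employees
minutes_per_week = hours_per_employee / weeks_per_year * 60

print(f"{hours_per_employee:.1f} hours per employee per year")   # ~14.4
print(f"{minutes_per_week:.1f} minutes per employee per week")   # ~16.7

# McCann's individual claim, for contrast: 10 meetings x 30 minutes of prep.
mccann_hours_per_week = 10 * 0.5
print(f"McCann: {mccann_hours_per_week} hours per week")         # 5.0
```

The gap between roughly 17 minutes per week on average and McCann's five hours is the point: headline totals built by aggregation can coexist with very uneven individual impact.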
Neither figure has been independently verified. McCann’s claim is first-person testimony from a senior executive whose division sells AI consulting services to clients, which gives him a professional incentive to highlight wins. That does not make his account unreliable, but it does mean readers are working with a single self-reported data point, not a controlled study.
The gaps worth noting
Several questions remain unanswered in the public reporting as of May 2026.
First, it is unclear what Digital Dave actually runs on. McCann may have built it on IBM’s watsonx infrastructure through the Consulting Advantage platform, or he may have stitched it together from off-the-shelf APIs and internal data connectors. The distinction matters: a tool built within a governed enterprise framework is easier to scale, audit, and support than a bespoke script that lives or dies with its creator.
Second, McCann’s metric is time saved, not outcomes improved. He has not publicly shared whether meeting quality went up, whether clients noticed a difference, or whether the briefings occasionally miss critical context. If Digital Dave surfaces outdated information or overlooks a recent development in a client relationship, some of that saved prep time could be eaten up by confusion during the meeting itself.
Third, there is no public data on adoption rates. The gap between “one senior partner built a useful tool” and “this approach works across a 270,000-person company” is enormous. IBM’s contest and platform suggest organizational intent, but intent is not the same as proven, widespread results. Without usage figures, it is impossible to know whether most employees are building and relying on assistants or whether only a small, technically skilled subset is experimenting.
Finally, the story exists in a competitive vacuum. Microsoft’s Copilot, Google’s Gemini for Workspace, and a growing roster of startup tools are all targeting the same meeting-prep and knowledge-work pain points. How Digital Dave compares to those alternatives, or whether IBM employees also use third-party tools alongside internal ones, is not addressed in any of the available sources.
What to make of it
The most honest reading of McCann’s story is that it is an illustrative case, not a proof point. One experienced executive identified a repetitive task, had the skills and infrastructure to automate it, and reports that the result has meaningfully changed his week. That is worth paying attention to, especially because the specificity of his claim (10 meetings, 30 minutes each, five hours total) makes it concrete in a landscape flooded with vague AI promises.
But individual anecdotes, even good ones, do not tell us whether AI agents are transforming knowledge work at scale. For that, we need independent measurement: transparent methodologies, controlled comparisons, and outcome data that goes beyond time saved to include quality, accuracy, and business results. IBM’s internal numbers hint at something large, but until the methodology behind them is open to scrutiny, they remain corporate claims rather than verified statistics.
For anyone considering building a similar tool, McCann’s experience offers a useful starting principle: pick a task that is repetitive, time-consuming, and low-ambiguity, then measure what changes. The five hours he says he recovered each week did not come from a moonshot project. They came from eliminating a routine he had repeated hundreds of times. That is where most practical AI gains are likely to start, not with grand transformation, but with one tedious workflow that finally gets automated.
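That "measure what changes" principle can be made concrete: estimate the weekly cost of a routine before and after automating it, and track the delta. A minimal sketch, with all numbers illustrative rather than taken from McCann's actual results:

```python
def weekly_minutes(occurrences_per_week, minutes_each):
    """Weekly time cost of a repeated routine, in minutes."""
    return occurrences_per_week * minutes_each

# Illustrative estimates for one routine: manual prep vs. reviewing
# an agent-generated briefing. The 3-minute review figure is assumed.
before = weekly_minutes(10, 30)   # 10 meeting preps x 30 min each
after = weekly_minutes(10, 3)     # 10 quick reviews x 3 min each
saved_hours = (before - after) / 60

print(f"Reclaimed: {saved_hours:.1f} hours/week")  # 4.5 with these estimates
```

Even a crude estimate like this forces the comparison McCann's account implies but public reporting omits: automated workflows rarely drop to zero effort, so the honest metric is net time, not the eliminated task alone.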
*This article was researched with the help of AI, with human editors creating the final content.