Morning Overview

Meta is building an AI Zuckerberg to handle some meetings

Mark Zuckerberg may soon have a stand-in at meetings, and it won’t be a fellow executive. Meta is developing a photorealistic, real-time 3D AI version of its CEO, trained on his voice and mannerisms, that could interact with employees and represent him during some internal sessions, according to people familiar with the matter who spoke to the Financial Times. (The FT’s reporting, based on unnamed sources with direct knowledge of the project, has not been officially confirmed or denied by Meta.)

The system is designed to replicate Zuckerberg’s speech patterns and behavioral cues closely enough to field questions, relay company priorities, and hold conversations that feel, at least on the surface, like talking to the real person.

For Meta’s workforce of roughly 70,000 people, the initiative distills a question the entire tech industry is circling: what happens when the boss doesn’t just use AI but becomes it?

The building blocks are already in place

This is not a moonshot disconnected from Meta’s current trajectory. Zuckerberg has spent the past year describing a company-wide shift toward what he calls “AI-native tooling”: embedding AI into internal productivity across coding, data analysis, project management, and communication, as he outlined in remarks reported by Axios in January 2026. In that framework, an AI avatar that can absorb routine leadership communication fits naturally. If the system can handle status updates, reinforce strategic themes, or answer frequently asked questions, it frees the actual Zuckerberg to focus on decisions only he can make.

Meta has also been acquiring the specific technology this kind of project demands. In mid-2025, the company bought PlayAI, a startup specializing in high-fidelity voice synthesis, in a deal first reported by TechCrunch. An internal memo tied to the acquisition reportedly outlined how PlayAI’s capabilities would support AI characters, Meta’s AI assistant, wearable devices, and audio content creation. The technology’s strengths, including low-latency voice streaming and the ability to capture individual vocal quirks, are directly relevant to building a convincing avatar of a specific person.

Then there is the money. Meta has guided for 2026 capital expenditures between $115 billion and $135 billion, a range the company disclosed during its most recent earnings call, with much of that directed at data center expansion and AI infrastructure. Running a photorealistic 3D avatar in real time while processing natural conversation and generating human-level responses requires enormous compute power. That level of spending signals Meta is not treating AI integration as a side experiment. The budget is scaled for enterprise-grade deployment, making a resource-intensive avatar project both technically and financially viable.

The gap between a video message and a virtual CEO

One of the biggest unknowns is what this avatar would actually do in practice. The range of possibilities is wide, and the technical challenges vary dramatically depending on where Meta lands.

At one end of the spectrum, the AI Zuckerberg could function as a more lifelike version of a pre-recorded video message: polished, tightly scripted, and reviewed before delivery. That is a meaningful upgrade over a standard all-hands recording, but it is not fundamentally new. At the other end, the avatar could engage in live, unscripted dialogue with individual employees or small teams, interpreting context, handling unexpected questions, and adapting in real time.

The second scenario is far more ambitious and far more fraught. If an AI wearing the CEO’s face tells an employee something about company strategy, compensation, or organizational changes, does that carry the weight of a direct statement from Zuckerberg? Who decides what the avatar is allowed to say? How are its training data and response boundaries defined? None of these governance questions have been addressed in any public reporting.

That gap matters because Meta has articulated principles around AI transparency and responsible deployment for its consumer-facing products. Applying similar standards internally would likely require clear labeling of AI-generated interactions, conversation logging, and a way for employees to escalate from the avatar to a human leader.

How employees might actually experience it

Internal sentiment about the project has not been reported in detail, and Meta has not discussed any feedback mechanisms or opt-out options. The reaction inside the company could cut both ways.

“The real test isn’t whether the avatar looks like Zuckerberg. It’s whether employees trust what it says,” said Ethan Mollick, a Wharton professor who studies AI adoption in organizations, in a recent interview about AI-driven management tools. That trust question sits at the center of the project’s viability.

Some employees might welcome an always-available, AI-mediated channel to leadership. Getting a quick answer on strategic direction or company policy without waiting for the next all-hands could genuinely improve day-to-day work. For a company Meta’s size, where most employees will never have a one-on-one with the CEO, even a simulated version could feel like increased access.

Others could read it differently. Sending a digital copy to a meeting instead of showing up, even when the original is busy with higher-priority work, risks signaling that routine interactions with employees are not worth a leader’s actual time. The line between efficient delegation and perceived detachment is thin, and it depends heavily on execution.

There is also a broader workforce question that extends beyond Meta. If a CEO can delegate meeting presence to AI, the same logic applies down the org chart. Middle managers, team leads, and individual contributors could eventually face similar tools, raising questions about which human interactions companies consider essential and which they view as automatable.

Where this fits in the wider AI landscape

Meta is not the only company exploring AI-generated human likenesses. Nvidia has invested heavily in digital twin technology through its Omniverse platform, and startups like HeyGen and Synthesia have built businesses around AI-generated video avatars for corporate communications. But most of those applications are outward-facing: marketing videos, customer service, training content.

What makes the reported Meta project distinct is its internal, leadership-specific application. Building an AI replica of the CEO to interact with the company’s own employees is a step beyond using synthetic media for external audiences. It raises the stakes on accuracy, trust, and governance because the people on the receiving end are not customers who can walk away but employees whose livelihoods depend on the information they receive from leadership.

Signals that will clarify the avatar’s trajectory

The strongest evidence for this project remains the Financial Times report, which relies on people with direct knowledge of the initiative. That level of attribution has preceded confirmed product launches and strategic shifts at major tech companies. It is credible, but it is not an official announcement.

Zuckerberg’s public comments about AI-native tooling and the PlayAI acquisition serve as supporting context. They show Meta has both the stated intent and the technical capability to build something like this. But intent does not guarantee delivery, and capability does not confirm deployment.

The signals to watch in the coming months: whether Meta acknowledges the project publicly, whether job postings or patent filings point to avatar-specific work, and whether employees begin describing interactions with the system. Until then, the most grounded reading is that Meta is actively developing a sophisticated AI representation of its CEO as part of a broader bet that AI should handle more of the work humans currently do, including, apparently, some of the work done by the most powerful human in the building.

*This article was researched with the help of AI, with human editors creating the final content.