Sam Altman’s eyeball-scanning startup has not only tried to convince the world to stare into a metallic orb; it has also pushed employees toward behavior that critics say looks more like devotion than standard corporate loyalty. The company’s internal culture, as described in recent reporting, blurs the line between mission-driven enthusiasm and something closer to a belief system, with staff encouraged to treat the orb as a world-changing object rather than a controversial biometric device.
I see that tension as the core of the story: a firm that says it wants to prove who is human in an AI-saturated internet, while cultivating an internal environment where questioning the mission can feel almost heretical. That dynamic matters far beyond one startup, because it shows how tech companies building powerful identity systems can normalize extreme asks of workers long before regulators or the public have caught up.
The orb, the mission, and the leap of faith
The eyeball-scanning project at the center of this controversy is built around a polished metal sphere that captures high-resolution images of a person’s iris and converts them into a unique identifier. The company pitches this as a way to create a global proof-of-personhood system that can distinguish humans from bots, a goal it frames as essential in an era of generative AI and synthetic accounts. Employees are told they are working on infrastructure for a future internet, not just another app or token.
That framing turns a hardware gadget into a kind of totem, and internal messaging has reportedly encouraged staff to treat the orb as the centerpiece of a historic mission rather than a product that still faces basic questions about privacy, consent, and governance. Critics who have spoken about the culture describe an expectation that workers embrace the narrative that scanning millions of irises is an unambiguous social good, even as outside observers warn that the same biometric database could be misused or repurposed in ways the company cannot fully control.
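Stripped of the rhetoric, the proof-of-personhood pitch amounts to a deduplication problem: derive a stable identifier from a biometric capture and reject anyone who has already enrolled. The minimal Python sketch below illustrates only that abstract idea; the function names, the plain hash, and the in-memory registry are hypothetical stand-ins for illustration, not the company’s actual pipeline, which is reported to involve specialized iris codes and far stronger protections.

```python
import hashlib

# Hypothetical in-memory registry of enrolled identifiers; the real
# system's storage and matching pipeline are not described in the reporting.
registry: set[str] = set()

def derive_identifier(iris_template: bytes) -> str:
    """Derive a one-way identifier from a biometric template.

    Illustrative stand-in only: a production system would use a fuzzy
    iris code and much stronger protections, not a plain SHA-256 hash.
    """
    return hashlib.sha256(iris_template).hexdigest()

def enroll(iris_template: bytes) -> bool:
    """Return True if this template has not been enrolled before."""
    identifier = derive_identifier(iris_template)
    if identifier in registry:
        return False  # duplicate: this iris already maps to an identifier
    registry.add(identifier)
    return True

# One person can enroll once; a second attempt with the same template is rejected.
assert enroll(b"example iris template") is True
assert enroll(b"example iris template") is False
```

The point of the sketch is narrow: the entire system rests on turning a body part into a permanent, deduplicable key, which is why the consent and governance questions around the orb carry so much weight.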
Inside the “cult-like” employee expectations
Accounts from people familiar with the company’s operations describe a workplace where commitment to the orb project is measured not only in hours worked but in how fully employees internalize the founding story. Staff are reportedly encouraged to see themselves as part of a small vanguard that understands the stakes of digital identity more clearly than governments or civil society, a mindset that can make dissent feel like betrayal rather than healthy skepticism. When a company’s mission is framed as saving the internet from collapse, ordinary questions about risk and oversight can be recast as a lack of belief.
That is where the “cult-like” label comes in. Reporting on internal culture has highlighted how some employees felt pressure to participate in promotional events, defend the orb in personal conversations, and treat criticism as something to be neutralized rather than engaged. One detailed account described how the company’s rhetoric about building a fairer global financial system and protecting humanity from AI-driven fraud was paired with an expectation that staff would evangelize the orb’s benefits in almost missionary terms, a pattern that outside observers compared to a tech-adjacent cult rather than a conventional startup.
How the eyeball scan became a loyalty test
The most striking element of these accounts is the way a simple biometric enrollment process reportedly morphed into an informal test of loyalty inside the company. Employees were not just building the orb and its software; they were also expected to submit to the same iris scan themselves, sometimes in highly public settings, as a way of demonstrating trust in the system. Declining to be scanned, or even hesitating, could mark someone as insufficiently aligned with the mission in the eyes of colleagues who had already embraced the ritual.
That dynamic turned a privacy-sensitive act into a social signal. Workers who might have had legitimate concerns about how their biometric data would be stored or used were left to weigh those doubts against the risk of being seen as disloyal. In one reported case, internal advocates framed the scan as a symbolic step that showed an employee was “all in” on the project, a framing that critics argue collapses the distinction between personal autonomy and corporate allegiance in ways that are deeply unhealthy for any workplace, let alone one handling sensitive identity data.
Public backlash and the privacy alarm
Outside the company’s walls, the idea of scanning irises at scale has triggered a wave of criticism from privacy advocates, regulators, and ordinary users who are uneasy about handing over their most intimate biometric markers. Concerns range from the security of the underlying database to the possibility that a system built to prove humanity could be repurposed for surveillance or exclusion. Even people who accept the need for better identity tools in a world of AI-generated content question whether a single, privately controlled orb network should be trusted with that role.
Reporting on the project’s rollout has documented how early deployments in markets with weaker regulatory protections raised particular alarm, with critics arguing that communities with fewer resources were being used as test beds for a system whose long-term implications they could not fully evaluate. One investigation into the company’s global expansion described how the promise of future financial benefits was used to entice people into scans, even as experts warned that the tradeoff between a one-time reward and permanent biometric enrollment was poorly understood. Those same experts have pointed out that if employees inside the company feel pressured to comply, the power imbalance for users in the field is likely even more stark.
Rebranding, new orbs, and the push for legitimacy
In response to mounting scrutiny, the company has tried to reposition itself as a more mature identity platform, including a rebrand that shifted its public-facing name and a new generation of orb hardware designed to look sleeker and more approachable. The updated device is marketed as a way to “prove your humanity” in a digital ecosystem increasingly flooded with AI-generated content, with the company arguing that a robust proof-of-personhood layer is now a prerequisite for everything from social media to financial services. That pitch is meant to reassure both regulators and potential partners that the orb is not a gimmick but a serious piece of infrastructure.
Coverage of the rebrand has emphasized how the company is leaning into the language of safety and trust, highlighting technical measures that are supposed to keep raw iris images separate from the identifiers used in applications. A detailed report on the unveiling of the new orb and the shift from the original name to a broader identity-focused brand described how executives framed the change as a step toward mainstream adoption, positioning the platform as a neutral protocol rather than a speculative crypto project. In that account, the company’s leaders showcased the redesigned hardware and stressed that the system could prove a person’s humanity without exposing sensitive biometric data, a claim that privacy advocates continue to interrogate.
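The separation the company describes, keeping raw iris images apart from the identifiers applications see, can be illustrated with a deliberately simplified sketch. Everything here is an assumption made for illustration: the plain hash, the HMAC-derived per-app identifier, and the function names are hypothetical, and the real system is said to rely on more sophisticated cryptography that the coverage does not detail.

```python
import hashlib
import hmac

def iris_commitment(raw_iris_image: bytes) -> bytes:
    """Derive a fixed commitment from the raw capture.

    In this sketch the raw image is used only here and never stored;
    only the derived commitment travels onward.
    """
    return hashlib.sha256(raw_iris_image).digest()

def app_scoped_id(commitment: bytes, app_name: str) -> str:
    """Derive a per-application identifier so services cannot cross-link users."""
    return hmac.new(commitment, app_name.encode(), hashlib.sha256).hexdigest()

# Two different applications receive unlinkable identifiers for the same person.
commitment = iris_commitment(b"raw capture bytes")
forum_id = app_scoped_id(commitment, "example-forum")
wallet_id = app_scoped_id(commitment, "example-wallet")
assert forum_id != wallet_id
```

The design intent the sketch captures is that each application only ever learns an identifier scoped to itself, so two services cannot trivially link the same person across contexts; whether the deployed system delivers that guarantee is exactly what privacy advocates keep pressing on.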
Corporate partners and the normalization of the orb
Even as critics raise alarms, major online platforms are exploring ways to plug the orb’s identity layer into their own ecosystems, a development that could rapidly normalize the technology. One prominent example surfaced in a discussion about how a large social forum might integrate the company’s proof-of-personhood tools to distinguish human posters from bots, potentially tying access or reputation to whether a user has submitted to an iris scan. That possibility has sparked intense debate among the forum’s community, with some users welcoming stronger anti-spam tools and others warning that it would create a two-tier system where only scanned accounts are fully trusted.
The conversation around that potential partnership, documented in a widely shared thread, shows how quickly a controversial biometric system can move from fringe experiment to infrastructure if big platforms decide it solves their moderation and authenticity problems. In that thread, participants dissected reports that the forum’s leadership was in talks to embrace Sam Altman’s identity tools, weighing the benefits of better bot detection against the risk of entrenching a private company as the arbiter of who counts as a real person online. For employees inside the orb startup, such deals are framed as validation of the mission, which can further intensify the expectation that staff treat the project as a historic cause rather than a product still under ethical review.
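To make the two-tier worry concrete, the sketch below shows what a verification-gated moderation rule could look like on a platform’s side. It is purely illustrative: the forum has announced no such policy, and the field names and thresholds are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    verified_human: bool  # whether the account completed a biometric check

def posting_limits(account: Account) -> dict:
    """Illustrative two-tier moderation rule keyed to verification status.

    Unverified users are not banned outright, but they operate with
    reduced trust, which is the stratification critics describe.
    """
    if account.verified_human:
        return {"posts_per_hour": 100, "can_start_threads": True}
    return {"posts_per_hour": 5, "can_start_threads": False}

print(posting_limits(Account("scanned_user", True)))
print(posting_limits(Account("unscanned_user", False)))
```

Even in this toy form, the rule shows why the debate is heated: once trust is keyed to a biometric check, declining the scan stops being a neutral choice.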
Employee pressure meets global expansion
The same cultural patterns that shape life inside the company also appear in how it approaches rapid global growth. Workers tasked with expanding orb deployments into new cities and countries are reportedly given aggressive targets for sign-ups, along with messaging that frames each new scan as a step toward a fairer, more inclusive financial system. That combination of numerical pressure and moral language can create a powerful incentive to push the boundaries of what is considered acceptable outreach, especially in regions where regulatory oversight is limited.
Investigations into the company’s field operations have described how contractors and local partners were encouraged to prioritize enrollment numbers, sometimes using marketing tactics that critics say downplayed the permanence of biometric capture. For employees who internalize the narrative that the orb is a tool for global equity, those tactics can feel justified, even as outside observers argue that they exploit information asymmetries. The result is a feedback loop in which internal culture, growth targets, and the company’s self-image as a world-changing project all reinforce one another, making it harder for staff to raise concerns without being seen as obstacles to progress.
Leadership, narrative control, and the cost of dissent
At the center of this ecosystem is Sam Altman, whose profile as a leading figure in artificial intelligence gives the orb project a level of attention and deference that most startups could never command. His involvement allows the company to present itself as part of a broader effort to manage the societal impact of AI, linking proof-of-personhood to debates about deepfakes, automated propaganda, and the future of work. Inside the company, that association reportedly amplifies the sense that employees are participating in a historic endeavor, one that will be judged not just by investors but by future generations.
Yet the same charismatic narrative that attracts talent can also make dissent more costly. Reporting on internal dynamics has described how some staff who raised questions about privacy, governance, or the pace of expansion felt marginalized or sidelined, as if their skepticism reflected a failure to grasp the urgency of the mission. One detailed feature on the company’s culture and strategy recounted how leadership leaned heavily on the idea that the orb was a necessary response to AI-driven chaos, a framing that left little room for alternative approaches to identity or for slower, more consultative development. In that account, the pressure to align with the official story was so strong that even employees who privately harbored doubts about the eyeball-scanning orb project felt compelled to present unwavering enthusiasm in public settings.
Why the “cult” critique matters for everyone else
It might be tempting to dismiss talk of cult-like behavior as insider gossip, but the stakes are far larger than one company’s HR problems. When a firm that aspires to manage global identity treats internal skepticism as disloyalty and turns biometric enrollment into a symbolic act of faith, it signals how easily powerful technologies can be insulated from meaningful challenge. That insulation matters because the decisions made inside such companies, from data retention policies to partnership terms, will shape how millions of people experience identity and privacy online.
I see the cult critique as a warning sign about governance. If employees closest to the technology feel unable to question its trajectory without risking their standing, then external oversight becomes even more critical, and regulators, civil society groups, and partner platforms need to scrutinize not just the code and the contracts but the culture that produces them. The orb’s future will not be decided only by hardware iterations or rebrands; it will also depend on whether the people building it are allowed to treat it as a tool that can be redesigned, constrained, or even rejected, rather than an object of devotion that must be defended at all costs.