Solos has begun shipping the AirGo V2, a pair of ultra-slim smart glasses that come loaded with native access to four major AI models: Google’s Gemini, OpenAI’s ChatGPT, Anthropic’s Claude, and DeepSeek. Priced from $299 and available in multiple frame colors after a debut at CES 2026, the AirGo V2 represents a bet that buyers want to choose their own AI assistant rather than be locked into a single ecosystem. The device also packs a 16MP camera, electronic image stabilization, and live Full HD streaming, making it one of the most feature-dense smart glasses at its price point.
Four AI Models, One Frame
The defining feature of the AirGo V2 is not any single spec but the breadth of AI integrations baked directly into the hardware. Where most smart glasses ship tied to one voice assistant or one large language model, Solos built the V2 around what it describes as multimodal AI spanning ChatGPT, Gemini, Claude, and DeepSeek. That means a user can switch between models depending on the task, whether that is drafting a quick email reply through ChatGPT or running a visual query through Gemini, all without pulling out a phone. The glasses act as a front end to whichever model best fits the moment, with voice, camera, and touch controls providing the input layer.
This multi-model approach sidesteps a real limitation in the current crop of AI wearables, where most competitors force users into a single provider’s strengths and weaknesses. If one model handles code questions well but stumbles on creative writing, the user is stuck. By offering four options natively, Solos is treating AI models more like apps than operating systems. The practical upshot for buyers is that as these services improve or new ones gain traction, the glasses can evolve without a hardware swap. That flexibility could matter a great deal over a two- or three-year ownership cycle, especially as the pace of model updates shows no sign of slowing and users become more aware of the trade-offs between different AI providers.
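The "models as apps" idea described above maps naturally onto a simple dispatch pattern. The sketch below is purely illustrative: the provider names mirror the four services Solos lists, but the `ask` interface, the `Query` type, and the stub back ends are assumptions for the sake of the example, not anything from the Solos software stack.

```python
# Hypothetical sketch: routing a user query to one of several swappable
# AI back ends, the "models as apps" pattern described above. The stub
# back ends stand in for real provider clients.
from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class Query:
    text: str
    image: Optional[bytes] = None  # optional camera frame for visual queries


def _stub_backend(name: str) -> Callable[[Query], str]:
    # Stand-in for a real provider client (ChatGPT, Gemini, Claude, DeepSeek).
    return lambda q: f"[{name}] {q.text}"


BACKENDS: Dict[str, Callable[[Query], str]] = {
    "chatgpt": _stub_backend("chatgpt"),
    "gemini": _stub_backend("gemini"),
    "claude": _stub_backend("claude"),
    "deepseek": _stub_backend("deepseek"),
}


def ask(model: str, query: Query) -> str:
    """Dispatch the query to the user's currently selected model."""
    try:
        backend = BACKENDS[model]
    except KeyError:
        raise ValueError(f"unknown model: {model}") from None
    return backend(query)
```

The point of the pattern is that adding or retiring a provider is a one-line change to the registry, which is roughly the flexibility argument Solos is making at the hardware level.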
Camera and Streaming Hardware
Beyond AI, the AirGo V2 carries meaningful imaging hardware for a pair of glasses. The built-in 16MP camera supports electronic image stabilization and live Full HD streaming, turning the frames into a hands-free content capture tool. Low-power Wi-Fi handles the data pipeline for streaming and cloud-based AI queries, while Bluetooth Low Energy manages device control and pairing. The combination lets the glasses stay connected for extended periods without draining the battery as quickly as a standard Wi-Fi radio would, which is critical for all-day wear and spontaneous recording.
The camera is not just for photos and video. Solos positions it as the sensory input layer for AI-driven tasks like object identification, real-time text translation, and scene description. Point the glasses at a restaurant menu in another language, for instance, and the onboard AI can read and translate it aloud. Look at a street sign or product label, and the system can identify and describe what it sees. These are not hypothetical use cases; they are listed as core functions in the company’s product documentation, with the camera and microphone array funneling data to the user’s chosen AI model. Whether the real-world accuracy matches the promise will depend on model performance and network conditions, but the hardware pipeline to support those workflows is clearly in place and competitive with more expensive rivals.
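The capture-to-answer workflow described above (frame plus spoken prompt in, spoken answer out) can be sketched as a short pipeline. Every function here is a hypothetical stub with an assumed signature; the real device's capture, speech, and model APIs are not public in this form.

```python
# Hypothetical sketch of the visual-query pipeline the article describes:
# a camera frame and a transcribed voice prompt go to the user's chosen
# vision model, and the reply is read aloud. All stubs are illustrative.

def capture_frame() -> bytes:
    # Stand-in for the glasses' 16MP camera capture.
    return b"\x89PNG-placeholder"


def transcribe_voice() -> str:
    # Stand-in for the microphone array plus speech-to-text step.
    return "Translate this menu to English."


def query_vision_model(prompt: str, image: bytes) -> str:
    # Stand-in for a cloud call to the selected model; in practice this
    # step is network-dependent, as the article notes.
    return f"(model reply to: {prompt!r}, {len(image)} image bytes)"


def speak(text: str) -> None:
    # Stand-in for text-to-speech through the frame speakers.
    print(text)


def visual_query() -> str:
    frame = capture_frame()
    prompt = transcribe_voice()
    answer = query_vision_model(prompt, frame)
    speak(answer)
    return answer
```

Note that every step after capture depends on connectivity and model quality, which is why real-world accuracy is the open question the article flags.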
Accessibility as a Built-In Feature
One detail that separates the AirGo V2 from many competitors is its inclusion of accessibility tooling, specifically an integration with Envision. Envision is a platform designed for people who are blind or have low vision, and its presence on the V2 means the glasses can serve as a visual aid out of the box, describing surroundings, reading text aloud, and identifying objects in real time. This is not an afterthought or a third-party hack; it ships as part of the product’s software stack, using the same camera and AI pipeline that power mainstream features like translation and scene analysis.
That decision reflects a broader shift in how tech companies think about accessibility. Rather than building a separate, specialized device for users with visual impairments, Solos is folding those capabilities into a mainstream consumer product. The result is a single pair of glasses that can function as everyday smart eyewear for one person and as an assistive device for another. For users who need hands-free workflows, whether due to disability, job requirements, or personal preference, the V2 offers a degree of utility that goes well beyond music playback and phone calls. It also positions Solos to appeal to institutions and nonprofits that prioritize inclusive design when choosing assistive technology for clients and staff.
An Open SDK Changes the Calculus
Hardware specs tell only part of the story. Solos also maintains a developer SDK that exposes the AirGo V2’s microphones, camera, sensors, and recording controls, with live video delivered over RTMP and event notifications surfaced via webhooks. The V1 and V2 models communicate with the SDK using Bluetooth Low Energy for control signals and Wi-Fi for data transfer, allowing developers to treat the glasses as both a capture device and a networked sensor hub. This open architecture means third-party teams can build custom applications on top of the hardware, tapping into sensor data or camera feeds for use cases Solos itself may not have anticipated, from industrial inspections to live coaching overlays.
An open SDK is a strategic differentiator in the smart glasses market, where some major players run tightly controlled software environments. By contrast, Solos is inviting outside developers to extend what the hardware can do. That could lead to specialized apps for field technicians who need hands-free repair guides, medical professionals who want to record procedures, or fitness coaches streaming live sessions to clients. The SDK also supports the older AirGo V1 (camera-equipped) and the AirGo 3 and A5 models (audio-only), which means developers building for the platform can target multiple hardware tiers. That breadth gives the Solos ecosystem a wider addressable base than a single-product SDK would, and it signals that the company views its glasses as a platform rather than a one-off gadget, an important distinction for enterprise buyers evaluating long-term support.
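To make the webhook side of that architecture concrete, here is a minimal receiver a developer might run to catch event notifications from the glasses. The endpoint, payload shape, and event names are assumptions for illustration; Solos' actual webhook format would come from its SDK documentation.

```python
# Hypothetical sketch: a minimal HTTP webhook receiver for SDK event
# notifications. The JSON payload shape (e.g. {"type": ...}) is an
# assumption, not Solos' documented format.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class GlassesEventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and decode the JSON body of the incoming event.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # e.g. a hypothetical {"type": "recording_started"} payload
        print("event:", event.get("type", "unknown"))
        # Acknowledge receipt with an empty 204 response.
        self.send_response(204)
        self.end_headers()

    def log_message(self, fmt, *args):
        # Silence the default per-request access log.
        pass


def serve(port: int = 8080) -> None:
    """Block and listen for webhook POSTs on the given port."""
    HTTPServer(("0.0.0.0", port), GlassesEventHandler).serve_forever()
```

A field-service or coaching app could pair a receiver like this with an RTMP ingest server to react to device events while consuming the live video feed.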
Positioning in a Crowded Smart Glasses Market
All of these choices (multi-model AI access, a capable camera, accessibility integration, and an open SDK) feed into how the AirGo V2 is positioned in a crowded smart glasses landscape. At a starting price of $299, the frames undercut some camera-focused competitors while offering a broader feature list than audio-only smart glasses that emphasize calls and music. The ultra-slim design and multiple frame colors aim to make the technology look more like conventional eyewear than a gadget, which matters in everyday social settings where bulky or conspicuous devices can feel out of place. By combining this design with practical features like prescription lens support, Solos is clearly targeting daily wear rather than occasional novelty use.
Still, the company faces substantial challenges. Larger rivals can subsidize hardware with revenue from advertising or cloud services, while Solos must make its business case on the glasses themselves and the ecosystem it builds around them. Success will depend on more than specs; it will hinge on software reliability, battery life in real-world use, and how well the multi-model AI strategy works when networks are congested or unavailable. If the company can keep its integrations current and continue to court developers and accessibility partners, the AirGo V2 could carve out a distinctive niche as a flexible, AI-forward pair of smart glasses. If not, it risks being remembered as an ambitious but crowded-out entry in a category that is still searching for its breakout hit.
This article was researched with the help of AI, with human editors creating the final content.