May 23, 2025 – Apple is reportedly accelerating its efforts to develop smart glasses, aiming to unveil its first product in the category by the end of 2026, according to a report today from Bloomberg’s Mark Gurman. The move is part of the tech giant’s broader strategy to capture a larger share of the market for AI-enhanced devices.
Engineers at Apple are diligently working to meet the 2026 deadline for the smart glasses’ launch, sources familiar with the matter have revealed. In a strategic shift, the company has put on hold its plans to develop a smartwatch with a built-in camera.
The upcoming product is positioned as an “AI-based foundational wearable device,” competing directly with offerings such as Meta’s Ray-Ban smart glasses and the Android XR glasses prototype Google announced this week.
Internal sources indicate that the smart glasses will be equipped with a camera, a microphone array, and miniature speakers, enabling features such as capturing still photos and videos, real-time translation, turn-by-turn navigation, music playback, and phone calls.

As with competing products from Meta, users will interact with the device through the Siri voice assistant. The glasses will also be able to analyze the wearer’s surroundings and provide feedback on what is in their field of view. However, the initial version of the product will not include augmented reality (AR) capabilities.
Apple is expected to begin working with overseas suppliers by the end of this year to produce prototypes in large quantities, leaving ample time for subsequent mass-production testing, according to people familiar with the plans.
A testing engineer told Bloomberg that the product’s form factor resembles Meta’s Ray-Ban glasses but with superior build quality. Development of the smart glasses is being led by Apple’s Vision Products Group, the team previously responsible for the Vision Pro.
Notably, while Meta’s glasses run on its Llama models and Google’s on Gemini, Apple will rely solely on its in-house AI models.
Internal sources say there is concern within Apple’s R&D team about the depth of the company’s AI technology. The visual intelligence features on iPhones currently rely on support from Google and OpenAI, while both Meta’s glasses and Google’s Android XR glasses are built on their makers’ own AI platforms. To make its hardware more self-reliant, Apple is intensifying its efforts to develop in-house alternatives.
It’s worth noting that Apple recently released a vision-language model called FastVLM, available in 0.5B, 1.5B, and 7B parameter versions and optimized for on-device inference on Apple Silicon. The model’s intended use cases point toward wearables such as the smart glasses now in development.
According to its technical documentation, FastVLM delivers near-real-time responses when processing high-resolution images while maintaining accuracy, with significantly less computational demand than comparable models.
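For a sense of how such a model is exercised in practice, below is a minimal Python sketch of FastVLM image description, adapted from the usage example published on the apple/FastVLM-0.5B Hugging Face model card. The `<image>` placeholder, the `IMAGE_TOKEN_INDEX` sentinel, and the `images=` keyword are specifics of the checkpoint’s bundled remote code rather than standard transformers API, so treat the exact identifiers as assumptions and defer to the model card.

```python
# Minimal FastVLM inference sketch (model-specific details flagged in comments).
import torch
from PIL import Image
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "apple/FastVLM-0.5B"   # assumed Hugging Face checkpoint name
IMAGE_TOKEN_INDEX = -200          # sentinel id expected by the model's remote code

tok = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,       # loads the checkpoint's own (LLaVA-style) code
)

# Render the chat prompt as text so the <image> placeholder can be located.
messages = [{"role": "user", "content": "<image>\nDescribe this image."}]
rendered = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
pre, post = rendered.split("<image>", 1)

# Tokenize the text around the placeholder, then splice in the image sentinel.
pre_ids = tok(pre, return_tensors="pt", add_special_tokens=False).input_ids
post_ids = tok(post, return_tensors="pt", add_special_tokens=False).input_ids
img_tok = torch.tensor([[IMAGE_TOKEN_INDEX]], dtype=pre_ids.dtype)
input_ids = torch.cat([pre_ids, img_tok, post_ids], dim=1).to(model.device)
attention_mask = torch.ones_like(input_ids)

# Preprocess the image with the checkpoint's own vision-tower processor.
image = Image.open("photo.jpg").convert("RGB")
pixels = model.get_vision_tower().image_processor(
    images=image, return_tensors="pt"
)["pixel_values"].to(model.device, dtype=model.dtype)

with torch.no_grad():
    out = model.generate(
        inputs=input_ids,
        attention_mask=attention_mask,
        images=pixels,            # model-specific kwarg, not standard transformers
        max_new_tokens=128,
    )
print(tok.decode(out[0], skip_special_tokens=True))
```

The sentinel id is spliced into the prompt and replaced during the forward pass with features from the vision encoder, the standard LLaVA-style pattern that FastVLM builds on; the model’s speed claims come from its hybrid vision encoder producing far fewer visual tokens for high-resolution inputs.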