Today, I want to discuss one of the XR technologies that has been getting more hype lately: smartglasses. I want to start my analysis with what I’ve seen at CES and then go beyond that and discuss what I envision for the future of this technology.
Smartglasses at CES
In the XR area (and beyond), smartglasses were one of the most popular technologies at CES. There were so many smartglasses and technologies related to smartglasses (e.g. waveguide systems) that I couldn’t try them all. For instance, it’s a pity that I’ve not been able to try the Halliday glasses. But still, I managed to get my hands on a few interesting devices.
Lightweight glasses
The first lightweight (AI) smartglasses I tried there were the Rokid Glasses, the latest device by Rokid, one of the leading companies in AR. These are very lightweight smartglasses with speakers, a monochrome green waveguide display for notifications and short texts, and a camera for shooting photos and videos. They have a companion app through which you can manage them, and which also gives you connectivity to an AI agent and other AI services. Here you can see some photos I’ve taken of the product:
I liked the fact that the Rokid glasses were very lightweight and also stylish. This is because they are built in collaboration with eyewear manufacturer BOLON… which is one of the brands of the EssilorLuxottica group.
They also did their job pretty well: I had a conversation with one of the Rokid employees there, with her speaking Chinese and me speaking English (plus a bit of Chinese), and I could see the translation of what she was saying written in green in front of my eyes.
The companion app also let me speak with an AI assistant, and it had some fitness-oriented features that I didn’t try. I did try shooting some photos and videos: they are recorded on the glasses and can then be transferred to the phone through the companion app. The quality of the recorded media is not fabulous, but it’s OK. The display only ever shows green, but the color was quite vivid (the glasses reach around 1,000 nits), so the text was very readable.
I liked the glasses by Rokid: they were lightweight and fashionable, the screen was readable, and they were able to do a few things and do them well. They were not perfect, but good for the current state of the technology.
Fast forward a few days, and the last glasses I tried at CES were the ones by LAWK, another Chinese brand. They featured a display showing notifications and green text, and they let me speak with an AI, have live translation between English and Chinese, and shoot photos and videos. If this sounds familiar, it is because you read the same things in this article, ten lines ago. Long story short, many of the smartglasses were just clones of one another, only with a different design. The LAWK ONE, the device currently available on the company’s website, is much bulkier than the Rokid glasses, though, partly because it is targeted at people doing sports, like cycling.
I can confirm that in this case too, the glasses worked, with the translation service doing its job. I was not a big fan of the look and feel, though, especially because the frames were pretty big.
LAWK also had a new model of glasses that was as lightweight as Rokid’s. I could put them on and feel that they were pretty comfortable, even if not as stylish as the ones designed by BOLON for Rokid.
I could not turn them on, because the guy at the booth told me, “They have the same features as the others you tried, no need to turn them on.” So my review of them will basically be “Trust me, dude.”
I’ve already written an article about my hands-on with the Ray-Ban Meta glasses. To summarize my experience with them: I found them stylish and comfortable, the speakers were loud and clear, the videos and photos they took were good, and I was intrigued by the potential of the AI features. Even without a display, they were still able to deliver a lot. Still, the demo with them was about… photos and videos, live translation, and AI.
There were some private rooms run by Google and Samsung at CES, where they were surely demoing Android XR to close partners. Unfortunately, I’ve never been able to try an Android XR device. But from what I’ve read in the various magazines when they broke the news about this new operating system, the demos of the Android smartglasses prototypes were about live translation, AI, photos, and videos. I guess you’re not surprised.
I also had a quick hands-on with the TCL RayNeo X3 Pro. What impressed me about these glasses is that the display had colors: it was not green, it was RGB, and it could also show 2D icons for the various applications. The visuals were also pretty bright: at 2,500 nits, they should theoretically work outdoors too. The FOV was the usual small one typical of smartglasses, and the processor was a Qualcomm Snapdragon AR1. All of this came in a form factor that was still rather small and comfortable to wear. I’m not surprised that many journalists attending MWC praised this device (e.g. in this article or this other one). The fun thing is that if you read the articles from MWC, you will discover that the main use case the journalists tried was AI translation from Chinese to English. Ah, and there were also photos and videos, of course.
I liked the use of RGB colors on the display because it gave the glasses a prettier interface. And the glasses were not much bigger than the competition, but they were more expensive.
Since most smartglasses were basically the same thing, after a while I even stopped trying them. I read so many translations from Chinese to English that I became fluent in Mandarin.
Ah, I’ve also shot some pictures at the Vuzix booth, let me share them with you:
My vision for lightweight smartglasses
I have mixed feelings about the hype around smartglasses. On one side, I understand it: thanks to advancements in technology, it is now finally possible to have a tech wearable that looks cool on your face. You can wear smartglasses without looking like a weird dork, just like a person wearing glasses. I quite enjoyed trying them and taking selfies with them. I also think they are useful because they will let us connect with AI in our everyday tasks: when we wear these glasses, the AI can see what we are seeing and suggest what to do. This can be very useful, and potentially even life-changing.
On the other side, I think we should be cautious. First of all, the hype for smartglasses stems from the success of the Ray-Ban Meta, but it seems people are not considering how much Ray-Ban contributed to this success story. Ray-Ban Meta are Ray-Ban glasses: they are cool, they are stylish, they make you wear a famous brand. They are distributed by EssilorLuxottica in all its eyewear shops, so you can enter a shop to buy some sunglasses and come out with sunglasses that, beyond being cool, can also shoot photos and videos. Ray-Ban Meta is enhanced eyewear; the other devices I tried are tech products. Yes, technically they are the same, but they are sold in a way that makes a difference. Plus, Ray-Ban Meta is a finished and polished product: many demos of the other smartglasses I tried had issues, while Ray-Ban Meta worked like a charm. The Meta showcase booth was also amazing, with a very cool case for each pair of glasses. That’s why, even though the Meta glasses lacked the screen the other glasses had, I still consider them a better product than the other smartglasses I’ve tried.
Then I think we should talk about use cases. I don’t wear glasses, so to make me put something on my face for hours, you have to give me a strong reason. And live translation from Chinese to English is not one: even though I travel to China a lot, I don’t need it every day of my life. And most people travel far less than I do, so translation services are even less useful to them; no one needs to translate every day. Taking photos and videos from your point of view is nice. Regarding fitness, I don’t think I would ever run with glasses not made for sports. Long story short, I don’t understand why I should buy them. This is the same problem smartwatches had in the beginning, until they found their purpose in areas like healthcare and fitness. I don’t think we have a killer use case for smartglasses yet.
You may counter my argument by asking: if there are no clear use cases, then why are people buying the Ray-Ban Meta? Well, people buying a Ray-Ban Meta are entering an eyewear shop because they already want to buy glasses. They already have a need. And if they can choose between cool Ray-Ban glasses and cool Ray-Ban glasses with extra features, of course they buy the second one. This is different from waking up one day, going to the Rokid website, and buying smartglasses. Hugely different. To do that, I have to feel the need for smartglasses, and currently, I don’t. Sure, we’ll get there, it is a matter of time; I’m just saying that TODAY I’m not as hyped about smartglasses as other people are, because I don’t see why someone who doesn’t need glasses should wear them every day. But I’m definitely positive about the future.
One last thing about use cases and usability: I think one big issue with these glasses is that they are not programmable. Apart from a few, like the Brilliant Labs Frame, most of these glasses just work with their companion app and deliver the features implemented by the manufacturer (which means translation between Chinese and English…). I wonder when these glasses, and in particular the Ray-Ban Meta, will allow developers to create applications for them. This would be good for developers, who would gain a new source of revenue in a growing market, and good for manufacturers, because developers could envision new use cases, possibly not related to translation. It could be a real boost for the ecosystem. That’s why I was pretty intrigued by the idea behind AugmentOS: an SDK that lets you develop your application once and run it on different smartglasses.
XREAL, Lenovo, and the virtual screens
Beyond the AI smartglasses, there is another category of smartglasses that is quite popular: the ones delivering one or more virtual screens to the user. At CES I tried a few devices of this kind, one being the glasses from Lenovo, which were connected to a gaming console…
…and another being the XREAL One Pro, which can be connected both to your phone and to your PC. XREAL was one of the busiest booths in the XR area, and for a reason: over the years, they have established themselves as one of the best brands when it comes to stylish AR glasses and smartglasses.
The XREAL One Pro is a very interesting device: it is quite lightweight (even if not as much as the AI smartglasses) and can show you a big virtual version of your laptop screen. Through the buttons on the frames, you can configure a few options: for instance, you can decide whether you want this virtual screen always attached to your eyes (0 DOF) or fixed in a position in front of you (3 DOF). You can also decide whether to keep the classic aspect ratio of your display or have an ultra-wide one. The colors of the display were pretty crisp, and the text of the virtual screen in front of me was very readable. These glasses have two clear use cases: one is media consumption, i.e. watching Netflix on a big screen in front of you; the other is productivity, i.e. doing your work on a big screen, which is great, especially for people working with multimedia.
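To make the difference between the two anchoring modes concrete, here is a minimal sketch (my own illustration, not XREAL’s actual implementation) of how head-locked (0 DOF) and world-anchored (3 DOF) placement decide where the screen appears as you turn your head. The function name and the yaw-only model are simplifying assumptions:

```python
def screen_direction(head_yaw_deg: float, mode: str) -> float:
    """Return the yaw (degrees) at which the virtual screen appears,
    relative to where the user is currently looking.

    "0dof": head-locked -- the screen follows every head movement,
            so it is always dead center in the user's view.
    "3dof": world-anchored orientation -- the screen stays fixed at
            world yaw 0, so turning the head shifts it across (and
            eventually out of) the field of view.
    """
    if mode == "0dof":
        return 0.0
    if mode == "3dof":
        # Normalize to [-180, 180) so the offset is the shortest rotation.
        return (-head_yaw_deg + 180.0) % 360.0 - 180.0
    raise ValueError(f"unknown mode: {mode}")

# Looking 30 degrees to the right:
print(screen_direction(30, "0dof"))  # 0.0   -> still centered in view
print(screen_direction(30, "3dof"))  # -30.0 -> screen is now 30 deg to the left

# With a ~57 degree FOV, a 3 DOF screen starts leaving the view once
# the head turn exceeds roughly half the FOV:
FOV = 57.0
print(abs(screen_direction(40, "3dof")) < FOV / 2)  # False -> partly out of view
```

This is also why the FOV matters so much for these devices: in 3 DOF mode, turning your head more than about half the FOV slides the fixed screen out of view.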
My friend Tyriel Wood, who attended CES with me, told me that he likes this kind of device and that they are already pretty useful for him. We are at a stage where they can already be used for productivity. After my hands-on, I’m almost convinced by this piece of hardware, my only three concerns being the FOV, the connected hardware, and eye fatigue.
XREAL did a great job making the FOV as large as possible, at 57°. But still, when I was looking at the ultrawide virtual screen, I felt I could not see the whole screen through my glasses: some tiny lateral parts were missing. We need a wider FOV, so that I don’t have to turn my head to see the different portions of the virtual screen.
Regarding the connected hardware, my problem is finding a setup that lets me work on the go. For me, it would be ideal if I could just carry some smartglasses, a small keyboard with a touchpad (like the ones for tablets), and my phone, and be able to work from anywhere (e.g. planes, buses, etc…), without the weight and bulk of my laptop. Having a widescreen on my desk would be good, but instead of buying an XREAL One Pro, I could buy an extra monitor for much less money. If a future evolution of the XREAL One Pro could give me a workstation I can use anywhere, without having to carry a big bag every time, I would insta-buy it. In fact, I loved the demo of the XREAL Air 2 Ultra, because they let me try the glasses with a keyboard and a phone.
Regarding eye fatigue, I cannot comment on the long-term usage of these glasses, because I’ve tried them only for a few minutes, but I wonder how I would feel after wearing them for 12 hours a day. If I had to bet, I would say they stress the eyes more than a standard display, but I cannot be sure of that until XREAL sends me one (XREAL people, if you are reading this, send me your glasses!).
In any case, I think this type of device is already pretty nice and useful for some use cases. If the FOV were bigger, it would be even better.
XREAL Air 2 Ultra
The last device I want to talk about is the XREAL Air 2 Ultra, a pair of 6 DOF glasses. It’s good to see that XREAL is back to making 6 DOF glasses, and I have to say the device is pretty good. The glasses can show 3D objects in the environment around you, with bright colors and a decent FOV. They work connected to a phone, so they cannot render very heavy scenes. I also tried the hand tracking and found it to be OK, but not as advanced as Meta’s or Ultraleap’s.
The road to AR devices
I wanted to mention my brief hands-on with the XREAL Air 2 Ultra because I think that, in the end, 6 DOF AR glasses are the endgame for all the devices I have described in this article. AI smartglasses and glasses for virtual screens are all simplified versions of glasses, each taking care of a specific use case at an affordable price. But the final mission is a device that can do everything these glasses can do, and more: of course, I’m talking about 6 DOF glasses that can understand the environment around us and render both 3D and 2D objects.
Unfortunately, the technology today is not ready to make the AR glasses of our dreams; in fact, the most advanced glasses we know about, Meta Orion, cost more than $20K to manufacture. But I’m a big believer that we’ll get there, and in the meantime, all the various smartglasses being sold will be useful to find use cases for which people want to put glasses on their faces, and to make wearing tech glasses more socially acceptable. Hopefully, this period will also help us understand how to guarantee privacy to the users wearing glasses with cameras, but unfortunately, I’m not sure this will happen.
A fun moment at CES
Since I like to always add a touch of humor to my posts, let me tell you something weird that happened when I visited the booth of a Chinese manufacturer of smartglasses.
After I had tried the device, including the usual photo-taking and AI translation, I asked the guy at the booth:
“What’s the price of this?”
and he answered something like
“58 grams”
I was pretty confused, so I asked again
“What’s the price?”
And this time, he answered
“Europe, USA”
I was getting pretty confused… it was like talking to a slot machine that gave me a random English sentence every time, so I asked again.
“What’s the price, the cost, money?”
And he started looking at the sky, as if he hoped the Gods might tell him the right answer. Considering that he stood still like this for five seconds, I guess the Gods were busy doing something else.
In the end, I’d really had enough, so I went into full Chinese mode and asked
“多少钱?” (“How much does it cost?”)
to which he answered “$400” and then said a lot of things in superfast Chinese that I couldn’t understand (and to which I was tempted to answer “58 grams”).
I have a question for this guy: I understand that speaking another language is complicated… but… you are selling a device meant to make live translations between English and Chinese, WHY THE FUCK DON’T YOU USE IT???
(Header photo shot by Tyriel Wood)
Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I’ll be very happy because I’ll earn a small commission on your purchase. You can find my boring full disclosure here.