Today I want to tell you about two companies I met at CES that made me think about the future of immersive realities: Attention Labs, which lets people focus on a specific conversation in mixed reality, and 2Pi Optics, which builds very small metalenses. Let me tell you why I think these two companies are working on something very cool.
Attention Labs
Attention Labs is a company working on “real-time auditory focus”. This is a concept that I have found fascinating ever since Michael Abrash introduced it at an Oculus Connect some years ago. The idea is that when multiple people are speaking in the same room, and when I say “people” I mean both real people and virtual people, everyone should be able to hear only the people they are interested in speaking with. Let me explain this better by telling you what happened when we met Attention Labs.
The demo we had at their booth involved four people: me, my CES buddy Tyriel Wood, an Attention Labs engineer who was on the show floor, and another employee of the company who was joining remotely and referred to himself as “The Ghost”. We divided ourselves into two groups, to simulate two groups of people speaking in the same space: Tyriel was speaking with the company engineer, and I was talking with The Ghost (of course… we had some ghosty things to discuss…). Everyone was wearing a Meta Quest 3 set up in passthrough mode and big noise-isolating headphones over our ears. When the demo started, our two small groups began speaking independently of each other, with me only interested in talking with The Ghost, and Tyriel with the engineer.
When the system was off, we were all speaking at the same time, and we could all hear what everyone else was saying: in my headphones, I could hear the voices picked up by everyone else’s microphones. This is what currently happens in social VR: if you are in the same virtual space and not far enough apart, it is impossible to speak in separate groups, because the conversation of one group is disturbed by those of the others.
But when the Attention Labs system was on, I was speaking with the virtual avatar of The Ghost, and I could only hear his voice, and he could only hear mine. The voices of the people in the other group were heavily attenuated, and I could only faintly hear them, as if they were background noise. This way, the four of us could speak in two separate groups, as intended.
I asked Attention Labs how this division into groups works, and they told me that the system tries to understand where your attention lies. If you look at another person and that person looks at you, the system identifies the two of you as an independent group and pushes all the other voices into the background. In the CES demo, the conversations were locked: once the group with me and The Ghost was created, I stayed in it even if I started looking at the people in the other group. In normal conditions, instead, the moment I start looking at the people in the other group and speaking with them, I would be disconnected from my current group and connected to the new group I am interested in speaking with. I asked Attention Labs what happens in edge cases, like when I am speaking with one group but just need to say a quick thing to a person in another one, and they told me they are currently working on the most common conditions and will focus on these special cases later.
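To make the mechanism more concrete, here is a minimal Python sketch of the mutual-gaze grouping idea described above. This is purely my own illustration of the concept; the function names, the grouping rule, and the gain values are my assumptions, not Attention Labs’ actual code.

```python
# Hypothetical sketch: group people who look at each other, then attenuate
# the voices of everyone outside the listener's group.

FULL_GAIN = 1.0
ATTENUATED_GAIN = 0.1  # other groups are heard as faint background noise

def form_groups(gaze):
    """Group people by mutual gaze.

    `gaze` maps each person to the person they are currently looking at.
    Two people form a group when each is looking at the other.
    """
    groups = []
    grouped = set()
    for person, target in gaze.items():
        if person in grouped or target is None:
            continue
        if gaze.get(target) == person:  # mutual gaze -> independent group
            groups.append({person, target})
            grouped.update({person, target})
    return groups

def mixing_gain(listener, speaker, groups):
    """Gain applied to `speaker`'s voice in `listener`'s headphones."""
    for group in groups:
        if listener in group:
            return FULL_GAIN if speaker in group else ATTENUATED_GAIN
    return FULL_GAIN  # not in any group: everyone is heard, as in normal social VR

# The CES demo setup: two mutual-gaze pairs.
gaze = {"me": "ghost", "ghost": "me", "tyriel": "engineer", "engineer": "tyriel"}
groups = form_groups(gaze)
print(mixing_gain("me", "ghost", groups))   # full volume
print(mixing_gain("me", "tyriel", groups))  # background level
```

A real system would of course work on head pose and audio streams rather than a dictionary, but the core idea (mutual attention forms a group, and the mixer attenuates everyone else) is the same.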
I think that what Attention Labs is doing is great, because it is the future of audio as Michael Abrash described it. Abrash talked at an Oculus Connect about us wearing XR glasses all day, with the glasses letting us hear only what we want to hear. The glasses could act as noise-canceling headphones, but selectively: identifying what we want to hear and amplifying those audio signals, while emitting inverted sound waves for what we do not want to hear, so that (through destructive interference) those sounds are attenuated. In his famous speech, he mentioned a person entering a noisy cafe with a friend, with the audio system attenuating the sound of the cafe and amplifying the voice of the friend. When other people joined the conversation, they were added to the interest group, and only their voices were heard. Even when a virtual friend joined, he was treated like a real person and joined the conversation too, with his voice simulated as if it were coming from an exact spot in the physical world, as if the remote person were there.
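The destructive-interference trick can be shown in a few lines: adding a phase-inverted copy of an unwanted sound cancels it out. This is only a toy numerical illustration of the physical principle, not how real active noise cancellation is implemented (real systems must estimate the noise with microphones and compensate for latency).

```python
import math

def unwanted_noise(t):
    # a hypothetical unwanted sound: a pure 200 Hz tone
    return math.sin(2 * math.pi * 200 * t)

def anti_noise(t):
    # the same wave with inverted phase, as the glasses would emit it
    return -unwanted_noise(t)

# Sampled over 10 ms at 48 kHz, the two waves sum to silence.
samples = [unwanted_noise(t / 48000) + anti_noise(t / 48000) for t in range(480)]
print(max(abs(s) for s in samples))  # 0.0
```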
I think that Attention Labs is a first glimpse of this future of audio coming to life and that’s why I loved their demo.
2Pi Optics
2Pi Optics is a company specializing in metalenses. What are metalenses? Well, let me copy-paste this good definition from the Synopsys website (read their full article to discover more):
Metalens or metalenses, the cutting-edge innovation in optical technology, are not your ordinary, curved lenses. Metalenses are flat lenses that use metasurfaces to focus light.
A metalens differs from a traditional curved lens by its shape and surface. Traditionally, combinations of curved lenses, such as those in cameras, are used to manipulate light to go to a receiver such as a sensor or an eye. Multiple lenses are usually needed to correct various image aberrations. However, a stack of bulky lenses takes up a lot of space, which is a consideration for compact systems such as cell phone cameras and AR/VR systems.
Metalenses usually consist of millions of subwavelength unit cells called meta-atoms, which modulate light locally and coherently over the entire metasurface. The shape and/or size of each meta-atom is determined locally based on the overall performance of the metalens.
Subwavelength nano-atoms can delay the phase of light, and when properly arranged on the surface, metalenses can create the same desired phase profile as a classic curved lens.
Long story short, metalenses are a new way to build lenses. The lens is made of many nanocomponents arranged on a flat surface, with the nanocomponents engineered to bend the light as desired. Metalenses are not lenses in the traditional sense, hence the “meta” prefix, but they provide the same function as lenses, that is, bending light. A traditional optical lens can only bend light toward a focal point, so you need a complicated stack of lenses if you want the light to follow the exact pattern that you want. A stack of lenses is expensive and usually takes up a lot of space, because you have to physically stack the lenses one on top of the other. With metalenses, instead, you use nanocomponents to redirect the light as you wish in every portion of the lens. You basically engineer the surface of the lens, as if you were programming it, so that every portion of it bends the light rays the way you want. Since the surface already does everything you want, you do not need to stack multiple lenses on top of each other. Plus, since you are using nanocomponents, the lens can be very thin.
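To give an idea of what “programming the surface” means, here is a small sketch based on the textbook hyperbolic phase profile of an ideal flat focusing lens: each point of the metalens must impose a phase delay so that light from every point arrives at the focal spot in phase, and the meta-atoms are then chosen to realize that phase (modulo 2π) locally. The function name and the example numbers are my own illustrative choices, not anything specific to 2Pi Optics.

```python
import math

def metalens_phase(x, y, wavelength, focal_length):
    """Required phase shift (radians, wrapped to [0, 2*pi)) at point (x, y)
    on a flat lens, so that all light converges at distance `focal_length`."""
    r = math.hypot(x, y)
    # extra path length from (x, y) to the focus, compared to the lens center
    path_difference = math.sqrt(r**2 + focal_length**2) - focal_length
    phase = -2 * math.pi * path_difference / wavelength
    return phase % (2 * math.pi)

# Example: a lens for green light (532 nm) with a 1 mm focal length.
wl, f = 532e-9, 1e-3
print(metalens_phase(0, 0, wl, f))      # 0.0 at the center
print(metalens_phase(50e-6, 0, wl, f))  # phase needed 50 µm off-axis
```

The point of the sketch is that the whole lens is just this one function evaluated at every position: instead of grinding curved glass, you place nanocomponents that each delay light by the computed amount.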
Many people in the VR field are interested in metalenses because they are seen as the future of VR headset optics. Imagine if we could replace the lenses of our HMDs with just two flat, thin surfaces that have exactly the optical properties we want: this would make headsets thinner and lighter, with crisper visuals.
2Pi Optics works exactly on these technologies, and they showed me a few of the lenses they are building. The four little shapes on the thin film in the picture below are all metalenses.
And the one below is a metalens too, installed on a set of binocular cameras.
I knew about metalenses before visiting CES, but seeing them with my own eyes, and seeing how thin and small they can be, was impressive to me. I immediately asked the 2Pi Optics representative if they are building lenses for VR headsets, and he told me that theoretically this is possible, and they could be able to do that. He said that it is rather “easy” to build metalenses for VR, while AR, considering the different nature of the visual engines needed by augmented reality, would be much trickier.
He also said, though, that building VR lenses with metasurfaces would not be cost-effective today: it would raise the price of a headset so much that using metasurfaces is not convenient, which is why all manufacturers are still using standard optical lenses. But he said that in the future this could become viable. And that made me dream about what XR headsets could become thanks to metalenses.
And that’s it for this new report from the CES show floor! I hope you are enjoying it, and if this is the case, consider supporting me on Patreon so that I can attend future events, and subscribing to my newsletter so that you do not miss my next article about the XR technologies at this amazing event!
Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I’ll be very happy because I’ll earn a small commission on your purchase. You can find my boring full disclosure here.