Amanda Razani, Managing Editor and Podcast Host, Techstrong

Synopsis

In this Leadership Insights interview, Amanda Razani speaks with Ohto Pentikäinen, co-founder and CEO of Doublepoint, about how virtualized touch user interface technology can revolutionize many industries.


Transcript

Amanda Razani: Hello, I’m Amanda Razani with Digital CxO. I’m excited to be here today with Ohto Pentikäinen. He is the co-founder and CEO of Doublepoint, and he is currently attending the Augmented World Expo in the U.S. as a speaker, vendor and sponsor. It’s been going on the last few days. How are you doing today?

Ohto Pentikäinen: Doing great. It’s been a very energetic conference, and the weather has been great in Santa Clara. Even though the conference is a little bit bigger than last year, it still feels very community-like. It’s small enough that you just bump into a lot of relevant people while you’re walking down the hallway, and that’s not always possible at other conferences that tend to be a bit bigger. So we’ve sincerely enjoyed it. There’s been a lot of activity, both business-wise and socially, which our team has liked a lot at AWE so far.

Amanda Razani: Wonderful. Well, I’m certainly enjoying attending it virtually. I hope to be in person next year. It looks like a great event.
So you have some pretty big news and information to share. I know you recently came out of stealth mode with some big funding, and you’re working on some big things that you’ve been showcasing at the Augmented World Expo. So can you share more about your company and what services you provide?

Ohto Pentikäinen: I’m very happy to. So we got started when my co-founder, Jamin, who has a background in classical piano and got his degree some years ago, noticed he had this tendency to subconsciously type his thoughts out. When he thought something was fascinating, he would twitch his fingers subconsciously. So he had this vision of building a wristband that could detect his finger movements while he was typing his thoughts out, and he proceeded to make a wristband prototype, at first only for his own use. But pretty quickly that prototype caught the attention of some companies in Silicon Valley that are working on augmented reality input.
So there’s a big problem with being able to input discreetly into augmented reality. A lot of companies really envision a world where the glasses would be a super light frame. They don’t look like that today, and the reason is that there’s a lot of battery, a lot of sensors and a lot of computation needs in there, and the cameras especially take up a lot of space on these headsets. The Quest Pro, for example, has 10 cameras total, and it’s a bulkier headset. The headsets that are going to be announced are probably going to remain bulky.
So we found that if you offload some of that computation and some of that gesture recognition capability to a watch, you can actually reduce the size of the headset down the line. Instead of using cameras to track your hands, you can just use a watch. And I have an example here: a Samsung Galaxy watch that we’ve embedded with our algorithm, and when I tap my fingers like that, it’s going to flash. So I can use that as a point-and-select device to control my augmented reality.
What sets us apart from some other companies that work on wrist-based input is that we work fully in software. Like I said, this is an off-the-shelf Samsung Galaxy watch. We bought it from Best Buy, just downloaded some of our software onto it, and it can do gesture recognition. So we’re helping hardware companies get to market quicker, as they don’t need to go and design new watches for their AR inputs; they can utilize what’s already on the market. That really reduces cost and time to market.

Amanda Razani: Wow.

Ohto Pentikäinen: So we’re basically an interaction provider for these companies.

Amanda Razani: That is fascinating. So a business can go with any watch, and then you put in your algorithm? Explain a little bit about the algorithm you’re putting in that works for any watch. How does that work exactly?

Ohto Pentikäinen: Yeah, so first of all, if your listeners are aware of some of the technologies in the market, there’s a technology called EMG, electromyography, which is the most well-known way of detecting wrist-based input. There are a few companies working on it; the most notable one is CTRL-Labs, which was acquired by Meta in 2019. So there’s a full team at Meta’s Reality Labs working on wrist-based input. That’s no news to anyone.
The way we differ is that instead of looking at how the muscle activates the finger, we look at touch-based input: the moment you come in contact with a certain object, in this case the tip of your finger. What happens is that there’s an impact between the two fingers, and a certain vibration propagates through your body; you can feel it in your skin. That vibration can be picked up from the watch with an inertial measurement unit. We use this same touch-detection technology in a bunch of other ways as well, but the vibrations are the main thing we look for.
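To make the idea concrete, here is a minimal sketch of what vibration-based tap detection on wrist accelerometer data can look like. This is not Doublepoint’s actual algorithm: the sample rate, filter cutoff, threshold and refractory period below are illustrative assumptions, and a production system would likely use a trained model rather than a fixed threshold.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Illustrative only, NOT Doublepoint's algorithm.
# All constants are assumptions made for this sketch.
FS = 400             # assumed accelerometer sample rate, Hz
CUTOFF_HZ = 30.0     # high-pass cutoff: keep sharp tap transients,
                     # drop slow arm motion and gravity
THRESHOLD = 1.5      # detection threshold on filtered magnitude, m/s^2
REFRACTORY_S = 0.15  # ignore this long after a detection (debounce)

def detect_taps(accel_xyz: np.ndarray) -> list[int]:
    """Return sample indices of candidate finger taps.

    accel_xyz: (N, 3) array of raw accelerometer samples.
    """
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    sos = butter(4, CUTOFF_HZ, btype="highpass", fs=FS, output="sos")
    transient = np.abs(sosfilt(sos, magnitude))

    refractory = int(REFRACTORY_S * FS)
    taps, last = [], -refractory
    for i, value in enumerate(transient):
        if value > THRESHOLD and i - last >= refractory:
            taps.append(i)
            last = i
    return taps
```

The high-pass filter keeps the sharp transient of a fingertip impact while discarding slow arm motion and gravity, which is what lets a tap stand out on an off-the-shelf IMU.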

Amanda Razani: That is so interesting. So how do you envision this particular product in the enterprise? How do you see it being used across businesses?

Ohto Pentikäinen: So we’re really looking forward to embedding this algorithm as deep down into the ecosystem as we possibly can. How we imagine it being used is, first of all, from a vendor perspective: if you’re building hardware, let’s say a smartwatch, today you’re probably going to have a step counter in there as an algorithm, something you can just give to your users: “Hey, here’s how many steps you’ve walked today.” You can actually get those step counters when you buy your sensor from your sensor provider, or you can get that algorithm from your chip provider. And we imagine that Doublepoint will be in a similar position for vendors: as a hardware provider, you’re just going to be able to get it together with your sensor or your chip.
And actually, related to this, we announced our optimization for Qualcomm Snapdragon just two days ago. So today, when companies are building their new smartwatches and acquire their chips from, say, Qualcomm, they can get our algorithm already ported onto the chip, straight from Qualcomm, rather than having to piece these things together. So we’re really doing our best to integrate it deep down.
In terms of the use case, the need for input on these fundamentally light augmented reality headsets runs deep enough that we imagine this being one of the primary ways of inputting. Basically, you would connect your smartwatch to your headset and then use that as the primary input method. There are, of course, a lot more smartwatches today than there are headsets, but we imagine both to be on an increasing trajectory, with these devices talking to each other.
Then, secondarily, there are going to be some other use cases, like IoT control: you can point at your lights and turn them on. Or accessibility use cases: if someone has maybe only one hand at their disposal and they want to control their smartwatch, they can use the algorithms for single-handed gesture control rather than having to do two-handed gesture control.
So there’s going to be some other use cases, but we think that the main one is going to be in just basic operating system level control for the AR headsets.

Amanda Razani: That’s fantastic. And I can definitely see a use for it in the medical field, for sure. And I’m even thinking, with the remote environment that a lot of workers find themselves in now, the AR and VR experience could be enhanced to really feel like you’re in a room interacting in a more exciting way than just via Zoom.

Ohto Pentikäinen: Absolutely. Yeah, I think that’s a big one. Then again, the one that we tend to get very excited about is how you use augmented reality in your everyday life as you go about your normal day. And that’s difficult, because in your room you’re able to use a mouse or a keyboard and you’re used to that, but if you’re on the metro or the train and you want to input into your augmented reality world, maybe play a game or type a message, the methods of today are a bit difficult.
So first of all, you’re not going to carry a keyboard or a mouse with you. You might take your phone out, but that’s a bit cumbersome. I mean, it’s a bit annoying, at least for a lot of people today, to just take out this brick, look at it and have your world immersed in that rather than just living your life. Then hand tracking is another way to control these headsets, but you need to have your hand in front of the camera in order for it to work.
So if you’re sitting on a train, it’s going to be a little weird if you start raising your hands in the air and doing what we call gorilla hands next to a lot of other passengers. So we’re looking into how we make that experience as discreet as possible, so it’s barely noticeable that you’re actually in touch with virtual objects while you’re also looking at the environment around you.

Amanda Razani: Awesome. So technology is moving so quickly these days. How do you envision AR and VR technology two years from now, as far as business and the enterprise?

Ohto Pentikäinen: Yeah, so I think there are definitely going to be more use cases in these kinds of industrial applications. The consumer market tends to always take more time than we think. So even though there’s been a lot of optimism about when these light AR headsets will come to market, they tend to take a lot more time than we expect.
In industrial settings, I think we’re just going to keep increasing the way we have been so far. When you need some information at hand about your work environment that you would otherwise have to look up by cumbersome methods on your phone or a computer, you can have it directly integrated. Maybe you’re in a factory looking through your process and picking out what’s wrong, and there’s a system that directly points your attention to what’s wrong. Or maybe you’re picking up packages in a warehouse (well, robots do a lot of that these days), but you have a problem there that you need to fix, and these mixed reality systems can point you in the right direction.
Then, of course, there’s one that is very close to us. We’re from Finland, and we just joined NATO a month or so ago; thanks for having us there. It’s important for us at this point. There’s going to be a lot of training happening, and as we know, military training operations are huge, and they’re very burdensome environmentally, financially and, of course, in other ways. So you can create good-enough training situations for a big chunk of that training in these mixed reality environments, rather than having people train on the ground every single day or week and carrying a big financial burden. That’s also one that Finland is looking into heavily.

Amanda Razani: There are so many use case ideas. I love how technology gets our creative juices flowing, and it’s an exciting future ahead, I believe.

Ohto Pentikäinen: Yeah, I think so too. I mean, personally, I am most interested in reducing the amount of digital information that I get. And that’s maybe a bit counterintuitive, because I see these glasses as having the potential to do both: increasing the amount of digital information that you have, but also decreasing it. My rationale is that when I open up my phone (and I need to have a smartphone, because otherwise I can’t access my banking, for example; they’re so integrated these days), I just end up opening 12 different apps that I don’t actually want to use while I do my banking. These devices, because they’re so distant from the rest of my world, really take a lot of time and energy, a certain amount of attention and the capability of being present.
So what I think is exciting is that rather than having to look through my navigation information from Google Maps, I could just have an arrow in my peripheral vision showing where I should be going, and that’s all I need. That’s really all I need, and I want it to stay that way. I think the glasses have the potential to do both, and what gets us excited is that we’re able to be there to build the future that we’d like to pass on to our kids at some point.

Amanda Razani: Absolutely. And more streamlined, fewer gadgets, more hands-free.

Ohto Pentikäinen: Yeah, exactly. Exactly. So just being able to keep ourselves more present than distant from our everyday lives.

Amanda Razani: I agree. Well, thank you so much for coming on our show today and giving your insights. And I hope you enjoy the rest of your time at the Augmented World Expo and I look forward to speaking with you again soon.

Ohto Pentikäinen: Thank you. This was a pleasure.

Amanda Razani: Thank you.