Telepresence robots seem to be a hot topic in the news these days. The tech blogs are all buzzing with stories of driving robots around exotic (or mundane) office buildings and interacting with people, and it seems like nearly every day there’s news of another product in development. With all this excitement, I feel almost obligated to try to find some solid ground, and not get too swept up in the movement.
Don’t get me wrong – I think there is a tremendous future here, and I’m looking forward to spending some time behind the “wheel” of one of these soon. But I can’t help but think that there’s something…missing…in the current offerings.
Let’s start with the basics – a telepresence robot is a mobile mechanism that you control remotely. You’re sitting in front of your computer, at home or in a remote location, but you’re driving this robot around somewhere else, able to interact with people on the other end. These robots have cameras and microphones so you can see and hear what’s going on around the robot. Most of them have speakers and screens so people can see and hear you. From your screen, you can control where the camera points, and drive the robot around – down the hallway or into meeting rooms, wherever. Instead of just being a voice on the phone, you’re commanding a physical presence: you can follow the team out of the conference room and carry on ‘walking’ conversations down the hallway.
So far, so good. The concept is there – but how good are they in the real world? The offerings I’ve seen so far all tend to fall short in different ways. Some, like the Texai from Willow Garage and the Tilr from RoboDynamics, have a kind of industrial, functional look to them. They look too much like their roots in research robotics, and not much like a polished consumer product. Others have a more pleasing exterior, such as the Qb from Anybots, the Vgo, and the Giraffe from HeadThere. Only a few of these companies are brave enough to publish prices, which range from around $5k (Vgo) up to about $15k (Qb).
The Qb seems to be the most prevalent in articles and blog posts, but frankly, it kind of creeps me out. It’s got a nice base, and it seems smooth. It’s got a kind of cute face with the two big circles that look like eyes (one for the video camera and another for a laser pointer). But it’s the display (or lack thereof) that bothers me. Yes, it’s got a (teeny little) screen, but so far it only seems to display a cute icon, not the actual face of the person on the other end of it. So now you’ve just got this thing with two big bug-eyes creeping up behind you in the hallway and you don’t know who’s on the other end. It’s like the pervert-bot! Most of the others have a screen in the 7″ to 15″ range, so the little 3″ screen on the Qb just makes it seem like you’re hiding something.
But even that doesn’t really capture my issue with these products. Right now the people most excited about them are, to be blunt, geeks. People like me, who have a genuine interest in technology and robots, and really enjoy tinkering with things like this. We like the notion of driving a robot around an office on the other side of the continent. But the telepresence robots need to focus more on the telepresence side, and less on the robot. The experience needs to feel more like you’re actually there, and not like you’re driving a robot. People walking down the hall should see the device and think, “there’s John”, not “there’s the robot – I wonder who’s driving it today.” If you need to go to a meeting in Conference Room A, you just want to get there, not worry about clicking buttons and negotiating doorways. When the user has to drive a robot, it puts a barrier between him and the remote location. The less this feels like a robot, the more successful it will be.
Autonomous navigation is a big part of that. Instead of having to drive the mechanism from room to room and watch out for obstacles, the robot should be able to find its own way there. I should be able to send the robot to Conference Room A with a click of the mouse, and then go get a fresh cup of coffee while the robot finds its own way there. If it involves an elevator or doorways, so be it – the robot should be capable of interfacing with such standard obstacles. If someone has moved a chair in the way, the robot should be able to route itself around it. And if (horror of horrors) the wireless connection is weak in a certain hallway and the robot loses signal, it should be able to continue on without me having to keep clicking the “forward” button!
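The routing-around-obstacles behavior described above is the bread and butter of grid-based path planners. As a rough illustration (not any particular vendor’s implementation), here’s a minimal A* search in Python over a made-up floor-plan grid, where a moved chair blocks the direct corridor and the planner finds a detour on its own:

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 2D occupancy grid; 0 = free cell, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, None)]  # (f = g + h, g, cell, parent)
    came_from = {}                           # also serves as the closed set
    cost = {start: 0}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                      # reconstruct the route
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < cost.get(nxt, float("inf")):
                    cost[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, cur))
    return None  # no route exists

# Hypothetical floor plan: a chair (the 1s) blocks the direct corridor.
floor = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
route = astar(floor, (0, 0), (2, 0))  # detours around the blocked cells
```

In a real robot the grid would come from an onboard map and sensors, and the plan would be re-run whenever an unexpected obstacle appears – which is exactly the “someone moved a chair” case above.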
Maps – everybody loves maps, right? Why should I have to try to read signs on the wall, or remember which hallway to go down in an unfamiliar building? If there’s a robot there, it should be able to tell me exactly where I am. Knowing where you are in a building, or who you are near, can be very helpful in dropping by someone’s cubicle to introduce yourself and interact. The robot should provide a nice map of the surroundings, with labels for offices, conference rooms, etc. On the practical side, most of these robots can’t even find their own charging stations, so the user has to keep track of how much juice is left and where the nearest docking station is.
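The labeled-map idea is simple enough to sketch. Here’s a toy Python example – all the labels and coordinates are invented for illustration – showing how a robot that knows its own position on a labeled floor map could answer both “where am I?” and “where’s the nearest charging dock?” with a straight-line distance lookup:

```python
import math

# Hypothetical labeled floor map: label -> (x, y) position in meters.
locations = {
    "Conference Room A": (12.0, 3.5),
    "John's cubicle":    (4.0, 8.0),
    "Dock 1":            (1.0, 1.0),
    "Dock 2":            (15.0, 9.0),
}

def nearest(pos, labels):
    """Return the closest labeled location by straight-line distance."""
    return min(labels, key=lambda name: math.dist(pos, locations[name]))

robot_at = (11.0, 5.0)  # where localization says the robot is
landmark = nearest(robot_at, [n for n in locations if "Dock" not in n])
dock = nearest(robot_at, [n for n in locations if "Dock" in n])
```

A real system would use path distance through the hallways rather than straight-line distance, and would cross-reference the battery level before choosing a dock – but the point is that once the map has labels, “you are near Conference Room A” and “go charge at Dock 2” are cheap queries.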
Although I may seem down on them here, I really do think telepresence robots have a bright future ahead of them. I just think they’re forgetting that in the long run, a telepresence application is more about presence, less about the robot. In short, I think we can do better.