Almost all of the impressive VR work done to date has focused on just two senses: your sight and your hearing. That’s a great start, and one that will enable lots of powerful experiences, but it is incomplete. In order to fully immerse users in interactive virtual reality environments, it’s going to be necessary to build peripherals that engage the sense of touch as well.
Unfortunately, touch is a much harder sense to fool than vision is. With vision, all the hardware has to do is interrupt signals travelling to the eyes. Skin, in contrast, covers about two square meters of your body and mediates complicated two-way interactions with the world.
This is the organ that haptic technology is trying to trick, and it’s difficult. There are a number of peripherals that exist to help build immersion, but none available right now provide really compelling haptic experiences.
The problem is made worse because skin stimulation does not have the long history of research that optical displays do. The first use of a scanning display to re-create an image was in 1907, and it took researchers and engineers nearly a full century to get displays small and accurate enough to provide a good virtual reality experience. The equivalent journey, for touch, is only now starting.
In this article, we’re going to explore some technologies in development today which can provide some sense of touch to VR users. I’ve ranked the technologies by the quality of experience they can potentially provide, and how much work is needed before they can be commercialized.
Simple Vibration

One straightforward way to provide rudimentary force feedback is through simple vibrating motors, of the sort found in the rumble packs of modern videogame controllers. These take on a new dimension in VR, as they are able to associate specific vibration frequencies and intensities with the boundaries of virtual objects.
Users could feel a small blip when they touch an object or a UI element, and a stronger pulse when they activate it (similar to the haptic feedback on modern smartphone screens).
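The touch-versus-activate distinction described above can be sketched as a simple event-to-vibration lookup. Everything here is hypothetical, not drawn from any real controller SDK; the event names and the frequency, amplitude, and duration values are illustrative only.

```python
# A minimal sketch of event-driven rumble feedback (hypothetical API).
# Map interaction events to (frequency_hz, amplitude, duration_s).
VIBRATION_PATTERNS = {
    "touch":    (250.0, 0.3, 0.02),  # light blip on first contact
    "activate": (170.0, 0.8, 0.08),  # stronger, longer pulse on activation
}

def feedback_for(event):
    """Return the vibration pattern for a UI interaction event."""
    return VIBRATION_PATTERNS[event]

freq, amp, dur = feedback_for("activate")
print(freq, amp, dur)
```

In a real system the returned tuple would be handed to the motor driver; the point is simply that each boundary or UI event maps to a distinct, recognizable vibration signature.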
This sort of feedback could also be used to convey the texture of surfaces. With a force feedback unit on each finger, as in the case of the Glove1, this technology could be useful for navigating virtual interfaces with your eyes closed. That said, this technology provides a very spartan, functional approach to touch, and will never be much of an immersion builder.
Skin Shear Haptics
Skin shear technology is based on a surprising fact about our sense of touch: we primarily judge light, non-painful pressure by the degree to which our skin slides around (something you can easily test by gently pressing a spot on your skin and sliding your finger).
As the skin stretches, the sensation of pressure increases. This is handy, because skin shear is easy to reproduce mechanically, and it can provide the illusion of sustained pressure, something that isn’t possible with a simple vibrating motor.
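The stretch-to-pressure relationship can be modelled, very roughly, as a linear mapping from simulated contact force to how far a skin-contact pad is displaced sideways. This is a toy model: the stiffness constant and travel limit below are made-up illustrative values, not measurements from any real device.

```python
# Toy model of skin-shear rendering (illustrative constants, not real data).
SKIN_STIFFNESS_N_PER_MM = 0.5   # assumed: 0.5 N of perceived force per mm of shear
MAX_TRAVEL_MM = 4.0             # assumed mechanical travel limit of the pad

def shear_displacement_mm(contact_force_n):
    """Convert a simulated contact force into a tangential pad displacement.

    Displacement saturates at the pad's travel limit, which is why a shear
    device can suggest sustained pressure but cannot actually stop the hand.
    """
    desired = contact_force_n / SKIN_STIFFNESS_N_PER_MM
    return min(desired, MAX_TRAVEL_MM)

print(shear_displacement_mm(1.0))   # 2.0 mm: well within travel
print(shear_displacement_mm(10.0))  # 4.0 mm: clamped at the limit
```

The saturation in the second call is the model's version of the limitation discussed below: past a few millimetres of stretch, the illusion tops out.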
Right now, the most advanced implementation of this technology is the Tactical Haptics controller, which attaches to the STEM motion control system and provides rough pressure feedback in response to virtual interactions like gun recoil, moving a wand through a material, and swinging a virtual weight around on a virtual chain.
The results are surprisingly convincing for the simplicity of the mechanism. It’s easy to imagine building a glove that provides this sort of feedback with more precision, allowing virtual objects to have density, if not solidity: objects can feel hard, they just won’t be able to stop the motion of the user’s hand.
This is a major improvement, although it has many of the same limitations as simple rumble – skin shear technology can fool the sense of touch, but it can’t fool proprioception (the intuitive sense of where your limbs are and how they’re moving). Even if the user’s skin tells them they’ve hit something solid, their muscles know that their hand is fluidly moving through it.
Robotic Armatures

This is the part where it all starts to get a little weird. Let’s say the technology needs to be able to stop users from pushing their hands through objects, to create a more compelling illusion of solidity. That means exerting force on the limb from some external frame of reference.
The simplest way to achieve that is to use a robotic armature, which attaches either to your body or to the ground and prevents the limb from moving outside the limits of the virtual geometry.
For just a hand (allowing the user to grab and feel the solidity of virtual objects), that looks something like this.
Kinda scary, right? Well, there are a lot of things that glove still can’t do. What if the object you’re touching is heavy? What if it’s something solid, like a wall, that needs to resist motions from the shoulders and elbows, as well as the wrist and fingers? Well, then you need something like this:
The CyberGlove website does not list a price for the device in the video above, but other systems like it run into the hundreds of thousands of dollars. Part of the reason for this is that only a few industrial and military organizations actually buy these devices (and in very small numbers), which drives the price up.
The other part is that these are genuinely impressive pieces of equipment on a technical level. Consider what’s necessary to provide a convincing haptic feedback experience of touching a solid object. If the user rests their hand against a virtual wall and pushes, the system must detect the motion, consult with the simulation to determine that they are touching a solid object, then physically (and fluidly) move the armature to resist the motion and return the user’s hand to its original position.
All this needs to be accomplished before the brain can register that the motion has begun. That’s an enormous technical challenge, and even the best hardware today doesn’t quite achieve it perfectly.
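The detect-consult-resist loop described above is commonly implemented with penalty-based haptic rendering: when the tracked hand penetrates a virtual surface, the armature pushes back with a force proportional to the penetration depth, like a stiff virtual spring. The sketch below is a single step of such a loop; the wall position and stiffness constant are illustrative, and a real system would run this at roughly a kilohertz to beat the brain to the punch.

```python
# One step of a penalty-based (virtual spring) haptic rendering loop.
# Constants are illustrative, not from any real device.
WALL_X = 0.0          # the virtual wall lies at x = 0; solid for x > 0
STIFFNESS = 800.0     # N/m, virtual spring constant (assumed)

def restoring_force(hand_x):
    """Force (N) the armature should exert along x for a given hand position."""
    penetration = hand_x - WALL_X
    if penetration <= 0.0:
        return 0.0                   # hand is in free space: no force
    return -STIFFNESS * penetration  # push back toward the surface

print(restoring_force(-0.01))  # outside the wall: 0.0
print(restoring_force(0.005))  # 5 mm inside the wall: -4.0 N
```

The spring model is why even good armatures feel slightly spongy: a perfectly rigid wall would require infinite stiffness, which no physical mechanism (or stable control loop) can deliver.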
The other limitation here, aside from the challenges of getting the manufacturing costs down to an acceptable level, has to do with making the technology convenient. Literally strapping yourself into an elaborate and powerful mechanical armature comes with substantial psychological barriers. It’s dubious whether users will be willing to put up with that sort of inconvenience on a regular basis, even if the technology is sophisticated enough to provide a good experience.
The closest this technology has come to being deployed on a consumer level is in the form of devices like the Novint Falcon. The Falcon is not a virtual reality device as such, given that its workspace is a sphere only a few inches across — that said, it does provide high-precision, three-axis force feedback, and is the only device at a consumer price point that does so.
Novint has been working on an arm-based exoskeleton called the Xio for a while, although that project seems to be in limbo for the time being, following the company’s financial troubles.
Potentially, these sorts of armatures could be made simpler and cheaper through the use of electroactive polymers — artificial ‘muscles’ made of plastics that contract in response to electric current, and are generally cheaper and more compact than equivalent linear motors.
Ultrasound Haptics

An entirely independent approach to the problem is to use phased ultrasound grids to create dense interference patterns in the air, which the skin registers as pressure. The technology can be used to project virtual 3D objects into the air that users can touch, with the nodes of intersecting pressure waves producing genuine force on the user’s hands.
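The focusing trick behind this can be shown in miniature: each transducer is driven with a phase offset chosen so that all the waves arrive in step at the focal point, which just means converting each emitter's distance to the focus into a phase angle. The transducer layout and focal point below are invented for illustration; the 40 kHz frequency is typical of airborne ultrasound transducers.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air
FREQUENCY = 40_000.0     # Hz, typical airborne ultrasound transducer
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

def phase_delay(emitter_pos, focus_pos):
    """Phase offset (radians) for an emitter so its wave peaks at the focus."""
    distance = math.dist(emitter_pos, focus_pos)
    # Express the travel time to the focus as a phase angle, wrapped to one cycle.
    return (2 * math.pi * distance / WAVELENGTH) % (2 * math.pi)

# A 3-element line of emitters focusing 10 cm above the centre of the array:
emitters = [(-0.01, 0.0, 0.0), (0.0, 0.0, 0.0), (0.01, 0.0, 0.0)]
focus = (0.0, 0.0, 0.10)
for pos in emitters:
    print(round(phase_delay(pos, focus), 3))
```

Symmetric emitters get identical phase offsets, as you'd expect; steering the focal point around in space is just a matter of recomputing these offsets in real time.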
At first blush, this might appear to be the magic bullet for VR haptic feedback. Unfortunately, there are some limitations. Resolution is limited by the frequency response of the speakers, as well as the number of them: being able to cover a large spatial area is not necessarily practical.
More significantly, there’s substantial “leakage” — acoustic energy forms inadvertent nodes and semi-nodes in the space around where intentional patterns are being created (something you can see in demonstrations that visualize the field with oil). The pressures produced by this system are very faint: attempting to scale them up to volumes that could exert multiple pounds of pressure onto your body would involve an enormous amount of energy and could be physically dangerous to users.
Direct Nerve Stimulation

Lastly, we’re going to take a moment to touch on a more speculative technology. One way (some people would argue the ultimate way) to engage with the sense of touch is by directly stimulating the nerves in the user’s arms, spine, or brain. By doing this, it’s possible to fool touch, proprioception, the whole nine yards – including sensations like temperature that might be impractical to achieve with a suit or robotic armature. Potentially, scientists could do all of this without requiring the cumbersome robot suits or phased acoustic grids.
There’s already been some work done on this front in the field of prosthetic limbs, directly tapping into severed nerves to send signals back from sensors in the prosthesis, to create a synthetic sense of touch.
Brain stimulation can provide similar feedback. The basic issue with these sorts of technologies is that they require fairly invasive surgery in order to install the nerve interfaces – surgery that’s unacceptably risky in healthy people. They’re also fairly coarse-grained in the precision of the feedback they can provide.
In order for these to be practical as a haptic interface paradigm, you really need to be able to get the resolution of the electrode interface much finer, and reduce the invasiveness of the procedure. There are a few approaches here, ranging from nanotechnology to optogenetics, but it seems safe to say that major breakthroughs are unlikely in the next few years.
The Future of Touch
It’s still early days for virtual reality, and there isn’t yet broad consumer demand for haptic interfaces — but there will be. The enormous gold rush of virtual reality innovation is only now starting, and we’re likely to see all of these techniques massively improved in the years to come.
That said, none of the current technologies seem perfect. All of them have at least one serious drawback, either in terms of the quality of sensation they can provide, or the barriers to their use. It’s entirely possible that the eventual “perfect” solution to VR input has not been invented yet. If that’s the case, I’m eager to see what developers come up with next.
Are you excited for haptic VR interfaces? Is there an exciting product or technology that we didn’t cover here? Let us know in the comments!
Image Credits: Hand catch via Shutterstock