10 ways we’ll move beyond the keyboard

Speech recognition, gesture control and brain-computer interface systems are changing how we talk to our machines. Oh, and don’t forget about those digital tattoos.

Communication evolution

Since the very first days of the computer age we’ve been tinkering with how, exactly, we talk with our machines. The traditional keyboard and mouse have had a long run, but the touchscreens that now dominate mobile devices won’t be the last word. Cozy up with your old-fashioned touchpad and mouse and settle in for a leisurely scroll through the future of input devices.

Speech recognition

If you’ve had a recent chat with Apple’s Siri, Microsoft’s Cortana or Google Assistant (“Ok Google”), you may have noticed that speech recognition has gotten very good, very quickly. Even those automated customer service phone lines are getting less infuriating. How’d that happen, anyway?

The short answer: artificial intelligence. Dedicated machine-learning systems have made huge advances in speech recognition by constantly chewing on colossal amounts of data — digitally recorded conversations and dictations — and looking for patterns. Industry heavyweights such as Google and Apple are developing AI systems that sift through years of audio recordings, which in turn allow their proprietary algorithms to predict what you’re trying to say. The basic approach is similar to the predictive functions in email and text apps.

Combined with improved microphone technology, these algorithms have made efficient and accurate voice recognition a reality. You can expect this rapidly improving input system to largely dictate (heh) how we communicate with our computers in coming years. Smartphone assistants and tabletop smart speakers such as the Amazon Echo and the Apple HomePod are just the beginning.
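If you want to tinker with this yourself, here’s a minimal dictation sketch using the open-source SpeechRecognition package for Python. Consider it a toy stand-in for the proprietary engines named above, not a peek at their actual code; it assumes you’ve installed the package (plus PyAudio) and have a working microphone.

```python
# Minimal dictation sketch with the open-source SpeechRecognition package.
# Requires: pip install SpeechRecognition pyaudio
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Say something...")
    audio = recognizer.listen(source)            # record until you stop talking

try:
    # Ships the audio to Google's free Web Speech API and prints the transcript
    print("You said:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Sorry, couldn't make that out.")
except sr.RequestError as err:
    print("Speech service unavailable:", err)
```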

Fact, not hype

Perhaps you crave some hard numbers to back up all this optimistic conjecture. As it happens, Stanford University recently concluded a study comparing the relative efficiency of speech dictation versus typing on smartphones.

Using Apple phones and the Baidu Deep Speech 2 engine — the underlying system that powers many commercial dictation apps — the Stanford group determined that the average person using speech recognition for text or email could input English three times faster than via the smartphone keyboard. Furthermore, the error rate was 2.93% using speech recognition, vs. 3.68% for the keyboard, meaning that speech recognition is now both faster and more accurate than typing. (The researchers also tested in Mandarin Chinese, by the way. The numbers there: 2.8 times faster and an error rate of 7.51%, vs. 20.54% with the keyboard.) 

Bear in mind that these are the results for touchscreens, not full QWERTY keyboards. But the Stanford team tested on phones for a reason: the ubiquitous smartphone is the quickest route to wide adoption of speech-recognition technology.

Gesture control

For a few hundred thousand years now, our species has been refining the way we communicate with one another through various kinds of language, including body language. Gesture-recognition systems attempt to leverage all that evolutionary progress. If talking to our machines is the most natural way to input information, gesture is probably a close second.

Hundreds of companies and research labs around the world are working on gesture-control systems, but interestingly, the automotive industry is taking a leading role. High-tech luxury rides like the BMW 7 Series let you control the radio or navigate the dashboard display using hand gestures.

In the workplace, peripherals such as the Leap Motion controller are likely to inform the near future of gesture recognition. Using infrared cameras and sensors, the system tracks your hands and fingers within a designated control area, then inputs commands depending on the software you’re using. The Leap Motion system has been around a few years, with developers using it for all sorts of interesting ends, mostly in virtual reality applications (for instance, virtual sculpting).
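Leap Motion’s SDK is proprietary, but the underlying idea of camera-based hand tracking is easy to sketch with Google’s open-source MediaPipe library and a plain webcam. Treat this as an illustration of the general technique, not Leap Motion’s actual pipeline.

```python
# Camera-based hand tracking with MediaPipe Hands and OpenCV.
# Requires: pip install mediapipe opencv-python  (Ctrl+C to stop)
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2)
capture = cv2.VideoCapture(0)  # default webcam

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    # MediaPipe expects RGB frames; OpenCV captures in BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            tip = hand.landmark[8]  # landmark 8 is the index fingertip
            print(f"Index fingertip at x={tip.x:.2f}, y={tip.y:.2f}")

capture.release()
```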

Another approach

Leap Motion’s external cameras and motion sensors are one way to realize gesture control, but virtual reality gloves remain an enormous area of research. 

Dedicated VR gloves such as the Manus VR and Gloveone are already in the hands of third-party developers (these puns just keep happening). You can expect the technology to gradually migrate from the hard-core VR crowd to mainstream users, if and when VR itself makes its way into the typical office. Meanwhile, researchers at MIT Media Lab have dozens of concept-stage designs for futuristic haptic input devices, such as the T(ether) glove, aimed at everyday office use.

Others are working farther up the arm: The Myo armband is a wearable motion controller that lets you interact with your computer, phone, TV or game system by monitoring muscle activity in the forearm. The Myo system can recognize particular gestures — making a fist, pointing a finger — and third-party developers are working on ways to use the Myo for typing on what amounts to an imaginary keyboard.

Digital tattoos

Then there’s the sci-fi stuff. MIT Media Lab is a perennial go-to destination for this kind of thing, and the online projects page is a fun place to browse. Those rascals sure stay busy.

Last year, a team of researchers at MIT officially presented the DuoSkin system, a rather bananas input-device concept built around temporary digital tattoos. DuoSkin is a fabrication technique that produces on-skin interfaces that wirelessly connect with your computer, phone or other device. The tattoos rely on the electrically conductive properties of gold leaf and could eventually be powered by tiny circuits that harvest friction and kinetic energy as you move around.

It’s all concept-stage noodling at this point, but the MIT team has already prototyped a system in which you can design and output your own tattoos using graphics software and a standard printer. The electric tattoos can serve as a trackpad or control interface for your music player, and down the line could be adapted to make a decorative, functional keyboard on your forearm (or your fingernail).

Keep your shirt on

The first wave of wearable computing devices tended to be … well, “awkward” is perhaps the polite word. But with quickening advances in smart fabrics and flexible electronics, a whole new generation of truly wearable computers — actual interactive clothing — will soon be hanging from the racks.

How do we know? One generally reliable way of forecasting future technology is to follow the patents. A recent case in point: In August 2017, Apple filed three major patents concerning smart fabrics and wearable devices. Among them is a wide-ranging initiative Apple is calling “textile-based touch-sensitive” technology.

The concept is classified specifically as a new kind of input device and compared to existing technologies such as keyboards and touchscreens. According to the patent, the material uses a system of conductive fibers to turn virtually any garment or fabric-covered object — a purse, a couch — into a kind of soft touchscreen input device. We’ve seen gimmicky smart garments before, but the Apple patents suggest the company is, once again, thinking different.
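How might a fabric touchscreen actually report a touch? In spirit, much like a regular one: scan a grid of conductive fibers for capacitance changes and find the hot spot. Here’s a toy sketch of that idea; the readings and threshold are invented for illustration, and nothing here comes from Apple’s patent.

```python
# Toy grid-scan touch detection for a hypothetical conductive-fiber fabric.
import numpy as np

TOUCH_THRESHOLD = 0.5  # invented normalized capacitance change

def locate_touch(readings):
    """Return (row, col) of the strongest touch, or None if nothing registers."""
    readings = np.asarray(readings)
    if readings.max() < TOUCH_THRESHOLD:
        return None
    row, col = np.unravel_index(readings.argmax(), readings.shape)
    return int(row), int(col)

# Simulate one scan of a 4x4 patch of fabric with a firm touch at row 2, col 1
scan = np.zeros((4, 4))
scan[2, 1] = 0.9
print(locate_touch(scan))  # -> (2, 1)
```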

Eye tracking

Computers can’t tell what you’re thinking. Not yet, anyway — more on that in a bit. But they can tell where you’re looking. The technology known as eye tracking does just that, monitoring the position and appearance of your eyeballs to determine what you’re looking at, and for how long. The concept goes back a surprisingly long way, actually, and includes some delightful weirdness concerning aluminum contact lenses.

Most contemporary systems track your gaze by bouncing infrared light off the eye, capturing the reflection with a separate camera, then using heavy-duty math to determine eye position. These days, eye-tracking tech comes in many forms, from high-tech VR systems to marketing research kits that use cameras to see which products catch your eye. Such systems can also be used as input devices: Rest your gaze on a particular onscreen button, for instance, and the system figures out what you want to click. This allows for slow but steady no-hands typing, and in recent years the approach has been adapted as an assistive input method for people with disabilities.
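The selection logic itself is simple enough to sketch. Whatever tracker supplies the gaze coordinates, the trick is a dwell timer: if your gaze rests inside a small region long enough, that counts as a click. In the sketch below, get_gaze_point and on_click are hypothetical placeholders, not any vendor’s real API.

```python
# "Dwell to click": treat a steady gaze as a mouse click.
import time

DWELL_SECONDS = 1.0  # how long the gaze must rest to count as a click
RADIUS_PX = 40       # how far the gaze may drift and still count as resting

def dwell_click(get_gaze_point, on_click):
    anchor, started = None, None
    while True:
        x, y = get_gaze_point()  # current on-screen gaze coordinates
        near = anchor and abs(x - anchor[0]) <= RADIUS_PX and abs(y - anchor[1]) <= RADIUS_PX
        if near:
            if time.monotonic() - started >= DWELL_SECONDS:
                on_click(anchor)            # gaze held long enough: click
                anchor, started = None, None
        else:
            anchor, started = (x, y), time.monotonic()  # gaze moved: restart timer
        time.sleep(0.02)  # poll at roughly 50 Hz
```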

This particular input device option may be coming to the workplace sooner rather than later. Microsoft recently announced it will support eye tracking in the next major update to Windows 10. It’s a safe bet that improved eye tracking will combine with voice and gesture recognition in future input devices.

Whither the keyboard?

Since the dawn of the PC, the keyboard has been the primary input device for those old-fashioned modes of communication — you know, language and words — that we used before the age of emoji. It’s entirely likely that we will continue to use keyboards — but they’re definitely getting smarter.

Much of the work on this front is happening behind the keys, so to speak, in advanced software that provides autocorrect and predictive-text functions. For touchscreen keyboards on your phone or tablet, commercial apps such as Swype, TouchPal and Fleksy use machine-learning algorithms and innovative gesture-control options to make typing on miniature keyboards faster and more intuitive.
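Under the hood, predictive text starts with a simple question: given the word you just typed, what usually comes next? Commercial keyboards answer it with heavyweight machine learning, but a toy bigram model shows the shape of the idea.

```python
# Toy bigram predictor: a crude ancestor of predictive-text keyboards.
from collections import Counter, defaultdict

def train(corpus):
    """Count which words follow which in the training text."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict(model, word):
    """Suggest the most frequent follower of `word`, if any."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train("the quick brown fox jumps over the lazy dog and the quick brown cat")
print(predict(model, "the"))    # -> 'quick'  ('quick' follows 'the' twice, 'lazy' once)
print(predict(model, "quick"))  # -> 'brown'
```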

As for traditional keyboards, we’ve already seen space-age riffs on the template such as projection keyboards, paper keyboards, fold-out keyboards, roll-up keyboards and even invisible keyboards. Technologically, we’ve taken the device about as far as it’s going to go. As such, the best future keyboards are likely to value form over function. To wit: The crowdfunded Azio Retro Classic is a mechanical-digital hybrid keyboard made from leather and zinc alloy. General consensus in the nerd community is that it’s objectively and empirically the single coolest keyboard ever assembled.

Brain-computer interface

Proponents of the singularity theory, and many old-school sci-fi fans, will tell you that the fusion of man and machine isn’t just possible; it’s inevitable. We will subsume our computers, or they will assimilate us, and the next stage in mankind’s evolution will begin. Either that or we’ll all be killed.

But let us remain optimistic and look at the input-device aspect of all this. Brain-computer interface, or BCI, is the emerging designation for systems that establish a direct connection between the brain and an outside device. This technology is further along than you might think. Commercial headsets are already available that can read brainwaves right through your scalp, then translate those brainwaves into specific inputs for a computer, game system or other device.

The U.K. company MyndPlay, for example, has developed an entire line of games and apps that use an EEG headset to pick up and interpret specific electrical signals in the brain. The U.S. company NeuroSky hawks a similar line of EEG biosensors, including the MindWave Mobile. These devices have rather limited functionality as of now, largely because users generally prefer to have brainwave sensors placed outside their skull. What can we say? Some people are irrationally squeamish about cranial implants. If that’s not a concern for you, read on.
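What are those headsets actually doing? Broadly speaking, reading tiny voltages from the scalp and watching the power in particular frequency bands. The sketch below shows the gist: estimate alpha-band (8 to 12 Hz) power with a Fourier transform and fire a command when it spikes. The sampling rate, threshold and read_eeg_window function are hypothetical placeholders, not NeuroSky’s or MyndPlay’s actual APIs.

```python
# Toy EEG band-power trigger: the rough shape of consumer BCI input.
import numpy as np

SAMPLE_RATE_HZ = 256       # hypothetical headset sampling rate
ALPHA_BAND = (8.0, 12.0)   # alpha waves, prominent when relaxed

def alpha_power(samples):
    """Mean spectral power in the alpha band for one window of raw EEG."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE_HZ)
    band = (freqs >= ALPHA_BAND[0]) & (freqs <= ALPHA_BAND[1])
    return spectrum[band].mean()

def run(read_eeg_window, on_trigger, threshold=1e4):
    """Poll windows of EEG samples and fire a command on strong alpha."""
    while True:
        window = read_eeg_window()           # e.g., one second of raw samples
        if alpha_power(window) > threshold:  # relaxed, eyes-closed signature
            on_trigger()                     # e.g., select, pause, click
```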

Telepathic typing

In early 2017, scientists at Stanford unveiled a groundbreaking BCI system that allows paralyzed patients to type up to 39 characters per minute via direct brain control. Using sophisticated decoding algorithms, the system translates neural signals into point-and-click commands on a computer screen.

The downside: The system requires a small electrode array to be surgically implanted onto the surface of the brain, then connected to an outside device via cable. But the Stanford team is confident that it will eventually be able to make the system wireless, and possibly even refine it to the point where surgical implants won’t be required at all.

Innovations such as the Stanford system suggest that the future of input devices is going to get very ambitious indeed. Several labs around the world are working on establishing direct brain-computer connections, ideally without the cranial surgery. For a sneak peek at one company’s bold vision, check out this recent profile of the New York company CTRL Labs, which may be on the verge of a startling BCI breakthrough.

The world of computer input devices is changing rapidly, and the old traditions are fading fast. So be nice to your poor little mouse, won’t you? It’s likely to be a museum relic soon. Maybe give it some fresh batteries. They like that.