Input Devices: There Is No Holy Grail

Many different input devices already exist for wearable computers, but no single device is right for everyone. A wearable computer must be tailored to its user much as a fine suit is tailored to its wearer. Correspondingly, wearable users should stay open-minded and adventurous about new interfaces in order to find the combination that works best for them.

Speech

While many people concentrate on speech recognition, this interface can be severely limiting during a conversation or conference, or whenever privacy is required. Current speech recognition systems depend on noise-canceling microphones worn close to the mouth. Even state-of-the-art large-vocabulary speech recognizers depend on such microphones, and may even depend on a particular model of microphone, to achieve acceptably low error rates. Thus, these systems cannot be used as running transcription machines. Even if this were possible, most transcriptions would not be worth storing because of the amount of searching it would take to find the salient parts of the conversation later. Such editing cannot be done during the conversation, because the user would have to know a priori what his companion would say.
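To make the searching problem concrete, consider what it would take just to pull the salient passages out of a stored transcript after the fact. The sketch below is purely illustrative: the timestamped segment format and the keyword list are my own assumptions, not part of any real transcription system.

    # Minimal sketch: scoring timestamped transcript segments by keyword
    # hits to surface "salient" passages after the fact. The segment
    # format and the keywords are hypothetical, for illustration only.

    from dataclasses import dataclass

    @dataclass
    class Segment:
        start_s: float   # start time in seconds
        text: str        # recognized speech for this span

    def salient_segments(segments, keywords, min_hits=1):
        """Return (score, segment) pairs for segments mentioning keywords."""
        results = []
        for seg in segments:
            words = seg.text.lower().split()
            score = sum(words.count(k.lower()) for k in keywords)
            if score >= min_hits:
                results.append((score, seg))
        # Highest-scoring passages first.
        return sorted(results, key=lambda pair: pair[0], reverse=True)

    if __name__ == "__main__":
        transcript = [
            Segment(0.0,  "so how was the demo yesterday"),
            Segment(12.4, "the deadline for the paper is next friday"),
            Segment(30.1, "anyway did you see the game last night"),
        ]
        for score, seg in salient_segments(transcript, ["deadline", "paper"]):
            print(f"[{seg.start_s:7.1f}s] ({score} hits) {seg.text}")

Even this toy version shows the catch: the user must already know which keywords mattered, which is exactly the a priori knowledge the paragraph above argues nobody has during a conversation.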

This does not mean that voice does not have a place in wearable computing. In fact, Christopher Schmandt's "Voice Communication with Computers" (Van Nostrand Reinhold, New York) gives an excellent overview of practical uses of voice. Furthermore, speech has recently become a very interesting interface for wearables, since wearable-class computers are now capable of running some of the more powerful speech recognizers in the industry.

Handwriting

The PDA market has pushed the pen as the primary input medium ever since such devices were introduced. However, as a researcher in the field, I find handwriting unacceptable. Today's PDAs have the following problems:

Keyboards

So far, keyboards on notebook and pocket computers have been either too large for convenience or too small to use. This is a direct result of assuming that the standard QWERTY keyboard is good for portable computing. Manufacturers are afraid that it would take users too long to learn a new way of typing. However, some of the one-handed chording keyboards, for example the Twiddler, can be taught to anyone in about 5 minutes. They are certainly much easier to learn than the QWERTY interfaces (the letters actually go in order, "abcdef...", but are arranged so that speed is not particularly limited). In an hour, a beginner can be touch typing. In a weekend, a speed of 10 words/minute can be reached, and shortly after that 35+ words/minute (my personal rate is around 50 words/minute with a macro package). The Twiddler comes from HandyKey (addresses are included at the end of this article) and even includes a tilt-activated mouse.

The Twiddler is but one of many one-handed designs. Some, like the Chord Keypad (TM), allow instant access to both hands if necessary (the Twiddler straps onto one hand), a feature that may be very desirable in medical fields. Furthermore, this type of device uses only 7 keys, pressed in combinations (chords) as the sketch below illustrates, so the keys can even be integrated into clothing.

In any case, these devices allow the use of full-featured keyboards anywhere, including walking down the street in the rain. When finished, the user can stick them in a pocket or leave them on the belt for easy, instant access. Not only are these keyboards convenient, they do not require much CPU power (unlike handwriting), always correctly recognize a user's input, and can take an amazing amount of abuse (I have kicked mine into a door, stepped on it, gotten it wet, etc.).
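For readers unfamiliar with chording, here is a minimal sketch of how a 7-key chorder might map simultaneous key presses to characters. The bit layout and chord table are invented for illustration; they are not the Twiddler's actual layout.

    # Minimal sketch of chord decoding for a hypothetical 7-key chorder.
    # Each key is one bit in a 7-bit mask; a "chord" is the set of keys
    # held down together. This table is invented for illustration and
    # is NOT the Twiddler's actual layout.

    CHORD_TABLE = {
        0b0000001: "a",
        0b0000010: "b",
        0b0000011: "c",   # keys 1 and 2 pressed together
        0b0000100: "d",
        0b0000101: "e",
        0b1000000: " ",   # a dedicated thumb key for space
    }

    def decode_chord(mask: int) -> str:
        """Return the character for a 7-bit chord mask, or '' if unmapped."""
        return CHORD_TABLE.get(mask & 0b1111111, "")

    if __name__ == "__main__":
        # A stream of chord masks, as if read from the keyboard on key release.
        for mask in (0b0000011, 0b0000001, 0b1000000, 0b0000010):
            print(decode_chord(mask), end="")
        print()  # -> "ca b"

With 7 keys there are 2^7 - 1 = 127 possible chords, more than enough for a full character set plus modifiers, which is why so few keys suffice.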

Video

Tactile

Direct Neural

This section is in response to a question posed during a New Scientist interview: "Will wearable computing move on to direct neural interfacing? If so, on what timescales, and what are the immediate barriers?"

Neural prostheses are interesting, but I don't see an application in wearable computing except for the disabled. The most obvious suggestion for a neural prosthetic is an improved display interface. For some applications, my current display (the Private Eye, available from the Phoenix Group) is too obtrusive. Thus, many people feel that going directly to the visual cortex (at the back of the head) with electrodes is the answer. Unfortunately, this does not work too well. First, any electrode that is pushed into neural tissue will cause the immediate tissue to die over time. It is possible to sever nerves and allow them to regenerate through electrode rings, which may provide a stable connection, but the problem is finding the right nerve and hoping it regenerates properly. Needless to say, this is difficult in an area as densely packed as the visual cortex.

There is yet another obstacle to the visual cortex interface. Stimulation in this area does not produce a one-to-one mapping in the visual field when trying to display more than one spot at a time, because five layers of visual processing are already performed at the retina. In fact, looking into someone's eyes is the easiest way to see their brain! Thus, until we understand visual physiology better, the best place for a prosthetic display is in the eye. However, the retina is very fragile, and interfacing to it has not, as yet, proved fruitful. A more promising idea is a display that can be worn like a contact lens or mounted inside the eye without touching the retina. Phil Alvelda's work on microdisplays may make this method possible (web address). Also, non-intrusive displays are already being made smaller and less obtrusive. Therefore, the market for neuroprosthetic displays seems limited to those people who can see no other way.

Aural neuroprostheses already exist; however, these too are limited to medical conditions. Pushing electrodes into the inner ear tends to destroy the normal structures for hearing. The best case for these cochlear implants, to my knowledge, is a five-electrode device that is generally billed as an aid to lip reading. For those with normal hearing, small bone-conduction earphones/microphones are the obvious choice.

Having covered sound and vision, we are left with touch, taste, and smell. We are just beginning to understand smell and taste, and I am not qualified to talk about prosthetics for those senses. Touch, however, has some potential: its nerve bundles are more accessible. Once again, though, less intrusive interface mechanisms may limit the market for such devices. The proprioceptive system can be fooled by muscle vibration, and some of the VR glove equipment has shown that piezoelectric elements can provide a sort of tactile feedback (see the sketch below).
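As an illustration of the feedback side, here is a minimal sketch of building a drive waveform for a piezoelectric tactile element. The sample rate, burst frequency, and driver interface are all assumptions; real hardware would need a DAC or PWM amplifier stage, which is glossed over here.

    # Minimal sketch: building a drive waveform for a piezoelectric
    # tactile element. The "driver" here just fills a sample buffer;
    # a real device would need a DAC/PWM stage, which is assumed away.

    import math

    SAMPLE_RATE = 8000  # samples per second (assumed)

    def tactile_burst(freq_hz=250.0, duration_s=0.05, amplitude=1.0):
        """One vibration burst: a sine at freq_hz. Roughly 250 Hz is
        near the peak sensitivity of the skin's vibration receptors."""
        n = int(SAMPLE_RATE * duration_s)
        return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
                for t in range(n)]

    def pulse_pattern(n_pulses, gap_s=0.05):
        """Encode a small number as n_pulses distinct taps with silent gaps."""
        silence = [0.0] * int(SAMPLE_RATE * gap_s)
        samples = []
        for _ in range(n_pulses):
            samples += tactile_burst() + silence
        return samples

    if __name__ == "__main__":
        buf = pulse_pattern(3)  # e.g., "three taps" to signal a third event
        print(f"{len(buf)} samples, {len(buf) / SAMPLE_RATE:.2f} s of output")

Even this crude tap-counting scheme hints at how a tactile channel could deliver simple notifications without claiming the eyes or ears.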

A much better case for neuroprosthetics is computer input. Research in bionics shows that muscle activation signals can be used to control prosthetic arms, and EEGs can be used, albeit slowly, for input as well. Thus, it is easy to imagine a typing method driven by neuro-electrical signals; a sketch of the idea follows. Furthermore, neuro-devices for sensing the mood of the user (and having the computer adapt appropriately) may also be useful. However, the encumbering wires of EEGs and the medical community's reluctance to cut into functioning tissue will probably limit the availability of these sorts of neuroprosthetics for some time to come.
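To make the typing idea concrete, here is a minimal sketch of turning a rectified muscle-activation (EMG) signal into discrete "key" events by simple thresholding. Everything here is an assumption for illustration: the threshold, the debounce window, and the fake signal trace. A real system would face far noisier data.

    # Minimal sketch: turning a rectified muscle-activation (EMG) signal
    # into discrete on/off "key" events by thresholding. The threshold,
    # debounce window, and signal source are assumptions for illustration.

    THRESHOLD = 0.5      # activation level counted as "muscle on" (assumed)
    MIN_ON_SAMPLES = 3   # debounce: ignore blips shorter than this

    def detect_presses(samples):
        """Yield (start_index, length) for each sustained activation."""
        start = None
        for i, level in enumerate(samples):
            if level >= THRESHOLD and start is None:
                start = i
            elif level < THRESHOLD and start is not None:
                if i - start >= MIN_ON_SAMPLES:
                    yield (start, i - start)
                start = None
        if start is not None and len(samples) - start >= MIN_ON_SAMPLES:
            yield (start, len(samples) - start)

    if __name__ == "__main__":
        # Fake rectified EMG trace: two real activations and one short blip.
        trace = [0.1, 0.2, 0.9, 0.8, 0.7, 0.1, 0.6,
                 0.1, 0.9, 0.9, 0.8, 0.9, 0.2]
        for start, length in detect_presses(trace):
            print(f"press at sample {start}, held {length} samples")

Short versus long holds could then be mapped to characters much as chords are on a keyboard, though the bandwidth would be far below even a novice Twiddler user's.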

More sophisticated devices, such as augmenting the brain directly with computer memory, seem decades away at least. We simply do not understand enough of the brain for such an undertaking.

The question was "will wearable computing move to neuroprosthetics, and what will the barriers be?" First, a clear need for the invasive technology has to be demonstrated; at this stage I do not see the need except for the disabled. Second, a finely controllable, long-term method for interfacing to neural bundles will have to be invented, which will take many years. Finally, the reluctance of the medical community to operate except in cases of illness or for cosmetic reasons will have to be overcome. This may prove a harder step than most people realize.


Last modified: Thu Jul 20 01:14:07 1995