We seem to be on the verge of an important change in computer interfaces. The mighty keyboard has dominated human-to-computer interaction for the last 50+ years: first in the form of a teletype, then in more direct ways. Output was more complicated, and moved ahead a bit faster, with the jump from cards to printers and finally to the still dominant (but slowly fading) cathode-ray tube.
Anyway, computer history is not linear but exponential, and while the mouse started shaking its tail a mere twenty-something years ago, it has become as ubiquitous as its cousin the keyboard, and certainly more fashionable.
But in recent years we have been seeing a lot of wannabe interfaces trying to win over this dynamic duo. Technology declared the CRT obsolete and it is being replaced by LCDs, which may in turn be in the process of being replaced by OLEDs, but the important thing is that this has sparked a myriad of new screen formats, from 19" notebooks to 2" portable video players. In one corner, multi-monitor setups are gaining adoption, while many of us carry several tiny screens where we keep up with our schedules, calls and videos.
Pointing at screens, first with a stylus and then with a finger, is so common that many are pushing for the next step: letting you use as many fingers as you want, from the talked-about-ad-nauseam iPhone to the still-niche Microsoft Surface. But really, if you try any of these devices (I played with a prototype), you see there are a lot of great uses they can enable.
Still, I can't see touch screens of any kind winning over the mouse any time soon. I can't imagine having to touch my notebook screen all the time. The small arrow has the nice property of not hiding anything, and your hand can move the mouse over the table, so you don't get tired too soon. But maybe someone smarter will figure out a better way than the mouse...
The keyboard seems a more difficult rival to me. Even if voice recognition improves a LOT, and even for a lousy typist like me, I guess the keyboard is more practical. For people who mostly write prose documents, voice can be a great complement, but I can't imagine how to edit text easily with voice commands. And definitely, I can't think of programming by voice... can you? Just try to read a piece of code aloud...
In any case, these innovations and additional input and output devices will be increasingly common, even if they still can't beat old QWERTY and Mickey. And we as developers have to start thinking about them more seriously. I guess the days of drop-down menus are over. Even desktop application interfaces are moving away from them. Look at Office and its ribbons. Even if you don't like them much, I think they are more effective than the previous cluttered menus.
Well, maybe what we have to think more and more about is separating the interface from the logic, but this time for real. While we have grown accustomed to tiered applications, we still have a tendency to think of menus and data-entry screens as the UI when we think about our domain model. I noticed that some time ago when I had to provide an IVR interface (phone-based, with a mix of voice and button commands) for a system. My team had to make some changes to the business objects to make them more usable from something other than a window or a web page.
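To make the idea concrete, here is a minimal sketch (with entirely hypothetical names — `AccountService`, `render_web`, `render_ivr` are illustrations, not the actual system mentioned above) of a business object that knows nothing about its presentation, so thin adapters can expose the same logic to a web page or an IVR menu alike:

```python
from dataclasses import dataclass


@dataclass
class Account:
    owner: str
    balance_cents: int


class AccountService:
    """Pure domain logic: no knowledge of windows, web pages, or phones."""

    def __init__(self, accounts):
        self._accounts = accounts  # dict mapping owner name -> Account

    def balance(self, owner: str) -> int:
        return self._accounts[owner].balance_cents


# Thin adapters translate the same domain result into each interface's idiom.
def render_web(service: AccountService, owner: str) -> str:
    # A web page wants markup.
    return f"<p>Balance for {owner}: ${service.balance(owner) / 100:.2f}</p>"


def render_ivr(service: AccountService, owner: str) -> str:
    # An IVR wants a spoken prompt and a key-press menu.
    dollars, cents = divmod(service.balance(owner), 100)
    return f"Your balance is {dollars} dollars and {cents} cents. Press 1 to repeat."
```

The point is that `AccountService` never changes when a new front end shows up; only a new adapter is written. Had the business objects above been built this way from the start, the IVR would have required no changes to them at all.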
Windows Presentation Foundation, and now Silverlight, as Flash has been doing for some years, make the need for this separation clearer, and we can see separate tools, with some shared capabilities in the middle, for both interaction designers and programmers. I avoid saying "graphic designers" on purpose, because for a while the web was full of graphic designers producing nightmarish interfaces because they were not prepared for the job. Interaction design is much more than that. I guess this could be the next hot area for the upcoming generations, together with our now-old profession.