Every year I greet a new group of computer science students who have signed up for my HCI course. By the end of the semester, most of them will have a reasonable grasp of the basics of HCI, and some will even be enthusiastic about the topic. Projects turned in by student teams have included a voice-controlled video game, a gesture-controlled Web browser, a social networking application for gamers, and a variety of personal information organizers, on the desktop as well as on cell phones and other mobile devices.
Over the past ten years or so I've noticed students becoming more interested in applications that push the bounds of what's currently possible. The projects generally target what Jonathan Grudin calls discretionary hands-on use (Three Faces of Human-Computer Interaction, IEEE Annals of the History of Computing, 2005). That is, students are less interested in building a better calendar system, financial planner, or electronic voting ballot; they look to applications and devices that fit into the natural and often optional activities of our everyday lives. How can I contact my friends? Could I play a familiar game in a different way? What would people like to do with their mobile phones that isn't easy to do now?
In some of these applications, the context of use is critical. Surprisingly, it's not always easy to translate experiences with the real world into insights about the design of interactive software. I'll ask my students, “Have you ever heard the phrase 'stupid user'? [Everyone has.] Have you ever put salt in your coffee instead of sugar, because the paper packets look similar? [Many have.] Did you think of yourself as a stupid condiment user?” Of course not; that would be ridiculous. In just the same way, the mistakes we make in using computers are much more often the fault of the designer than the fault of the user.
More detailed examples that students tell me about contain similar ideas:
I work as a technician in a veterinary hospital. We give puppies a vaccine called DHLPPC, which requires booster shots later. Some dogs have an allergic reaction to the Lepto virus (the L part of the vaccine), so later shots use the DHPPC vaccine (without the L). The problem is that both vaccines come in bottles that are the same size and have labels that are almost identical, except for tiny print on one reading “without the Lepto virus.” The wrong vaccine can be deadly, but the labels make it really easy to make a mistake.
I work in a bank as a teller. Sometimes I'll go to the vault to pull out a stack of bills, which are wrapped in straps. The straps are colored and labeled with the total amount in the stack. The problem is that some of the straps are wrapped around different denominations of bills, but they're colored the same and have the same total dollar value. So if I'm in a hurry, I might want $1000 in $50s but I'll get $1000 in $100s by mistake, because it's hard to tell the difference between them when I'm rushed.
The shower faucet in my bathroom has a lever with a circle going around it. There's an Off label at the bottom. A blue arrow labeled Cold curves up on the left side, and a red Hot arrow curves down on the right. The problem is that the lever must be turned in the direction opposite the arrows to turn on the water, and the labels are only relative: you turn one direction for colder water and the other for hotter. So if you just position the lever over the Hot label, only cold water comes out.
Every semester, a few of the prototype user interfaces my students turn in include labels and icons that are too small or too similar to distinguish at a glance. This often happens with simulations of handheld devices, which may include closely spaced icons and small text labels. That layout can be fine for a mouse-driven desktop application, but it's prone to error on a touch or handheld keypad interface. (Sometimes a simulated touch screen interface will even include roll-overs, in which an icon changes its visual appearance when the pointer moves over it without selecting it. The developers are usually surprised to realize that touch interfaces, where there is no pointer hovering between taps, can't support this type of feedback.) Misleading icons and text are just as common, when students haven't thought hard enough about the mental models that users may bring to the use of their application.
These are straightforward problems that are well-addressed by evaluation in HCI, and students in my class usually find the results compelling: “One of the users we worked with said that our prototype didn’t have X. It actually did, but none of the users could find it.” To those who still wonder whether the attention to usability issues is worth the effort, I can say, “Imagine building an application that turns out to be popular with, say, 10,000 users. Now think about the last thing that you used—a faucet, a plastic container, even a toy—that was a bit too complicated and took you an extra five seconds to figure out. If you could save five seconds for every one of your application’s users, how many hours of your individual effort would it take to balance that out?” It’s not a perfect argument, but it does bring home the amount of leverage that good design can apply.
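The arithmetic behind that argument is worth making explicit. A minimal sketch, using the hypothetical figures from the text (10,000 users, five seconds saved per user):

```python
# Back-of-the-envelope calculation: total user time saved by a small
# usability fix, versus the designer's effort to make it.
# The figures are the illustrative ones from the text, not real data.
users = 10_000
seconds_saved_per_user = 5

total_seconds = users * seconds_saved_per_user
total_hours = total_seconds / 3600

print(f"Time saved across all users: {total_hours:.1f} hours")
```

Five seconds per user adds up to roughly fourteen hours of users' time, and that's for a single interaction; a fix that takes the designer an afternoon pays for itself many times over as soon as the saved step is repeated.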