This is a semi-random assortment of questions I've answered on Quora. I'm still getting used to the platform. I've copied my thoughts here, lightly edited, in case anyone's interested.
On computer science
Who are the real heroes in computer science history and why?
In Computing for Ordinary Mortals, I made an effort to mention people I thought everyone (not just computer scientists) should know about. I missed some important people, especially in systems, but I think all of these are good candidates (with the context in which I mentioned them in parentheses, though that doesn't capture all of their contributions):
Vannevar Bush, Ivan Sutherland, Doug Engelbart, Tim Berners-Lee (big ideas in interactive computing); Richard Hamming; Charles Babbage, J. Presper Eckert, John Mauchly (computing history); John von Neumann, Donald Knuth (architecture); C. A. R. Hoare (algorithms); E. F. Codd (databases); John Backus, Alan Kay, the ENIAC women (programming); J. C. R. Licklider, Leonard Kleinrock, Robert Taylor, Lawrence Roberts, Robert Kahn, Vinton Cerf (networking); Alan Turing, Alonzo Church, Stephen Cook (theory); Allen Newell, Herbert Simon, Nils Nilsson, Alan Robinson (artificial intelligence).
What are some good computer science books for starters?
There are a few good starter books about computer science, depending on your interests. Some excellent textbooks have been recommended; I'll add some in the genre of popular science (which is very limited in the area of computer science).
Danny Hillis gives a good overview in The Pattern on the Stone, which covers the basics of logic, programming, algorithms, and Turing machines, and moves on to speculate about the future of computing.
Charles Petzold's Code works its way up from bits to computer architecture, in clear detail; it's a nice introduction with a computer engineering flavor to it.
Paul Rosenbloom's On Computing argues that "computing is a great scientific domain on a par with the physical, life, and social sciences." It's a wide-ranging, philosophical perspective on the field, a rare effort.
And (with apologies for the self-promotion) I'll mention my own book, Computing for Ordinary Mortals. I thought, "What are the most important ideas in computer science, and how could they be explained to readers without a technical or mathematical background?" There weren't any other big-picture books out there, aside from textbooks, so I wrote one.
On human-computer interaction and embodied cognition
What books are required reading for students studying HCI?
Another commenter has good recommendations for books. I'll offer a few more:
- Card, S. K., Moran, T. P., and Newell, A. (1983). The Psychology of Human-Computer Interaction, LEA. This book gives the classic perspective on HCI as human information processing. A cognitive scientist friend described this approach as the best candidate for creating a science of HCI.
- Dourish, P. (2004). Where the Action Is: The foundations of embodied interaction, MIT Press. This book explores some of the relationships between HCI and philosophy that are relevant to how we interact with computers in the real world.
- Dix, A., Finlay, J., Abowd, G., and Beale, R. (2004). Human-Computer Interaction, third edition, Prentice Hall. This textbook gives a good introduction to modeling approaches to HCI.
- Baecker, R., Grudin, J., Buxton, B. and Greenberg, S. (ed.) (1995). Readings in Human Computer Interaction: Towards the Year 2000, second edition, Morgan Kaufmann. I like the historical perspective, an early section in this collection, and the editors have included papers that represent foundational work across the entire field.
Something I'd like to see but haven't come across is a book or collection on the broad history of ideas in HCI, which would include Bush's "As we may think," Sutherland's "The ultimate display," Engelbart's "Augmenting human intellect," and so forth, putting it all together. That might be asking too much for a single book, though.
What are the best resources for quickly learning the core fundamentals of UI and UX design?
I don't really think we're far enough along to have a science of UI/UX analogous to what's conveyed in a book like Structure and Interpretation of Computer Programs. Recall a line in the introduction to SICP:
...procedural epistemology--the study of the structure of knowledge from an imperative point of view... Computation provides a framework for dealing precisely with notions of "how to."
For UI/UX, we'd be interested in the science (or epistemology--a theory of knowledge) of human behavior in the context of interacting with computers. As mentioned by others, we're not talking about the behavior of abstract mathematical constructs or predictable machines; that makes analysis much harder. UI/UX is a very broad area, and we don't yet know a lot about it. I like Can Duruk's suggestions about the topic; here are a few more resources that take a formal/scientific perspective:
- Card, S. K., Moran, T. P., and Newell, A. (1983). The Psychology of Human-Computer Interaction, LEA. Interacting with computers as human information processing, based on the Model Human Processor and a set of principles governing decision making, learning, and so forth.
- Carroll, J. M., ed. (2003). HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science, Morgan Kaufmann. A wide range of approaches to understanding interaction, from the cognitive side (as with CMN, above), to formal modeling, to social psychology, to theories of work, ... It's a good introduction.
Work like this is about the scientific findings that get compiled into design guidelines, but the connections are often tenuous, given how fast UI and UX move in the real world. There's an analogy to the distinctions between science, engineering, and design. A physicist understands electricity, but I wouldn't ask a physicist to handle the electrical wiring in my house; an electrical engineer could probably do this but wouldn't necessarily make the best decisions about where the light fixtures and switches should go; on the other hand, I'd hope that a designer/architect would have some knowledge of the science and engineering so as not to produce an unworkable design.
And going back to UI/UX: some areas are just hard to address. For example, we have some instruments to measure user satisfaction, one of the core components of usability, and we can sometimes explain why some users find a given application satisfying to use or not, but predicting it in advance from first principles? That's not yet possible in general.
Are you familiar with the concept of embodiment in HCI?
There are a few good starting points for learning about embodied cognition. I'll mention some that have shaped my thinking about the topic.
As already mentioned, Paul Dourish's Where the Action Is is a good introduction to the topic. Also relevant is Rob Jacob's work on reality-based interaction, a framework for understanding interaction in terms of four themes: naive physics, plus awareness and skills of the body, environment, and social context. Earlier influences on HCI are Ed Hutchins's work, especially his book Cognition in the Wild, about distributed cognition, and Terry Winograd's Understanding Computers and Cognition: A New Foundation for Design.
For a psychological perspective, there's Margaret Wilson's "Six Views of Embodied Cognition," which separates and evaluates six different ways of interpreting embodiment, the most relevant for HCI being "(1) cognition is situated; ... (3) we off-load cognitive work onto the environment; (4) the environment is part of the cognitive system; (5) cognition is for action..." It's useful for understanding the potential implications of embodiment for HCI.
There's also been a good deal of work in philosophy that's worth reading; the concept of embodiment has a long history. I can't make much sense of Heidegger directly, but Terry Winograd has written about "Heidegger and the Design of Computer Systems," which (if I remember correctly) explains the relevance of such concepts as being-in-the-world and thrownness. Any of Andy Clark's books is worth picking up, though his thoughts on embodied cognition are more directly applicable to AI and robotics than HCI; his insights are still helpful. And then there's ecological psychology for thoughts about James Gibson's concept of affordance. I think Gibson himself is good to read, specifically The Ecological Approach to Visual Perception, because it's widely talked about, but Don Norman's cautionary writing on affordances (Affordances and Design) is important as well.
I'm probably missing some obvious references; if I think of any I'll come back to add them.
What is the practical relevance of embodied cognition theories for human computer interaction design?
I'm not a designer or a cognitive scientist, but I've read a bit about embodied cognition. I think that the concepts associated with embodied cognition (Wilson gives a nice breakdown) can influence UI design in a couple of important ways. They can suggest new designs, and they can explain why some designs are effective. I'll give examples from my own work.
In the late 1990s I was interested in mouse gestures, and my research group came up with an idea for what we called a flick gesture: mouse down on an object in a graphical user interface, then flick the mouse in a given direction just before the mouse up. The analogy was to putting your finger on an object on a tabletop, like a penny, and flicking it to slide toward a target. We ran experiments to see whether the event could be reliably detected, how long it took, how accurately users could flick in a given direction, and so forth. It turned out to be a reasonable idea, and a couple of years later (though probably not influenced by our work) the same gesture appeared in the Opera Web browser for forward and backward navigation.
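For the curious, here's roughly what detection amounts to, as a minimal sketch rather than the code from our actual study; the thresholds, the event format, and the function name are all assumptions for illustration.

```python
import math

# Toy flick detector (illustration only). Assumes we get (timestamp, x, y)
# samples between mouse-down and mouse-up from a GUI toolkit's motion events.

FLICK_SPEED = 300.0   # pixels/second; assumed speed threshold
FLICK_WINDOW = 0.08   # only look at the last 80 ms before mouse-up

def detect_flick(samples):
    """Return the flick direction in degrees, or None if no flick occurred.

    samples: list of (t, x, y) tuples, ordered by time, ending at mouse-up.
    """
    if len(samples) < 2:
        return None
    t_end = samples[-1][0]
    # Keep only the samples just before release; a flick is a burst of speed there.
    recent = [s for s in samples if t_end - s[0] <= FLICK_WINDOW]
    if len(recent) < 2:
        return None
    (t0, x0, y0), (t1, x1, y1) = recent[0], recent[-1]
    dt = t1 - t0
    if dt <= 0:
        return None
    dx, dy = x1 - x0, y1 - y0
    speed = math.hypot(dx, dy) / dt
    if speed < FLICK_SPEED:
        return None  # too slow: an ordinary drag, not a flick
    return math.degrees(math.atan2(-dy, dx)) % 360  # screen y grows downward

# Example: a quick rightward motion just before release reads as a 0-degree flick.
samples = [(0.00, 100, 100), (0.05, 104, 100), (0.10, 160, 100)]
print(detect_flick(samples))  # -> 0.0
```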
It's reasonable to say that we didn't need to be thinking about embodied cognition to do this work, and in fact we didn't even mention it in the paper, as far as I remember. But it was our inspiration...
Here's another example, on the explanation side. A few years later I bought a Roomba vacuum cleaner, which came with an "invisible wall" device, an infrared beam that prevented the robot from moving past it. I started to think about how end users might control or direct the robot's behavior. My research group set up a simple simulation environment in a graphical user interface, a maze, with colored tokens at the intersections. The simulated robot mapped specific tokens to actions, in condition-action rules: on seeing a green square at an intersection, go straight; on seeing a purple circle, go left; and so forth.

In an experiment, we gave participants a simulated robot with an incorrect program, and we asked them to fix things so that the robot could move from its starting point to the goal point in the maze. The participants had the choice, though, of either changing the token for a given rule (e.g., change "on seeing a green square, go straight" to "on seeing a red triangle, go straight"), or of changing the token at an intersection in the maze itself. (If this isn't clear enough, I've described the experiment elsewhere.) That is, what do people do when given the choice between fiddling with a robot's internal program or changing the environment so that it's a better match for what the robot is already programmed to do?

We found some interesting patterns. For example, in this environment, changes to the program affect the robot's global behavior over the entire maze, while changes to the environment produce local effects, and participants tended to start with global changes to the program and then fine-tune with local changes to the environment once the program was close to being a solution.
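To make the setup concrete, here's a toy sketch of the condition-action idea in code; the token names, maze encoding, and rule format are my assumptions for illustration, not the software we actually used.

```python
# Condition-action rules: token seen at an intersection -> action to take.
program = {
    "green_square": "straight",
    "purple_circle": "left",
    "orange_triangle": "right",
}

# The environment maps intersection coordinates to the token placed there.
environment = {
    (0, 0): "green_square",
    (0, 1): "purple_circle",
    (1, 1): "green_square",
}

DIRECTIONS = ["north", "east", "south", "west"]  # clockwise order
MOVES = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}

def step(position, heading, program, environment):
    """Apply one condition-action rule: read the token at the current
    intersection, turn accordingly, and move one cell."""
    token = environment.get(position)
    action = program.get(token, "straight")  # unknown token: keep going straight
    i = DIRECTIONS.index(heading)
    if action == "left":
        heading = DIRECTIONS[(i - 1) % 4]
    elif action == "right":
        heading = DIRECTIONS[(i + 1) % 4]
    dx, dy = MOVES[heading]
    return (position[0] + dx, position[1] + dy), heading

pos, facing = (0, 0), "north"
for _ in range(3):
    pos, facing = step(pos, facing, program, environment)
    print(pos, facing)
```

The sketch makes the contrast tangible: editing `program` changes the robot's behavior at every matching intersection in the maze, while editing `environment` changes only one spot.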
Here we were directly drawing on examples of situated cognition, such as leaving a package by the front door so that when you go outside later you'll take it with you; you've offloaded something you need to remember to do into the structure of the environment. We thought the results were interesting, in line with other work on situated and embodied cognition, though they haven't been taken up in any deployed system I'm aware of. So, practical relevance? You'll have to judge.
Others have made a more general case for the relevance of embodied, situated, and distributed cognition to HCI:
- Dourish, P. (2004). Where the Action Is: The Foundations of Embodied Interaction.
- Hutchins, E. (1995). Cognition in the Wild.
- Norman, D. A. (1988). The Design of Everyday Things.
- Winograd, T., and Flores, F. (1986). Understanding Computers and Cognition: A New Foundation for Design.
Norman and Zhang's papers on external representations and cognitive artifacts also contain lots of good examples.
Is it possible to create something that is both a computer and human being?
Short answer: Yes.
Longer answer: As other commenters have suggested, there are a few different ways to interpret this question, depending on what a computer is. Here are a few possibilities.
Can what a physical computer does be integrated with what a human being does? Sure. Here's an example. Erik Hollnagel and David Woods introduced the idea of a joint cognitive system a few decades ago: that we can engineer systems to take advantage of the complementary capabilities of humans and computers. You can see examples in well-designed power plant control rooms, intensive care units, and airplane cockpits, where computers are doing part of the work, human operators other parts, and their interaction gives better results than either could acting alone.
We can see such integration at finer levels of granularity as well. There are visual prostheses that stimulate the optic nerve, creating patterns that let some blind people see. There are brain-computer interfaces that allow people to control prosthetic limbs. Aside from input and output, there's also ongoing work on cognitive prosthetics, but I don't know of any good examples.
Can computational processes be integrated with how human beings think? Again, sure. We carry out algorithms all the time, for example in doing long division. Students in computer science will sometimes walk through the behavior of a Turing machine, simulating it by hand. So we're certainly already capable of carrying out computational processes consciously. And of course it's possible to view what individual neurons in our brains do (not to mention other systems in the body) as computation.
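To make the Turing machine point concrete, here's a minimal simulator whose steps you could, in principle, execute by hand just as a student would; the particular machine (flip every bit, then halt) is an assumed example, not anything from the question.

```python
def run_tm(tape, rules, state="start", blank="_", max_steps=1000):
    """rules: (state, symbol) -> (new_state, write_symbol, move), move in {-1, 0, +1}."""
    tape = dict(enumerate(tape))  # sparse tape indexed by head position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# A machine that flips every bit and halts at the first blank cell.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_",  0),
}

print(run_tm("0110", flip_bits))  # -> "1001_" (the trailing blank is the halt cell)
```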
I'm answering this question not to say, "See? It's already being done," but to point out some of the subtleties in thinking about the nature of computation. We can think about physical computers or abstract computation; we can view what's happening at different levels of abstraction; sometimes the boundaries of actions don't break down in a modular way. These are all important ideas in computer science.
Would a touchscreen or a trackpad/mouse be easier for an elderly person to learn/use?
Web Accessibility for Older Users: A Literature Review, from the W3C, describes some of the challenges older users deal with when they interact with the Web, and the observations generalize fairly well. Three of the limitations the document mentions are relevant here: declining vision, diminished motor skills, and cognitive changes.
Trackpad/mouse interaction allows for fairly precise selection of icons and such on the display, which means that user interfaces and Web pages sometimes contain targets that are not easily noticeable to older users, visually, or are too small to be easily manipulated, given diminished motor skills. (Over the years things have gotten slightly better for older users, but there's still a tendency for user interface designers to concentrate on users with 20/20 vision and very good hand-eye coordination.)
Cognitive limitations may make it harder for some older users to remember how to carry out infrequent tasks. Touchscreen interfaces tend to be simpler on these fronts, I think, which is a point in their favor.
On the other hand, there are still some gotchas. The problems that Jakob Nielsen identifies in his usability studies (e.g., iPad Usability: Year One) disproportionately affect older users. For example, sometimes Web pages designed for laptop/desktop systems are translated to touch interaction without much thought given to what works and what doesn't. The conventions for touch interaction lead to different problems, some in the area of discoverability. It may not be obvious in a given touch application that a long press or a swipe on an icon does something, for example (there's typically no visual indication that they're possible), and these are the kinds of actions that older users may activate by accident.
I agree with the other commenters who have said that directness of interaction is a benefit. Individual differences can be so large, though, that I also agree with the idea of trying out both options.
On animal tool use
What are some examples of animals that use tools?
A surprisingly wide variety of non-human animals use tools. The best source I know of is Shumaker, R. W., Walkup, K. R., and Beck, B. B. (2011). Animal Tool Behavior: The Use and Manufacture of Tools by Animals. JHU Press.
My favorite examples, which I'll describe mostly from memory, are these:
Betty the Crow (sadly deceased) was an Einstein among birds. She not only used tools but made them herself, for example creating a hook out of a straight piece of wire for fishing. New Caledonian crows have also been observed making tools in the wild, snipping leaves into specific shapes for different purposes. Other birds, including parrots, use and make tools as well.
Tool use isn't limited to "smart" animals, though. Some wasps will use a pebble to pound earth down around the opening to a nest, for example. (This is interesting in that tool use is typically viewed as goal-driven behavior, but in some cases it's evolution that builds in a goal rather than the animal specifically choosing it.)
We might think of tool use as requiring hands, but some dolphins use sponges to protect their rostrums when foraging for food. Remarkably, mother dolphins demonstrate the technique to their young, who adopt it for themselves. (This is interesting because of cultural transmission and material culture issues.)
And of course there's primate tool use, especially by chimpanzees and orangutans. (See Whiten, A., Goodall, J., McGrew, W. C., Nishida, T., Reynolds, V., Sugiyama, Y., Tutin, C. E. G., Wrangham, R. W. & Boesch, C. (1999). Cultures in chimpanzees. Nature, 399, 682-685, and van Schaik, C. P., Ancrenaz, M., Borgen, G., Galdikas, B., Knott, C. D., Singleton, I., Suzuki, A., Utami, S. S. & Merrill, M. (2003). Orangutan cultures and the evolution of material culture. Science, 299, 102-105, respectively, for surveys.) There's a huge range, but Benjamin Beck, in Animal Tool Behavior, gives this wonderful summary and contrast:
There is an anecdote that circulates among zoo folk describing the results of placing a screwdriver in the cages of an adult gorilla, chimpanzee, and orangutan. The gorilla would not discover the screwdriver for an hour and then would do so only by stepping on it. Upon discovery, the ape would shrink in fear and only after a considerable interval would it approach the tool. The next contact would be a cautious, tentative touch with the back of the hand. Thus finding it harmless, the gorilla would smell the screwdriver and try to eat it. Upon discovering that the screwdriver was inedible, the gorilla would discard and ignore it indefinitely.
The chimpanzee would notice the tool at once and seize it immediately. Then the ape would use it as a club, a spear, a lever, a hammer, a probe, a missile, a toothpick, and practically every other possible implement except as a screwdriver. The tool would be guarded jealously, manipulated incessantly, and discarded from boredom only after several days.
The orangutan would notice the tool at once but ignore it lest a keeper discover the oversight. If a keeper did notice, the ape would rush to the tool and surrender it only in trade for a quantity of preferred food. If a keeper did not notice, the ape would wait until night and then proceed to use the screwdriver to pick the locks or dismantle the cage and escape.
Great stuff. (I'm a computer scientist, but I'm fascinated by this sort of thing.)