Thursday, March 27, 2014

From Quora: How does evolution explain the quantum gap in intelligence between humans and all other species?

By a "quantum gap" I imagine that you mean a large qualitative difference in intelligence that needs to be explained. But the differences we see are arguably a matter of degree (very large degree, but still).

A century ago, asked what separates humans from other animals, we might have said language, tool use, symbolic reasoning, reasoning with probabilities, analogical reasoning, reasoning about the future, cultural transmission of behaviors, and so forth. But as we've learned more about animals' abilities, the gaps have become smaller.

For example, human languages are much more sophisticated than any animal language. One of the most striking characteristics of a human language is its generativity, something that for a long time was thought to be impossible for animals to manage. But that's not necessarily the case [Gentner et al., 2006]:
European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar. They are also able to classify new patterns defined by the grammar and reliably exclude agrammatical patterns. Thus, the capacity to classify sequences from recursive, centre-embedded grammars is not uniquely human.
Or take tool use, which was once thought of as a hallmark of human intelligence. When we started observing animals using tools (even animals as simple as wasps), the criterion was changed to making rather than only using tools. And when animals such as chimpanzees and crows were observed making tools, the criterion was narrowed further--using tools to make other tools. (Other tool-related capabilities have been suggested as well.) But bonobos, close relatives of chimpanzees, can be taught to make stone tools by knapping [Roffman et al., 2012]:
Here we describe the ability of two language-competent bonobos (10), Kanzi (KZ; male, age 30 y) and Pan-Banisha (PB; female, age 28 y), to produce novel stone tools and effectively use them, supporting the hypothesis that present-day Pan exhibit technological competencies formerly assigned only to the Homo genus. In the 1990s, KZ and PB were taught by Toth et al. (11, 12) to knap flint flakes and use their sharp edges to cut rope or leather.
Or reasoning about the future [Clayton et al., 2003]:
A more convincing case of planning was provided by Osvath and Osvath. In a recent series of experiments, these authors demonstrated that when selecting a tool for use in the future, chimpanzees and orangutans can override immediate drives in favor of future needs. One of the most striking examples of the spoon test in animals comes from recent studies of the food-caching scrub-jays. In the laboratory, work by Raby and colleagues showed that our jays can spontaneously plan for tomorrow’s breakfast without reference to their current motivational state.
You start to see a pattern: Someone speculates, "This may be a uniquely human intellectual capacity." And then scientists in animal cognition decide to test whether that's true...

The overall point is that what looks like a huge qualitative difference may not be; it could be something more like me in a race against Usain Bolt. Evolution can explain that.

Sunday, June 30, 2013

Stuff I've written on Quora


This is a semi-random assortment of questions I've answered on Quora. I'm still getting used to the platform. I've copied my thoughts here, lightly edited, in case anyone's interested.


On computer science


Who are the real heroes in computer science history and why?


In Computing for Ordinary Mortals, I made an effort to mention people I thought everyone (not just computer scientists) should know about. I missed some important people, especially in systems, but I think all of these are good candidates (with the context in which I mentioned them in parentheses, though that doesn't capture all of their contributions):



Vannevar Bush, Ivan Sutherland, Doug Engelbart, Tim Berners-Lee (big ideas in interactive computing); Richard Hamming; Charles Babbage, J. Presper Eckert, John Mauchly (computing history); John von Neumann, Donald Knuth (architecture); C. A. R. Hoare (algorithms); E. F. Codd (databases); John Backus, Alan Kay, the ENIAC women (programming); J. C. R. Licklider, Leonard Kleinrock, Robert Taylor, Lawrence Roberts, Robert Kahn, Vinton Cerf (networking); Alan Turing, Alonzo Church, Stephen Cook (theory); Allen Newell, Herbert Simon, Nils Nilsson, Alan Robinson (artificial intelligence).


What are some good computer science books for starters?


There are a few good starter books about computer science, depending on your interests. Some excellent textbooks have been recommended; I'll add some in the genre of popular science (which is very limited in the area of computer science).

Danny Hillis gives a good overview in The Pattern on the Stone, which covers the basics of logic, programming, algorithms, and Turing machines, then moves on to speculate about the future of computing.



Charles Petzold's Code works its way up from bits to computer architecture, in clear detail; it's a nice introduction with a computer engineering flavor to it.



Paul Rosenbloom's On Computing argues that "computing is a great scientific domain on a par with the physical, life, and social sciences." It's a wide-ranging, philosophical perspective on the field, a rare effort.



And (with apologies for the self-promotion) I'll mention my own book, Computing for Ordinary Mortals. I thought, "What are the most important ideas in computer science, and how could they be explained to readers without a technical or mathematical background?" There weren't any other big-picture books out there, aside from textbooks, so I wrote one.



On human-computer interaction and embodied cognition



What books are required reading for students studying HCI?


Another commenter has good recommendations for books. I'll offer a few more:

  • Card, S. K., Moran, T. P., and Newell, A. (1983). The Psychology of Human-Computer Interaction, LEA.  This book gives the classic perspective on HCI as human information processing. A cognitive scientist friend described this approach as the best candidate for creating a science of HCI.
  • Dourish, P. (2004). Where the Action Is: The foundations of embodied interaction, MIT Press. This book explores some of the relationships between HCI and philosophy that are relevant to how we interact with computers in the real world.
  • Dix, A., Finlay, J., Abowd, G., and Beale, R. (2004). Human-Computer Interaction, third edition, Prentice Hall. This textbook gives a good introduction to modeling approaches to HCI.
  • Baecker, R., Grudin, J., Buxton, B., and Greenberg, S. (eds.) (1995). Readings in Human-Computer Interaction: Toward the Year 2000, second edition, Morgan Kaufmann. I like the historical perspective of an early section in this collection, and the editors have included papers that represent foundational work across the entire field.



Something I'd like to see but haven't come across is a book or collection on the broad history of ideas in HCI, which would include Bush's "As we may think," Sutherland's "The ultimate display," Engelbart's "Augmenting human intellect," and so forth, putting it all together. That might be asking too much for a single book, though.


What are the best resources for quickly learning the core fundamentals of UI and UX design?


I don't really think we're far enough along to have a science of UI/UX analogous to what's conveyed in a book like Structure and Interpretation of Computer Programs. Recall a line in the introduction to SICP:
...procedural epistemology--the study of the structure of knowledge from an imperative point of view... Computation provides a framework for dealing precisely with notions of "how to."

For UI/UX, we'd be interested in the science (or epistemology--a theory of knowledge) of human behavior in the context of interacting with computers. As mentioned by others, we're not talking about the behavior of abstract mathematical constructs or predictable machines; that makes analysis much harder. UI/UX is a very broad area, and we don't yet know a lot about it. I like Can Duruk's suggestions about the topic; here are a few more resources that take a formal/scientific perspective:

  • Card, S. K., Moran, T. P., and Newell, A. (1983). The Psychology of Human-Computer Interaction, LEA. Interacting with computers as human information processing, based on the Model Human Processor and a set of principles governing decision making, learning, and so forth.


  • Carroll, J. M., ed. (2003). HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science, Morgan Kaufmann. A wide range of approaches to understanding interaction, from the cognitive side (as with CMN, above), to formal modeling, to social psychology, to theories of work, ... It's a good introduction.



Work like this is about the scientific findings that get compiled into design guidelines, but the connections are often tenuous, given how fast UI and UX move in the real world. There's an analogy to the distinctions between science, engineering, and design. A physicist understands electricity, but I wouldn't ask a physicist to handle the electrical wiring in my house; an electrical engineer could probably do this but wouldn't necessarily make the best decisions about where the light and switch fixtures should go; on the other hand, I'd hope that a designer/architect would have some knowledge of the science and engineering so as not to produce an unworkable design.


And going back to UI/UX: some areas are just hard to address. For example, we have some instruments to measure user satisfaction, one of the core components of usability, and we can sometimes explain why some users find a given application satisfying to use or not, but predicting it in advance from first principles? That's not yet possible in general.


Are you familiar with the concept of embodiment in HCI?


There are a few good starting points for learning about embodied cognition. I'll mention some that have shaped my thinking about the topic.



As already mentioned, Paul Dourish's Where the Action Is is a good introduction to the topic. Also relevant is Rob Jacob's work on reality-based interaction, a framework for understanding interaction in terms of four themes: naive physics, plus awareness and skills of the body, environment, and social context. Earlier influences on HCI are Ed Hutchins's work, especially his book Cognition in the Wild, about distributed cognition, and Terry Winograd's Understanding Computers and Cognition: A New Foundation for Design.



For a psychological perspective, there's Margaret Wilson's "Six Views of Embodied Cognition," which separates and evaluates six different ways of interpreting embodiment, the most relevant for HCI being "(1) cognition is situated; ... (3) we off-load cognitive work onto the environment; (4) the environment is part of the cognitive system; (5) cognition is for action..."  It's useful for understanding the potential implications of embodiment for HCI.



There's also been a good deal of work in philosophy that's worth reading; the concept of embodiment has a long history. I can't make much sense of Heidegger directly, but Terry Winograd has written about "Heidegger and the Design of Computer Systems," which (if I remember correctly) explains the relevance of such concepts as being-in-the-world and thrownness. Any of Andy Clark's books is worth picking up, though his thoughts on embodied cognition are more directly applicable to AI and robotics than HCI; his insights are still helpful. And then there's ecological psychology for thoughts about James Gibson's concept of affordance. I think Gibson himself is good to read, specifically The Ecological Approach to Visual Perception, because it's widely talked about, but Don Norman's cautionary writing on affordances (Affordances and Design) is important as well.



I'm probably missing some obvious references; if I think of any I'll come back to add them.


What is the practical relevance of embodied cognition theories for human computer interaction design?


I'm not a designer or a cognitive scientist, but I've read a bit about embodied cognition. I think that the concepts associated with embodied cognition (Wilson gives a nice breakdown) can influence UI design in a couple of important ways. They can suggest new designs, and they can explain why some designs are effective. I'll give examples from my own work.



In the late 1990s I was interested in mouse gestures, and my research group came up with an idea for what we called a flick gesture: mouse down on an object in a graphical user interface, then flick the mouse in a given direction just before the mouse up. The analogy was to putting your finger on an object on a tabletop, like a penny, and flicking it to slide toward a target. We ran experiments to see whether the event could be reliably detected, how long it took, how accurately users could flick in a given direction, and so forth. It turned out to be a reasonable idea, and a couple of years later (though probably not influenced by our work) the same gesture appeared in the Opera Web browser for forward and backward navigation.
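For concreteness, here's a minimal sketch of how such a gesture might be detected (my own reconstruction in Python, not the code from that project; the sample format and speed threshold are invented for illustration):

    # A made-up sketch of flick detection: look at the last two mouse
    # samples before mouse-up; a fast enough movement counts as a
    # flick in that direction.
    import math

    def detect_flick(samples, speed_threshold=500.0):
        """samples: (x, y, t) tuples, t in seconds, most recent last.
        Returns the flick direction in degrees, or None."""
        if len(samples) < 2:
            return None
        (x0, y0, t0), (x1, y1, t1) = samples[-2], samples[-1]
        dt = t1 - t0
        if dt <= 0:
            return None
        speed = math.hypot(x1 - x0, y1 - y0) / dt  # pixels per second
        if speed < speed_threshold:
            return None
        return math.degrees(math.atan2(y1 - y0, x1 - x0))

    # A fast rightward motion just before mouse-up reads as a flick:
    print(detect_flick([(100, 100, 0.00), (140, 100, 0.02)]))  # 0.0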



It's reasonable to say that we didn't need to be thinking about embodied cognition to do this work, and in fact we didn't even mention it in the paper, as far as I remember. But it was our inspiration...



Here's another example, on the explanation side. A few years later I bought a Roomba vacuum cleaner, which came with an "invisible wall" device, an infrared beam that prevented the robot from moving past it. I started to think about how end users might control or direct the robot's behavior.

My research group set up a simple simulation environment in a graphical user interface, a maze, with colored tokens at the intersections. The simulated robot mapped specific tokens to actions, in condition-action rules: on seeing a green square at an intersection, go straight; on seeing a purple circle, go left; and so forth. In an experiment, we gave participants a simulated robot with an incorrect program, and we asked them to fix things so that the robot could move from its starting point to the goal point in the maze. The participants had the choice, though, of either changing the token for a given rule (e.g., change "on seeing a green square, go straight" to "on seeing a red triangle, go straight"), or of changing the token at an intersection in the maze itself. (If this isn't clear enough, I've described the experiment elsewhere.) That is, what do people do when given the choice between fiddling with a robot's internal program or changing the environment so that it's a better match for what the robot is already programmed to do?

We found some interesting patterns. For example, in this environment, changes to the program affect the robot's global behavior over the entire maze, while changes to the environment produce local effects. Participants tended to start with global changes to the program and then make local changes to fine-tune the environment once the program was close to being a solution.
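For concreteness, here's a minimal sketch of the two kinds of repair (my own illustration, not our actual experiment code; the token names and maze layout are invented):

    # The robot's "program" maps tokens to actions, and the
    # "environment" places tokens at intersections.
    program = {
        "green_square": "go_straight",
        "purple_circle": "turn_left",
        "orange_triangle": "turn_right",
    }

    maze_tokens = {
        (0, 0): "green_square",
        (0, 1): "purple_circle",
        (1, 1): "orange_triangle",
    }

    def act(position):
        """Apply the condition-action rule for the token seen here."""
        token = maze_tokens.get(position)
        return program.get(token, "stop")

    # A global repair: editing a rule changes behavior at every
    # intersection showing that token...
    program["green_square"] = "turn_left"
    # ...while a local repair changes a single intersection:
    maze_tokens[(0, 1)] = "green_square"

    print(act((0, 0)))  # turn_left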



Here we were directly drawing on examples of situated cognition, such as leaving a package by the front door so that when you go outside later you'll take it with you; you've offloaded something you need to remember to do into the structure of the environment. We thought the results were interesting, in line with other work on situated and embodied cognition, though they haven't been taken up in any deployed system I'm aware of. So, practical relevance? You'll have to judge.



Others have made a more general case for the relevance of embodied, situated, and distributed cognition to HCI:



  • Dourish, P. (2004). Where the Action Is: The Foundations of Embodied Interaction.
  • Hutchins, E. (1995). Cognition in the Wild.
  • Norman, D. A. (1988). The Design of Everyday Things.
  • Winograd, T., and Flores, F. (1986). Understanding Computers and Cognition: A New Foundation for Design.




Norman and Zhang's papers on external representations and cognitive artifacts also contain lots of good examples.


Is it possible to create something that is both a computer and human being?


Short answer: Yes.


Longer answer: As other commenters have suggested, there are a few different ways to interpret this question, depending on what a computer is.  Here are a few possibilities.



Can what a physical computer does be integrated with what a human being does? Sure. Here's an example. Erik Hollnagel and David Woods introduced the idea of a joint cognitive system a few decades ago: the idea that we can engineer systems to take advantage of the complementary capabilities of humans and computers. You can see examples in well-designed power plant control rooms, intensive care units, and airplane cockpits, where computers are doing part of the work, human operators other parts, and their interaction gives better results than either could achieve acting alone.



We can see such integration at finer levels of granularity as well. There are visual prostheses that stimulate the optic nerve, creating patterns that let some blind people see. There are brain-computer interfaces that allow people to control prosthetic limbs. Aside from input and output, there's also ongoing work on cognitive prosthetics, but I don't know of any good examples.



Can computational processes be integrated with how human beings think? Again, sure. We carry out algorithms all the time, for example in doing long division. Students in computer science will sometimes walk through the behavior of a Turing machine, simulating it by hand. So we're certainly already capable of carrying out computational processes consciously. And of course it's possible to view what individual neurons in our brains do (not to mention other systems in the body) as computation.
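As an illustration, here's roughly what such a hand simulation automates, expressed as a short Python sketch; the machine in the rules table is a made-up example that flips every bit on its tape:

    # A tiny Turing machine simulator. The rules map
    # (state, symbol read) -> (symbol to write, head move, next state).
    def run(tape, rules, state="start"):
        pos = 0
        while state != "halt":
            symbol = tape[pos] if pos < len(tape) else "_"
            write, move, state = rules[(state, symbol)]
            if pos < len(tape):
                tape[pos] = write
            else:
                tape.append(write)
            pos += 1 if move == "R" else -1
        return tape

    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run(list("0110"), rules))  # ['1', '0', '0', '1', '_']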



I'm answering this question not to say, "See? It's already being done," but to point out some of the subtleties in thinking about the nature of computation. We can think about physical computers or abstract computation; we can view what's happening at different levels of abstraction; sometimes the boundaries of a system don't break down in a modular way. These are all important ideas in computer science.


Would a touchscreen or a trackpad/mouse be easier for an elderly person to learn/use?


Web Accessibility for Older Users: A Literature Review, from the W3C, describes some of the challenges older users deal with when they interact with the Web, and the observations generalize fairly well. Three limitations the document mentions are relevant: vision decline, motor skill diminishment, and cognition effects.



Trackpad/mouse interaction allows for fairly precise selection of icons and such on the display, which means that user interfaces and Web pages sometimes contain targets that are not easily noticeable to older users, visually, or are too small to be easily manipulated, given diminished motor skills. (Over the years things have gotten slightly better for older users, but there's still a tendency for user interface designers to concentrate on users with 20/20 vision and very good hand-eye coordination.)

Cognitive limitations may make it harder for some older users to remember how to carry out infrequent tasks. Touchscreen interfaces tend to be simpler on these fronts, I think, which is a point in their favor.



On the other hand, there are still some gotchas. The problems that Jakob Nielsen identifies in his usability studies (e.g., iPad Usability: Year One) disproportionately affect older users. For example, sometimes Web pages designed for laptop/desktop systems are translated to touch interaction without much thought given to what works and what doesn't. The conventions for touch interaction lead to different problems, some in the area of discoverability. It may not be obvious in a given touch application that a long press or a swipe on an icon does something, for example (there's typically no visual indication that they're possible), and these are the kinds of actions that older users may activate by accident.



I agree with the other commenters who have said that directness of interaction is a benefit. Individual differences can be so large, though, that I also agree with the idea of trying out both options.


On animal tool use


What are some examples of animals that use tools?


A surprisingly wide variety of non-human animals use tools. The best source I know of is Shumaker, R. W., Walkup, K. R., and Beck, B. B. (2011). Animal Tool Behavior: The Use and Manufacture of Tools by Animals. JHU Press.


My favorite examples, which I'll describe mostly from memory, are these:



Betty the Crow (sadly deceased) was an Einstein among birds.  She not only used tools but made them herself, for example creating a hook out of a straight piece of wire for fishing.  New Caledonian crows have also been observed making tools in the wild, snipping leaves into specific shapes for different purposes.  Other birds, including parrots, use and make tools as well.



Tool use isn't limited to "smart" animals, though. Some wasps will use a pebble to pound earth down around the opening to a nest, for example. (This is interesting in that tool use is typically viewed as goal-driven behavior, but in some cases it's evolution that builds in a goal rather than the animal specifically choosing it.)



We might think of tool use as requiring hands, but some dolphins use sponges to protect their rostrums when foraging for food.  Remarkably, mother dolphins demonstrate the technique to their young, who adopt it for themselves.  (This is interesting because of cultural transmission and material culture issues.)



And of course there's primate tool use, especially by chimpanzees and orangutans.  (See Whiten, A., Goodall, J., McGrew, W. C., Nishida, T., Reynolds, V., Sugiyama, Y., Tutin, C. E. G., Wrangham, R. W. & Boesch, C. (1999). Cultures in chimpanzees. Nature, 399, 682-685, and van Schaik, C. P., Ancrenaz, M., Borgen, G., Galdikas, B., Knott, C. D., Singleton, I., Suzuki, A., Utami, S. S. & Merrill, M. (2003). Orangutan cultures and the evolution of material culture. Science, 299, 102-105, respectively, for surveys.)  There's a huge range, but Benjamin Beck, in Animal Tool Behavior, gives this wonderful summary and contrast:
There is an anecdote that circulates among zoo folk describing the results of placing a screwdriver in the cages of an adult gorilla, chimpanzee, and orangutan. The gorilla would not discover the screwdriver for an hour and then would do so only by stepping on it. Upon discovery, the ape would shrink in fear and only after a considerable interval would it approach the tool. The next contact would be a cautious, tentative touch with the back of the hand. Thus finding it harmless, the gorilla would smell the screwdriver and try to eat it. Upon discovering that the screwdriver was inedible, the gorilla would discard and ignore it indefinitely.

The chimpanzee would notice the tool at once and seize it immediately. Then the ape would use it as a club, a spear, a lever, a hammer, a probe, a missile, a toothpick, and practically every other possible implement except as a screwdriver. The tool would be guarded jealously, manipulated incessantly, and discarded from boredom only after several days.

The orangutan would notice the tool at once but ignore it lest a keeper discover the oversight. If a keeper did notice, the ape would rush to the tool and surrender it only in trade for a quantity of preferred food. If a keeper did not notice, the ape would wait until night and then proceed to use the screwdriver to pick the locks or dismantle the cage and escape.
Great stuff. (I'm a computer scientist, but I'm fascinated by this sort of thing.)


Tuesday, April 23, 2013

The affordances of glass walls

James Gibson defined the term affordance in his 1979 book, The Ecological Approach to Visual Perception:
The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill. The verb to afford is found in the dictionary, but the noun affordance is not. I have made it up. I mean by it something that refers to both the environment and the animal in a way that no existing term does. It implies the complementarity of the animal and the environment.
Chairs, for example, afford sitting. Not just any chair, though, and not just any person, as you'll realize if you ever try to cram yourself into a desk built for a second grader. A chair affords sitting if its seat is the right height from the ground (about the same as the distance from your heels to the backs of your knees), if it's wide enough for your posterior,  if it's strong enough to hold your weight, and so forth.

We can look downward in size, to ask whether a coffee cup affords being held in one hand. (Some large cups have very small handles and when filled are too heavy to hold with just a thumb and forefinger.) We can think bigger, to ask whether a given architectural design affords the kinds of interactions between people that we might like to see. (Cabrini Green is one famous failure.)

And, as Don Norman points out in The Psychology of Everyday Things, it's critical that we're able to identify what can be done with objects and environments, to recognize their "perceived affordances." Which brings me to this:



That's not artistic purple shading around my eyes. Here's how it happened. I was heading to a meeting at the new Hunt Library on Centennial Campus, to give a presentation for our industrial advisory council. I saw one of my students in the lobby, just outside the main area of the library.


We talked for a minute or so about a new project, and then I turned to go to my meeting...


...and walked into the glass wall that flanks the turnstiles. This raised an enormous knot on my forehead, noticeable enough that the staff called a team of paramedics to look me over. (I checked out fine.) A few days later I look as if I've lost a fist fight.

And of course, being introspective, I think about the affordances of glass walls. They're great--you can see more of what's around you than with ordinary walls. You have a sense of openness, of being connected at a distance, even with the outside. But for a specific person in a specific state of mind, it may not be immediately perceived that glass walls do not afford passage.

Friday, February 8, 2013

A four-minute introduction to computing

This is a draft script for a very short talk about computing concepts for a non-technical audience. It doesn't go deep, but I try to bring across some important ideas...

When I was a kid, I had a Play-Doh machine. You drop in a lump of colored clay, insert a little plastic stencil, and then push down on the plunger. You get a long tube with the cross section of a triangle or a star. Was it fun? Well, I still remember it today...

When I explain what computer science is about, I sometimes start with a machine like this. Except that computers mold information into different shapes. They're information processing machines. There's more to it, of course. Information is different from physical raw materials. Think of information as a stream of numbers that you could write down. We might be talking about data from a scientific experiment or quarterly business reports, or a DVD video--it's all information. That's what's going into the machine, as input, and a new stream of information is the output.

But here's something cool. Inside every computer is a controller. That controller treats some streams of information differently from other kinds of data: it interprets each number as an instruction to follow. A stream of numbers can be a stream of instructions--a program.

So unlike my Play-Doh machine, a computer can handle two different kinds of input. One is information to work on, the other is instructions for what to do with the information. And some of those instructions involve making decisions about what it should do next. This is why we talk about a computer as being a different kind of machine than we usually think of--a Play-Doh machine, a loom for weaving, and so forth--because it can make some decisions for itself, on our behalf. That's more than a stencil or a template for repeating actions; it's real automation.
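Here's a toy illustration of that idea (my own sketch, not part of the talk script): a "machine" whose instructions are just numbers, so the same stream can serve either as data to work on or as a program to follow.

    # A toy machine whose instructions are numbers: 1 means double
    # every item, 2 means add one to every item, 0 means stop.
    def execute(program, data):
        for instruction in program:
            if instruction == 0:
                break
            elif instruction == 1:
                data = [x * 2 for x in data]
            elif instruction == 2:
                data = [x + 1 for x in data]
        return data

    program = [1, 2, 0]  # "double, then add one, then stop"
    print(execute(program, [3, 4, 5]))  # [7, 9, 11]

    # And because a program is itself a stream of numbers, we can
    # feed one program to another as data:
    print(execute(program, program))  # [3, 5, 1]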

And there's something else. Programs are information, right? And programs take information as input... This means that we can feed one program to another program. Now things get really interesting.

Think of the instructions that a computer can carry out as its language. To get the computer to do something useful, you need to speak its language. But machine language is incredibly tedious, and it takes forever to write down instructions in just the right way to make just the right things happen.

And what if I'm a video artist or a baseball statistician? I have information that I'd like to be processed--maybe color corrections or on-base plus slugging numbers--but of course the machine's language doesn't include anything close to these abstract concepts. But here's the thing--information is malleable, and we know a lot about translating it from one language into another. My information comes in abstract chunks that I can talk about in my specialized language of video artistry or baseball statistics. With a lot of work, I may be able to translate my own information abstractions into terms that a computer can handle. And I don't have to do this entirely by hand--once I figure it all out, I can write a program to do the translation for me.
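As a small example of that kind of translation (my own, with a simplified on-base formula that ignores hit-by-pitch and sacrifice flies), here's a statistician's abstraction expressed in terms a machine can carry out:

    # Translate "on-base plus slugging" into plain arithmetic.
    def on_base_plus_slugging(hits, walks, at_bats, total_bases):
        obp = (hits + walks) / (at_bats + walks)  # on-base percentage
        slg = total_bases / at_bats               # slugging percentage
        return obp + slg

    print(round(on_base_plus_slugging(180, 60, 550, 320), 3))  # 0.975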

So when I'm using my computer, I don't have to work at the level of the machine; I can express myself in the concepts I'm familiar with, and the computer will translate those concepts into its own language and do whatever work I tell it to do.

This is the practical side of computing, what programming is basically all about--building computational abstractions that help people solve problems. And on the theoretical side, you might be thinking, "So we can transform a computer to behave as if it's a completely different machine..." Yes. Computers are universal machines. I don't mean that a computer can do everything. What I mean is that when we think about what "ordinary" machines do, we tend to say that you have to pick the right tool for the job. You don't use a chain saw to drive screws, or a kitchen blender to paint your walls. If the job is processing information, though, we choose a computer. We might think about how fast it runs and how much information it can store (it's something like saying, "I need a bigger hammer"), but the practical details are less important than the idea that we don't need different kinds of machines for different information processing tasks. Every computer is theoretically equivalent to every other computer for solving problems--we can transform one into another. It's just a matter of which abstractions we build on top of it.

Computing is as general and as powerful as you can imagine.

Saturday, February 2, 2013

Usability problem of the day (Unix ln documentation)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

I've been using Unix off and on since the early 1980s. Although I still occasionally write shell scripts, I'm not anything like an expert.

Unix is still my go-to source, though, when I talk about command line interfaces in my HCI classes. The Unix command line is a canonical example of the interaction style: short, powerful commands with a flexible grammar for constructing and combining results. It's also different in abstract ways from more familiar graphical user interfaces, enough to make it worth discussing. For example, on the command line you enter a command and then an optional sequence of flags and file names; in a GUI you typically first choose objects, such as file icons, and then choose a command, such as a menu item, to execute on the objects. The different grammars are appropriate for the different metaphors.

Friday, January 25, 2013

An unexpurgated interview

In the pulp science fiction novels I read as a kid, the authors tended to work within the social norms of the day with respect to language. Here's an example from E. E. "Doc" Smith's First Lensman:
Jack started to express an unexpurgated opinion, but shut himself up. Young cubs did not swear in front of the First Lensman.
And another:
"Do you think you can get away with this?" she demanded. "Why, you..." and the unexpurgated, trenchant, brilliantly detailed characterization could have seared its way through four-ply asbestos.
I liked "unexpurgated", once I looked up what it meant. Hence the title of this post. I recently exchanged email with Nikki Stoudt, a writer for the NCSU student newspaper, for an article. Here's what was said... unexpurgated. (Not that the text needs it.)

Usability problem of tomorrow (Star Trek user interfaces)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

I sometimes take examples of interaction design from the entertainment industry, whether the systems involved are real or not. Star Trek: The Next Generation is a good standby.



Tuesday, January 22, 2013

Making computers seem a little less scary

The NCSU Technician has featured me in an article:

Making computers seem a little less scary
by Nikki Stoudt, Life & Style Editor
Though the computer science and English departments may not be known for their collaborative efforts, one professor has been working to break down the barrier between the “hard sciences” and the humanities.
More...

Me on YouTube

For the Fabulous Faculty Series at the NCSU Hunt Library.


Thursday, January 17, 2013

On the WSJ and the D.C. gun ban

Jeffrey Scott Shapiro, in a recent Wall Street Journal opinion piece, writes about the D.C. gun ban:
The D.C. gun ban, enacted in 1976... had an unintended effect: It emboldened criminals because they knew that law-abiding District residents were unarmed and powerless to defend themselves. Violent crime increased after the law was enacted, with homicides rising to 369 in 1988, from 188 in 1976 when the ban started. By 1993, annual homicides had reached 454.
Later in the article, he writes,
Since the gun ban was struck down, murders in the District have steadily gone down, from 186 in 2008 to 88 in 2012, the lowest number since the law was enacted in 1976. The decline resulted from a variety of factors, but losing the gun ban certainly did not produce the rise in murders that many might have expected.
In written form, numbers and relationships like "369 in 1988", "188 in 1976", "186 in 2008", and "88 in 2012" are not always easy to make sense of. Here's what these specific numbers look like graphically.
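One way to produce such a picture, as a minimal sketch assuming matplotlib and using only the figures quoted above:

    # Plot just the homicide counts quoted in the two passages.
    import matplotlib.pyplot as plt

    years = [1976, 1988, 1993, 2008, 2012]
    homicides = [188, 369, 454, 186, 88]

    plt.plot(years, homicides, marker="o")
    plt.xlabel("Year")
    plt.ylabel("Homicides in Washington, D.C.")
    plt.title("D.C. homicides, selected years")
    plt.show()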