Friday, February 8, 2013

A four-minute introduction to computing

This is a draft script for a very short talk about computing concepts for a non-technical audience. It doesn't go deep, but I try to get across some important ideas...

When I was a kid, I had a Play-Doh machine. You drop in a lump of colored clay, insert a little plastic stencil, and then push down on the plunger. You get a long tube with the cross section of a triangle or a star. Was it fun? Well, I still remember it today...

When I explain what computer science is about, I sometimes start with a machine like this. Except that computers mold information into different shapes. They're information processing machines. There's more to it, of course. Information is different from physical raw materials. Think of information as a stream of numbers that you could write down. We might be talking about data from a scientific experiment, quarterly business reports, or a DVD video--it's all information. That's what goes into the machine as input, and a new stream of information comes out as output.
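(An aside for this draft, not part of the talk itself: here's a toy sketch in Python of the stream-in, stream-out idea. The function name and the numbers are invented for illustration--the numbers stand in for pixel brightness values in a video frame, and the "processing" just brightens them a little.)

    # A toy information-processing machine: a stream of numbers goes in,
    # a new stream of numbers comes out. The numbers stand in for pixel
    # brightness values (0-255); the processing brightens them by 10%.
    def process(stream):
        return [min(255, round(x * 1.10)) for x in stream]

    print(process([120, 200, 64]))   # -> [132, 220, 70]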

But here's something cool. Inside every computer is a controller, and that controller can treat a stream of information differently from ordinary data: it can interpret each number as an instruction to follow. A stream of numbers can be a stream of instructions--a program.

So unlike my Play-Doh machine, a computer can handle two different kinds of input. One is information to work on; the other is instructions for what to do with that information. And some of those instructions involve making decisions about what to do next. This is why a computer is a different kind of machine from the ones we usually think of--a Play-Doh machine, a loom for weaving, and so forth: it can make some decisions for itself, on our behalf. That's more than a stencil or a template for repeating actions; it's real automation.
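(Another aside for the draft: here's a minimal sketch of that controller in Python, with an instruction set I've made up just for this example. Each number in the stream is either an instruction or the data it operates on, and one of the instructions lets the machine make a decision.)

    # A toy controller. The instruction set is invented for illustration:
    #   1 x  -> load the number x onto the machine's single workbench
    #   2 x  -> add x to whatever is on the workbench
    #   3 n  -> a decision: if the workbench value is negative, jump to
    #           position n in the stream; otherwise keep going
    #   4    -> output the workbench value and stop
    def run(program):
        value = 0                     # the workbench
        pc = 0                        # position in the stream
        while pc < len(program):
            op = program[pc]
            if op == 1:               # load
                value = program[pc + 1]
                pc += 2
            elif op == 2:             # add
                value += program[pc + 1]
                pc += 2
            elif op == 3:             # decide
                pc = program[pc + 1] if value < 0 else pc + 2
            else:                     # output and halt
                print(value)
                return

    # One stream of numbers, read as instructions: load 10, add -3, output.
    run([1, 10, 2, -3, 4])            # prints 7

    # The decision instruction at work: skip the "add 100" when negative.
    run([1, -5, 3, 6, 2, 100, 4])     # prints -5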

And there's something else. Programs are information, right? And programs take information as input... This means that we can feed one program to another program. Now things get really interesting.
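(Continuing the made-up instruction set from the sketch above: the little program there is just a list of numbers, so another program can take it as input and produce a new program as output. This transformer, like everything else in these asides, is invented for illustration.)

    # A program that takes a program as input. It reads a stream in the
    # toy instruction set and writes out a new stream, doubling the
    # amount added by every "add" instruction it finds.
    def double_adds(program):
        out, pc = [], 0
        while pc < len(program):
            op = program[pc]
            if op == 2:                       # add x  ->  add 2x
                out += [2, program[pc + 1] * 2]
                pc += 2
            elif op in (1, 3):                # other two-number instructions
                out += [op, program[pc + 1]]
                pc += 2
            else:                             # output-and-stop
                out.append(op)
                pc += 1
        return out

    print(double_adds([1, 10, 2, -3, 4]))     # -> [1, 10, 2, -6, 4]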

Think of the instructions that a computer can carry out as its language. To get the computer to do something useful, you need to speak its language. But machine language is incredibly tedious, and it takes forever to write down instructions in just the right way to make just the right things happen.

And what if I'm a video artist or a baseball statistician? I have information that I'd like processed--maybe color corrections or on-base plus slugging numbers--but of course the machine's language doesn't include anything close to these abstract concepts. But here's the thing--information is malleable, and we know a lot about translating it from one language into another. My information comes in abstract chunks that I can talk about in my specialized language of video artistry or baseball statistics. With a lot of work, I may be able to translate my own abstractions into terms that a computer can handle. And I don't have to do this entirely by hand--once I figure it all out, I can write a program to do the translation for me.
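(One last aside: here's a sketch of what that translation program might look like, reusing the invented instruction set and the run function from the earlier sketch, with made-up numbers. A tiny "translator" turns a statistic I understand--on-base plus slugging--into a stream the toy machine can run.)

    # A toy translator: it takes statistics I understand and a formula in
    # my own terms ("add these stats together"), and produces instructions
    # in the toy machine language from the earlier sketch.
    def translate(stats, formula):
        first, *rest = formula
        code = [1, stats[first]]          # load the first statistic
        for name in rest:
            code += [2, stats[name]]      # add each remaining one
        return code + [4]                 # output the result and stop

    # On-base plus slugging, expressed in my language, run on the machine.
    program = translate({"obp": 0.400, "slg": 0.550}, ["obp", "slg"])
    run(program)                          # prints the OPS, roughly 0.95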

So when I'm using my computer, I don't have to work at the level of the machine; I can express myself in the concepts I'm familiar with, and the computer will translate those concepts into its own language and do whatever work I tell it to do.

This is the practical side of computing, what programming is basically all about--building computational abstractions that help people solve problems. And on the theoretical side, you might be thinking, "So we can transform a computer to behave as if it's a completely different machine..." Yes. Computers are universal machines. I don't mean that a computer can do everything. What I mean is that when we think about what "ordinary" machines do, we tend to say that you have to pick the right tool for the job. You don't use a chain saw to drive screws, or a kitchen blender to paint your walls. If the job is processing information, though, we choose a computer. We might think about how fast it runs and how much information it can store (it's something like saying, "I need a bigger hammer"), but the practical details are less important than the idea that we don't need different kinds of machines for different information processing tasks. Every computer is theoretically equivalent to every other computer for solving problems--given enough time and memory, we can transform one into another. It's just a matter of which abstractions we build on top of it.

Computing is as general and as powerful as you can imagine.

Saturday, February 2, 2013

Usability problem of the day (Unix ln documentation)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

I've been using Unix off and on since the early 1980s. Although I still occasionally write shell scripts, I'm not anything like an expert.

Unix is still my go-to source, though, when I talk about command line interfaces in my HCI classes. The Unix command line is a canonical example of the interaction style: short, powerful commands with a flexible grammar for building commands and combining their results. It's also different from more familiar graphical user interfaces in ways that go beyond surface appearance, enough to make it worth discussing. For example, on the command line you enter a command and then an optional sequence of flags and file names; in a GUI you typically first choose objects, such as file icons, and then choose a command, such as a menu item, to execute on the objects. The different grammars--verb before object on the command line, object before verb in a GUI--suit the different metaphors.