Monday, December 31, 2012

The book



It’s easy to take computers for granted. If I want to go shopping, visit a library, play a game, or share my thoughts with the rest of the world, I can do this all by typing on my laptop. I can exchange email with friends and colleagues, wherever they might be. If I were to pick up a screwdriver and go exploring in my house, I’d find computers in kitchen appliances, gaming and entertainment consoles, telephones—even in the walls, controlling the temperature.

Have you ever wondered what gives computers such remarkable power and flexibility? One answer is that computer designers and software developers build them that way. But that’s not entirely satisfying as an answer. Computing for Ordinary Mortals starts in a different way:


Friday, December 28, 2012

Usability problem of the day (representative users)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

Darrell Huff's 1954 book, How to Lie with Statistics, is a classic introduction to statistics. It's very informal; I first read it when I was a teenager. I remember one passage in part because I had no idea what it was about:
For further evidence go back to 1936 and the Literary Digest's famed fiasco. The ten million telephone and Digest subscribers who assured the editors of the doomed magazine that it would be Landon 370, Roosevelt 161 came from the list that had accurately predicted the 1932 election. How could there be bias in a list already so tested?

Friday, December 7, 2012

Usability problem of the day (Moodle, part 1 of n)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.




Moodle is... a Free [sic] web application that educators can use to create effective online learning sites.
I'm ambivalent about Moodle, the system my university has adopted for course management. It's awful beyond reason, but if I'm ever at a loss for a usability problem to show my students in class, I just do something in Moodle and take screen shots as I go.


Wednesday, December 5, 2012

Usability problem of the day (emergency exits in the interface)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

Jakob Nielsen offers this heuristic (one of ten) for designing a usable interface:
User control and freedom
Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
I'm in the Macintosh Finder, looking through my file system. On the top right of the Finder window there's a text box with a standard search icon; I click in it (or press a shortcut key combination, Command-F) to search for a file somewhere within "Applications".

Tuesday, December 4, 2012

Usability problem of the day (URLs on Facebook and Google+)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

A few months ago Stephanie Buck collected a list of pet peeves from Mashable staff: 20 Things Your Most Annoying Friends Do on Facebook.

Here's number 20:
20. Redundant Links
Pet peeve: people who don't remove the URL once they've copy/pasted it into a status update. Result: ilooklikebarackobama.com on top of ilooklikebarackobama.com. Hurts my eyes.


Monday, December 3, 2012

Review in the San Francisco Book Review

"St. Amant uses many analogies to explain difficult concepts, but is quick to point out where these analogies break down, so as not to lose accuracy and truthfulness. It can be dense, and can require a little time in each chapter to really get one’s head around some of the concepts, but the author really strikes the perfect balance between accuracy and understandability."

From the San Francisco Book Review.

Usability problem of the day (Post-Its)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

A few years ago I attended a workshop on the future of interactive systems; it was a wonderful chance to talk with world-class researchers in human-computer interaction and to speculate about the directions the field might take.

One of the workshop sessions involved brainstorming, in which we jotted down thoughts about a given topic on Post-It notes and then tried to figure out how they might all fit together. We stuck the notes to the wall, talked about the emerging organization of ideas, and then went off to lunch.

This is how things looked when we returned.


Sunday, December 2, 2012

Usability problem of the day (Scrabble)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

This is how the Scrabble app on my iPhone worked for some time (the current interface has gone through a few revisions). Let's walk through it.


I decide that I want to play a game of Scrabble. When I start up the app, this screen appears.


Friday, November 30, 2012

Usability problem of the day (an adventure in an English washroom)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

Okay, not an adventure. Last year, during a layover at Heathrow Airport, I had occasion to visit the washroom. The fixtures looked familiar, as they do almost everywhere, but they gave me trouble. I couldn't figure out how to turn the water on.

Thursday, November 29, 2012

NCSU HFES talk

These slides are for a presentation to the NCSU chapter of the Human Factors and Ergonomics Society.


Usability problem of the day (duplicating in iPhoto)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

I'm a casual user of iPhoto. The application is up to version 9, and it's been around since 2002, so you'd expect it to be bulletproof with respect to usability. It's not quite that.

Wednesday, November 28, 2012

Usability problem of the day (microwave oven controls)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

This is the control panel on my microwave oven.




Tuesday, November 27, 2012

Usability challenge of the day (Jitterbug)


In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

A couple of years ago ads for a new cell phone for older users, called Jitterbug, were everywhere on TV. (The ads were annoying, to be honest, but maybe that's just me.) Online, here's how the Jitterbug was advertised:


Introducing the world's simplest, cell phone experience [sic]...
It doesn't play games, take pictures, or give you the weather.
For people like me, who want a cell phone that's easy to use.

That sounds great, doesn't it? I wondered how they manage it, and so I looked up the Jitterbug user manual. Here's the table of contents.


Monday, November 26, 2012

Usability problem of the day (iPhone scrollbar)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

Here are two screenshots of the Settings app on my iPhone, from a couple of years ago.



Sunday, November 25, 2012

On animals and tools (OUP)

Try this experiment: Ask someone to name three tools, without thinking hard about it. This is a parlor game, not a scientific study, so your results may vary, but I've done this dozens of times and heard surprisingly consistent answers. The most common is hammer, screwdriver, and saw, in that order.

We seem to share a basic understanding of what tools are and how they're used. This may be only natural; tools fill our lives. It's hard to imagine going through your daily routine without them. You can't brush your teeth or comb your hair; locked doors stay locked; meal times, in the preparation and the eating, are messy affairs. As Thomas Carlyle said, "Man is a tool-using animal. Without tools he is nothing, with tools he is all."


Read the rest on the OUPblog, "Oxford University Press's Academic Insights for the Thinking World."

Wednesday, November 21, 2012

Fairly old memories

My wife and I used to travel a good deal, and I took a fair number of photos on our trips. Here are a few of my favorites, which I keep rotating in the background on my computer. There's no lesson here; just whimsy. I'm terrible at keeping records, but I've added captions that indicate something about where the pictures were taken. Most of these were taken with an SLR and scanned into digital form.


Tanzania. An acacia tree, if I'm not mistaken.

Orca and usability (HuffPo)

Imagine that it's a few months before the Presidential election. You've just received a call, asking if you'd be willing to offer your services as a usability consultant on a political software project. You're curious; what would something called "Orca" be for? You decide to do it. You're shown a few screen shots, along with brief written instructions for using the system.
Here's how your conversation with the development team might go.
Read the rest on the Huffington Post.

Sunday, November 18, 2012

Software building blocks

I've changed the background image on the blog; it shows the street pavement outside a small hotel in Milan, where my wife and I stayed some years ago. The rectangular blocks are arranged to form overlapping arcs, a nice contrast between the view close-up and from a short distance.

One highlight of our trip was a visit to the Milan cathedral. It was under construction and most of the exterior was covered for protection, but we were able to wander across the lower rooftops and admire the stonework and detailed carvings, high above the street level.



Thursday, November 15, 2012

Book review at Technology and Society

There's a new book review from Curtis Frye, who writes, "Robert St. Amant wrote Computing for Ordinary Mortals to describe the ideas behind computer technology for the non-technical reader. He succeeds admirably."

Read the full review at Technology and Society Book Reviews.

Tuesday, November 13, 2012

CSTA talk

I'll be giving a short talk at the local chapter of the Computer Science Teachers Association this evening. Here's what I plan to present.



Friday, November 9, 2012

UNC CRADLE talk

This isn't strictly related to Computing for Ordinary Mortals, but in case any readers are interested in what a college professor does...

These slides were for a presentation to the Center for Research and Development of Digital Libraries at the University of North Carolina. The talk was about Physical information spaces, in context, which is probably a bit too general, but I was covering a good deal of ground.



Thursday, November 8, 2012

Ordinary Mortals and CS education

This post can also be read as a public Google doc.

I’ve written Computing for Ordinary Mortals for readers without a technical background or even much experience in computing. My thought was this: If you wanted to explain what computing is all about, starting from scratch, what would you say? You have a tremendous number of decisions to make, about which topics are critical and which can be left out, about the ordering and detail of topics you include, and about how the topics fit together into a comprehensive whole. For what it’s worth, some computer scientists will make decisions different from mine. Most popular science books and textbooks go into greater detail about representing and managing data; I delay a discussion of programming concepts until after algorithms and abstract data types; I punt on the question of whether computer science is a branch of applied mathematics (see Dijkstra’s “How do we tell truths that might hurt?” [PDF], though he was talking about programming), or a branch of science (Newell, Perlis, and Simon’s “What is computer science?”), or a branch of engineering (Eden’s “Three Paradigms of Computer Science” [PDF]), or perhaps something different (Rosenbloom’s On Computing, or Graham’s “Hackers and painters”).

Writing a popular science book on computing means taking a stand on such issues, but the constraints of the genre didn’t make it easy for me to say, “Here’s what I’m doing...” That’s in part what this document is for, to identify the connections between Ordinary Mortals and the field of computer science, at least as it’s currently taught at the university and secondary school levels.

Industry talk

This isn't strictly related to Computing for Ordinary Mortals, but in case any readers are interested in what a college professor does...

These slides were for a presentation to an industry partner, about the work that goes on in my research lab.


Saturday, November 3, 2012

How to avoid programming (OUP)

What does a computer scientist do? You might expect that we spend a lot of our time programming, and this sometimes happens, for some of us. When I spend a few weeks or even months building a software system, the effort can be enormously fun and satisfying. But most of the time, what I actually do is a bit different. Here’s an example from my past work, related to the idea of computational thinking.
Imagine you have a new robot in your home. You haven’t yet figured out all of its capabilities, so you mainly use it for sentry duty; it rolls from room to room while you’re not at home, turning lights and appliances on and off, perhaps checking for fires or burglaries.

Read the rest on the OUPblog, "Oxford University Press's Academic Insights for the Thinking World."

Tuesday, October 30, 2012

The Big Idea (Whatever)

When I was ten years old or so, I saw a battered paperback copy of Triplanetary on my grandfather’s bookshelf. I borrowed it… and found myself in ten-year-old heaven. Science fiction led me to popular science, with Isaac Asimov (and Edgar Cayce, embarrassingly enough) to help me cross the boundary. I read about physics, space, biology, math, and psychology. It was formative reading. Today I’m a computer scientist, and I’ve just written my own book.
The big idea in Computing for Ordinary Mortals is that the basics of computer science can be conveyed through stories. Not stories about computers and how we use them, but stories about other kinds of everyday things we do. Computing is more about abstract concepts than about hardware or software, and we can understand these concepts through analogies to what happens in the real world.
Read the rest in a Big Ideas post on John Scalzi's Whatever blog.

Computational thinking about politics (HuffPo)

On The Atlantic Wire Gabriel Snyder gives what we'd call a combinatorial analysis of the presidential election. I like the analysis not for what it says about the possible outcome but because it illustrates an influential idea in computer science, called computational thinking: formulating a problem so that it can be solved by an information-processing agent. Here's how it works in this situation.

Read the rest on the Huffington Post.

Monday, October 15, 2012

A computer science reading list

In writing Computing for Ordinary Mortals, I had to think about where to point readers who might want more depth in each of the topics I wrote about. The result (in the Further Reading sections and end notes) is a reading list for someone who would like to come up to speed in computer science. My choices are biased, of course (a systems person would have more to say about architecture, operating systems, and networking), but I think they're plausible overall.

Friday, October 12, 2012

Computing for Ordinary Mortals: Errata

I wonder if it's possible to publish a book that doesn't contain errors? These are fixes for Computing for Ordinary Mortals. So far.

Page 90: The indentation of the last line in the algorithm, "Output the number 32...", should match the indentation of the second line, "Do the following..."

Page 222: It's Butler Lampson. (Thanks to Jon Doyle for catching my mistake.)

Friday, October 5, 2012

Computing for Ordinary Mortals has arrived



A couple of weeks ago a Federal Express envelope arrived at my house: it was a copy of my book, from Oxford. Last week a box arrived, and I knew what to expect: twenty more copies for friends and family.

And now I'm told that Computing for Ordinary Mortals is shipping from Amazon.

I'm excited. So what do I do now? I'll tell you about what I've written... shortly.

Wednesday, September 19, 2012

Science, Engineering, and Innovation (HuffPo)


We all know in our heads that science, engineering, and the work of creative people influence our everyday lives. But sometimes it feels more personal.
Chad Ruble's mother had a stroke some years ago and was left with aphasia. According to the National Aphasia Association, about 1 million Americans have aphasia, an impairment of the ability to process language -- in speaking and listening, and usually also in reading and writing. Today's networks can connect us with almost everyone else in the world, but for people with severe aphasia, more is needed. Ordinary phone conversation and email pose challenges.
Read the rest on the Huffington Post.

Wednesday, September 12, 2012

Computer programming is the new literacy (OUP)

It’s widely held that computer programming is the new literacy. (Disagreement can be found, even among computing professionals, but it’s not nearly as common.) It’s an effective analogy. We all agree that everyone should be literate, and we might see a natural association between writing letters for people to read and writing programs for computers to carry out. We also find a historical parallel to the pre-Gutenberg days, when written communication was the purview mainly of the aristocracy and professional scribes. Computation is an enormously valuable resource, and we’re only beginning to explore the implications of its being inexpensively and almost universally available.
But is programming-as-literacy an appropriate analogy? We tend to think that basic literacy is achieved by someone who can say, “Yes, I can read and write.” Let’s see what this means in the context of programming.

Read the rest on the OUPblog, "Oxford University Press's Academic Insights for the Thinking World."

Wednesday, September 5, 2012

Code Year: Why You Should Learn to Code (HuffPo)

You probably know about Code Year. Code Year, sponsored by Codecademy, challenges people to learn how to program in 2012. The Codecademy website offers free online lessons in a variety of programming languages; it's received attention in the press and saw a large boost from a comment from New York Mayor Michael Bloomberg on Twitter: "My New Year's resolution is to learn to code with Codecademy in 2012! Join me."
Hundreds of thousands of people have joined Bloomberg. Even though my own Code Year was 30 years ago, I can still appreciate the appeal -- you'll learn how to write software to make your computer do new and wonderful things that you find valuable, instead of depending only on what others have done. That's empowering.
But there's more.

Read the rest on the Huffington Post.

Sunday, July 8, 2012

How to write a popular science book

I'm being a little presumptuous with this post. My book, Computing for Ordinary Mortals, won't appear until the fall. And it's my first book. So I might end up retitling this post "How to Write a Popular Science Book that Nobody Reads," or (the happier but less likely case) "The Secret to Writing a Popular Science Best Seller." We'll see.

Here are a few things I kept in mind as I was writing.

Tuesday, June 19, 2012

Popular science books about computing


There’s a common aphorism in academia: You don’t really understand a subject until you teach it. This isn’t entirely true, of course, but being asked questions can make you think harder about what you know and what you don’t know.
I’ve found something similar in writing a popular science book. And I’ve realized a bit more.

Thursday, June 14, 2012

Justin Bieber is a Literary Giant



End here. Us then. Finn, again! Take. Bussoftlhee, mememormee! Till thousandsthee. Lps. The keys to. Given! A way a lone a last a loved a long the riverrun, past Eve and Adam's, from swerve of shore to bend of bay, brings us by a commodius vicus of recirculation back to Howth Castle and Environs.

Thursday, June 7, 2012

Everything old is new again

I wrote this back in 2008 and then took it down; here it is again, slightly updated.
Have you ever come across the notion that the world of computers is changing very rapidly? Me too. This theme runs constantly through discussions of computer and communication systems today: we'll need these upgrades; our systems will be obsolete within six months; we can't conceive of what our grandchildren will be doing with computers; and so forth.
Not surprisingly, though, really good ideas -- the kind that lead to revolutionary change -- are rare. General conceptual threads in computing can often be traced back to a strikingly original idea, and what we sometimes find is that our great new discoveries are what smart people have been talking about for quite some time. Here are two examples.

Friday, May 25, 2012

An ongoing revolution... in computing education

These days a lot of people seem to be thinking, "Maybe I could try one of those free online courses and learn how to program." Others say, "What's the point?" (Juliet Waters, in blogging about her New Year’s resolution to learn how to code, explains what the point is.) Some even say, "No! Please don't learn to code!" Fortunately, the last category holds only a tiny minority of people.
The past six months have seen a surge of public interest in computing. The UK is refocusing its pre-university curriculum on information and communications technology to concentrate on the science of computing. (This is good timing; 2012 marks the centenary of the birth of Alan Turing, the London-born founder of computer science.) In the New York Times, Randall Stross writes about computational thinking as a fundamental skill for everyone. When even the mayor of New York City decides to join Codecademy to learn how to program, people take notice. A minor revolution is underway in formal and informal computing education.

Wednesday, May 23, 2012

I'll bet you like ice cream.



Do you like ice cream? I'll predict that if you care enough to mention ice cream via Twitter, you're probably in favor of it, even moderately enthusiastic.

Am I just guessing? Not entirely. I used a visualization system developed by my colleague Chris Healey to produce the result at the top of this post. When I typed in "ice cream", the system retrieved a few hundred recent tweets containing that term and generated a visualization of their emotional content, or sentiment. Try it yourself, with your own keywords, on Chris's tweet viz Web page.

The visualization integrates a lot of information (details here), but I'll concentrate on the basics: roughly speaking, the green circles are for tweets that express "positive" sentiment, and the blue circles are for tweets expressing "negative" sentiment. Sentiment is inferred from the words in the tweet. For example, "ice cream and good sex!" (the text of an actual tweet, with relevant words in bold) contains "good" and "sex". Very nice. But how could ice cream be bad? Well, some people who eat ice cream when they're feeling down might tweet about it. And one tweet says it's "dangerous even in an ice cream shop! Robbery yesterday..."

We'll need a bit of theory to understand how the circles are laid out. James Russell, a psychologist at Boston College, has proposed a conceptual framework for understanding emotion [PDF]. (There have been many attempts to formalize what we know about emotion.) In Russell's framework, one important factor is valence, which ranges from unpleasant to pleasant. (This is actually what I meant by "negative" and "positive" above.) The horizontal placement of a circle is an indication of the unpleasantness or pleasantness expressed in a tweet. And the vertical placement? That's another factor in Russell's framework: arousal, which ranges from being nearly comatose to being very, very excited. So the circles toward the top are excited, with happy tweets on the right and stressed-out tweets on the left, and at the bottom they're kind of... meh. We don't see a lot of the latter; tweeting takes some effort, after all.
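To make the idea concrete, here is a toy sketch of word-level sentiment scoring in the spirit of the approach described above: known words carry (valence, arousal) scores, and a tweet's circle is placed at the average of the scores of the words it contains. This is only an illustration of the general technique; Chris's actual system is far more sophisticated, and the lexicon and scores below are invented for the example.

```python
# Hypothetical lexicon: word -> (valence, arousal), each in [0, 1].
# Valence runs from unpleasant (0) to pleasant (1); arousal from
# nearly comatose (0) to very excited (1). These numbers are made up.
LEXICON = {
    "good":      (0.80, 0.55),
    "loving":    (0.90, 0.50),
    "dangerous": (0.20, 0.80),
    "robbery":   (0.15, 0.75),
    "funeral":   (0.15, 0.40),
}

def score_tweet(text):
    """Average the (valence, arousal) scores of known words in a tweet.

    Returns None when no lexicon words appear, since we then have no
    evidence of sentiment either way.
    """
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    if not hits:
        return None
    valence = sum(v for v, _ in hits) / len(hits)
    arousal = sum(a for _, a in hits) / len(hits)
    return (valence, arousal)

# The (valence, arousal) pair gives the circle's (x, y) position:
# pleasant tweets plot to the right, excited tweets toward the top.
print(score_tweet("ice cream and good sex!"))
print(score_tweet("dangerous even in an ice cream shop! Robbery yesterday"))
```

Note how blunt a word-level analysis is: "Nice to see my loving family at an event other than a funeral" would be scored from "loving" and "funeral" alone, with no sense of what the sentence is actually about.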

It's surprisingly hard to find topics where tweets contain uniformly pleasant or unpleasant sentiment. Not even for the keyword "funeral":


Some people apparently like funerals! But if you hovered your mouse pointer over different green circles to read the tweets, you'd find that some Twitter users have the word "funeral" in their names, and sometimes they tweet about happy things. (Why not filter out names? We'd lose information. For example, a query on "obama" might then ignore tweets containing only @BarackObama, which we probably do want to see.) Also, you'll occasionally find something along these lines: "Nice to see my loving family at an event other than a funeral." A loving family is a pleasant thing to have.


This last example suggests that an automated analysis isn't as smart as a person in extracting meaning or sentiment. A human being, for example, might reasonably judge that a tweet about "an event other than a funeral" isn't really about funerals. Processing natural language, beyond the level of individual words, remains a very hard problem.


I'm writing about Chris's tweet viz system for a couple of reasons. First, it's cool. Half a billion people across the world use Twitter, and 340 million tweets are posted each day. (I'm quoting one of my students, Shishir Kakaraddi, who just completed an M.S. thesis in the general area of tweet summarization.) We need good tools for making sense of all this data. Second, the project is a nice example of how research can drive software development. It's not just about what people might like to see in a visualization of Twitter data; the design of the visualization draws on psychological models of visual perception (for example, in the color choices) and of emotion (in the analysis and display of sentiment information). By building and testing systems like these, we can learn new and interesting things.


Monday, May 21, 2012

Being chased by ravening informavores

I posted this a couple of years ago, but the original version is gone; here it is again, in slightly altered form.
Imagine yourself a gentle woodland creature, perhaps a deer. You're peacefully munching on ferns and acorns in the forest, like Bambi, moving from one patch of fresh greenery to another, until...
Well, there's no "until" in this story; you spend the rest of your life in the pursuit of various deer-appropriate activities, and the curtain comes down long before your unfortunate encounter with a pack of wolves.
If we were to ask a group of biologists to tell this story, they'd be able to flesh it out with any number of details: the kinds of plants you find nourishing as deer fodder; your weight, coloring, and other physical characteristics; even what your life has probably been like up until now. They'd be able to describe your deerhood down to the hairs in your coat. This isn't the only way to tell such a story, though. Some of the biologists' mathematically oriented cousins, the behavioral ecologists, have traded in their magnifying glasses for satellite cameras, abandoning details for generality. For example, a behavioral ecologist might say,
Once upon a time, in a forest in which patches of edible foliage were uniformly distributed with such and such parameters, various groups of deer spent x hours on average within one patch before moving to another...
This bedtime story should put you (not to mention your children) right to sleep. But here's the thing: if we can tell such ecological stories accurately, we should be able to answer a lot of interesting questions. For example, "Given a particular foraging behavior we observe, what kind of environment might this animal have evolved in?" Or "If a drought is expected this summer, reducing flora of this kind, is this species likely to survive, given its usual behavior?" Or even a bit farther afield, "Given the rate at which temperatures are rising in the Arctic, along with our knowledge of polar bears, including their swimming range, how far are they from extinction?" And so forth. I've made up these specific examples, but they all fall within the scope of behavioral ecology. 


"But why should I care about any of this?"


Let me tell you a different story...
I'm browsing through Facebook and I come across an update that points to an interesting story in the news. I follow the link to a newspaper Web site, and after I'm finished I see, on the sidebar, other articles that are popular. I move on, happily reading and even commenting until eventually I am eaten by wolves—
Oops, sorry, I forgot which story I was telling. But you can see how I might have become confused. I'm telling the same story about myself that I told about a deer, except that I'm foraging for information. The insight concerning this parallel is due to George Miller (whose name is familiar to Psych 101 students—the magical number seven plus-or-minus two may ring a bell). Miller observed that just as animals consume food, human beings and other higher organisms consume information: we're informavores. One of the implications of this view is that the scientific concepts used to model animal ecologies can also be applied to human information environments.
Peter Pirolli and Stu Card, cognitive scientists at PARC, have explored this idea in detail. Among other things, they've defined some new conceptual terms for Web designers to think about, such as information foraging and information scent. "Information scent is a metaphor for proximal cues that guide users in navigating toward distal information," as I once wrote elsewhere. In other words, we can think of the World Wide Web as consisting of patches of information that we move in and between, trying to maximize the value of what we're absorbing as we go. The little snippets of text associated with hyperlinks that we can click on (or decide not to click on) are the information scent for the Web pages they reach.
These concepts are more than a metaphorical way of describing what people are doing when they surf the Web. They can be used prescriptively. The Web is an artificially designed information environment, one that can be tailored to what is known about human information foraging patterns: how we allocate attention between competing sources of information; how we combine what we see with what we already know; even the mechanics of how we point at and click on information items we find interesting. Our interaction with a given Web site can be optimal, by some measure, if it is designed to be so. In theory, at least...
So if a deer sees a bunch of bright red berries in the middle distance and thinks, "Yummy," he's doing much the same as I am when I see a link and consider following it. More yummy information. And when we've had our fill, or at least grazed for a while, we move on. The next time you hear someone say, "We're closer to animals than we think," you have another reason to agree.

I'll end with a disclaimer: I'm a computer scientist rather than a behavioral ecologist; this is my understanding of the area. I've also written a little bit about the work of Peter Pirolli and Stu Card elsewhere, and I chat with them on occasion.

Sunday, May 20, 2012

Through the Computer Screen, Part II

My previous post was a Lewis Carroll pastiche about the organization of concepts in computer science. This isn't an unusual effort; several can be found online, and there's an entire book, Lauren Ipsum, that combines themes from Alice, The Phantom Tollbooth, and probably other sources (I've only read a few chapters) to introduce computer science to kids.

Tweedledum's expression (or is that Tweedledee's expression?) will probably match your reaction to the computer in-jokes in my piece.


I should start by crediting Peter Denning, who developed the Great Principles of Computing, though I'm relying on a slightly older and simpler version of his work. To quote from Denning's paper [PDF]:
  • Computation: What can be computed and how; limits of computing. [I've called this Theory.]
  • Communication: Sending messages from one point to another.
  • Coordination: Multiple entities cooperating toward a single result.
  • Automation: Performing cognitive tasks by computer.  [I've narrowed this to Artificial Intelligence.]
  • Recollection: Storing and retrieving information. [Narrowed to Information Management.]
Here's how I put this together in a metaphorical story, with annotations in green. I explain the red asterisks at the end.

Alice is wandering through the downtown area of her city. As she walks down a side street, she passes a man and a woman leaving the entrance of a small white building. The woman says, "That was an interesting museum." [Why a museum as the setting? See the AI section below.]
Alice decides to go inside. She stops in front of a sign titled “Read me” [Software is often delivered with a README file giving basic information] and discovers that she’s in a museum of Victorian artifacts. Alice passes a display of postcards, then an arrangement of fashionable women’s clothing (cuirass bodices, skirts with bustles), and then a penny-farthing bicycle. Eventually she sees a man in uniform sitting behind a writing desk. His badge reads, Docent: Charles Corvus.  ["Charles" is of course a nod to Lewis Carroll. The Corvus genus includes ravens, one of which appears in Alice in Wonderland as the subject of a riddle.]
Communication: "Hello," says Alice politely. "Can you tell me about your museum?" Charles doesn't look up.
"This isn’t a mausoleum," he says. *
“Your museeeum," Alice says, enunciating carefully. [One approach to dealing with errors in communication is simply to repeat.]
Charles glances up at her. "I beg your pardon," he says. “It’s a bit noisy.”  [Noise introduces errors in communication.] He rises and shakes Alice’s hand. [Handshaking is part of how some systems initiate communication.] “Would you like to have a tour?"
Artificial Intelligence: He gives her a small plastic device with buttons and a display. "This is a mobile guide. If you press this button, it will tell you where to go next in the museum."
"Thank you. [Carroll's Alice is very polite.] How does it know what I’ll be interested in?"
"It doesn't," Charles says. * "It takes you on a walk in a random direction." [A random walk and the British Museum algorithm are famous though not very good search techniques in AI.]
Information management: "But how does that help me?” asks Alice. “I mean, the museum seems very confusing as it is. It’s as if there’s no organization to the exhibits."
"Ah, but there is. You're meant to explore the museum, and it's organized so that whatever exhibit you're standing in front of, related exhibits are as far away as possible." *
"Does that help?"
"Yes—the key is to take your time.  Join me. We’ll explore together."  [These are database terms.]
Coordination: Alice and Charles pass two stout museum workers holding opposite ends of a large flag. The men are arguing and pulling violently in opposite directions. The threads part and snap, leaving the fabric in tatters. * [These are operating systems terms.]
"Those contentious fellows are in charge of separate exhibits," says Charles. "They're always having a bit of a fight." [Contention over resources is also an operating systems issue; I'm quoting Carroll with "a bit of a fight".]
Theory: Alice and Charles walk through the rooms for a while longer, talking about the exhibits. At the exit she says, "Thank you, it's a very interesting museum."
"All of our visitors say that."
"Do you have many visitors?" asks Alice cautiously. She hasn't seen another visitor inside the museum.
"Uncountably many," says Charles.  [Some sets of things, such as the real numbers, can't be counted in their entirety--if you try, you'll inevitably end up missing some.]
"Oh. Have you tried counting?" *
“Well…” Charles halts and looks thoughtful. "Good-bye." [Alan Turing, the father of computer science, proved that the halting problem--determining whether an arbitrary computer program will stop or run forever--cannot be solved by any algorithm.]
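For the curious, here's what the mobile guide's strategy looks like in code: a minimal Python sketch of a random-walk search, with a toy museum layout of my own invention (the exhibit names are borrowed from the story, not from Denning's paper). It illustrates why the random walk is famous but not very good: it has no memory and no heuristic, so it can revisit the same exhibit indefinitely.

```python
import random

# A toy museum: each exhibit lists its neighboring exhibits.
# (This layout is invented for illustration.)
museum = {
    "postcards": ["clothing", "bicycle"],
    "clothing": ["postcards", "writing desk"],
    "bicycle": ["postcards", "writing desk"],
    "writing desk": ["clothing", "bicycle"],
}

def random_walk(start, goal, max_steps=100, seed=None):
    """Wander from exhibit to exhibit at random until we stumble
    on the goal or give up. No memory, no heuristic -- hence a
    famously poor search technique."""
    rng = random.Random(seed)
    here = start
    path = [here]
    for _ in range(max_steps):
        if here == goal:
            return path
        here = rng.choice(museum[here])  # pick a neighbor at random
        path.append(here)
    return None  # gave up

path = random_walk("postcards", "writing desk", seed=2012)
print(path)
```

Alice's instinct is right, of course: on a connected graph this will eventually reach the goal, but "eventually" can be a very long time, which is why Charles advises her to take her time.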


One of the conceits in this piece (if that's the word I want) is that the starred breakdowns should be memorable, and each breakdown marks a different area of computing. But is this effective? Hmm... 
I feel a little bit like a magician explaining a trick that didn't come off, or a comedian trying to convince you that some routine should be funny. Elmore Leonard follows this rule in his work: "If it sounds like writing, I rewrite it." What I've written definitely sounds like writing. That's part of the charm of the Alice books, but I'm no Lewis Carroll.