Showing posts with label popular science. Show all posts

Friday, February 8, 2013

A four-minute introduction to computing

This is a draft script for a very short talk about computing concepts for a non-technical audience. It doesn't go deep, but I try to bring across some important ideas...

When I was a kid, I had a Play-Doh machine. You drop in a lump of colored clay, insert a little plastic stencil, and then push down on the plunger. You get a long tube with the cross section of a triangle or a star. Was it fun? Well, I still remember it today...

When I explain what computer science is about, I sometimes start with a machine like this. Except that computers mold information into different shapes. They're information processing machines. There's more to it, of course. Information is different from physical raw materials. Think of information as a stream of numbers that you could write down. We might be talking about data from a scientific experiment or quarterly business reports, or a DVD video--it's all information. That's what's going into the machine, as input, and a new stream of information is the output.

But here's something cool. Inside every computer is a controller. That controller treats some streams of information differently from other kinds of data: it interprets each number as an instruction to follow. A stream of numbers can be a stream of instructions--a program.
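The talk itself stays non-technical, but for readers who'd like to see the idea in code, here's a toy sketch of such a controller in Python. The three numeric "opcodes" are invented for illustration; real machine languages have many more instructions, but the principle is the same: a stream of numbers, read one at a time and obeyed.

```python
# A toy machine: the "controller" reads numbers one at a time and
# treats each as an instruction. (The opcodes 0, 1, 2 are made up
# for illustration.)
def run(program):
    acc = 0          # a single register, the "accumulator"
    pc = 0           # program counter: which number we're looking at
    while pc < len(program):
        op = program[pc]
        if op == 1:              # 1 n  -> load the number n
            acc = program[pc + 1]
            pc += 2
        elif op == 2:            # 2 n  -> add n to the accumulator
            acc += program[pc + 1]
            pc += 2
        elif op == 0:            # 0    -> halt
            break
    return acc

# The "program" is itself just a stream of numbers:
print(run([1, 40, 2, 2, 0]))   # load 40, add 2, halt -> prints 42
```

Notice that the list `[1, 40, 2, 2, 0]` is ordinary data until the controller chooses to interpret it as instructions.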

So unlike my Play-Doh machine, a computer can handle two different kinds of input. One is information to work on; the other is instructions for what to do with that information. And some of those instructions involve making decisions about what to do next. This is why we talk about a computer as a different kind of machine from the ones we usually think of--a Play-Doh machine, a loom for weaving, and so forth--because it can make some decisions for itself, on our behalf. That's more than a stencil or a template for repeating actions; it's real automation.

And there's something else. Programs are information, right? And programs take information as input... This means that we can feed one program to another program. Now things get really interesting.

Think of the instructions that a computer can carry out as its language. To get the computer to do something useful, you need to speak its language. But machine language is incredibly tedious, and it takes forever to write down instructions in just the right way to make just the right things happen.

And what if I'm a video artist or a baseball statistician? I have information that I'd like to be processed--maybe color corrections or on-base plus slugging numbers--but of course the machine's language doesn't include anything close to these abstract concepts. But here's the thing--information is malleable, and we know a lot about translating it from one language into another. My information comes in abstract chunks of information that I can talk about in my specialized language of video artistry or baseball statistics. With a lot of work, I may be able to translate my own information abstractions into terms that a computer can handle. And I don't have to do this entirely by hand--once I figure it all out, I can write a program to do the translation for me.
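As a small illustration of working in a specialized vocabulary (this example isn't in the talk, and the player's numbers below are invented), here is the baseball statistician's on-base plus slugging, written in the statistician's own terms. The language implementation does the translating down to machine instructions, so the statistician never has to.

```python
# On-base plus slugging (OPS), expressed in the statistician's own
# vocabulary rather than the machine's. The numbers are made up.
def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    # OBP = (H + BB + HBP) / (AB + BB + HBP + SF)
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slugging(singles, doubles, triples, home_runs, at_bats):
    # SLG = total bases / at-bats
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

ops = on_base_percentage(150, 60, 5, 500, 4) + slugging(100, 30, 5, 15, 500)
print(round(ops, 3))   # prints 0.848
```

The point isn't the arithmetic; it's that none of these names--hits, walks, total bases--exist in the machine's language. Layers of translation, built once by other programmers, carry them down to it.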

So when I'm using my computer, I don't have to work at the level of the machine; I can express myself in the concepts I'm familiar with, and the computer will translate those concepts into its own language and do whatever work I tell it to do.

This is the practical side of computing, what programming is basically all about--building computational abstractions that help people solve problems. And on the theoretical side, you might be thinking, "So we can transform a computer to behave as if it's a completely different machine..." Yes. Computers are universal machines. I don't mean that a computer can do everything. What I mean is that when we think about what "ordinary" machines do, we tend to say that you have to pick the right tool for the job. You don't use a chain saw to drive screws, or a kitchen blender to paint your walls. If the job is processing information, though, we choose a computer. We might think about how fast it runs and how much information it can store (it's something like saying, "I need a bigger hammer"), but the practical details are less important than the idea that we don't need different kinds of machines for different information processing tasks. Every computer is theoretically equivalent to every other computer for solving problems--we can transform one into another. It's just a matter of which abstractions we build on top of it.

Computing is as general and as powerful as you can imagine.

Friday, January 25, 2013

An unexpurgated interview

In the pulp science fiction novels I read as a kid, the authors tended to work within the social norms of the day with respect to language. Here's an example from E. E. "Doc" Smith's First Lensman:
Jack started to express an unexpurgated opinion, but shut himself up. Young cubs did not swear in front of the First Lensman.
And another:
"Do you think you can get away with this?" she demanded. "Why, you..." and the unexpurgated, trenchant, brilliantly detailed characterization could have seared its way through four-ply asbestos.
I liked "unexpurgated", once I looked up what it meant. Hence the title of this post. I recently exchanged email with Nikki Stoudt, a writer for the NCSU student newspaper, for an article. Here's what was said... unexpurgated. (Not that the text needs it.)

Monday, December 31, 2012

The book



It’s easy to take computers for granted. If I want to go shopping, visit a library, play a game, or share my thoughts with the rest of the world, I can do this all by typing on my laptop. I can exchange email with friends and colleagues, wherever they might be. If I were to pick up a screwdriver and go exploring in my house, I’d find computers in kitchen appliances, gaming and entertainment consoles, telephones—even in the walls, controlling the temperature.

Have you ever wondered what gives computers such remarkable power and flexibility? One answer is that computer designers and software developers build them that way. But that’s not entirely satisfying as an answer. Computing for Ordinary Mortals starts in a different way:


Sunday, July 8, 2012

How to write a popular science book

I'm being a little presumptuous with this post. My book, Computing for Ordinary Mortals, won't appear until the fall. And it's my first book. So I might end up retitling this post "How to Write a Popular Science Book that Nobody Reads," or (the happier but less likely case) "The Secret to Writing a Popular Science Best Seller." We'll see.

Here are a few things I kept in mind as I was writing.

Tuesday, June 19, 2012

Popular science books about computing


There’s a common aphorism in academia: You don’t really understand a subject until you teach it. This isn’t entirely true, of course, but being asked questions can make you think harder about what you know and what you don’t know.
I’ve found something similar in writing a popular science book. And I’ve realized a bit more.

Thursday, June 7, 2012

Everything old is new again

I wrote this back in 2008 and then took it down; here it is again, slightly updated.
Have you ever come across the notion that the world of computers is changing very rapidly? Me too. This theme runs constantly through discussions of computer and communication systems today: we'll need these upgrades; our systems will be obsolete within six months; we can't conceive of what our grandchildren will be doing with computers; and so forth.
Not surprisingly, though, really good ideas--the kind that lead to revolutionary change--are rare. General conceptual threads in computing can often be traced back to a strikingly original idea, and what we sometimes find is that our great new discoveries are what smart people have been talking about for quite some time. Here are two examples.

Sunday, May 20, 2012

Through the Computer Screen, Part II

My previous post was a Lewis Carroll pastiche about the organization of concepts in computer science. This isn't an unusual effort; several can be found online, and there's an entire book, Lauren Ipsum, that combines themes from Alice, The Phantom Tollbooth, and probably other sources (I've only read a few chapters) to introduce computer science to kids.

 Tweedledum's expression (or is that Tweedledee's expression?) will probably match your reaction to the computer in-jokes in my piece.


I should start by crediting Peter Denning, who developed the Great Principles of Computing, though I'm relying on a slightly older and simpler version of his work. To quote from Denning's paper [PDF]:
  • Computation: What can be computed and how; limits of computing. [I've called this Theory.]
  • Communication: Sending messages from one point to another.
  • Coordination: Multiple entities cooperating toward a single result.
  • Automation: Performing cognitive tasks by computer.  [I've narrowed this to Artificial Intelligence.]
  • Recollection: Storing and retrieving information. [Narrowed to Information Management.]
Here's how I put this together in a metaphorical story, with annotations in green. I explain the red asterisks at the end.

Alice is wandering through the downtown area of her city. As she walks down a side street, she passes a man and a woman leaving the entrance of a small white building. The woman says, "That was an interesting museum." [Why a museum as the setting? See the AI section below.]
Alice decides to go inside. She stops in front of a sign titled “Read me” [Software is often delivered with a README file giving basic information] and discovers that she’s in a museum of Victorian artifacts. Alice passes a display of postcards, then an arrangement of fashionable women’s clothing (cuirass bodices, skirts with bustles), and then a penny-farthing bicycle. Eventually she sees a man in uniform sitting behind a writing desk. His badge reads, Docent: Charles Corvus.  ["Charles" is of course a nod to Lewis Carroll. The Corvus genus includes ravens, one of which appears in Alice in Wonderland as the subject of a riddle.]
Communication: "Hello," says Alice politely. "Can you tell me about your museum?" Charles doesn't look up.
"This isn’t a mausoleum," he says. *
“Your museeeum," Alice says, enunciating carefully. [One approach to dealing with errors in communication is simply to repeat.]
Charles glances up at her. "I beg your pardon," he says. “It’s a bit noisy.” [Noise introduces errors in communication.] He rises and shakes Alice’s hand. [Handshaking is part of how some systems initiate communication.] “Would you like to have a tour?"
Artificial Intelligence: He gives her a small plastic device with buttons and a display. "This is a mobile guide. If you press this button, it will tell you where to go next in the museum."
"Thank you. [Carroll's Alice is very polite.] How does it know what I’ll be interested in?"
"It doesn't," Charles says. * "It takes you on a walk in a random direction." [A random walk and the British Museum algorithm are famous though not very good search techniques in AI.]
Information management: "But how does that help me?” asks Alice. “I mean, the museum seems very confusing as it is. It’s as if there’s no organization to the exhibits."
"Ah, but there is. You're meant to explore the museum, and it's organized so that whatever exhibit you're standing in front of, related exhibits are as far away as possible." *
"Does that help?"
"Yes—the key is to take your time.  Join me. We’ll explore together."  [These are database terms.]
Coordination: Alice and Charles pass two stout museum workers holding opposite ends of a large flag. The men are arguing and pulling violently in opposite directions. The threads part and snap, leaving the fabric in tatters. * [These are operating systems terms.]
"Those contentious fellows are in charge of separate exhibits," says Charles. "They're always having a bit of a fight." [Contention over resources is also an operating systems issue; I'm quoting Carroll with "a bit of a fight".]
Theory: Alice and Charles walk through the rooms for a while longer, talking about the exhibits. At the exit she says, "Thank you, it's a very interesting museum."
"All of our visitors say that."
"Do you have many visitors?" asks Alice cautiously. She hasn't seen another visitor inside the museum.
"Uncountably many," says Charles.  [Some sets of things, such as the real numbers, can't be counted in their entirety--if you try, you'll inevitably end up missing some.]
"Oh. Have you tried counting?" *
“Well…” Charles halts and looks thoughtful. "Good-bye." [Alan Turing, the father of computer science, proved that the halting problem--determining whether an arbitrary computer program will stop or run forever--cannot be solved by any algorithm.]


One of the conceits in this piece (if that's the word I want) is that the starred breakdowns should be memorable, and each breakdown marks a different area of computing. But is this effective? Hmm... 
I feel a little bit like a magician explaining a trick that didn't come off, or a comedian trying to convince you that some routine should be funny. Elmore Leonard follows this rule in his work: "If it sounds like writing, I rewrite it." What I've written definitely sounds like writing. That's part of the charm of the Alice books, but I'm no Lewis Carroll.

Saturday, May 19, 2012

Through the Computer Screen, and What Alice Found There

I wonder if every computer scientist who writes for the general public is tempted to do an Alice pastiche?

This is a fragment from a draft of the first chapter of my soon-to-be-published book, Computing for Ordinary Mortals. One of my excellent reviewers said that this passage had to go, and so I replaced it. I still like it, though. I'll put up another post, a bit later, with footnotes.



Alice is wandering through the downtown area of her city. As she walks down a side street, she passes a man and a woman leaving the entrance of a small white building. The woman says, "That was an interesting museum."
Alice decides to go inside. She stops in front of a sign titled “Read me” and discovers that she’s in a museum of Victorian artifacts. Alice passes a display of postcards, then an arrangement of fashionable women’s clothing (cuirass bodices, skirts with bustles), and then a penny-farthing bicycle. Eventually she sees a man in uniform sitting behind a writing desk. His badge reads, Docent: Charles Corvus.
"Hello," says Alice politely. "Can you tell me about your museum?" Charles doesn't look up.
"This isn’t a mausoleum," he says.
“Your museeeum," Alice says, enunciating carefully.
Charles glances up at her. "I beg your pardon," he says. “It’s a bit noisy.” He rises and shakes Alice’s hand. “Would you like to have a tour?"
He gives her a small plastic device with buttons and a display. "This is a mobile guide. If you press this button, it will tell you where to go next in the museum."
"Thank you. How does it know what I’ll be interested in?"
"It doesn't," Charles says. "It takes you on a walk in a random direction."
"But how does that help me?” asks Alice. “I mean, the museum seems very confusing as it is. It’s as if there’s no organization to the exhibits."
"Ah, but there is. You're meant to explore the museum, and it's organized so that whatever exhibit you're standing in front of, related exhibits are as far away as possible."
"Does that help?"
"Yes—the key is to take your time. Join me. We’ll explore together."
Alice and Charles pass two stout museum workers holding opposite ends of a large flag. The men are arguing and pulling violently in opposite directions. The threads part and snap, leaving the fabric in tatters.
"Those contentious fellows are in charge of separate exhibits," says Charles. "They're always having a bit of a fight."
Alice and Charles walk through the rooms for a while longer, talking about the exhibits. At the exit she says, "Thank you, it's a very interesting museum."
"All of our visitors say that."
"Do you have many visitors?" asks Alice cautiously. She hasn't seen another visitor inside the museum.
"Uncountably many," says Charles.
"Oh. Have you tried counting?"
“Well…” Charles halts and looks thoughtful. "Good-bye."

Thursday, May 17, 2012

Behind the title of a new book




Forthcoming this fall from Oxford University Press

So you've written a book. What should you call it?

Tough question. Two years ago I submitted a proposal to Oxford for a book titled Computational Thinking.

My editor liked it. (She suggested that I resubmit a proposal for two books, one purely about ideas in computing and the other about how those ideas connect to our everyday lives. She also asked if I would edit a collection of papers on the subject... but I declined both options.) Reviewers also liked the proposal. (Non-fiction is different from fiction; you can pitch a book to an agent or publisher before you've finished writing it. Sometimes before you've written any of it.) But some reviewers argued about the title--there's disagreement among computer scientists about what computational thinking actually is.

Back to the drawing board. My second effort at a title was How to think about computers if you're not a computer scientist. The marketing folks at Oxford hated it.

The third try, a suggestion from my editor, ended up on the book contract. Understanding the computers in our lives. I don't think anyone was really satisfied with that, though.

So I sat down with my wife and brainstormed.

By analogy, the challenge was this. Imagine an alternative universe in which you're looking for a popular science book about biology. You find biographies of Darwin and other famous figures of the past and present; you see books that tell you how to turn on and focus a microscope, and even how to run a DNA sequencer; you come across a wide range of books aimed at professional biologists. No one at the bookstore has ever heard of Stephen Jay Gould, Edward O. Wilson, Richard Dawkins, James Watson, or Lewis Thomas, and hardly anyone thinks that it's possible for a book to convey the basic principles of biology to the average, non-biologist reader.

The literature of computing is something like that. There are lots of books about the history and social impact of computing, and about how to use a computer. There are libraries full of deep technical books for computing professionals. But there aren't many books about the ideas behind computers and computing, written in an approachable way. Mine would be a new addition to that tiny handful.

How should the book's title convey what it's about--and what it's not about? One batch of titles we came up with emphasized the "popular" aspect of "popular science", while de-emphasizing the how-to aspects of computing:
  • Computers for the rest of us
  • A hands-off guide to computers
  • The human element in computing
  • A computer scientist looks at life
  • Computable lifestyles
  • The computable lifestyle
  • A computable life
  • This is not a computer manual
  • About computing
  • It's all computed
  • Computing without computers
  • The ABCs of computing
But none of these quite works, even though I like a couple of them: they're either too general or a bit misleading about the contents.

The next batch of titles was based on the structure of the book I was writing. I tell stories to convey abstract ideas, real-world metaphors for how computation works. So...
  • The metaphorical computer
  • Computer stories
  • Stories about computers
  • Computers: A bedtime reader
Also less than ideal. The point isn't the stories themselves (which could be about anything, including the history of computing), but what the stories suggest.

The next batch of titles moved away from description toward the equivalent of "Buy this book":
  • Computers: The important stuff
  • Computers: The first book to read
  • Computers: The first book you need to read
  • Computers: What everyone needs to know
  • Computers: The inside story
  • Computers: Behind the silicon curtain
But none of them seem quite right. (In case you're curious, all of these titles have the word "computer" or "computing" in them to help Web search engines find them.)

In the end, we settled on Computing for Ordinary Mortals. It says, "This is a book that anyone might read," and I hope that it also makes a subliminal connection between computers and our lives.