
Friday, February 8, 2013

A four-minute introduction to computing

This is a draft script for a very short talk about computing concepts for a non-technical audience. It doesn't go deep, but I try to bring across some important ideas...

When I was a kid, I had a Play-Doh machine. You drop in a lump of colored clay, insert a little plastic stencil, and then push down on the plunger. You get a long tube with the cross section of a triangle or a star. Was it fun? Well, I still remember it today...

When I explain what computer science is about, I sometimes start with a machine like this. Except that computers mold information into different shapes. They're information processing machines. There's more to it, of course. Information is different from physical raw materials. Think of information as a stream of numbers that you could write down. We might be talking about data from a scientific experiment or quarterly business reports, or a DVD video--it's all information. That's what's going into the machine, as input, and a new stream of information is the output.

But here's something cool. Inside every computer is a controller. That controller treats some streams of information differently from other kinds of data: it interprets each number as an instruction to follow. A stream of numbers can be a stream of instructions--a program.

So unlike my Play-Doh machine, a computer can handle two different kinds of input. One is information to work on; the other is instructions for what to do with the information. And some of those instructions involve making decisions about what the computer should do next. This is why a computer is a different kind of machine from the ones we usually think of--a Play-Doh machine, a loom for weaving, and so forth: it can make some decisions for itself, on our behalf. That's more than a stencil or a template for repeating actions; it's real automation.
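(An aside for readers of this draft: here's what such a machine might look like as a toy Python sketch. The instruction codes are invented for the example--real machine languages are much richer--but the idea is the same: a stream of numbers is the program, and one of the instructions makes a decision.)

    # A toy machine whose program is just a stream of numbers.
    # Invented instruction codes for this sketch:
    #   1 n -> add n to the running total
    #   2 n -> if the total is still below n, start over (a decision)
    #   0   -> halt and report the total

    def run(program):
        total = 0
        pc = 0                          # position in the instruction stream
        while True:
            op = program[pc]
            if op == 1:                 # add
                total += program[pc + 1]
                pc += 2
            elif op == 2:               # conditional jump: the machine decides
                pc = 0 if total < program[pc + 1] else pc + 2
            elif op == 0:               # halt
                return total

    print(run([1, 5, 2, 20, 0]))        # adds 5 until the total reaches 20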

And there's something else. Programs are information, right? And programs take information as input... This means that we can feed one program to another program. Now things get really interesting.
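(Continuing the toy sketch from above: because a program is nothing but a list of numbers, one program can take another program as input and rewrite it. The transformation here is invented purely for illustration.)

    # A program that reads another program and produces a new one:
    # here, doubling every amount that gets added (instruction code 1
    # in the toy machine above).

    def double_the_adds(program):
        out = list(program)
        pc = 0
        while out[pc] != 0:             # stop at the halt instruction
            if out[pc] == 1:            # an "add" instruction...
                out[pc + 1] *= 2        # ...gets its operand doubled
            pc += 2                     # codes 1 and 2 each take one operand
        return out

    faster = double_the_adds([1, 5, 2, 20, 0])   # -> [1, 10, 2, 20, 0]
    print(run(faster))                           # reaches 20 in fewer steps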

Think of the instructions that a computer can carry out as its language. To get the computer to do something useful, you need to speak its language. But machine language is incredibly tedious, and it takes forever to write down instructions in just the right way to make just the right things happen.

And what if I'm a video artist or a baseball statistician? I have information that I'd like processed--maybe color corrections or on-base plus slugging numbers--but of course the machine's language doesn't include anything close to these abstract concepts. Here's the thing, though--information is malleable, and we know a lot about translating it from one language into another. My information comes in abstract chunks that I can talk about in my specialized language of video artistry or baseball statistics. With a lot of work, I may be able to translate my own abstractions into terms that a computer can handle. And I don't have to do this entirely by hand--once I figure it all out, I can write a program to do the translation for me.
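(A small illustration of that translation, using the baseball example: the formulas for on-base percentage and slugging are the standard ones, but the player's numbers below are made up.)

    # The statistician's concept is "on-base plus slugging" (OPS).
    # Writing the translation down to arithmetic once, as a function,
    # means never having to think below this level again.

    def ops(hits, walks, hbp, at_bats, sac_flies, total_bases):
        obp = (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)
        slg = total_bases / at_bats
        return obp + slg

    # Made-up season numbers, purely for illustration:
    print(round(ops(hits=180, walks=60, hbp=5, at_bats=550,
                    sac_flies=5, total_bases=310), 3))   # 0.959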

So when I'm using my computer, I don't have to work at the level of the machine; I can express myself in the concepts I'm familiar with, and the computer will translate those concepts into its own language and do whatever work I tell it to do.

This is the practical side of computing, what programming is basically all about--building computational abstractions that help people solve problems. And on the theoretical side, you might be thinking, "So we can transform a computer to behave as if it's a completely different machine..." Yes. Computers are universal machines. I don't mean that a computer can do everything. What I mean is that when we think about what "ordinary" machines do, we tend to say that you have to pick the right tool for the job. You don't use a chain saw to drive screws, or a kitchen blender to paint your walls. If the job is processing information, though, we choose a computer. We might think about how fast it runs and how much information it can store (it's something like saying, "I need a bigger hammer"), but the practical details are less important than the idea that we don't need different kinds of machines for different information processing tasks. Every computer is theoretically equivalent to every other computer for solving problems--we can transform one into another. It's just a matter of which abstractions we build on top of it.

Computing is as general and as powerful as you can imagine.

Friday, January 25, 2013

An unexpurgated interview

In the pulp science fiction novels I read as a kid, the authors tended to work within the social norms of the day with respect to language. Here's an example from E. E. "Doc" Smith's First Lensman:
Jack started to express an unexpurgated opinion, but shut himself up. Young cubs did not swear in front of the First Lensman.
And another:
"Do you think you can get away with this?" she demanded. "Why, you..." and the unexpurgated, trenchant, brilliantly detailed characterization could have seared its way through four-ply asbestos.
I liked "unexpurgated", once I looked up what it meant. Hence the title of this post. I recently exchanged email with Nikki Stoudt, a writer for the NCSU student newspaper, for an article. Here's what was said... unexpurgated. (Not that the text needs it.)

Wednesday, January 2, 2013

Usability problem of the day (a revived Facebook hoax)

In the courses I teach about human-computer interaction, I typically open each class with an example of a usability problem. I'm putting these online, in case others find them useful.

Privacy in the digital age is complicated and sometimes confusing. If Randi Zuckerberg can't figure it out, despite being a former marketing director for Facebook as well as the founder's sister...

Sometimes usability problems can make things worse. Let's consider a persistent Facebook hoax. One of your friends posts something along these lines:
To all my FB friends, may I request you to please do something for me… PLEASE place your mouse over my name above (do not click), a window will appear, now move the mouse on “FRIENDS” (also without clicking), then down to “Settings”, click here and a list will appear. REMOVE the CHECK on “COMMENTS & LIKE” by clicking on it. By doing this, my activity amongst my friends and family will no longer become public. Many thanks! Paste this on your wall so your contacts would follow suit too, that is, if you care about your privacy -- which I know we do.
This is a clever bit of malice for a few reasons. Let's walk through the process to see how usability figures in.

Monday, December 31, 2012

The book



It’s easy to take computers for granted. If I want to go shopping, visit a library, play a game, or share my thoughts with the rest of the world, I can do this all by typing on my laptop. I can exchange email with friends and colleagues, wherever they might be. If I were to pick up a screwdriver and go exploring in my house, I’d find computers in kitchen appliances, gaming and entertainment consoles, telephones—even in the walls, controlling the temperature.

Have you ever wondered what gives computers such remarkable power and flexibility? One answer is that computer designers and software developers build them that way. But that’s not entirely satisfying as an answer. Computing for Ordinary Mortals starts in a different way:


Thursday, November 8, 2012

Ordinary Mortals and CS education

This post can also be read as a public Google doc.

I’ve written Computing for Ordinary Mortals for readers without a technical background or even much experience in computing. My thought was this: If you wanted to explain what computing is all about, starting from scratch, what would you say? You have a tremendous number of decisions to make, about which topics are critical and which can be left out, about the ordering and detail of topics you include, and about how the topics fit together into a comprehensive whole. For what it’s worth, some computer scientists will make decisions different from mine. Most popular science books and textbooks go into greater detail about representing and managing data; I delay a discussion of programming concepts until after algorithms and abstract data types; I punt on the question of whether computer science is a branch of applied mathematics (see Dijkstra’s “How do we tell truths that might hurt?” [PDF], though he was talking about programming), or a branch of science (Newell, Perlis, and Simon’s “What is computer science?”), or a branch of engineering (Eden’s “Three Paradigms of Computer Science” [PDF]), or perhaps something different (Rosenbloom’s On Computing, or Graham’s “Hackers and painters”).

Writing a popular science book on computing means taking a stand on such issues, but the constraints of the genre didn’t make it easy for me to say, “Here’s what I’m doing...” That’s in part what this document is for, to identify the connections between Ordinary Mortals and the field of computer science, at least as it’s currently taught at the university and secondary school levels.

Saturday, November 3, 2012

How to avoid programming (OUP)

What does a computer scientist do? You might expect that we spend a lot of our time programming, and this sometimes happens, for some of us. When I spend a few weeks or even months building a software system, the effort can be enormously fun and satisfying. But most of the time, what I actually do is a bit different. Here’s an example from my past work, related to the idea of computational thinking.
Imagine you have a new robot in your home. You haven’t yet figured out all of its capabilities, so you mainly use it for sentry duty; it rolls from room to room while you’re not at home, turning lights and appliances on and off, perhaps checking for fires or burglaries.

Read the rest on the OUPblog, "Oxford University Press's Academic Insights for the Thinking World."

Tuesday, October 30, 2012

The Big Idea (Whatever)

When I was ten years old or so, I saw a battered paperback copy of Triplanetary on my grandfather’s bookshelf. I borrowed it… and found myself in ten-year-old heaven. Science fiction led me to popular science, with Isaac Asimov (and Edgar Cayce, embarrassingly enough) to help me cross the boundary. I read about physics, space, biology, math, and psychology. It was formative reading. Today I’m a computer scientist, and I’ve just written my own book.
The big idea in Computing for Ordinary Mortals is that the basics of computer science can be conveyed through stories. Not stories about computers and how we use them, but stories about other kinds of everyday things we do. Computing is more about abstract concepts than about hardware or software, and we can understand these concepts through analogies to what happens in the real world.
Read the rest in a Big Ideas post on John Scalzi's Whatever blog.

Computational thinking about politics (HuffPo)

On The Atlantic Wire, Gabriel Snyder gives what we'd call a combinatorial analysis of the presidential election. I like the analysis not for what it says about the possible outcome but because it illustrates an influential idea in computer science, called computational thinking: formulating a problem so that it can be solved by an information-processing agent. Here's how it works in this situation.

Read the rest on the Huffington Post.
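For a taste of the combinatorial flavor, here's a minimal sketch in Python. The base count and the swing-state vote totals are illustrative stand-ins, not Snyder's actual figures.

    # Suppose a candidate holds 237 electoral votes and needs 270.
    # Which combinations of the remaining swing states put them over?

    from itertools import combinations

    swing = {"FL": 29, "OH": 18, "VA": 13, "CO": 9, "IA": 6, "NH": 4}
    base, needed = 237, 270

    winning = [combo
               for r in range(1, len(swing) + 1)
               for combo in combinations(swing, r)
               if base + sum(swing[s] for s in combo) >= needed]

    print(len(winning), "winning combinations; one example:", winning[0])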

Wednesday, September 12, 2012

Computer programming is the new literacy (OUP)

It’s widely held that computer programming is the new literacy. (Disagreement can be found, even among computing professionals, but it’s not nearly as common.) It’s an effective analogy. We all agree that everyone should be literate, and we might see a natural association between writing letters for people to read and writing programs for computers to carry out. We also find a historical parallel to the pre-Gutenberg days, when written communication was the purview mainly of the aristocracy and professional scribes. Computation is an enormously valuable resource, and we’re only beginning to explore the implications of its being inexpensively and almost universally available.
But is programming-as-literacy an appropriate analogy? We tend to think that basic literacy is achieved by someone who can say, “Yes, I can read and write.” Let’s see what this means in the context of programming.

Read the rest on the OUPblog, "Oxford University Press's Academic Insights for the Thinking World."

Wednesday, September 5, 2012

Code Year: Why You Should Learn to Code (HuffPo)

You probably know about Code Year. Sponsored by Codecademy, it challenges people to learn how to program in 2012. The Codecademy website offers free online lessons in a variety of programming languages; it has received attention in the press and saw a large boost from a comment from New York Mayor Michael Bloomberg on Twitter: "My New Year's resolution is to learn to code with Codecademy in 2012! Join me."
Hundreds of thousands of people have joined Bloomberg. Even though my own Code Year was 30 years ago, I can still appreciate the appeal -- you'll learn how to write software to make your computer do new and wonderful things that you find valuable, instead of depending only on what others have done. That's empowering.
But there's more.

Read the rest on the Huffington Post.

Sunday, July 8, 2012

How to write a popular science book

I'm being a little presumptuous with this post. My book, Computing for Ordinary Mortals, won't appear until the fall. And it's my first book. So I might end up retitling this post "How to Write a Popular Science Book that Nobody Reads," or (the happier but less likely case) "The Secret to Writing a Popular Science Best Seller." We'll see.

Here are a few things I kept in mind as I was writing.

Tuesday, June 19, 2012

Popular science books about computing


There’s a common aphorism in academia: You don’t really understand a subject until you teach it. This isn’t entirely true, of course, but being asked questions can make you think harder about what you know and what you don’t know.
I’ve found something similar in writing a popular science book. And I’ve realized a bit more.