Effect Measure

Fifteen years isn’t a long time. Most of us can remember what we were doing 15 years ago. Often it’s the same thing we are doing now, job-wise. Sure, our kids were just kids then, not adults. But 15 years isn’t a historical epoch, at least not when you are living through it. Yet the fact is we have gone through a revolution in that period that will one day seem as profound as the 50 years from 1450 to 1500, the half century after Gutenberg and the invention of movable type.

It’s hard to remember what the cyberworld was like a short 15 years ago, but thanks to the internet we can retrieve — instantly — what it looked like to someone who was intimately familiar with it, computer geek and curmudgeon Clifford Stoll. I remember seeing him interviewed on TV a number of times when his book, Silicon Snake Oil, came out, and he was quite convincing. Wild hair, wild enthusiasm for his own arguments, confident pronouncements that the whole internet thing was a hoax. Boingboing just linked to his 1995 essay in Newsweek, which is so full of delicious irony it’s all I can do not to just put up the whole thing. But that wouldn’t be fair use, would it? After all, it would be stealing, because then you wouldn’t have any incentive to subscribe to the 1995 Newsweek. So here are some snippets:

Visionaries see a future of telecommuting workers, interactive libraries and multimedia classrooms. They speak of electronic town meetings and virtual communities. Commerce and business will shift from offices and malls to networks and modems. And the freedom of digital networks will make government more democratic.

Baloney. Do our computer pundits lack all common sense? The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.


How about electronic publishing? Try reading a book on disc. At best, it’s an unpleasant chore: the myopic glow of a clunky computer replaces the friendly pages of a book. And you can’t tote that laptop to the beach. Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Internet. Uh, sure.


Won’t the Internet be useful in governing? Internet addicts clamor for government reports. But when Andy Spano ran for county executive in Westchester County, N.Y., he put every press release and position paper onto a bulletin board. In that affluent county, with plenty of computer companies, how many voters logged in? Fewer than 30. Not a good omen.


Then there’s cyberbusiness. We’re promised instant catalog shopping: just point and click for great deals. We’ll order airline tickets over the network, make restaurant reservations and negotiate sales contracts. Stores will become obsolete. So how come my local mall does more business in an afternoon than the entire Internet handles in a month? Even if there were a trustworthy way to send money over the Internet (which there isn’t), the network is missing a most essential ingredient of capitalism: salespeople. (From Clifford Stoll, Why the Internet Will Fail, Newsweek, 1995)

Read the whole thing. I’m not ridiculing it. What Stoll said made a certain amount of sense then. But it was all dead wrong. The internet is a world-changing technology, for better or for worse. It’s hard to see a revolution when you are living through it, but consider this. There is no more “power of the press,” if taken literally to mean the power of the person who owns the printing press. I am now an author, a publisher and a distributor. I have more unique readers each day than the average subspecialty scientific journal had all year in 1995. I am lusting after an iPad so I can read textbooks and newspapers and magazines on it. I buy as much or more online as in stores. I even buy groceries online and have them delivered. I communicate mostly online, preferring it because it is asynchronous: the other person and I don’t have to do it at the same time. Yet I still have as many social and personal relationships of the face-to-face kind as I have ever had, supplemented by a global community of readers, commenters and friends.

Clifford Stoll is a smart guy who knows computers. His essay is a cautionary tale and a glimpse of a world that is now gone. For good.


  1. #1 iayork
    February 27, 2010

Changes like this slip past us when we live through them. The other day I ran across something that really brought home to me how much, and how fast, things have changed. Garry Kasparov, in a wonderful article in the New York Times, said:

    It was my luck (perhaps my bad luck) to be the world chess champion during the critical years in which computers challenged, then surpassed, human chess players. Before 1994 and after 2004 these duels held little interest. The computers quickly went from too weak to too strong.

    Biology is just starting to experience its computer revolution, I think (I would guess that epidemiology is further ahead in its computer revolution — I would be interested in your thoughts on that some time, by the way).

    Will there be a time where we look back and say, “Between 2010 and 2020 computers challenged, then surpassed, human scientists”? Or, for that matter, human artists or musicians or authors? All of those, today, seem like uniquely human skills, but 20 years ago lots of people were saying that chess was a uniquely human skill that no computer could best.

    Actually, what I really do expect is also part of Kasparov’s article:

    In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. … The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. … Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

    Human + machine + process seems like what we’re groping toward in much of science today. It’s also being used by artists and musicians; mostly seen as a novelty, but so were chess-playing computers at their beginning.

  2. #2 revere
    February 27, 2010

iayork: You raise several thought-provoking points. Since I am still grant writing (and everyone is tired of hearing about it), I don’t have the time or mental energy to really start to think this through, but off the top of my head it seems there are two related questions here: (1) what does it mean for some task to be “uniquely” human? (2) does such a task exist, or, said another way, are humans just another kind of natural mechanism? I tend to cleave to the latter but, as I say, haven’t thought it through.

  3. #3 Daniel J. Andrews
    February 27, 2010

I did find Stoll’s comments about the mass of unreviewed data still relevant. Science and scientists are under attack in a few areas by political, religious and business ideologies (i.e. we don’t like what you’re finding, therefore we’re going to smear you). Disinformation and lies on those subjects flood the net, and a Google search may more often bring up the disinformation than the real science.

In schools, I think students need to be taught how to search productively, and how to recognize a reliable site from one that promotes pseudoscience. It seems Stoll may have foreseen, in this area anyway, the upwelling of anti-intellectualism, and the changes that would inspire Charles Pierce’s Idiot America (to be read as an entertaining rant rather than a serious sociological study, even though it contains many truths).

  4. #4 revere
    February 27, 2010

Daniel: This is a frequent “complaint” about the internet. But in fact, in my youth the average public library was full of unverified information, and so are TV, radio and newspapers. There is more of it on the internet, to be sure, but one of the skills one learns on the internet, which no one ever taught me about the library, is not to take things on faith. Obviously not everyone is sufficiently skeptical about what they read on the net, but that is also true for what they hear on radio and TV and read in newspapers. The difference for the net is the much broader diversity of views. And, like many things, one believes what one wants to believe. That isn’t produced by the net, just enabled and amplified by it. But so is legitimate skepticism. How it will all net out, history will tell us.

  5. #5 ECaruthers
    February 27, 2010

The difference between the need to check what’s on the web and what’s in the library is that checking is easier on the internet. Twenty years ago, if you read a story about Nietzsche’s style changing after he bought a typewriter, it was hard to check. Your library might not have had specialized monographs on Nietzsche or the history of typewriters. With the internet, it’s easy to find out that N’s typewriter arrived broken, that he hired a mechanic who made it worse instead of better, and that N only used the typewriter for a few letters.

  6. #6 geodoc
    February 27, 2010

    You ask- ‘Will there be a time where we look back and say, “Between 2010 and 2020 computers challenged, then surpassed, human scientists”?’

For a computer to ‘be’ a scientist rather than just a versatile scientific tool, it will have to generate original ideas, develop ways to test them, then synthesise the resulting information and place it into context (as well as collecting and processing it, which it can do already), and then use it to make further testable predictions. In other words, it will have to manage the whole business of making conceptual leaps and generating hypotheses, designing and selecting experimental methodologies, modifying theories and relating them to pre-existing knowledge, etc. My impression is that all this is probably more than 10 years away, but maybe AI is advancing faster than I thought…

  7. #7 mk
    February 27, 2010

    I shop (occasionally), get most of my news, explore new ideas, learn about science (my hobby), and communicate online.

    However, I still read actual books. The alternatives are unsatisfying, to say the least.

  8. #8 iayork
    February 27, 2010

    For a computer to ‘be’ a scientist rather than just a versatile scientific tool, they’ll have to generate original ideas, develop ways to test them, then synthesise the resulting information and place it into context (as well as collecting and processing it, which they can do already), and then use it to make further testable predictions.

    Maybe. But chess playing is an interesting comparison. Twenty years ago, people were saying that for a computer to play chess it would have to use the same rules as a Grandmaster. Computers don’t do that today, and they still play chess at (or above) Grandmaster level.

    I don’t know much about the AI field, but my impression is that the field started off trying to imitate human approaches, spent a long time stagnating, and then started taking computer-centric approaches to the same problems and making real progress.

Just because humans perform science today as you describe doesn’t mean it’s the only way to make advances. Very different underlying processes could lead to the appearance of the same thing (or perhaps there’s something that works better that we as humans can’t reach). Or, more likely, humans and computers will work together to form some hybrid process that’s better than either alone.

  9. #9 Alex Besogonov
    February 27, 2010

    “I don’t know much about the AI field, but my impression is that the field started off trying to imitate human approaches, spent a long time stagnating, and then started taking computer-centric approaches to the same problems and making real progress. ”

Not really. I worked in the AI field for some time, and I don’t really like it when chess is brought up as an example.

Why? Because it’s boring. Chess (and checkers) playing programs win by pure brute force. Even the best chess programs work by trying an enormous number of variations.

A much more interesting example is the game of Go. Blind search is not possible for a Go-playing AI, because there are too many possible moves. So you really need to use pattern recognition and other complex strategies. That’s why no Go AI plays better than an amateur.

    So no, computers probably won’t replace humans until we get good enough hardware to emulate human brain. I think that should happen around 2030 🙂
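Alex’s “pure brute force” point can be made concrete. At its core, a game-playing engine of the kind he describes just walks the whole game tree, trying every legal move and assuming the opponent replies perfectly. The sketch below (mine, not from any real engine) shows that search in its simplest negamax form, applied to a toy game of Nim (take 1–3 stones; whoever takes the last stone wins) rather than chess, where the tree would be astronomically larger:

```python
def negamax(stones):
    """Exhaustive game-tree search for simple Nim.

    Rules: players alternate removing 1-3 stones; taking the last
    stone wins. Returns +1 if the player to move can force a win,
    -1 otherwise. No game knowledge at all -- it just tries every
    variation, which is the brute force Alex describes.
    """
    if stones == 0:
        # The previous player took the last stone, so the side
        # to move has already lost.
        return -1
    best = -1
    for take in (1, 2, 3):
        if take <= stones:
            # The opponent's best result, negated, is our result.
            best = max(best, -negamax(stones - take))
    return best
```

With these rules, positions where the stone count is a multiple of 4 are losses for the side to move (`negamax(4)` and `negamax(8)` return -1, `negamax(5)` returns +1), which the search discovers with no strategy beyond enumeration. The reason this works for chess but not Go is purely the branching factor: with a few dozen legal moves per position, deep enumeration plus pruning is feasible; with Go’s hundreds, it is not.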

  10. #10 GrayGaffer
    February 28, 2010

A periodic topic. A few observations from someone whose first computer was built out of discrete transistors (ca 1965):

    1: Sturgeon’s Law – 90% of everything is crap – still holds true for the Internet*.

2: AFAIK, only the original Star Trek predicted hand-held computers, and even that may have been just a self-fulfilling prophecy (except maybe the Dick Tracy watch?). Other than ST, Hollywood was enamored of massive blinkenlichten. Cliff is a great guy, but not a computer guy (he’s an astrophysicist), and I can understand how he missed the potential. So did almost everybody else, including myself: I knew the machines would just continue to get smaller, but I missed that the ones that stayed the same size got more powerful, and what that implied.

3: (slightly related): a top-of-the-line, biggest-penis personal computer cost $3000 in 1985. It still does. It just does more crap faster. (PCs as well as Macs; I recently priced them out.)

4: (My prediction) Flexible e-ink is going to make the next major change in our lives. Android phones, the iPhone, now the iPad, netpads, etc., are transitional technologies. Cliff was mostly right about reading electronically, in that it is a relatively unsatisfying experience. So we’ll be getting back to full-size newspaper-like pages which we can fold up and put in our pockets. We just won’t be buying them every day, because they will display whatever we want. I’m less optimistic about ubiquitous very high speed wireless because of bandwidth saturation (already a problem), but I may well be surprised.

(* I have an addition to that: the trickle-down theories are OK as far as they go; it’s just that the crap trickles down too, and there’s more of it.)

  11. #11 llewelly
    February 28, 2010

    Recalling those times – I think Cliff Stoll was substantially influenced by the fact that there were a great many crooks, liars, and con-artists of all sorts involved in the computer industry. Many a company touted as “innovative” or “ground-breaking” in the computer industry press turned out to be, on closer inspection, little more than a jargon-filled too-clever-by-half business plan written by a talented BS artist. To make matters worse, many previously respectable companies involved themselves in the lowest sort of scams and foolishness. In some ways, the IT bubble of the 1990s was like the real estate bubble of the 2000s. The difference is, there really were some great new technologies at the bottom of it all (though many of those did not pan out until well after the 1990s were over), and some very innovative people brought new ways of communicating to the world, despite the scam artists.

    (I had a computer graphics professor, who, in about 1997, said, more or less: “Stoll’s book is about 90% correct, in the short term. But in the long run, the 10% where he’s wrong will matter most, and in 15 years we’ll look back at his book and laugh pretty hard.”)

  12. #12 Jesse
    March 1, 2010

    I am going to be a contrarian here, and point out a few spots in which Stoll is absolutely right even now.

    “The truth in no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.”

Absolutely. I can amass all the data you want in a database. But what a newspaper and a journalist do is fundamentally different from what a phone book does. Online newspapers have great search functions. But there are things that I do when I write stories that a computer cannot, like ask pertinent questions and discover where to get answers. And I challenge you to hand out a CD-ROM with a calc text on it and expect the students to learn anything. Not that certain kinds of automated learning are impossible; I learned some Arabic from a CD-ROM. But teachers are around for a reason, and Asimov’s vision of learning from screens doesn’t seem much closer than in 1955.

    And while governments all over the world use computer networks, their fundamental functioning still looks pretty much the same to me — you have to negotiate, you have to legislate, you have to do all the messy stuff that makes most governments function. I don’t see much change there.

    And electronic publishing has its limits. Stoll’s point about taking an eBook to the beach still holds — if I get sand on my Kindle it is a very expensive paperweight right quick.

    Paper books need no batteries, can get wet, and have pretty good info-retrieval capabilities that depend on the pattern recognition of the human brain (think about how you remember the parts of a book you like and return to them very fast).

I think more interesting are the areas where the Internet failed to make inroads. Most people still commute to work, because it turns out there are just too many jobs that you can’t do remotely. (Medical care being an obvious example: there are loads of things you can’t treat without seeing a patient in person, and few doctors would dare take that risk.) And many of the futuristic tasks that people thought could go on the Net a long time ago turned out to be really, really complicated.

And “the power of the press” still exists — more so now than ever. Fox News has a gigantic influence on the public debate, for instance, and it is increasingly hard for smaller media to get eyeballs and attention. Television is changing, but there are reasons that, even with more choices of channel, the content hasn’t necessarily gotten any better. Remember, while we can all be publishers, that doesn’t mean anyone will see what we publish. The very nature of the Internet (I have to go look for stuff) means that, unlike Fox News, there is no guarantee anyone will see what I put out there. Most people still get their news from TV; just look at the discussion boards on most Internet news sites, which reference TV all the time. It isn’t just the diversity of voices that matters, it’s who controls the terms of debate. And having many voices doesn’t often alter the latter.

    On top of that, I should say that diversity of voices says nothing for the quality thereof. Book editors and publishers were more than just “gatekeepers.” A good editor makes your stuff better. And editing is a very, very intricate skill. Much of what is on the ‘Net is simply crap (just like in real life).

    Was the Internet a revolutionary development? Yes, in many ways. But let’s keep in perspective that there were just as many things it didn’t change, and look at the reasons why.

  13. #13 Jesse
    March 1, 2010

    I should amend one part: most people worldwide get their news from AM radio. In developed countries it’s an even split between TV and the Net, though a big chunk of the Net news is produced from TV stations and traditional news outlets.

  14. #14 Chris
    March 2, 2010

    I remember (vaguely) reading his book when it came out. One thing that stands out to me was his description of internet conversations.

He used “The Well” as an example. One person would pose a question, and it would be discussed ad nauseam. Then a couple of weeks later another person would pose the same question, and wonder why no one would engage in a conversation.

The answer being: we did it two weeks ago! Get a grip and read the archives!

But, alas, they never do, and so the roundabout goes on and on.

Which is why, after a few rounds of the same thing on Usenet, I left, and now I very seldom repeat the same ol’ debates. Of course, if the person/prey is very enticing I will jump in.

I will say this about the Internet: it kept my brain alive and me sane. I had to quit work to take care of a kid with several medical issues. I did find a group of parents dealing with the same disability (with its own type of loonies), but I also found ways to keep my brain from decaying. I have actually gone back to school, including graduate school, and have not been an embarrassment to my children (all A’s at the community college where I dipped my toe in, and then a B student as a non-matriculated graduate student… hey, graduate school is harder!… though I am finding my take-home advanced engineering math test actually entertaining).

  15. #15 Sherri
    March 4, 2010

    I remember when my daughter was in 3rd grade, she had a homework assignment that required internet research. I prepared to teach her how to use Google. She looked at me and said, “I already know, Mom.” Now she’s in 9th grade, and a large percentage of her homework requires internet research. She has learned how to search productively, and ways to evaluate information found on the net.

  16. #16 anon
    March 13, 2010

Not if, but when.

Fifteen years is a really short time compared to human evolution. Wait a million years and you’ll be amazed how good “computers” will be.
