Science & Technology

I have previously mentioned, in passing, a pet peeve of mine: when people conflate ecology with environmentalism (see here and here for examples). It’s an odd pet peeve for an admitted non-ecologist, but it falls under the umbrella of distinguishing science from technology, which is at the heart of the real pet peeve. It just happens that the ecology/environmentalism issue pops up more often than other science/technology issues in my daily life (I don’t deal with stem cell researchers or people cloning whole organisms).

Before I get too far ahead of myself, allow me to define what I mean by science and what I mean by technology. These words (like all words) are just placeholders for two processes: the pursuit of knowledge and understanding of the natural world and the application of that knowledge for human uses. Science is the pursuit of that knowledge and technology is its application. Using ecology and environmentalism as an example, ecology is the study of organisms in their environment, whereas environmentalism is the movement to conserve those organisms and their environments. Ideally, environmentalists use findings from ecological research to justify their conservation efforts (ie, they apply the knowledge gained from the science of ecology).

Given what I just wrote, it should come as no surprise that I bristled when someone in my department justified his efforts to encourage people in the department to recycle by referring to himself as a “hardcore ecologist”. “You mean you’re a hardcore environmentalist,” I replied. His comeback was that environmentalism is a subset of ecology; mine was, obviously, that ecology is science and environmentalism is application. I think the whole exchange came off as a big dork-fest to those watching, and I appeared to be quite an asshole (or science prick) for trying to show him up.

So, why should we care about the distinction between science and technology? Some interesting points came up in a back and forth between Janet Stemwedel and Larry Moran. It all started when Larry asked what role technology and ethics should play in teaching genetics and biochemistry to undergraduates. Janet offered her take as a philosopher of science specializing in ethics, and Larry replied that the ethical issues all arise when we discuss the application of science, not the science itself. The final two posts (one from Janet and one from Larry) focus on the role that the application of knowledge should play in teaching science.

In regards to the pedagogical aspects, Janet argues that the applications of the knowledge unify the individual pieces of information:

A laundry list of isolated facts is not the kind of thing the students want to learn, nor the kind of thing the science teachers want to teach. What keeps the facts from being isolated — what imposes a coherent structure where they’re connected to each other — almost always involves a storyline about what various bits of the knowledge are good for.

What Janet means by “good for” is, from what I can gather, the application of that knowledge. While this may be true in organic chemistry (the example Janet provides), it is hardly the case in biology. A unified presentation of molecular genetics (from DNA to proteins) can be done without ever discussing the applications of the knowledge. I empathize with Larry in his goal to get students excited about the science, rather than have them think “what good is this for?”

In teaching scientific content, it is often useful to point out some applications of the concepts. But the course should not be unified around the applications or the pieces of information. Instead, it would be interesting to see a course organized around the scientific method itself (ie, how the science got done and the knowledge accumulated). This allows the educator to both introduce why researchers were interested in understanding something (eg, the structure of DNA or the genetic code) as well as how they went about discovering the bits of knowledge. Along with the issue of why the scientists wanted to understand a particular phenomenon comes how that discovery fits into a larger system (eg, how solving the genetic code relates to transcription and translation).

This is a shift in emphasis from important pieces of information to important concepts, where science as a process is emphasized over science as a collection of facts. The process then gets tied in with a growing body of knowledge that gets developed in order to get a complete picture of a natural process that cannot be understood by individual facts. This provides the motivation for the individual experiments and provides context for the pieces of knowledge.

And, before you rail on me for creating a false dichotomy between science and technology, I understand that the division can be quite ambiguous in places. But I like to think of the motivations: are they for increased understanding or the application of knowledge that does not increase understanding on its own (note: if the technology can be applied for increasing understanding, it still doesn’t count as science)? And then there’s the issue of technology as science, which I’ll leave for people to hash out in the comments.


  1. #1 p-ter
    February 8, 2007

    from the science prick link: The “I can be a prick when it comes to science” badge.
    In which the recipient can be so passionate about things of a scientific nature, that he/she may appear surly, rude, and/or unpleasant.

    nothing wrong with that, dude. I say embrace your inner science prick.

  2. #2 John Wilkins
    February 9, 2007

    I say embrace your inner science prick.

    In a room, away from public scrutiny. Eewww…

  3. #3 Rasmus
    February 9, 2007

    To make matters worse, organic foods are termed “ecological foods” in many European countries.

  4. #4 joltvolta
    February 9, 2007

    I’m glad you made the distinction between the two when responding to the guy. And anyone who says they’re a “hardcore [enter word here]” when talking about recycling should be poked in the eye by a blunt stick. Nothing against recycling, but c’mon…
    And with people grabbing on to any label they can think of that has any relation to the point they are trying to make in order to add validity, I share your frustration. I’m guilty of the same thing though. Not so much the “woah, dude, I’m a hardcore horticulturalist! You should recycle your bicycle!” situation, but using the wrong words when delving into something. Something to think about.

  5. #5 tbell
    February 9, 2007

    I think that there are at least 3 important distinctions to be made here: the process of science, the technological applications of science, and the choice of scientific questions (what direction do we go with the research?).
    The choice of questions can share elements of both the process and the application. The choice of questions can be a function of what seems possible according to our current theories, or a function of what seems likely to lead to tech applications. Which scientific questions (out of the infinitely many directions available) interest us may also be a function of many kinds of social biases (not necessarily in the negative sense). In any case, there seem to be ethical and philosophical questions that arise with respect to the choice of a line of research that are not identical to those related to specific applications.

  6. #6 Jonathan Badger
    February 9, 2007

    I think it’s important to consider what science is “good for” for the simple reason that’s why society funds science. I’ve had plenty of professors who waxed poetic about how science should be studied for “its own beauty”, but it’s no surprise that the US agency that is in the business of subsidizing the creation of works of beauty — the National Endowment for the Arts — has a tiny budget compared to the NSF and NIH. It’s the practical applications of science that society wants – not the beauty.

    That doesn’t mean that only applied science should be studied of course; the failure of the 1970s “War on Cancer” was that it tried to tackle cancer before we knew very much basic molecular biology. Basic science is needed for applied science to succeed.

  7. #7 Matt Dunn
    February 9, 2007

    You’re kind of a curmudgeon. And also a posting machine these last couple days. I think this post brings up a couple interesting points:

    (1) why should “basic” research be done at all if not to help us solve practical problems? Of course I think it should be done. But how much? Just enough to train the next generation of scientists to be able to think critically about cancer drug research? How much should the NSF fund research on Drosophila speciation genes?

    (2) And I like your idea about structuring a course historically. Teach students what the motivations for certain experiments and concepts were etc. Of course then you might want to actually consult the historians of science. But I think when you talk to them you’ll be disappointed because they’ll tell you about institutional organization and the Rockefeller Foundation and the training of the Drosophilists BY Drosophila! They won’t give you the intellectual history you want because historians today don’t think that’s the most important part of the story.

    (3) here’s a reference for a good paper on the development of biotechnology: Robert Bud, 1992, “The zymotechnic roots of biotechnology” BRITISH JOURNAL FOR THE HISTORY OF SCIENCE 25: 127-144. Interestingly, it was often practical needs that drove basic science.

    (4) I posted a reply about Dobzhansky’s 3rd chromosome inversions in your recent post on speciation.

  8. #8 Michael
    February 9, 2007

    I make my living in the computer software industry but most of my friends would consider me a hardcore environmentalist. My daughter will soon have a degree in Environmental Science from a well-known public university and may decide to pursue additional degrees and a career in academia. We’re both environmentalists, but while I have no claim to being a scientist my daughter does. At some point these conversations become a haggle over semantics.

  9. #9 jbruno
    February 9, 2007

    Bravo for setting your acquaintance straight, RPM.

  10. #10 coturnix
    February 9, 2007

    I got excited about my area of research BECAUSE of the way the course was taught on it. Every topic started with “the first guy” who thought about it, then went through the history, stopping to analyze the key experiments and how they were interpreted by the authors and readers at the time (and why – the importance of context), and ended with the current knowledge, including some “hot off the presses” stuff. There was no way anyone could have taken the class without leaving with an impression that everything is tentative, that new research can change stuff really fast, and that there is still much to learn. That feeling that the field is wide open, a feeling I got due to the way this was taught, made me want to jump into the discipline and do my own research.

    But that was a semester-long graduate class in a narrowly defined discipline. It is almost impossible to teach anything broader, not to mention BIO101 in this way.

    I teach a BIO101 speed-class (we meet only eight times) for adult non-science majors. I have to show them the relevance to other areas of life, i.e., application. For that, I use diseases (I described that here).

New comments have been disabled.