I have previously mentioned, in passing, a pet peeve of mine: when people conflate ecology with environmentalism (see here and here for examples). It’s an odd pet peeve for an admitted non-ecologist, but it falls under the umbrella of distinguishing science from technology, which is at the heart of the real pet peeve. It just happens that the ecology/environmentalism issue pops up more often than other science/technology issues in my daily life (I don’t deal with stem cell researchers or people cloning whole organisms).
Before I get too far ahead of myself, allow me to define what I mean by science and what I mean by technology. These words (like all words) are just placeholders for two processes: the pursuit of knowledge and understanding of the natural world and the application of that knowledge for human uses. Science is the pursuit of that knowledge and technology is its application. Using ecology and environmentalism as an example, ecology is the study of organisms in their environment, whereas environmentalism is the movement to conserve those organisms and their environments. Ideally, environmentalists use findings from ecological research to justify their conservation efforts (ie, they apply the knowledge gained from the science of ecology).
Given what I just wrote, it should come as no surprise that I bristled when someone in my department justified his efforts to encourage people in the department to recycle by referring to himself as a “hardcore ecologist”. “You mean you’re a hardcore environmentalist,” I replied. His comeback was that environmentalism is a subset of ecology; mine was, obviously, that ecology is science and environmentalism is application. I think the whole exchange came off as a big dork-fest to those watching, and I appeared to be quite an asshole (or science prick) for trying to show him up.
So, why should we care about the distinction between science and technology? Some interesting points came up in a back and forth between Janet Stemwedel and Larry Moran. It all started when Larry asked what role technology and ethics should play in teaching genetics and biochemistry to undergraduates. Janet offered her take as a philosopher of science specializing in ethics, and Larry replied that the ethical issues all arise when we discuss the application of science, not the science itself. The final two posts (one from Janet and one from Larry) focus on the role that the application of knowledge should play in teaching science.
Regarding the pedagogical aspects, Janet argues that the applications of the knowledge unify the individual pieces of information:
A laundry list of isolated facts is not the kind of thing the students want to learn, nor the kind of thing the science teachers want to teach. What keeps the facts from being isolated — what imposes a coherent structure where they’re connected to each other — almost always involves a storyline about what various bits of the knowledge are good for.
What Janet means by “good for” is, from what I can gather, the application of that knowledge. While this may be true in organic chemistry (the example Janet provides), it is hardly the case in biology. A unified presentation of molecular genetics (from DNA to proteins) can be done without ever discussing the applications of the knowledge. I empathize with Larry in his goal to get students excited about the science, rather than have them think, “what good is this for?”
In teaching scientific content, it is often useful to point out some applications of the concepts. But the course should not be unified around the applications or the pieces of information. Instead, it would be interesting to see a course organized around the scientific method itself (ie, how the science got done and the knowledge accumulated). This allows the educator to both introduce why researchers were interested in understanding something (eg, the structure of DNA or the genetic code) as well as how they went about discovering the bits of knowledge. Along with the issue of why the scientists wanted to understand a particular phenomenon comes how that discovery fits into a larger system (eg, how solving the genetic code relates to transcription and translation).
This is a shift in emphasis from important pieces of information to important concepts, where science as a process is emphasized over science as a collection of facts. The process is then tied to a growing body of knowledge, developed to form a complete picture of a natural phenomenon that cannot be understood through individual facts alone. This provides the motivation for the individual experiments and context for the pieces of knowledge.
And, before you rail on me for creating a false dichotomy between science and technology, I understand that the division can be quite ambiguous in places. But I like to think of the motivations: is the goal increased understanding, or the application of knowledge that does not increase understanding on its own (note: even if the technology can be applied to increase understanding, it still doesn’t count as science)? And then there’s the issue of technology as science, which I’ll leave for people to hash out in the comments.