Why does Medawar hate the scientific paper?

Following up on the earlier discussions of intentional unclarity and bad writing in scientific papers, I thought this might be a good opportunity to consider an oft-cited article on scientific papers, P.B. Medawar's "Is the Scientific Paper Fraudulent?" [1] He answers that question in the affirmative only three paragraphs in:

The scientific paper in its orthodox form does embody a totally mistaken conception, even a travesty, of the nature of scientific thought.

Medawar's major complaint has to do with the "orthodox form" and the story it tells about how scientific knowledge is produced. The ordering of major sections in the sort of scientific paper he decries is pretty recognizable in much peer-reviewed scientific literature today:

  • Introduction
  • Previous work
  • Methods
  • Results
  • Discussion

As far as they go, Medawar seems tolerant of the introduction (which places the scientific question addressed by the paper in a larger context), the rundown of previous work on the question, and the description of the methods used in collecting and analyzing data. But he views the last two sections as problematic:

The section called "results" consists of a stream of factual information in which it is considered extremely bad form to discuss the significance of the results you are getting. You have to pretend that your mind is, so to speak, a virgin receptacle, an empty vessel, for information which floods into it from the external world for no reason which you yourself have revealed. You reserve all appraisal of the scientific evidence until the "discussion" section, and in the discussion you adopt the ludicrous pretense of asking yourself if the information you have collected actually means anything.

These two sections, he suggests, have scientists pretending to have engaged in a process quite different from the one actually involved in their research.

One of Medawar's big complaints against the standardly formatted scientific paper is that the story it tells reinforces a picture of scientific inquiry that Carl Hempel describes [2] as the narrow inductivist view. According to this picture of science, scientists start out by observing and recording all the facts about the world. Next, they analyze and classify these facts. Then, they use induction to derive generalizations - claims like "all ravens are black" or "the gravitational force between two bodies is proportional to the product of their masses divided by the square of the distance between them." Finally, they subject these generalizations to further tests.

This is an appealing view. Science sticks to the facts. It looks at how the world is and pulls out the patterns that connect the individual facts, getting the evidence first and building the theory from that. But, according to Hempel, science couldn't work this way.

For one thing, we never have all the facts. There are many bits of the world as yet unobserved, and the future is not something we can experience at all until it becomes the present (at which point there is still, we hope, a whole lot of future stretching ahead). Whenever we're trying to derive generalizations from partial information, we have to face the problem of induction.

But there are other problems with this picture of science. Even if we're only going to collect some of the data, which data should we collect? Hempel says that scientists rely on working hypotheses to guide their decisions about which observations to pay attention to and how to classify their data. Rather than reading their hypotheses off the world, scientists start out with hypotheses in order to get good observations of the world.

One more issue: Hempel says that induction is not an automatic process for generating generalizations from your stack of data. Rather, "the transition from data to theory requires creative imagination." If lots of data are missing, it takes insight to make a good bet about what it all adds up to, or about what's going to happen next.

As a philosopher of science, Hempel had ample opportunity to notice that narrow inductivism couldn't be a good description of how science really works. However, Medawar notes that the way scientific papers describe scientific inquiry seems to maintain the fiction that this is really how scientists operate (or at least, how they should operate; otherwise, why tell the story that way?):

What is wrong with the traditional form of the scientific paper is simply this: that all scientific work of an experimental or exploratory character starts with some expectation about the outcome of the inquiry. This expectation one starts with, this hypothesis one formulates, provides the initiative and incentive for the inquiry and governs its actual form. It is in the light of this expectation that some observations are held relevant and others not; that some methods are chosen, others discarded; that some experiments are done rather than others. It is only in the light of this prior expectation that the activities the scientist reports in his scientific papers really have any meaning at all.

Medawar (and Hempel, too) finds Popper's picture of scientific activity more plausible. Popper saw two quite distinct parts of the scientific activity, the generation of hypotheses and the testing of hypotheses. Hypothesis testing is all about working out what your hypotheses logically entail and then setting up observations or experiments to compare what you ought to see if the hypothesis is true with what you actually see. The testing part of the scientific activity, as far as Popper is concerned, is nicely grounded in deductive logic (dodging the problem of induction). Besides conducting careful experiments and making good observations, all you need to do is feed in hypotheses that are falsifiable and you're good to go.
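
In schematic form (my gloss here, not Popper's or Medawar's own notation), the deductive backbone of a falsification test is just modus tollens:

\[
(H \rightarrow O) \wedge \neg O \;\vdash\; \neg H
\]

If hypothesis \(H\) entails observation \(O\) and we fail to observe \(O\), it follows deductively that \(H\) is false; no inductive leap is required. (Observing \(O\), on the other hand, does not deductively establish \(H\) -- which is why Popper speaks of corroboration rather than proof.)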

Where do these falsifiable hypotheses come from? Popper doesn't think it matters. Maybe they're hunches you develop while slaving in the lab or poring over the literature, but they could just as easily be ideas that come to you at cocktail parties or in fevered hallucinations. As long as they're falsifiable -- and as long as you set out to make your best effort to falsify them -- they'll provide the necessary input to the glorious deductive testing machine of science.

This Popperian picture, whatever you think of it, is in stark contrast to the narrow inductivist picture in which scientists "feign no hypotheses" but simply collect the data and read the patterns off the world -- the picture of science Medawar thinks most scientific papers convey.

Why should this "fictionalization" of the process of scientific inquiry matter? In part, it misrepresents science to the public -- making it out to be more cut and dried, objective, mechanical, and boring than it really is. As well, it may be indicative of a way scientists mislead themselves -- thinking they have to achieve the superhuman ability to read the correct generalizations off a pile of (necessarily incomplete) data, or that good scientists never depend on luck or inspiration. Indeed, this latter issue may be the one that's really bugging Medawar:

The scientific paper is a fraud in the sense that it does give a totally misleading narrative of the processes of thought that go into the making of scientific discoveries. The inductive format of the scientific paper should be discarded. The discussion which in the traditional scientific paper goes last should surely come at the beginning. The scientific facts and scientific acts should follow the discussion, and scientists should not be ashamed to admit, as many of them apparently are ashamed to admit, that hypotheses appear in their minds along uncharted by-ways of thought; that they are imaginative and inspirational in character; that they are indeed adventures of the mind.

Why should scientists pine for an ideal of scientific inquiry that is unattainable and that, were it somehow attainable, would make them boring cogs in a machine rather than the adventurers of the mind they really are? Why not be honest about -- and proud of -- the human element of how science is really done, and report it that way?

Would anyone like to respond to Medawar on this?
______

[1] P. B. Medawar, "Is the Scientific Paper Fraudulent?" Saturday Review, 1 August 1964, 42-43.

[2] Carl G. Hempel, Philosophy of Natural Science, Englewood Cliffs, NJ: Prentice-Hall, 1966.


Well, for what it's worth (I'm not a philosopher of science), the function of "the scientific paper" is not to describe the process of doing science in any way, shape or form; the function is to present one unit of results to the writers' peers in a stereotypical cookie-cutter form, making it easily digestible and comparable to other works. You know how people often get away with reading just the abstract and discussion and skimming the interesting parts of the results section? They can do so, and do so reliably, because they can trust the format.

A paper is very well optimized for people to get the unit of results it contains in a quick, effective manner - once you have learned how to read a paper, of course. You want to show the excitement of scientific results for laypersons? You want to highlight the vagaries of convoluted pathways of discovery? Great - but the journal paper is not the right forum for it. You don't expect a Porsche shop manual to express the excitement of driving the car down a narrow mountain road; you expect it to dryly, but correctly and exhaustively, describe the minutiae of how to keep the car well-maintained. Don't overload the paper with a purpose different from what it's meant to do.

I have always had a problem more with the nature of the scientific paper as a biased, one-sided point of view. The results and discussion are about every possible way in which a paper is right. I wonder what would happen if an equal amount of time and resources were spent pondering why the results could be wrong?

That said, I think Medawar's suggestion (whether practical or not) might be a good start toward understanding how to change the way we do/write science. Change is the order of the day, for sure. And if I might allude to the earlier comment, I think Medawar might have said that he wouldn't want a scientific paper to read like a car manual. Rather, it is important to convey the excitement of the results through the paper, something the advertisement for the car does very well.

This may be a reflection of differing traditions for the physical vs. natural sciences, but here in biomed-land we have the opposite concern. Virtually all papers explicitly state their hypotheses at the end of the introduction section, and, lo and behold, the hypotheses are virtually always supported by the results. Many papers I read therefore generate the suspicion that hypotheses have been generated ex post facto to suit the results, thereby turning Popper on his head.

Janne's point is well-taken, however, that this is still a very effective means of communicating a unit of data to the scientific community, and it could be seen as merely a convention rather than an ethical lapse. At the same time, no hypothesis is ever falsified or rejected by this method.

By Neuro-conservative (not verified) on 12 Jul 2007 #permalink

Last year, I did a semester of "Senior Seminar" that turned out to be specifically about reading scientific papers (in the field of Microbiology) in the context of the "hypothesis testing" model. ("What is/are the authors' hypothesis(es)? Did their experiments adequately test these hypotheses? Do their results genuinely support or falsify their hypotheses?...")
One thing I noticed is that this makes it difficult to write about research that is not based on a pre-supposed hypothesis. That is, what do you do if you genuinely don't have any pre-conceived notion of what the outcome of your experiments will be? Do you arbitrarily pick a side like someone in a Socratic rhetoric class?...or do you do the experiment and (as "Neuro-conservative" mentions) retro-fit a hypothesis into the paper once the results come in, so as to fit the expected stylized format of the paper?
It also made "method development" papers awkward. The mandatory "hypothesis" ends up being roughly "this method gives results" and the results are "yup, we got results. Therefore the method works". (Okay, not quite that flippant, but you get the idea.)
I have to wonder (at the risk of starting a flamewar) whether the rigidly stylized format of the scientific paper is just a crutch to deal with the fact that many scientists just plain can't write well. Having a canned outline that they must fill in the blanks for may help the subset of scientists who are poor communicators to still get their results out for review in a relatively coherent way.
Attempting to publish a journal that was peer-reviewed but did not require the stylized format would make for an interesting social experiment, I think (and probably a much more pleasant format to read).

...the function is to present one unit of results to the writers' peers in a stereotypical cookie-cutter form, making it easily digestible and comparable to other works.

Exactly. Sometimes I have to read one of those pre-1950s papers that tell a rambling story about how the researcher read something and thought of an experiment and went to the lab and took down a flask, and on and on. They have plenty of old-school charm, but I can't imagine having to absorb today's volume of literature in that format.

I'm actually going to take a bit of a cynical view of things and suggest that the results and discussion are separated for purely practical purposes. Leave a scientist with the option of contemplating the implications of every piece of data as they present it, and the resulting text would be an incomprehensible nightmare. Most scientists lack the self-discipline needed to avoid considering possible implications; allow them to do so, and each experiment would have a discussion-length tract associated with it.

The results/discussion separation avoids that; the results get presented in a way that (ideally, at least) groups logically connected work together, and presents it in a sufficiently cogent form to allow people to reconstruct what the actual body of data is. Its forced separation from the discussion allows the discussion to focus on the implications of that body of data, rather than the minutiae of various experiments.

This may be a misleading picture of the actual process, but the alternatives simply don't convey information well. Read a grad student's first stab at writing a research paper (or remember back to your own). It's probably in narrative form, in strictly historical order, and it undoubtedly jumps from concept to concept in a logically incoherent manner. Using narrative form is one of the first things I had beaten out of me when it came to science writing, and I in turn try to beat it out of as many fellow scientists as possible.

If I had to guess, the current form is a survivor because it's the fittest; it's not the best for conveying the process of science, but it is the best at conveying scientific information. And that's what the target audience - scientists - select for.

I'll try to have something sensible to say about papers later, but right now, since you mentioned Medawar, here's something I've been meaning to do for a while: how would you, Janet, respond to Sir Peter's famous observation:

If the purpose of scientific methodology is to prescribe or expound a system of enquiry or even a code of practice for scientific behavior, then scientists seem to be able to get on very well without it. Most scientists receive no tuition in scientific method, but those who have been instructed perform no better as scientists than those who have not. Of what other branch of learning can it be said that it gives its proficients no advantage; that it need not be taught or, if taught, need not be learned?

For "scientific methodology" we can, of course, read "philosophy of science".

(And this is your fault; remember, you once called me a gadfly... now I feel I have a reputation to live up to!)

I've rarely read papers that separate the results and discussion from each other - it's simply not the way things are done in any of the fields relevant to my graduate research. Perhaps because of that, the separation seems artificial and awkward. The results of one experiment prompt/motivate the next experiment. With the results and discussion combined, the choice of experiments makes more sense.

Turns out Janne said everything I was going to. I will just add that I think the web has added another layer to scientific communication, an informal layer of Open Notebooks, blogs, wikis, comments at journals like PLoS ONE, and so on. This layer is much better suited to conveying the day-to-day nature of science -- Medawar's "adventures of the mind". (Sometimes I think Sir Peter took positions to which he was led more by the beauty of phrases as they came to him than by rigorous argument...)

There must be more context, or Medawar looks pretty, well, non-smart. There is no monolithic "Scientific Paper". There are many different kinds, with many different purposes and formats. In some fields, the ones discussing research results are formatted the way you describe. In that case, as said, the point isn't to describe process, but results. If every research paper strove to have a unique format or to describe the exact research process, it would be disastrous for anyone trying to actually use these papers as a basis for understanding and further research. That is what those research result reports are trying to do, are they not? The intro should have some discussion of any hypotheses going into the research anyway.

I agree with the posters above about the reason for the separation - it is useful when you are going back and looking at the paper; for example, you can check the results separately. I think, based on how I used to use research papers with others, that no one would really care if the discussion were moved before the results, as long as it's reasonably consistent. In the long run it is meaningless - the point is a standard format that can be used efficiently. Review papers, where hypotheses are better placed first for context in discussing sets of results, are an example of a paper with a different purpose.

By the way, the quote by Bill from Medawar is also quite odd. If scientists are not taught the scientific method, then how come, if you ask virtually any professional scientist, you will get the "hypothesis, then test by experiment" discussion, usually with a little Popperian falsification in there? Do they just absorb it from the air? They are obviously taught this. As a side note, one of the best descriptions of science in a short space is the Dover court decision - it quite concisely elaborates what science is, from the base of "what scientists do is science" out.

...the function is to present one unit of results to the writers' peers in a stereotypical cookie-cutter form, making it easily digestible and comparable to other works.

Exactly.

Hardly. The function of a scientific paper is to further the career of the author(s) - all else is a means to this end. "Presentation of results", not being the best means to the end, is thus usually avoided.

(Of course, if the term "presentation of results" was being employed as in "The latest Coca-Cola ad presents the flavored sugar-water product to the TV viewers", then I withdraw my objection.)

On the form of papers, I agree with the commenters who see different standardized styles.

I can add that I think the selection pressure is to pass the reviewers who are accustomed to a certain form. That the readership will also benefit from easily recognizable and digestible stereotypes is coincidental.

Also, when writing a series of papers, you benefit from borrowing from earlier reference and methods sections, since unfortunately papers must stand alone. (Another thing that web publishing is currently streamlining, with easy access and linking.)

On the form of presentation vs method, I think Medawar may confuse data collection and hypothesis building with hypothesis testing. Of course you use earlier hypotheses and working hypotheses to collect enough results to build a coherent picture.

This, like the failures, is scaffolding that you don't mention, so as to concentrate on the results. Though, AFAIU, it is now possible to publish such work too; it is educational and can save others work.

During these iterations you will make inductions, and some of them will remain to form the basis for testable models. For example, I have been doing work with gas chambers, and one of the basic assumptions in the data collection was that the mass flow meters were linear.

This isn't testable over each flow without pulling the flow meter apart and testing the construction. But one can verify linearity for some flows, use interpolation as induction, and get a rough statistical description of the instrument's accuracy and precision. And this sanity check is worth mentioning, if perhaps not the data and figures.
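
A minimal sketch of what I mean, with made-up calibration numbers purely for illustration (not my actual instrument data):

```python
import numpy as np

# Hypothetical data: commanded setpoints vs. flows measured against a
# trusted reference standard (units arbitrary).
setpoint = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
measured = np.array([10.2, 25.1, 49.8, 75.4, 99.9])

# Least-squares straight-line fit: measured ~ slope * setpoint + offset.
slope, offset = np.polyfit(setpoint, measured, 1)

# Residuals around the fitted line give a rough precision estimate;
# a slope near 1 and an offset near 0 support the linearity assumption.
residuals = measured - (slope * setpoint + offset)
print(f"slope = {slope:.4f}, offset = {offset:.3f}, "
      f"residual sd = {residuals.std(ddof=2):.3f}")

# Under the linearity assumption, interpolate a flow that was never
# calibrated directly.
print("predicted reading at setpoint 60:", slope * 60.0 + offset)
```

The point isn't the particular fit; it's that the check is cheap, deserves a sentence in the methods section, and quietly encodes an induction.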

By Torbjörn Lars… (not verified) on 13 Jul 2007 #permalink

Small factual point: I think the oft-cited paper is more usually called "Is the scientific paper a fraud?". It's also in his collection "The Strange Case of the Spotted Mice and Other Classic Essays on Science".

This message was brought to you in association with Pedants Anonymous.

Bob

The paper is always a reconstruction of what actually happened. But it's not a fraud if everybody is aware that it's a reconstruction. Maybe nobody is really interested in knowing the real story. The paper does serve other purposes. To quote "Latourian" sociologists, it serves to enroll others into your point of view, it attempts to "create" facts, to "blackbox" pieces of knowledge, etc. Of course it also adds to your list of publications ...

Personally, I got around this limitation by structuring my talks, rather than my papers, the way Medawar suggests. It may be OK for a paper to have this traditional rigid structure, because, as one poster said, the reader likes to get to the point fast. But there's nothing more boring than a scientific talk where the presenter follows this kind of plan. Especially when they use the first slide to show you the plan of the talk, which I find an absurd practice. So I came to present my results with the following general synopsis:

1. We know about such and such results or theory
2. but this raises an interesting question
3. Here's how we thought we might get some answers
4. We set up that experiment and...
5. Surprise! The result didn't quite fit what we might have expected
6. Back to the drawing board; after scratching our heads, we thought, well, maybe we can explain things this way
7. We redid an experiment, or tried a different model, and...
8. Voilà! Now we have a very good fit between model and experiment
9. In conclusion: we've learned something unexpected, useful, and exciting

Note a couple of things:

1) the "punch" comes at step 8. That way, you keep the audience interested, and hopefully awake troughout your talk.
2) At about 1, and sometimes 2 slides per step, this fits well into a 12 minute talk, with about 1 minute per slide.

By Francois Ouellette (not verified) on 15 Jul 2007 #permalink

Bob, other versions of the paper may have slightly different titles, but the one I photocopied from the Saturday Review has the title I typed here.

If I understand what Medawar is saying, it's something of a strawman. Any longish paper in cell or molecular biology, for example, presents a series of experiments that were performed in the reported order because the results of one suggested the next. (Obviously there are exceptions.) Thus, the Results section contains interpretation of the results of each experiment, because that's what justifies doing the experiment that follows. The authors do not present the Results as "five or six experiments we did for the hell of it, hoping something related to our initial question would drop out".

By hip hip array (not verified) on 17 Jul 2007 #permalink

I've often seen papers with a single "Results and Discussion" section -- not much different than how most have a single "Material and Methods" section.

By David Marjanović (not verified) on 31 Mar 2008 #permalink

In my field (psychology), I've seen papers with lots of different styles. My only published article so far is hypothesis-led. The technical note I have in press, which describes the construction of a novel piece of equipment, is based on basic physics and describes how we put the thing together and how it works.

Different journals have different styles, as do different types of article. Research notes (or brief reports, short papers, etc.), which are usually for bringing out novel findings quickly for wide dissemination, are different to long, involved, multi-experiment articles that rigorously test a hypothesis.

Bill quotes this upthread:

"If the purpose of scientific methodology is to prescribe or expound a system of enquiry or even a code of practice for scientific behavior, then scientists seem to be able to get on very well without it. Most scientists receive no tuition in scientific method, but those who have been instructed perform no better as scientists than those who have not. Of what other branch of learning can it be said that it gives its proficients no advantage; that it need not be taught or, if taught, need not be learned?"

Personally, I wonder if he had any data or just made it up. The statement is provocative, but speaking as someone who learnt no philosophy of science at school or at university while doing a chemistry degree, I would have been helped greatly by some discussion of experiments, why we did them, and how we worked out science. Or, in other words, an introductory philosophy of science course. Now, after reading books on the topic and doing some pottering at work and in my back garden, I think I know how to do science better and come up with better ideas than I did at uni.

No doubt many people get by without it, because the basics of experimental science are quite simple, and they can absorb most necessary knowledge when learning their stuff whilst doing a PhD. But how much time would be saved by a few lessons halfway through first year? After they've done some labs, and before they do some more, get students to talk about what they do and why when they are trying to investigate things - the variables, equipment, and ideas they bring to it. I think you'd see some improvement, even if only intellectually, in their ability to talk about what they do and why, and in their self-critical ability.