This is another attempt to get to the bottom of what’s bugging people about the case of Marcus Ross, Ph.D. in geosciences and Young Earth Creationist. Here, I’ve tried to distill the main hypotheticals from my last post on the issue into flowcharts*, in the hopes that this will make it easier for folks to figure out just what they want to say about the proper way to build scientific knowledge.
First, here’s the process that no one thinks is a good description of how to come to a scientific conclusion:
Believing something doesn’t make it so. Science is an endeavor that is not concerned with what a person believes about the world but instead with what a person can establish about the world, usually on the basis of empirical evidence.
The worrisome thing about the Marcus Ross case was that his YEC committed him to views (e.g., the Earth is at most 10,000 years old) that directly conflict with claims made in his dissertation about the abundance and spread of marine reptiles that disappeared about 65 million years ago. He seems to be claiming not-P while believing P, and that seems a lot like lying. This is why I labored through the doppelganger-Ross post to try to work out whether it’s even possible to build good scientific knowledge while believing (for completely non-scientific reasons) the opposite.
My commenters seemed divided on this. In the “unlikely it’s possible” column, we have Brian:
… as a scientist, you’re committed to the idea that the most parsimonious explanation is likely the truth.
and Larry Moran:
The Earth is billions of years old. That’s not a theory, it’s a fact. (Where fact is defined in the Gouldian sense of something that’s so well established that it’s not worth questioning any more.) Yes, of course there’s some place deep in our brains where we retain a smidgen of doubt, but the practice of good science demands that it stay down deep unless some contrary evidence comes along. We’ll only dredge it up when we’re playing with philosophers.
and possibly David Harmon:
A basic part of being a scientist is being able to suspend your beliefs. Not your disbelief — that’s easy — but your beliefs, and especially the ones you actually like!
since I take it the suggestion here is that a serious scientist ought to be able to set the YEC aside. These responses seem to fit with a picture of scientific knowledge production that looks like this:
For the record, if you’d rather switch the order of the “Believe that P” and “Conclude P” boxes (and similarly with the corresponding not-P boxes), that’s OK with me. The important feature here is that the empirical evidence, theories, and inferences lead to something you think is properly identified as a belief — and that believing the opposite of what the data/theory/inference process directs you to believe would be an astoundingly bad thing to do.
Other commenters seemed willing to say that even if the real Marcus Ross is not someone they’d want to call a good scientist, doppelganger-Ross might be able to do good science despite his YEC beliefs. This group included Paul Schofield:
… what does a belief matter to the work done? Surely what goes on inside your own head only becomes a problem if it goes beyond that and influences your work and writings. …
In the case of the hypothetical here, the belief is kept entirely detached from the work produced (otherwise there would have been no way any PhD, or science fair sticker for that matter, could have been awarded). It would be no different to an atheist making an argument to Christians that referred to the bible. You may not believe it is true, but that doesn’t stop you understanding the other’s viewpoint and using it to make arguments.
What he “actually” believes is of course rather unrelated to how his work should be evaluated. …
What matters is the quality of the work and the evidence he brings forth in it. The rest is really irrelevant.
and Lab Lemming:
A person who can solve problems is a scientist. … Science is an outcome-based activity. If it works, it works. Whether or not he is delusional is irrelevant, as long as his work is transparent and reproducible.
These responses suggest a picture of scientific knowledge production that looks like this:
The only difference between this picture and the last one is that there are no boxes that have to do with whether you believe P or not-P. In other words, what you conclude in this process is determined by the data/theory/inference process — not by what you believe. If this is a good picture of how scientists arrive at their conclusions, then it’s at least possible for a scientist to conclude P (on the basis of the data/theory/inference process) while believing (for entirely separate reasons that he himself recognizes as non-scientific) not-P. Because “Believe not-P” isn’t part of this process, it’s not going to bring you to a scientific conclusion of not-P.
If you’re a serious Popperian you might worry about those conclusion boxes, given the possibility of new data or updates in our theories or the persistence of the problem of induction. A real Popperian keeps riding the data/theory/inference merry-go-round. That’s fine; read “Conclude P” as “TENTATIVELY conclude P” and, in the case of new information that could undermine that conclusion (and we promise, Sir Karl, that we’ll keep looking for that information!), revisit the available data and theories to draw the best available inference. This is the kind of thing Larry Moran is pointing to with the possibility of “contrary evidence” above. However, he’s acknowledging that actual scientists don’t keep beating that (tentatively) dead horse as long as Popper makes it sound like they should.
Scientists, of course, are human. As such, they have beliefs, and there’s nothing wrong with that. The question is whether there is, or ought to be, a certain kind of relationship between their beliefs and their scientific conclusions.
The sense I’m getting from some of the comments is that people are deeply suspicious that a person could come to the scientific conclusion that P if that person holds a belief that not-P. There are all sorts of efforts scientists make to remove bias from their scientific work, to shift the burden of proof so that they won’t give an unfair advantage in their interpretation of the data to the view they’re predisposed to believe. Sure, it’s hard to completely remove your own individual biases, but that’s why scientists build knowledge in communities. It doesn’t become knowledge until you can persuade the others in that community of your conclusions, and the way you do that is by displaying the data/theory/inference used to arrive at those conclusions.
Maybe whether a particular scientist working within the community can be sufficiently unbiased to contribute to the building of good knowledge is an empirical question. How the community would judge whether his conclusions were biased or unbiased, though, would probably come down to the data/theory/inference displayed to back up the conclusions. This is not to say that a belief that not-P couldn’t be the relevant cause of the biased conclusions, but rather that that belief is not the thing the community needs to trip over to identify that the conclusions are biased.
But perhaps the worry is really something like this: A real scientist ought only to believe conclusions reached through an appropriate data/theory/inference process. This would mean that scientific conclusions ought properly to smash any beliefs you have that contradict them. It would not be acceptable, on this view, to say, “I know my belief that P is not scientifically supported! I understand that there’s no reason for anyone in the scientific community to take my belief that P as a scientific conclusion, and I have no intention of asserting it as such, whether to other scientists or to non-scientists. Yet, in my heart of hearts, I believe that P.”
Again, there’s probably an empirical question about whether it’s really possible for humans to hold contradictory beliefs. But, must all of a scientist’s beliefs be on solid empirical footing? Can any human actually live up to this standard (without simplifying the problem by believing very few things)?
Believe me, I understand the consternation around the actual Marcus Ross. I will be the first one to decry any arguments-from-the-authority-of-having-a-geosciences-Ph.D. offered to defend YEC, as well as any silly claims that his being a scientist and his believing YEC mean that YEC constitutes a set of scientific beliefs.
But, it seems to me that the aim of the scientific enterprise is to find ways to draw inferences that move beyond the beliefs of any individual scientist. Leaving the “belief” boxes out of the flowchart doesn’t seem to remove any of the steps required for building sound scientific conclusions. Scientific conclusions may well affect the belief structures of individual scientists, but that’s a matter of their own personal growth, not a required step in the construction of the shared body of scientific knowledge.
*”You’re using hand drawn flowcharts?!” exclaims my better half. Yes, I am. Now you all know what a Luddite I am. Please excuse me while I churn some butter.