Given that in my last post I identified myself as playing for Team Science, this seems to be as good a time as any to note that not everyone on the team agrees about every little thing. Indeed, there are some big disagreements — but I don’t think these undermine our shared commitment to scientific methodology as a really good way of understanding our world.
I’m jumping into the fray of one of the big disagreements with this repost of an essay I wrote for the dear departed WAAGNFNP blog.
There’s a rumor afoot that serious scientists must abandon what, in the common parlance, is referred to as “faith”, that “rational” habits of mind and “magical thinking” cannot coexist in the same skull without leading to a violent collision.
We are not talking about worries that one cannot sensibly reconcile one’s activities in a science which relies on isotopic dating of fossils with one’s belief, based on a literal reading of one’s sacred texts, that the world and everything on it is orders of magnitude younger than isotopic dating would lead us to conclude. We are talking about the view that any intellectually honest scientist who is not an atheist is living a lie.
I have no interest in convincing anyone to abandon his or her atheism. However, I would like to make the case that there is not a forced choice between being an intellectually honest scientist and being a person of faith.
Fans of science claim that science makes knowledge. Often, they claim more — that science provides the best way to build knowledge, or even the only way to build knowledge. What, precisely, counts as knowledge is one of those pesky philosophical issues that some philosophers spend their whole careers trying to work out. One flash card definition people seem to like is that knowledge is justified true belief, but it’s not an unproblematic definition (see “Gettier problems”).
Even if it were an unproblematic definition philosophically, when scientists are being careful, they don’t claim that they’re making anything this strong.
Let’s back up for a moment and look at the rules of the scientific discourse. Science is engaged in a project of trying to build reliable accounts of phenomena in the world — depending on the scientists, these will be phenomena in the physical world, the biological world, or the social world. Regardless of the sort of phenomena with which particular scientists grapple, there is a shared commitment that these phenomena are publicly accessible — that they are features of a world we share with others.
One reason this is important is that in building their accounts of the world, scientists are aiming for a kind of objectivity. They are trying for something that goes beyond a subjective report — how the world seems to them. Instead, scientific claims reach for what anyone with well-functioning sense organs could observe — hoping, of course, that the observable features of the world we can agree on are features of the world as it really is, existing outside our heads.
The impulse to demonstrate that your account of the world is not merely a subjective account is central to scientific discourse. It explains the importance of empirical evidence — evidence that anyone could observe given the right circumstances (circumstances which scientists take pains to specify precisely so other scientists can replicate their experiments) — in grounding scientific claims. It also explains why scientists try to be explicit in setting out their chains of inference from the empirical evidence. If your goal is to build good accounts of the world outside your head, it is useful to have others working toward the same goal checking your work to ensure that you haven’t leapt to any unwarranted conclusions. Since seeing the world as we want or expect to see it is a constant danger, scientists need each other to work out what the empirical facts are, and how much can be justifiably inferred from those facts.
Since there are always more empirical data to be had — and since the danger of drawing conclusions that outstrip the empirical data in evidence is always near — scientific conclusions are always tentative. If “knowledge” is a success term that asserts the truth of a claim, scientific knowledge is knowledge with an asterisk, for scientists recognize that their claims are tentative — inferences judged to be well-grounded in the empirical evidence available right now. In light of new evidence, claims might be updated. As well, scientists might decide a particular inferential chain is not as reasonable as they first thought it to be.
In other words, while scientists may be hunting truth, they understand something about how hard it is to be sure they’ve found it.
The skeptical attitude is something scientists feel they ought to put on, like a lab coat and safety glasses, when they are on the clock as scientists. In their capacity as scientists, they don’t think they should accept claims until those claims have something like proof behind them. Because the evidence is still coming in, the proof will hardly ever be once-and-for-all proof. Maybe to compensate for this, scientists try to impose a high burden of proof that a claim must meet before it is deemed scientifically credible.
A good scientific claim has to convince a whole passel of skeptical scientists. The case for that claim has to be grounded in publicly accessible evidence from the world, and the way that evidence counts as support for the claim must be made transparent. An undeniable part of the appeal of science is its democratic potential: if your sense organs work and you’re able to set out logical inferences, you can help build and test claims about the world. Because any human with a working sensory apparatus and the requisite rational powers could participate in the scientific discourse, scientists are inclined to think that scientific arguments ought to be persuasive even to non-scientists.
To the extent that such arguments are set out in ways that make the ground rules of the scientific discourse transparent, they frequently are persuasive to non-scientists.
There are moments, though, when enthusiasm for scientific discourse may manifest itself in ways that overlook what scientists think they know when they’re on the clock as scientists. When folks claim that science is a guaranteed route to truth, we’re dealing with a claim of a different nature than the claims that scientists are hammering out in their accounts of the world.
Scientific knowledge, or scientific faith?
Are we warranted in calling scientific claims knowledge – that is to say, identifying them as true claims that we are justified in believing to be true? Isn’t the methodology of science as powerful an apparatus for building knowledge as we’re likely to get our mitts on?
Maybe we’re as justified as we can be in believing the claims we come to through scientific discourse. Maybe a good number of those claims are even true. Certainly, within the bounds of the scientific discourse, there is a methodological commitment that the claims that remain in play must stand up to certain kinds of scrutiny, and meet a certain burden of proof on the basis of empirical data and inferences that can be publicly interrogated.
To assert that this methodology succeeds in establishing truth depends on certain kinds of metaphysical commitments.
You think the data we collect today can help us make good predictions about what will happen tomorrow? That reflects a metaphysical commitment you have about what kind of universe you’re living in. And there’s nothing wrong with having that commitment. Indeed, it’s what helps some of us get out of bed in the morning. You want to show me the analysis that shows your results are statistically significant? Fine, but don’t forget that the claim of statistical significance rests on metaphysical commitments about the normal distribution of data in the bit of the world you’re studying. If you didn’t start with some metaphysical hunches, there would be no way to do any science. Some folks are happy to acknowledge that these hunches are methodologically useful, but that they aren’t proven to be true or guaranteed not to fail us. That they have worked so far shows them to be useful, but it would be unwarranted to infer from their usefulness to their truth.
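To make the point about statistical significance concrete, here is a small sketch in Python (with made-up data, purely for illustration). The first p-value leans on an assumption that the data are normally distributed; the second comes from a permutation test, which swaps that assumption for a different, leaner one. Neither number is delivered raw by the data; each is computed relative to background commitments about the bit of the world being studied.

```python
import math
import random

random.seed(0)

# Two hypothetical samples of measurements (invented for this sketch).
a = [random.gauss(0.0, 1.0) for _ in range(20)]
b = [random.gauss(0.5, 1.0) for _ in range(20)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

observed = mean(b) - mean(a)

# Parametric route: a two-sided z-test p-value, which is only
# meaningful if we *assume* the data are (roughly) normal.
se = math.sqrt(var(a) / len(a) + var(b) / len(b))
z = observed / se
p_normal = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Permutation route: no normality assumption, just the commitment
# that relabeling the samples is a fair model of "no real difference".
pooled = a + b
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[20:]) - mean(pooled[:20])
    if abs(diff) >= abs(observed):
        count += 1
p_perm = count / trials

print(f"p-value under normality assumption: {p_normal:.4f}")
print(f"p-value from permutation test:      {p_perm:.4f}")
```

The two routes will often agree, but they can diverge on skewed or heavy-tailed data; which number you trust depends on which background commitments you are willing to take on.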
In scientific discourse there’s a serious attempt to do the job of describing, explaining, and manipulating the universe with a relatively lean set of metaphysical commitments, and to keep many of the commitments methodological. If you’re in the business of using information from the observables, there are many junctures where the evidence is not going to tell you for certain whether P is true or not-P is true. There has to be a sensible way to deal with, or to bracket, the question of P so that science doesn’t grind to a halt while you wait around for more evidence. Encounter a phenomenon that you’re not sure is explainable in terms of any of the theories or data you have at the ready? Because the scientific rules of the road insist on claims grounded in evidence that is accessible to others, it is methodologically out of bounds to assert “A wizard did it!” Instead, scientists are bound to dig in and see whether further investigation of the phenomenon will yield an explanation. Sometimes it does, and sometimes it doesn’t. In cases where it does not, science is still driven by a commitment to build an explanation in terms of stuff in the natural world, despite the fact that we may have to reframe our understanding of that natural world in fairly significant ways.
Some scientists and friends of science believe the methodology works because the metaphysical hunches that get the discourse off the ground are true. Maybe they are. But because these are metaphysical claims, you can’t establish their truth scientifically. Taking on these metaphysical commitments is an exercise of faith.
Beliefs we come to by means other than officially approved scientific methods (i.e., faith).
If the hallmark of scientific claims is their grounding in empirical data and the ability to interrogate them publicly with some hope of coming to a conclusion that will be persuasive to all the parties involved in that interrogation, our non-scientific beliefs do not have the same publicly accessible character. They are not grounded in empirical evidence that others can examine, nor do they offer logical chains of inference others may check for mistakes. They are subjective, not objective.
They are my beliefs.
But if my experience of the world is not the sort of thing someone else could pick up, examine, or have herself, that does not mean that my experience does not exist. Nothing could be realer to me. I just cannot make others feel its pull. To do that, I would need to be able to make my personal experience an object of public scrutiny, something that others could actually experience for themselves.
It wouldn’t be enough to describe my experience, or to spell out what I believe. This falls short of others being able to have my experiences. It doesn’t transmit the impetus that gets me to my belief.
For those who hold that the methods of science are the only good ways to come to beliefs (or to hold onto those beliefs once you notice that you have them), the common line is that any belief you cannot ground in empirical facts and good logic is a belief you are not warranted in holding. This presents a bit of a problem when one holds a belief that the scientific method can be counted on to produce true claims, or even that the laws of nature won’t change next Tuesday. Both are fine beliefs, but neither is grounded in the empirical facts and good logic.
Knowing the difference between public evidence and private belief.
One of the things that scientists learn in their training is how to frame arguments that are persuasive to other scientists – arguments that attend to the empirical evidence and draw the right kind of inferences from such evidence. They discover early on that arguments from authority or from intuitions don’t have the same persuasive power among their fellow scientists. In other words, they learn what kinds of evidence count in the scientific discourse. They understand that claims without the right kind of grounding will not be counted as scientific claims.
This need not stop scientists from believing things that might not stand up as scientific claims – whether believing that there could be immaterial stuff in the universe, or that everything in the universe is at least in principle empirically accessible, that the laws of nature can be counted on to be stable and regular, or that we’ve been on a long run of apparent stability that might soon run out. The crucial thing for the scientist as a scientist is to recognize that these beliefs stand outside of scientific discourse. They don’t count as evidence that ought to persuade anyone else.
An intellectually honest scientist can believe in a deity; she just can’t deploy that belief in a scientific argument. Similarly, an intellectually honest scientist can believe that there’s no deity, but he can’t use that belief to undermine the empirical evidence or logical inferences of a scientist he knows goes to church. If there’s something wrong with the churchgoing scientist’s data or arguments, the problem should be detectable with no recourse whatsoever to the non-scientific beliefs that might be in his head.
What matters, from the point of view of engaging in the scientific discourse, is what you can demonstrate to other participants in that discourse. As far as your scientific activity is concerned, your other beliefs are your own private affair.