Given that I am relentlessly flogging a book about the universality of the scientific process (Available wherever books are sold! They make excellent winter solstice holiday gifts!), I feel like I ought to try to say something about the latest kerfuffle over the scientific method. This takes the form of an editorial in Nature complaining that Richard Dawid and Sean Carroll, among others, are calling for discarding traditional ideas about how to test theories. Which is cast as an attempt to overthrow The Scientific Method.
Which, you know, on the one hand is a kind of impossible claim. There being no singular Scientific Method, but more a loose assortment of general practices that get used or ignored as needed to make progress. It's all well and good to cite Karl Popper, but it's not like philosophy of science stopped once he published the idea of falsifiability as the key criterion-- "Pack it up, folks, we're all done here!" There's been a ton of activity post-Popper, and if you're going to take up the defense of SCIENCE against some new generation of barbarians, you need to at least attempt to engage with it(*).
At the same time, though, I have a lot of sympathy for the defenders of method, because the calls to scrap falsifiability are mostly in service of the multiverse variants of string theory. And I find that particular argument kind of silly and pointless. The multiverse idea is ostensibly a solution to the problem of fine-tuning of the parameters of the universe, but I'm sort of at a loss as to why "There are an infinite number of universes out there and one of them was bound to have the parameters we observe" is supposed to be better than "Well, these just happen to be the values we ended up with, whatcha gonna do?" I mean, I guess you get to go one step further before you throw up your hands and say "go figure," but it's not a terribly useful step, as far as I can see.
I'm probably most sympathetic with the view expressed by Sabine Hossenfelder in her post at Starts With a Bang. After noting that the quest for quantum gravity seems to have gotten stuck, she writes:
To me the reason this has happened is obvious: We haven’t paid enough attention to experimentally testing quantum gravity. One cannot develop a scientific theory without experimental input. It’s never happened before and it will never happen. Without data, a theory isn’t science. Without experimental test, quantum gravity isn’t physics.
[...]
It is beyond me that funding agencies invest money into developing a theory of quantum gravity, but not into its experimental test. Yes, experimental tests of quantum gravity are farfetched. But if you think that you can’t test it, you shouldn’t put money into the theory either.
I'm very much an experimentalist by both training and inclination, though, so of course I find that view very congenial.
I suspect, on some level, this mostly comes down to psychology and the past successes of particle physics. All through the 1960s and 1970s, progress at accelerators was rapid and went hand in hand with theory. So people got fixated on accelerators as the solution to every problem. And there's always been the tantalizing possibility that with just a little more energy, everything will fall into place. And if that's your expectation, well, then there's no reason to put all that much effort into phenomenology for experiments other than the next new accelerator.
(There's also the faintly toxic notion that phenomenology is for second-raters (paraphrasing Dyson paraphrasing Oppenheimer). Which is another of the many pathologies afflicting academic physics...)
In a weird way, I think a loss of momentum for next generation colliders might end up being a good thing for fundamental physics. If the price point for pushing the well-trodden path a few TeV higher is more than we can afford, that will force people to become a little more clever about how they approach problems, and explore a greater diversity of approaches. Because many of the other things you can think about doing to probe exotic physics can be funded from the rounding error in the LHC construction budget.
So, I guess I would say that it's a little early yet to give up on falsifiability and other traditional methodology. I don't really believe we've exhausted the options for testing theories just because one particular approach has hit a dry spell. There are almost certainly other paths to getting the information we want, if people put a bit more effort into looking for them.
------------
That's one set of reasons why I've been a little reluctant to weigh in on this particular argument. About equally important, though, is that this amounts to a game that I'm not all that interested in playing.
In writing Eureka, I tried to avoid using the phrase "the scientific method," or giving too much detail about how to define science. The four-step process I bang on endlessly about-- Looking, Thinking, Testing, and Telling-- is very deliberately a cartoonish sort of outline. There's definitely an element of Popperian falsifiability in the way I talk about Testing, because I am an experimentalist, but the way I use it is vague enough to accommodate some of the alternatives people throw out.
I did that deliberately, because I'm really not interested in exploring the boundary between science and not-science. I'm interested in the stuff in the middle, the broad expanse of stuff well away from the edges, that absolutely everyone will agree is science. I'm more interested in celebrating accomplishment than calling out transgressions. I'd like people to turn their backs on the bickering over the precise location of the boundary, and take a moment to appreciate the awesome spectacle of what's there in the middle, where the great successes of the past few hundred thousand years sit.
From that standpoint, whether multiverse theories are properly scientific or not is stunningly unimportant. The number of people who will ever deal with questions for which direct experimental tests are so difficult that they might require an alternative standard is vanishingly small compared to the number of people who directly benefit from mundane empirical testing every single day. That, to me, is an idea that's vastly more deserving of public attention than what standard you use to judge the status of multiverse models.
Which is, of course, why I wrote the book I wrote...
I'm very sympathetic to the idea "One cannot develop a scientific theory without experimental input. It’s never happened before and it will never happen", but is it completely true? What about antimatter, say? Dirac developed that idea without experimental input (that came later, I think), just based on the beauty/symmetry of his equation.
Dirac wasn't working with no data. He was working on a relativistic theory of the electron that could match the observed energy levels of the hydrogen atom. The positron was a side effect of that theory.
I'm so happy that at least one other person finds the anthropic principle to be a yawn. The probability of an event that has already happened is precisely 1.
How do you see "explore a greater diversity of approaches"? Precision measurement is one broad type of alternative to high energy; it's in the nature of ingenuity that there isn't a complete list, but are there any other approaches that *can* be put on a list of types of experimental approaches?
Precision measurement is obviously my favorite more diverse method, but I think there's probably more that can be done with observational approaches-- CMB measurements and high-energy cosmic rays, and so on.
@Peter Morgan #5: The P5 (Particle Physics Project Prioritization Panel) committee of HEPAP (High Energy Physics Advisory Panel) used a framework of "frontiers" for different approaches. The "energy frontier" is the LHC, ILC, and other accelerators at the highest energies. The "intensity frontier" is lower energy accelerators designed for extremely high luminosity, to study rare interactions. And the "cosmic frontier" is an observational approach, using the CMB, extreme astrophysical events, and other observational methods, as a "natural laboratory" to probe physics in regimes inaccessible to accelerators.
This is all "particle physics" centric, of course (I'm an experimentalist, too). Nuclear physics, g-2 experiments, the AMO guys, all have a wide range of approaches to new physics. It doesn't have to be all about the accelerators.
Many thanks, Michael #7. To me it's unfortunate that there isn't a "precision frontier" in the report (http://www.usparticlephysics.org/p5/), and that precision instead becomes more-or-less a footnote to intensity on p43ff. The intensity frontier admits greater statistical precision, but for example might not of itself find the J/Ψ, which I think of as the preeminent result of precision in HEP (specifically, precision of energy preparation). My personal pipe dream is that I'd like to see what response, if any, there would be in HEP statistics to precisely varied bunch shape.
@Peter Morgan #8: That's an excellent observation. In some ways, the "precision frontier" (as a separate entity) lies elsewhere, with the AMO folks, or the EotWash experiment, or some such. In other ways, the precision frontier really is a consequence of intensity.
One of the great benefits we physicists have (compared, for example, to my epidemiologist mother-in-law) is that Nature gives us randomly distributed data, and the statistical uncertainty on our measurements shrinks like 1/sqrt(N), without all of the horrible complications of biased participants, confounding factors, etc.
Our experiments can be biased (limited angular acceptance, non-uniform efficiency, non-uniform sensor systems, etc.), but the underlying physical activity is not. The real complication (confounder in other fields) is backgrounds. We don't get to measure _only_ the particular, rare interaction we care about; we have to apply filters to get rid of lots of other interactions that look very similar in our apparatus. That, more than most other things, can introduce biases and limit our precision.
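To put numbers on the two points above, here is a minimal Python sketch (an illustration only; the true value, spread, and the 5% background fraction are all invented for the example). It simulates N independent measurements to show the standard error shrinking like 1/sqrt(N), then mixes in a shifted background population to show a bias that more statistics cannot remove:

```python
import numpy as np

rng = np.random.default_rng(42)

true_value = 1.0  # the underlying quantity being measured (invented)
sigma = 0.5       # per-event measurement spread (invented)

# Statistical precision: the standard error of the mean shrinks like 1/sqrt(N).
for n in (100, 10_000, 1_000_000):
    samples = rng.normal(true_value, sigma, size=n)
    std_err = samples.std(ddof=1) / np.sqrt(n)
    print(f"N={n:>9,}: mean = {samples.mean():.5f} +/- {std_err:.5f} "
          f"(expected ~ {sigma / np.sqrt(n):.5f})")

# Backgrounds: contamination that survives the filters biases the result,
# and that bias does NOT shrink as N grows.
frac_bkg = 0.05   # hypothetical 5% background surviving the cuts
n = 1_000_000
signal = rng.normal(true_value, sigma, size=int(n * (1 - frac_bkg)))
background = rng.normal(true_value + 1.0, sigma, size=int(n * frac_bkg))
mixed = np.concatenate([signal, background])
print(f"with {frac_bkg:.0%} background: mean = {mixed.mean():.5f} "
      f"(bias ~ {frac_bkg * 1.0:.3f} vs. stat. error ~ {sigma / np.sqrt(n):.5f})")
```

The statistical error keeps falling with more data, but the background-induced shift stays fixed (the background fraction times its offset), which is exactly why backgrounds, more than raw statistics, tend to end up limiting precision.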
We humans have an extremely long history of explaining the world with untestable hypotheses - hundreds of thousands of years' worth. The result was, among other things, every creation story from every religion you ever heard of and thousands you haven't, down to astronomy based on circles because circles were obviously the most beautiful, most fundamental object possible. Most of which were handed down by some flavor of high priest to whom all had to bow down, because mere mortals were unworthy of such deep understanding (or communication with some God, or however they got "the message" of what was just so "beautiful" as to be obviously "true"). Funny how little we managed to progress during most of that long history.

So today we are supposed to bow down to those elite physicists who can truly comprehend the beauty of what they propose, as the arbiters of what is and isn't reality? The discrimination between speculation and experimental testability is the only way we have made the progress we have, allowing us to gain an understanding of what to trust, what to question, and what to ignore as we try to solve the next set of questions.

Speculation isn't bad -- it's essential. But understanding what is just speculation and what isn't is what will allow us to continue to make progress. And someone's or some group's pronouncement of sufficient "beauty" or even "logical inevitability" as adequate evidence won't cut it.
+GP Burdell: Nicely written! I want to steal that.
The purpose of this post by Orzel seems to be primarily to promote its author's book. The comments by Hossenfelder and the comment by Ellis and Silk in Nature are, on the other hand, quite to the point and should be required reading for all those interested in science. As to there being no scientific method according to Orzel, that's nonsense. While Feyerabend was correct in saying "anything goes" in regards to the development of theory, Popper was correct in identifying "falsifiability" as THE criterion by which science can be distinguished from pseudo-science. I think that the methodology of theory development (based on my own work as a theoretical physicist) is well described by von Neumann (who should be held accountable for suggesting to Shannon that he identify his information formulation as entropy, thereby introducing a flood of nonsense into theoretical physics, not that dissimilar from the 'idea' of the multiverse), who wrote: "The sciences do not try to explain or interpret, they make models which are meant as mathematical constructs [and computer programs]. These, with the addition of certain verbal interpretation, describe phenomena. The justification of a model is solely and precisely that it is expected to work".
Gaylord #13: Having been a reader of Orzel's for a number of years, I absolutely and categorically deny that he ever self-promotes --- but when he does it's always with humility and humor.
The von Neumann quote as you have it here seems less to be associated with the "falsifiability" that you say is "correct" than with a positive-leaning and rather American pragmatism of a theory being "expected to work" --- and as such it appears quite compatible with Chad's post, and certainly with what he's said about method over time.
I quite enjoyed your dichotomy of Feyerabend and Popper, but insofar as a theory is essentially always in development and always in competition with others, "anything goes" is always interwoven with always choosing --- individually, and in agreement with others, on many criteria --- which way to go next. Speaking of which, I must take the dog for her Christmas Day walk.
I'm well aware of the von Neumann quote, and in fact used it as the epigraph for one of the sections of Eureka...
To Peter #14: I didn't say that Chad was self-promoting. I said he was promoting his book, just as he says in his first line: "Given that I am relentlessly flogging a book about the universality of the scientific process (Available wherever books are sold!)". All scientists promote their own work, by giving presentations at meetings and seminars and by referencing their own articles in other articles they write. Nothing wrong with that at all; it's part of the process of disseminating ideas. I look forward to seeing the von Neumann quote in Eureka! as soon as Amazon delivers the book to me. Hope your dog enjoyed her Xmas walk.
Just did a 'Look Inside' of Eureka! on the Amazon site (see, I'm helping to promote the book LOL) and found the von Neumann quote at the beginning of chapter 2. I think the statement that follows on p.88, that "scientific models are built up from observations", overstates the role of induction in theory/model building. In my own field of soft condensed matter, we have developed successful (able to fit the data) models of rubber elastic behavior based on the entanglement of long polymer chains. But we don't actually observe these entanglements; we just know that they MUST occur. Moreover, we don't model the entanglements themselves (that would be impossible, since we don't know the topology or 'architecture' of the various entanglements); instead we (e.g. P.-G. de Gennes, 1991 Nobel Prize winner, and Sir S.F. Edwards - btw, this is known as argument from, or appeal to, authority) model the EFFECT of the entanglements on the polymer chains. Model building is a creative act, sometimes based on observations, sometimes based on principles (such as symmetry), and sometimes made out of whole cloth. It is indeed, as Feyerabend said, 'anything goes'. And as someone (I forget who) said, sometimes it's the theory that tells us what to observe experimentally - that's the deductive part of the hypothetico-deductive process.