There's a slightly snarky review of Leonard Susskind's book on string theory (The Cosmic Landscape: String Theory and the Illusion of Intelligent Design) in the New York Times this week. Predictably, Peter Woit is all over it.
The central issue of the book, and the review, and Woit's whole blog is what's referred to as the "Landscape" problem in string theory. This is a topic that seems to consume a remarkable amount of intellectual energy for what's really a pretty abstract debate. It also leads to a remarkable amount of shouting and name-calling for something that just doesn't seem like that big a deal to me.
The central issue, in my outsider's view, is that the current version of string theory does not appear to make a definite prediction about the nature of the universe, in terms of particle masses and interaction strengths and whatnot. Instead, it allows a dizzying array of equally likely possible universes, each with slightly different masses and interactions and all that-- something like 10^500 possible universes, or a one followed by more zeroes than I care to write out.
To string skeptics like Woit, this is proof that the whole enterprise is a bunch of crap. If the theory doesn't predict a single set of particle properties, or at least a small number of possibilities, then it's worthless. Susskind, on the other hand, appears to think that this is the greatest thing since sliced bread. If there are infinite possibilities, he reasons, then anything is possible. More than that, everything is inevitable. Then he starts talking about the Anthropic Principle, and the whole thing drifts off in the general direction of late-night dorm-room bull sessions.
Personally, I can't quite see what all the fuss is about. (More after the cut...)
I mean, the essence of the problem here seems to be that the theory doesn't predict a unique set of values for the fundamental constants of nature. At least, that's what I think is meant by the enumeration of universes-- there are 10^500 different combinations of fundamental particle masses and interaction strengths that are possible in a mathematically valid formulation of string theory.
If that's what they mean (and it's entirely possible that the universe tally could refer to something completely different), then I really don't understand what the problem is. Or, more precisely, I think I see what the issue is, but I'm not really bothered by it.
If our universe is one out of ten bazillion possible universes, then it would mean that theorists would have to sort of hand-select theories that happen to give parameters close to the ones we observe in the real world. That strikes some people as a crisis, but my immediate reaction is, "Welcome to my world."
I mean, whenever I go to do a calculation or write an exam problem, I have to look up a bunch of constants that are just... constants. The mass of an electron is 511 keV/c^2, give or take a bit. Why is it 511 keV/c^2, and not 512, or 510, or 347? Shut up and calculate.
Yeah, fine, it's inelegant and aesthetically unappealing. Life sucks, get a helmet.
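(To make that concrete, here's the kind of trivial calculation I mean, where the electron mass is nothing but a looked-up input:)

```python
# A trivial example of "shut up and calculate": the electron mass is a
# measured input, full stop, and here it's used to get the electron's
# Compton wavelength.
hc = 1239.84    # eV nm (Planck's constant times c)
m_e = 511.0e3   # eV/c^2 -- looked up, not derived from anything deeper

lambda_C = hc / m_e  # Compton wavelength, in nm
print(f"Compton wavelength: {lambda_C * 1e3:.3f} pm")  # ~2.426 pm
```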
Even if you were able to generate a single theory, or even a limited number of theories that look more or less like what we observe, I don't think that has much probative value. It's really easy to come up with theories that "predict" what we already know. A theory isn't interesting until it predicts something beyond what we already know-- some new particle, or some physical effect that doesn't arise out of the current models of the universe. And it's not right until somebody detects that particle, or observes that effect.
Now, as I said, it's possible that I'm misunderstanding the issue-- even the pop-science descriptions of this stuff are pretty obscure. It could be that they're referring to 10^500 different universes that all predict the parameters we observe, and differ only in the extra particles and effects that are possible (I don't think that's what they mean, based on the Anthropic Principle stuff). That's a slightly bigger problem, but I'm still not particularly bothered. If they differ in ways that can be measured, then we can (potentially) sort them out with experiments, and if they differ only in ways that can't be measured, well, wake me when you start doing science.
(I should note, just so I'm not only hacking on string theorists, that I have a similar opinion of the debates over interpretations of quantum mechanics. The question of whether the Copenhagen Interpretation or the Many-Worlds Interpretation, or some other Interpretation is the right one is interesting on an abstract level, but I can't see investing a great deal of intellectual energy in it, because they all "predict" exactly the same thing. If some really smart person comes along and invents a sort of meta-theory version of Bell's Inequality that lets you distinguish experimentally between the various Interpretations, then this will become a really interesting issue. Absent that, it's an interesting diversion, but not worth getting worked up over.)
Whatever the real Landscape issue is, I don't really understand the name-calling. It's vaguely amusing as spectacle, but it's a little hard to take seriously.
Fair enough. However, supposing that this is the case, could you remind me what the
point of doing string theory is? It sounds really boring.
Well, if you could do enough experiments to overconstrain the vacuum, you start getting predictions. As Chad implies, that's pretty much what happens with the theories we have now.
(SubTitle: "Look! Work-Avoidance Mode!")
The Anthropic Principle (Strong or Weak!) is one of the most tremendously useful phrases that philosophy has ever produced. It lets me know when I can safely tune out a conversation knowing that nothing useful will be said.
That people-- even people like Susskind-- are raising the Anthropic Principle in their discussions of various competing theories has the perverse and (I assume) unintended effect of making me just not care much any more.
On a more serious note, I sort of share and don't-share the distaste at coming up with a meta-model that might, perhaps, contain something that matches all observed phenomena. If I understand your argument, there's not much difference between accepting a constant as a constant (what you do now) and accepting a rules permutation as a rules permutation (what M-theory would have us do). To that extent, yes, I don't see this as a ground-breaking problem. In fact, if you can get to the point where your one chosen rules permutation explains *all* the various constants you can readily measure, I'd even go so far as to count it an advance.
To a pretty well educated layman in the field, it just... "feels"... more impressive. More integrated, more unified, more well-understood. It would feel like a real system instead of a collection of isolated or loosely related observations. I expect I'm explaining this badly because I'm an engineer, not a physicist, and I'm influenced strongly by my own profession's sense of system-aesthetics. To a degree, I can empathize with the attitudes that drive you nuts: "This is so beautiful, it has to be true!" Even knowing it's bad science, I can see the allure. (Engineers do that, too: "This is so gorgeous it will have to work!")
On the other hand, though, would it really predict anything we don't already know? And is there even a path toward winnowing those possibilities down? I know almost nothing about M-theory. (I know more about loop quantum gravity, which is already close to zero, and I know a little about that only because I can lie to myself that I can follow the rudiments of it on the strength of my graph theoretic training.) Is there anything like an algorithm by which one can insert known values (perhaps with error bars) of constants into the metamodel of M-theory, turn the crank, and get back a smaller set of possibilities? We won't even discuss the computational complexity of an algorithm that does that-- yet-- I'm just curious as to whether there is any path forward.
I mean, it would be amusing^W terrible to grind through all the numbers and find out that none of the 10^500 possibilities match the real world....
And then the question becomes, are there any constants it might predict that we haven't already pinned down, even if they are presently far beyond our ability to measure? Do any of them predict odd things, like the constants changing under bizarre conditions? (I seem to recall LQG predicting slightly different speeds for photons depending on their energy, something Smolin thinks might be measurable by observing Gamma Ray Bursters.)
I pretty much agree with your thoughts about 'the anthropic principle'. I think I'm about ready to add 'background independence' to that list of phrases, too. I'm still on the fence about 'the wavefunction of the universe', though.
It is often joked that, having produced precisely zero vacua that look like our world, the landscape people have extrapolated that out to 10^500.
Unfortunately, at least in my opinion, we don't understand string theory well enough to do any real nitty gritty phenomenology with it, so it's rather hard to do what everyone would like to do and start trying to match things up with the real world.
If the landscape exists, it just means (as Chad says) that string theory is just like quantum field theory, or quantum mechanics, or classical mechanics -- a broad framework, in which there can be a large number of free parameters specifying any particular incarnation, which parameters we actually have to go out and measure rather than predicting from first principles. I also do not see why this qualifies as the end of science, rather than business as usual.
As an outsider looking in (a chemist), I agree that you have to start somewhere in order to get anywhere interesting. We thank the physicists very much for electrons, protons, neutrons, wave functions, and the laws of thermodynamics, and then try to do something interesting with them as starting points. Shut up and calculate, in other words. I am also very sympathetic to Woit's criticisms: there seems to be less to string theory than meets the eye, which is saying a lot. At least, that's his take on it.
I have a nit to pick with physicists that reading these posts raises again in my mind: the use of the term "theory" in what seems to be the sense in which lawyers and non-scientists use it. That is, a "theory" is presented as any conjecture or hypothesis that may not have been tested (and may not even be testable if you buy Susskind), rather than as a hypothesis that has been dignified by a body of consistent evidence. If that distinction in usage is adopted, then I would have to say that Susskind's string theory lies firmly within theology, not science. Enumerating universes is rather like enumerating angels on pinheads.
Chad, Aaron, Sean,
You're missing the point here. Sure, the way scientific theories work is that they typically require you to go out and measure some things in order to determine the parameters of the theory. Then you have a theory that makes specific falsifiable predictions and can be tested.
This just isn't the way the string theory landscape works, and this is why it is pseudo-science.
We know a huge amount about particle physics already, but given all this data as input, the string theory landscape predicts absolutely nothing: zip, nada. Furthermore, given all the results of all the experiments we can imagine doing if someone will fund them, that's still not enough input to get a single prediction out of the string theory landscape.
If you read my review of Susskind's book, you'll see that the main point I am making is that he's not doing science because he doesn't have any idea at all what to do about this problem. If he had even a vague proposal about how to get a scientific prediction out of the landscape he would be doing science, but he doesn't. The one idea of this kind people have had and have pursued has been that of using the statistics of vacuum states to make statistical predictions. In particular, Douglas et al. hoped to be able to predict whether the supersymmetry breaking scale was low (TeV scale) or high (GUT/Planck scale). This failed, they have given up on it, and they don't have any other proposals for how they are going to get predictions and test the theory. If you know of one, let's hear it. I've read Susskind's book and the papers of the landscapeologists carefully, and I haven't found such a proposal.
At this point, the argument of the people doing this is just "We don't understand the structure of the landscape very well, maybe if we learn more about it, we will find something that will allow us to make a prediction." The problem with this is that there is just zero evidence that this will work. A scenario of hugely complicated constructions that can't be used to predict anything is exactly what you expect to get when you pursue an idea that is wrong because it is vacuous. That's what has happened here, and it continues to amaze me that trained scientists refuse to acknowledge this.
At this point, it is clear that string theory unification has failed, but people are refusing to admit this and making excuses for the theory of a sort that evades any possibility of evaluation of it by standard scientific norms.
Chad, if this had happened to your subfield of physics, you'd be pretty pissed-off too...
You know, sometimes it feels like there are no beginnings or endings in these discussions. Sort of like a wheel, really.
Aaron,
Instead of posting content-free comments that imply that I'm just pointlessly repeating myself, how about dealing with the substantive point I'm making? Specifically, if you believe the landscape is like any other speculative physical theory, and can potentially be used to make falsifiable predictions, let's hear a proposal for how this is going to happen and what the evidence is that there's some chance it will work.
Mostly I feel like I'm pointlessly repeating myself. The statement is solely this: if we can do enough measurements to precisely determine the vacuum in which we live, then the theory becomes predictive. No statistics, probability or any other crap. Is it feasible? Beats me.
Aaron,
If you look at what the people studying the landscape are doing, then think for a minute about what is needed in order to "do enough measurements to precisely determine the vacuum in which we live", the answer to your "Is it feasible?" question is obvious. And it's not "Beats me."
It's clear that you know more about this than I do, then, because it's not at all obvious to me. And I'm not being disingenuous here. I don't know of a single vacuum consistent with all of our observations.
Actually, Peter, maybe you could take a crack at the question I asked, for the laymen among us: Is there, in fact, any systematic way to begin narrowing the vacuums down based on existing knowledge? Is there an algorithmic machine into which we could, in principle, insert the value of some constant or constants, turn the crank, and come back with a smaller subset of that now infamous space of 10^500 combinations? It sounds to me as though the answer to this is, "no," but I don't recall seeing it ever addressed in the popular accounts I've read.
If not, is it known (or generally thought) to be something that could be developed in the future? Are there approaches to such a thing?
Or is it considered to be an inherently hit or miss proposition?
On a coarse level, the answer is yes. In many classes of vacua, one can look at the spectrum of light particles and forces and see that it doesn't match up with what we see. The cosmological constant also is a strong restriction on the particular vacuum. If we could understand the theory better, we might be able to understand things like the masses and interactions of the light particles to further restrict the number of physically viable vacua.
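Schematically, the whittling John asks about is just constraint filtering: keep a candidate vacuum iff all of its predicted observables land within the measured error bars. A toy sketch (with completely made-up numbers and observables-- nothing here is an actual string computation):

```python
# Toy sketch of "whittling down" a landscape by measurement: keep a
# candidate vacuum iff every predicted observable sits within the
# measured error bars. All numbers are invented for illustration.
import random

random.seed(0)

# Pretend each vacuum predicts two observables: a light-particle mass
# (arbitrary units) and a cosmological constant (log10, arbitrary units).
vacua = [(random.uniform(0.0, 1000.0), random.uniform(-130.0, -100.0))
         for _ in range(100_000)]

# Hypothetical measurements: (central value, uncertainty).
measurements = [(511.0, 5.0), (-122.0, 1.0)]

def consistent(predictions, measurements):
    """True iff every prediction sits within its measured error bar."""
    return all(abs(pred - val) <= err
               for pred, (val, err) in zip(predictions, measurements))

survivors = [v for v in vacua if consistent(v, measurements)]
print(f"{len(survivors)} of {len(vacua)} toy vacua survive the cuts")
```

The hard part, of course, is generating the predicted observables for a given vacuum in the first place.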
My feeling is that we don't even understand the theory well enough to understand what are all possible vacua, especially the supersymmetry breaking ones. Thus, all this seems horribly premature to me. Now, Peter might say that it's been 25 years, and we should be doing better than that, and maybe he's right. But that's a different argument.
We know a huge amount about particle physics already, but given all this data as input, the string theory landscape predicts absolutely nothing: zip, nada. Furthermore, given all the results of all the experiments we can imagine doing if someone will fund them, that's still not enough input to get a single prediction out of the string theory landscape.
See, I'm still not clear on what you mean by "nothing." If you mean that there are an infinite number of theories that exactly match the particle parameters we already know but don't predict any new phenomena or particles, then I would agree that there's a problem. I find that a little hard to believe, though.
If you're saying that even with the constraints of the Standard Model, there are still a ridiculously large number of possible theories that predict all sorts of different extra phenomena and particles (which I find a little more credible), then I'd say that leaves a real possibility of progress. You just do some experiments, and add some more constraints.
If you're saying that there are a nearly infinite number of theories that all match the particles we have, and produce the same values for all the particles and phenomena we can conceivably hope to measure, then yeah, that's a problem.
But none of these cases strike me as much of a crisis. It'd be nice to have some single master theory that you can plug one number into ("42"), and generate everything else. But it's not like I'll die unfulfilled if we never get that.
I would say that a bigger problem is that, as Aaron puts it, "we don't understand string theory well enough to do any real nitty gritty phenomenology with it, so it's rather hard to do what everyone would like to do and start trying to match things up with the real world." I'd really like to see more (or some) phenomenology before we start with the triumphalist books and the Anthropic Principle. Actually, I'd be perfectly happy to skip the Anthropic Principle altogether...
And, Aaron, picking up on another comment, what does "background independence" even mean in this context? I've seen Lee Smolin throw it around over on Cosmic Variance a lot, but I don't have the foggiest idea what he's talking about...
S. C. Hartman: I have a nit to pick with physicists that reading these posts raises again in my mind: the use of the term "theory" in what seems to be the sense in which lawyers and non-scientists use it. That is, a "theory" is presented as any conjecture or hypothesis that may not have been tested (and may not even be testable if you buy Susskind), rather than as a hypothesis that has been dignified by a body of consistent evidence.
I have some sympathy for this, but really, I haven't noticed that many people in other sciences being really scrupulous about the "theory" vs. "hypothesis" distinction that we all learned in grade school. At least among most of the chemists I know, "theory" is used more or less the same way it is by most physicists, namely to mean "not experiment."
It'd be easier for a lot of people if we'd managed to keep that distinction a little sharper (and "string hypothesists" has a nice ring to it), but even with that, the anti-Enlightenment wing of our society would find a way to be a nuisance.
John,
Maybe Aaron and I don't actually disagree about this. He says it's "horribly premature" to try to calculate anything you can hope to compare to the real world; I say it's completely unfeasible.
As Aaron says, the first problem is that one doesn't even know how to characterize all the possible vacua. The 10^500 number refers to a specific construction, but there are lots of others that seem to make sense, some of which involve infinite numbers of vacua.
The problem with this whole field has always been that you can only calculate things in a small number of very simple cases, and even there you generally can only get some crude information, like the ranks of the gauge groups, and the representations of low-energy fermions. You can't reliably calculate the actual standard model parameters. What you find is that the very simple backgrounds you can handle, even at a very crude level, don't have the right particle content. To get the particle content you want, you have to go to more complicated backgrounds, where you really have no hope of actually calculating standard model parameters.
The number of these backgrounds is so huge, though, that if you assume all the things you'd like to calculate are more or less randomly distributed, picking out the right particle content and correct values for the parameters doesn't cut things down anywhere near enough, and you'd end up with an astronomically large number of models, all of which agree with the standard model, and worse, all of which agree with about anything you might see at the LHC or any other conceivable accelerator. So, if things are randomly distributed, you're really doomed and have a "theory" which can never predict anything.
This certainly hasn't been demonstrated, and it may not even be a well-formulated problem to do so. One can hope that there is some structure to the set of vacua such that predictivity is not completely ruined. This is possible, but since I haven't seen any evidence for this, I'd say it looks like pure wishful thinking.
While I think wishful thinking is a bad way to be doing science, what I find most appalling are those, like Susskind, who seem to accept the idea that these things are randomly distributed and predictivity is ruined, but refuse to admit that that means the game is over and the idea of string theory unification is just wrong.
Chad,
The "theory" is not well enough understood to be able to conclusively show this, but yes, there's a real danger here that there is an essentially infinite number of possibilities, compatible not just with everything we have measured, but with every possible thing we can ever hope to measure (at least for the next few centuries).
This is all a somewhat ridiculous philosophical debate, since we're not likely to ever be in a position to actually calculate reliably all this stuff and check to see how real this problem is. But what is amazing about it is there is no evidence that we have any business mucking around in this ugly mess to begin with. There's no evidence at all for this ugly scenario of what the world looks like at a fundamental level, so you'd think that the fact that it looks essentially impossible to ever get predictions out of it would be enough for people to give up on it.
I'm not criticizing this stuff because it doesn't generate everything out of one simple idea. That would be nice, but maybe it's too much to expect. I'm criticizing this stuff because it doesn't generate anything at all. It requires adding in layers and layers of complexity just to avoid contradicting what we already know, never telling us anything we don't. It's not science.
Assume, for example, that the string vacua have a distribution across weak susy-breaking Lagrangians (which is highly unlikely, but what the hell). Then, we have 100 or so parameters to measure if we ever get our hands on a nice big linear collider. All of a sudden 10^500 doesn't seem like a very big number any more.
For Chad, sorry, but I think I need to work up a bit more energy to try to explain background independence. The basic idea is that the equations of GR do not make any reference to any particular spacetime manifold, i.e., background.
Let's see, 100 parameters, lots of which we'd be lucky to measure to within 1%... So, by 2040 or so, string theory still won't predict anything, and there will only be about 10^300 theories compatible with everything we know about particle physics (OK, throw in the CC and everything we know to high precision, maybe we're down to 10^100 at best). In the face of prospects of this kind for a really ugly theory, I still can't figure out why people don't immediately give up on it, and instead spend their time defending it. Is there a particular number at which you're willing to give up? 10^600, 10^1000?
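To spell out the arithmetic behind those numbers (granting, with zero justification, the assumption that the vacua scatter each parameter uniformly and independently):

```python
# Back-of-the-envelope version of the counting: measuring n parameters,
# each to fractional precision eps, keeps roughly a fraction eps^n of
# the vacua, IF parameters are uniformly and independently distributed.
import math

def log10_survivors(log10_vacua, n_params, eps):
    return log10_vacua + n_params * math.log10(eps)

print(log10_survivors(500, 100, 1e-2))  # 100 params at 1%: 10^300 left
print(log10_survivors(500, 100, 1e-5))  # 100 params at 10^-5: order 1
```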
These models are so far from reality that there's not much point in arguing over details. My only point is that one hundred parameters gets you into the right ballpark.
And for the record (thanks, Peter, for the answer) I'm coming at the question not from a physics perspective, but from a computer science perspective, where the first question is, "Is it possible in principle to apply pre-existing physical knowledge to the new theory, and whittle down the possibilities?"
It seems the answer is presumed to be yes, but that no one's really thought about how that might happen. If the answer were actually no, I'd consider that to be the most damning flaw of all, in that you'd have a haystack and needle to look through, and no magnet to pick up the needle. I'd also, frankly, be terribly surprised and suspect that the answer is wrong.
The next questions begin narrowing that down algorithmically-- what, for instance, might the computational complexity of that narrowing process be? How fast could the states be narrowed down, in principle? In the formulations that have infinite states (which alone sounds rather dubious to me), can any useful narrowing be done? What is the character of that infinity?
Narrowing down even further, is it even possible to do rigorous calculations of the sort that says, "Well, if I measure these constants to *this* precision, then I could narrow the states down by this much?" Again, it seems the answer is yes, since you were doing that in a back-of-the-envelope, or perhaps facetious, sense earlier. Hard to tell from here....
Answers to those sorts of questions-- which really ought to be of fundamental interest to someone with a strong CS and physics background, I'd think-- would at least give a way of characterizing what sort of progress could be made on that path in the future. It might, for instance, mark the beginnings of a plan.
As a final pair of notes:
Aaron, every time you say something like, "The models are so far from reality that..." you make non-physicists (especially hard-hearted engineers) roll their eyes and wonder why you're not doing something more productive with your time.
But Peter, every time you decry the length of time to get results from some approach, you make us reflexively ask: what sort of schedule would be acceptable to you? Do we need a path to get results by, say, 2015? Or is 2020 good enough? Or, how few states to search would be acceptable? 10^10? 10^100?
John,
People have certainly thought long and hard about what is needed to identify the possible string theory vacua, analyze them, and whittle their number down. The problem is that there are obvious obstructions to doing this successfully, and no one has any idea what to do about them.
I do strongly object to the recent attitude string theorists are selling of "25 years of working on this is not long at all". The thing about working on a very speculative idea is that you are asking to be allowed to not be subject to the standard discipline of science: having to make predictions that can be tested to see if you are wrong. It is quite reasonable to ask for this temporarily, for a few years while you try and see if the idea works out. If you start asking for not five years of this, but for fifty years of it, you are asking to be given credit to do something for your entire professional life, with the understanding that the bill will not come due until you are dead and gone. Not ever having to be responsible for the success or failure of what you are doing is not a good thing for people, in life in general, or in science in particular.
What I don't see string theorists doing at all these days is taking any responsibility for where they are going, or for their failure to get anywhere over the last 25 years.
The question of whether the Copenhagen Interpretation or the Many-Worlds Interpretation, or some other Interpretation is the right one is interesting on an abstract level, but I can't see investing a great deal of intellectual energy in it, because they all "predict" exactly the same thing.
Actually, the interesting stuff in the "QM interpretation" field happens when people go beyond the realm of interpretations that predict the same thing, one way or another.
Sometimes they claim that interpretations that we thought predicted the same thing really don't (these people are usually wrong, but they spark interesting arguments in the meantime). Or, they insist that all the available interpretations are so absurd that we have to throw out QM altogether and replace it with something else, in which case they typically pull out a new theory and people try to show whether it replicates all the predictions of QM. Or, they insist that QM is completely incapable of converging to the everyday world of classical-ish phenomena that we see around us, and people try to show that it can (e.g. decoherence).
Actually, the interesting stuff in the "QM interpretation" field happens when people go beyond the realm of interpretations that predict the same thing, one way or another.
Sometimes they claim that interpretations that we thought predicted the same thing really don't (these people are usually wrong, but they spark interesting arguments in the meantime). Or, they insist that all the available interpretations are so absurd that we have to throw out QM altogether and replace it with something else, in which case they typically pull out a new theory and people try to show whether it replicates all the predictions of QM. Or, they insist that QM is completely incapable of converging to the everyday world of classical-ish phenomena that we see around us, and people try to show that it can (e.g. decoherence).
That's why I made the Bell's Inequality comment in the initial post. I think that the field of QM interpretation would become absolutely fascinating if someone were to devise an experiment that could clearly distinguish between different meta-theories. Even the occasional near misses (the Afshar thing a while back) are pretty interesting.
But we're not really there yet.
The decoherence stuff, I see as something separate from the interpretation question. Regardless of the interpretation, you need to have a mechanism for getting from the microscopic world where superposition states are fairly robust up to the macroscopic world where they're nonexistent, and that mechanism is worth investigating. And in the longer term, it might help with applications in quantum information processing.
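If you want a concrete picture of the mechanism, the standard toy model is pure dephasing of a single qubit: the off-diagonal elements of the density matrix decay away while the populations stay put. A quick sketch (with a made-up coherence time):

```python
# Toy picture of decoherence: pure dephasing of a single qubit. The
# off-diagonal ("coherence") terms of the density matrix decay with a
# coherence time T2 while the populations stay put, so the superposition
# fades into a classical mixture without any "collapse".
import numpy as np

T2 = 1.0  # arbitrary coherence time
rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)  # the state |+><+|

def rho(t):
    out = rho0.copy()
    out[0, 1] *= np.exp(-t / T2)  # coherences decay...
    out[1, 0] *= np.exp(-t / T2)
    return out                    # ...populations (diagonal) untouched

for t in (0.0, 1.0, 5.0):
    print(f"t={t}: coherence = {abs(rho(t)[0, 1]):.4f}")
```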
That's why I made the Bell's Inequality comment in the initial post. I think that the field of QM interpretation would become absolutely fascinating if someone were to devise an experiment that could clearly distinguish between different meta-theories. Even the occasional near misses (the Afshar thing a while back) are pretty interesting.
But we're not really there yet.
Didn't Deutsch have a QC approach to, maybe, someday, winnowing out the different interpretations? Or am I just hallucinating?
I think the discussion of QM interpretations can be interesting if it provides a new point of view, e.g., for quantum gravity. This is why I wrote about it on my blog.
E.g., the question of whether it is even in principle possible to detect single gravitons, and whether we can measure graviton states without disturbing them (I think not).
I think Deutsch claimed to have such a thing. I don't precisely remember what it was, but I remember not thinking too highly of it. My feeling on the subject is that there are no-collapse theories and collapse theories, and that's the only distinction that matters because it's a question of physics that can, in principle, be experimentally probed.
Re: Deutsch: I remember not thinking too much of it either, and remember thinking that it seemed ideologically designed to give him the answer that he wanted, e.g., something related to the nature of computation being the interaction between the many-worlds themselves.
I'll confess to not being enough of a physicist to really be able to critique it (then or now) but it set some of my wishful thinking detectors off at the time.
The details are now lost since I no longer have time to keep up with the fun developments in QC.
I didn't see anyone make this point in the comments yet. Maybe someone has, and I missed it.
One way to give operational meaning to a theory being predictive in the sense of being empirically testable is to ask
What future experimental result would cause you to reject the theory?
I think what worries a lot of people about string thinking is that it seems so amorphous that it might be able to accommodate any future experimental measurement. In fact I am not aware of any string theorist's answer to this basic question.
Something like this question was asked from the audience last July at the Toronto Strings '05 conference panel discussion. "What theoretical or experimental result would make you get out of string theory?" The panel had no answer. Steve Shenker, the moderator, quipped "You're not supposed to be asking that!"
I think a lot of people feel that if a theory is genuine science then practitioners have to come to grips with this issue--the theory can't be so mushy that it is infinitely accommodating. For example, if in 1919 Eddington had measured a much different angle of light-bending, then Einstein would have had to scrap General Relativity. A real theory must survive repeated tests by measurements which could refute it--so it must bet its life on the outcomes of future observation. To be science a theory has to UNpredict something. It has to be able to say: if you measure this and it comes out different from what I say, then I am wrong.
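(If you want the numbers behind the Eddington example, here is the back-of-the-envelope version:)

```python
# The numbers behind the Eddington example: GR predicts a deflection of
# 4GM/(c^2 b) for light grazing the Sun, twice the "Newtonian" value.
import math

GM_sun = 1.327e20  # m^3/s^2
c = 2.998e8        # m/s
R_sun = 6.96e8     # m (grazing impact parameter)

deflection = 4 * GM_sun / (c**2 * R_sun)  # radians
arcsec = math.degrees(deflection) * 3600
print(f"GR: {arcsec:.2f} arcsec; Newtonian: {arcsec / 2:.2f} arcsec")
# ~1.75 vs ~0.87 -- measurably different, so the theory bets its life.
```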
So, what does string theory UNpredict? Or is it infinitely accommodating mush (as one begins to get the impression)?
Personally I would be very pleased if some string theorist would give a definite answer and say what would make him or her abandon string and try some other approach to modeling spacetime and matter. This would validate string for me as science instead of pseudoscience, which would be very welcome. It would not necessarily persuade me to prefer string to some of the alternative approaches, but at least it would show that it is not phony science.
Chad,
I like your blog--this is an interesting thread. I just had an epiphany about why discussions of the string predicament can easily get scattered. I think the practitioners may have lost touch with the main goal.
So I will say what I think the main goal is and see if anyone disagrees.
The aim is a testable general relativistic quantum physics
This should be a path integral which has QFT as the limit as G -> 0 and Gen Rel as the limit as h-bar -> 0
So it is a path integral involving spacetime and matter (as combined entity) which gives General Relativity in the classical limit (as h-bar goes to zero) and which looks like Feynman diagrams or Quantum Field Theory in the zero-gravity limit (as G goes to zero).
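Schematically (a cartoon only--no claim about the detailed form of the action or measure):

```latex
Z \;=\; \int \mathcal{D}[g]\,\mathcal{D}[\phi]\;
        e^{\,i S[g,\phi]/\hbar},
\qquad
\hbar \to 0 \;\Rightarrow\; \text{Gen Rel},
\qquad
G \to 0 \;\Rightarrow\; \text{QFT on flat space}
```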
And it should be testable. There should be some definite thing that it unpredicts, so that it self-destructs if the result of some definite future experiment goes against it.
We know this is a reasonable goal because it was recently achieved in the 3D case by Freidel and Livine (hep-th/0512113). They were using spinfoam formalism rather than string, but this doesn't matter--the goal of a general relativistic quantum physics is the same in either approach.
In the F-L paper, matter is realized as a feature of spacetime, and spacetime doesn't have an independent existence apart from matter. So there is a Lagrangian involving essentially one thing: spacetimematter. And there is a path integral--which has Gen Rel as classical limit and QFT as zero-gravity limit--and it predicts a Planck-scale dispersion in the speed of very high energy gamma rays. So IF the theory can be extended to 4D (big if!!!) then it will be a testable general relativistic quantum physics.
Now what I am suggesting is that probably string practitioners have the very same goal, but it is just not being enunciated very clearly. So in the string area of research you don't hear of measurable progress being made towards that goal. What you mainly hear is the assertion that there are no alternatives to string (which I think is not the case, as http://arxiv.org/hep-th/0512113 shows; indeed, there are several other emerging examples).
What I hope is that (if I am mistaken) someone will explain why the Freidel-Livine paper does not do what I say it does and why the goal of quantum-gravity-and-matter research is NOT what I've described. If my statement of the goal is incorrect, I would like the correct goal clearly stated.
I think sometimes that string theory itself has become confused with the goal, in some people's minds. I hope to help remedy this.
thanks and best wishes,
Who
This should be a path integral
How do you know that?
We know this is a reasonable goal because it was recently achieved in the 3D case by Freidel and Livine.
2+1D gravity was quantized years ago. In fact, there are multiple, inequivalent quantizations of it (see Carlip's book, for example). Gravity in 2+1D has no local degrees of freedom, so it's significantly more tractable than 3+1 dimensions.
Aaron,
this may be a good opportunity to ask a (perhaps stupid or naive) question: Is there a specific reason for superstring theory to compactify to an M4 x CY space and not M8 x something or M3 x something?
Or in other words, did somebody study M3 x something compactifications and compare with the 2+1 results?
You can compactify down to any dimension you want. 4D is focused on because that happens to be the real world. Intriguingly, though, topological string theory works best for six-dimensional Calabi-Yaus.
Let me get back to you on your other question.
Aaron,
thank you for the quick answer. I would really be curious about the M3 x something case, because it would be the only case where one can (currently) compare string theory with other approaches.
The way I understand your answer so far, the "Landscape" does not only contain 10^500 possible universes of the form M4 x CY but, in addition, a gazillion universes of the form M1 x something, M2 x something, ... M10?
One would expect that some sort of consistency or stability argument should exist to rule some of them (all of them) out ...
I checked with Jacques to see if there was stuff I don't know about. As far as I know, the basic story is that lots of 3D compactifications have been studied, but they all have lots more than just gravity floating around, so it's tough to compare them to the various known 2+1 quantizations.