in which I triangulate on string theory and quantum gravity and ponder the "Trouble with Physics"...
which is that physicists are hired the same way we pick apples at the supermarket.
Look! Shiny! Big! Red!
Finally, I finished Smolin's "Trouble with Physics". Hopefully in time for the paperback coming out...
It is very good, in parts. Well worth reading, and will amuse some, interest others and infuriate the occasional technician.
Fortunately I am not the first to review the book, and I will lay no claim to being either comprehensive or unbiased.
I had a brief and early fling with string theory, enough to admire its beauty, want to know its progress, and worry about where it is going.
I should also note that I am an associate member of the Institute for Gravitational Physics and Geometry, and overlapped with Smolin for a year or two before he left for the Perimeter Institute. IGPG is kind of "Quantum Gravity Central", and I spend a small but finite amount of time (in a normal year) peering at quantum gravity seminars or PhD theses. I also occasionally sneak off to relativity meetings and have been known to pop up at meetings where string theory is spoken, especially in cosmological and astrophysical contexts.
To complete the circle, I am currently a guest at Stanford, and am receiving value from the physics department, SLAC, and the Kavli Institute - the place is crawling with particle theorists and the string theory subspecies thereof, some of whom I have had pleasant and productive conversations with going back 20 years now.
Smolin is really challenging academic dogma; string theory is just unfortunate in being the immediate and rather blatant example of that. String theory really is still hot, the underlying beauty of the theory is amazing when you first meet it, and it is intuitively obvious to most physicists who have looked at it closely that it must capture some essential truth about the universe.
But Smolin's specific criticisms of string theory - that it has failed to make falsifiable predictions, that it is too faddish and narrow, and that it favours narrow technical expertise over imagination and breadth - also have essential truth in them.
Witten famously noted that string theory came on the scene a century ahead of its time; he also noted that we don't know what the "deep" principle of string theory is.
There is a possible interesting analogy with General Relativity - Gauss probably discovered non-Euclidean geometry in the 1820s, but he seems to have refrained from pressing the results in deference to colleagues in order to avoid conflicting claims of priority.
If Gauss had pressed on the topic, right after his work in gravitation, he might have explored what happens when the curvature is the integrand in the principle of least action. It would not have taken much for Gauss to have discovered that the leading order gives you Newtonian gravity, and that the higher order terms give post-Newtonian corrections to gravity - corrections which are perturbative and finite in the weak field limit.
But, there are large families of possible geometries which give different post-Newtonian corrections...
Without a physical principle, such as the Equivalence Principle, General Relativity would not be uniquely recovered; it would just have been one of the simpler possible extensions of Newtonian gravity, out of many permitted by the general application of the principle of least action to curvature in non-Euclidean differential geometry.
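To put the counterfactual in modern notation (notation Gauss did not have, of course): the action Einstein eventually arrived at is the curvature integral

$$S = \frac{c^4}{16\pi G} \int R \sqrt{-g}\, d^4x,$$

whose weak-field limit reproduces Newtonian gravity. But nothing in the mathematics alone forbids the larger family

$$S = \frac{c^4}{16\pi G} \int \left( R + a R^2 + b\, R_{\mu\nu} R^{\mu\nu} + \dots \right) \sqrt{-g}\, d^4x,$$

where $a$ and $b$ are free dimensionful coefficients. All of these share the Newtonian limit, but they differ in their post-Newtonian corrections, and only a physical principle singles out the first.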
I fear that is where string theory went. It discovered a family of theoretical approaches and solutions which clearly encompass an essential truth of quantum gravity and its union with the standard model and post-standard-model physics, but we missed the deep guiding principle which tells us how to select the true theory (such as it is), and where the exact theory came from and leads to.
And that sucks. Especially because a lot of people will not only not believe that this is a real problem, they will make sure nobody else believes you either.
Someone out there quite likely is already on the right track to the true theory, but their odds of survival in the current academic system are not wonderful. We may just have to wait a generation or two for a good approach to be rediscovered, which is a shame, 'cause some of us want to know! Now!!
I don't know that loop quantum gravity or any of the other half-formulated alternative approaches are going to go anywhere either, although they do hint at underlying truths also - possibly different aspects of the same truth string theory probes. Smolin covers this quite well, and notes the strength of the alternative approaches - they are falsifiable, which means most of them will be proven wrong. That is good science, but it will be used by silly people to disparage the alternate approaches. It is better that they can be proven wrong.
Smolin, to my mind, goes a little bit off-track in the last section of the book, for a while, but he finishes strong. His points on groupthink, and on the systematic bias which discourages innovation and risk-taking by young researchers, hit painfully home - it is all too true, and yet it self-perpetuates because the mechanisms which reinforce conservatism in science are there for reasons. The system is flawed, and possibly broken, but the fix is not as simple as Smolin suggests - funding agencies are terrified of funding bad science, since there is so much pretty good science it is safe to fund, and as a community scientists are very harsh when bad science is mistakenly given precious resources.
It is the same market flaw that gives us beautiful flawless large red apples in supermarkets - with no taste.
To get the old intense-flavour varieties that everyone loves once they taste them, we would have to choose the small, bruised, discoloured apples when we shop, and leave the flawless big red apples with no taste in the bins. But collectively we do not, and the market responds. All for the fear of being the one department-head consumer to go home with an occasional rotten apple.
The real shame is that the big red shiny tasteless apples are rotten just as often, they just look so good sitting there, waxed and sprayed, in the bin.
'Course if you only get to buy one apple every three years you learn to be very conservative in your choice; don't want a rotten or even tart apple this decade.
We will muddle through, progress will be made again, hordes of string theorists will be proven wrong, and some few of them may well be right, but no one will remember which.
Science is self-correcting, which is its great strength, as long as we don't let the sociology do long term damage to the underlying scientific methodology.
It is tempting, when reading such summaries, to start speculating about what is really going on: is supersymmetry dynamically broken by time evolution of fields? could the essentials of stringiness be united with the quantum geometry of loop gravity - maybe the propagators on the QLG vertices are constrained by the string topologies? would an action principle provide the right constraints in QLG? how would it work with string compactification? do the orbifold singularities trap closed strings? do they even make sense in the QLG limit? is the topology dynamic? can we take the black hole degrees of freedom off-brane preserving unitarity in the higher dimensional theory? are there any additional gauge fields and would we see them as black hole hair on-brane? just wtf is going on with large extra dimensions and do they really somehow explain the cosmological constant? do branes collide, and if so, what is the probability of seeing a glancing collision inside our horizon that did not also totally annihilate us through reheating to the Planck scale... ok, I actually want to think more about that last one, at least at the blog post level...
or I could get back to work and think about actual data on stars and stuff. I promised NASA when they gave me the grant... it is good solid stuff that.
Or, as a great director of a great institution told me once when offering me a postdoc: "you could even work on quantum cosmology, for one year..." - he was only half-joking.
PS: I thought Smolin was a little harsh on the theme of "no one is doing foundational quantum mechanics" - going back and glancing at the index I see no mention of the Aspect experiment or some of the stuff being done in quantum optics and condensed matter physics which really goes back and hammers on these issues, with experiments and theory really tightly tied together by modern standards. He may have discussed it in the quantum computing context, which is the other venue where some of the foundational issues are really being tackled.
Hey, interesting post. It seems like much of this string theory stuff is hand wringing out of impatience. It took how many centuries between Newton and Einstein? Even taking into account the exponential growth in scientific papers over the past half century, it seems perfectly reasonable it would take us longer than a century to merge the two great theories of the 20th century in a satisfying, falsifiable way.
Science, like any human endeavor is pretty inefficient. But at least we have some metric for self-correction, as you say.
As one who tries to keep up on physics but is by no means a professional in the field, I have never heard of QLG before, but am fascinated to hear about something other than String Theory. All the popular science books on the subject seem to cover String Theory, with a few making a mention of the fact that it is still unverified; is there some non-specialist level work that looks at all these other theories as well?
Yarn Theory is softer and more colorful than String Theory, but ultimately your preference may depend on whether you are going to the beach or the mountains.
Ouch.
Halcyon: "Trouble with Physics" tries to cover some of the string theory alternatives at the non-specialist level, so it is a start.
John: you're right, but... the people around right now are impatient, they don't want to wait a century. Me wanna know now!
Impatience can be a good thing, but it does lack a certain zen.
Smolin covers this quite well, and notes the strength of the alternative approaches - they are falsifiable, which means most of them will be proven wrong.
As I recall, Smolin lets this implication lie around, but never actually states it. Lorentz violation is not a prediction of LQG, and there is still a disagreement amongst people who work on doubly special relativity as to whether it predicts anything for satellite experiments.
As best I can tell, absolutely nothing in quantum gravity has made anything even remotely resembling a falsifiable prediction. We're all in the same boat here.
Counterfactual history is a risky business. Having said that, don't you think Gauss would have needed to know special relativity for this to happen? After all, the GR action is a spacetime curvature (rather than just the spatial curvature), where spacetime is a manifold with an indefinite metric - a fact coming from SR. IMHO, this is a crucial insight (apart from the Equivalence Principle) without which Gauss and Riemann could have never seen what Einstein/Hilbert saw.
Not quite: for example, to start at the entry level - MOND makes a fairly unambiguous prediction about absolute acceleration which is clearly falsifiable.
On Lorentz violation - the quantized geometry class of theories in general predict violation of Lorentz invariance at high energies; the prediction is not unique to those classes of theories, but if Lorentz invariance holds at high enough boost then it rules out whole classes of theories - and that is testable in the near future.
Arguments have been made that Lorentz invariance can be restored to loop gravity, but it is not clear that it is possible, and the mechanism for doing so has other interesting implications... as you know, Bob.
On Gauss: he was pretty smart. But he would not have got the right extension to gravity, the post-Newtonian terms would have looked interesting and plausible, but always wrong in detail.
No doubt some smart kid in the 1840s or 1850s would have played with 4D curvature...
then someone would have started flipping the signature on the metric, probably through algebraic error initially...
It would have been a mess, we might still be arguing about it now, had things gone that way.
MOND makes a fairly unambiguous prediction about absolute acceleration which is clearly falsifiable.
MOND is wrong. TeVeS is probably wrong, too, but it's not hopeless. Either way, it's not a theory of quantum gravity.
the quantized geometry class of theories in general predict violation of Lorentz invariance at high energies
Which theories are those? In most of the theories I have seen that start with a discretization, one takes a limit over finer and finer discretizations eventually obtaining a continuum theory. I suppose there are gedankentheories that postulate some minimum lattice spacing, but I haven't met anything that's even marginally developed.
The situation with LQG, if you're referring to DSR, is that there is no derivation of DSR from LQG and even there, as I said, it's not at all clear that DSR actually predicts any modification of dispersion relations.
MOND is almost certainly wrong, but that is because it made a prediction that was then tested, and MOND is not holding up to the additional tests.
That is a good thing - MOND was never a "theory", it was a one parameter heuristic toy model, a "let's step back and reconsider". That is the sort of thing that needs to be done, and done more. Because it could be proven false.
As I recall, in QLG at the current level of approximation, there is an "optimum patch" size which minimizes deviations from Lorentz invariance.
DSR, as I understand it, approaches the problem from the other end, and basically asks: if Lorentz invariance is modified, how could we formulate a dispersion relation that is coherent given what we know about physics - ie if you break Lorentz invariance it had better break in a way consistent with what we already know.
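The sort of thing the phenomenologists write down, as I understand it, is a generic Planck-suppressed correction to the dispersion relation (here $\xi$ is just a dimensionless parameter of order unity, my notation, in units where $c = 1$):

$$E^2 = p^2 + m^2 \pm \xi\, \frac{E^3}{E_{\rm Pl}} + \dots$$

which gives photons an energy-dependent speed, $v(E) \approx 1 \mp \xi E/E_{\rm Pl}$; integrated over cosmological distances that tiny difference becomes a measurable arrival-time spread, which is exactly the kind of thing GRB timing observations can constrain.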
I am curious now - in what way is, say, quantum loop gravity, more of a gedankentheory than string theory? Right now? Is it just the quantity of thought that has gone into the respective theories?
Or to put it another way: does string theory, as such, make any falsifiable predictions about, say, GLAST observations? That is to say, can any subclass of string theory be ruled out as inconsistent with reality based on observations or upper limits made by GLAST in its first five years of observations?
There are certainly "beyond the Standard Model" models which make falsifiable predictions for GLAST observations, and GLAST observations could rule out, or test, classes of quantum gravity models.
I'm just using GLAST as a "for instance".
As best I can tell, there is no unambiguous prediction of Lorentz violation from LQG. If you could direct me to a reference that says otherwise, I'd be happy to give it a perusal.
I was explicitly using gedankentheory to refer to theories other than LQG (or, say, CDT) which are reasonably developed. LQG uses a strange Hilbert space that looks lattice-like, but it is not, as best I can tell, a discretization theory. And it doesn't predict DSR, and DSR doesn't necessarily even predict a modified dispersion relation.
As I said in my first comment, no theory of quantum gravity makes a prediction for GLAST. Lee often tries to imply otherwise, but he's usually careful not to explicitly say it outright because he knows it's not true.
On the other hand, outside of quantum gravity, there have been attempts at phenomenological QFT models that include Lorentz violation, but it's usually quite hard to stop that violation from infesting everything we already know to be Lorentz invariant. I'm not so up on what the status of these ideas is.
Great review, Steinn. I do have to agree with the spirit of Aaron's comments: although Lee talks a lot about loop quantum gravity, and then talks a lot about phenomenological models like MOND and DSR, and then talks a lot about potentially anomalous observations like the Pioneer anomaly and changes in the fine structure constant, he leaves you with the impression that these are all somehow connected, which is completely untrue. The idea that loop quantum gravity is somehow more intimately connected to experiment and observation than string theory is very hard to support -- both have chances to connect if things go really well, but neither makes a hard and fast falsifiable prediction. Yet, anyway.
"In what way is, say, quantum loop gravity, more of a gedankentheory than string theory?"
In my mind the main difference is the question of whether one has a unique "theory" at all, a mathematical machinery that gives unique answers to physical questions. In every construction of LQG (there are many, and no indication they are equivalent) there are choices made, and it is not clear what is the space of all choices and what is the physical reason to single out these particular ones. It seems that many times those choices are made just to allow tractable discussion at all.
This is a crucial point because ultimately these are attempts to quantize what looks like a non-renormalizable theory. One can do that and extract finite answers without a problem, but the real issue is that there are infinitely many ways of doing that. So the thing for LQG to prove is not that results are finite, but that they are not ambiguous. To my knowledge there is no indication that the set of LQG quantizations is not infinite dimensional, as one would get for example by conventionally quantizing the Einstein action with an ad hoc cutoff.
A defensible opinion about string theory excludes falsifiable predictions. There are none. String theory is vulnerable to a falsified founding postulate.
"The equivalence between the effects of a massive body and an accelerating geometry in perturbative string theory follows from the state-operator correspondence and the BRST invariance of the graviton vertex operators." A reproducible Equivalence Principle violation kills string theory. The EP has been robust,
http://www.mazepath.com/uncleal/lajos.htm#b1
Affine, teleparallel, and non-commutative gravitation are EP-free. A chiral pseudoscalar vacuum background divergently acts upon mass sector opposite chirality events to give EP violations.
http://www.mazepath.com/uncleal/lajos.htm#a2
This experiment has never been performed. It measures non-identical vacuum insertion energies of opposite parity mass distributions plus EP parity violation. A parity Eotvos experiment opposing space group P3(1)21 and P3(2)21 alpha-quartz test masses measures an EP parity anomaly alone.
Somebody should look. Parity divergence is the only allowed but untested EP anomaly. Its presence does not contradict 420+ years of observations in any venue at any scale.
"MOND is almost certainly wrong, but that is because it made a prediction that was then tested, and MOND is not holding up to the additional tests."
Stipulating MOND's limitations, what prediction is being referred to here? If the issue in question involves the observations of weak lensing and X rays from the bullet cluster, I agree that the result is beautiful and intriguing. But speaking as an oft-scarred button pusher, I have learned to be leery of basing *any* broad conclusions on the data for a single object.
Bob
you mention the possibility that some unknown principle could save the day by selecting the true theory.
First, you would lose the anthropic "explanation" of the smallness of the cosmological constant and of other features of our universe. In my opinion, the fact that the Standard Model Lagrangian is so dirty (as compared to 11d supergravity) gives a strong indication that no such principle exists.
Second, even if this principle exists and is as simple as "the deepest vacuum wins", it would likely remain useless because searching among 10^500 possibilities looks practically impossible. In fact, even computing the energy of one vacuum with the needed 120 digits is practically impossible.
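(For anyone wondering where the "120 digits" comes from: in Planck units the observed vacuum energy density is roughly

$$\rho_\Lambda \sim 10^{-120}\, M_{\rm Pl}^4,$$

so checking whether any given vacuum matches observation means computing its energy through cancellations good to about 120 decimal places.)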
It really looks a hopeless situation.
Given an alphabet of twenty amino acids arranged to make a protein chain about 385 amino acids long, you have 10^500 possible proteins. This is not so remarkably long (it's certainly not a patch on titin).
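(Checking the counting: with 20 choices at each of $n$ sites,

$$20^{n} = 10^{\,n \log_{10} 20} \approx 10^{\,1.3\, n},$$

so $10^{500}$ corresponds to $n \approx 385$.)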
Just sayin', sometimes Nature finds a way. . . .
Let us stipulate that MOND is "wrong" and cold dark matter (CDM) is "right", in accordance with majority opinion. Then where is the CDM explanation of the MOND phenomenology, which represents an enormous fine tuning in CDM? In my mind, this represents a big problem for CDM, which should be talked about more by the CDM people. It seems like there is a missing "selection principle" here (in CDM cosmology) also.
"Then where is the CDM explanation of the MOND phenomenology, which represents an enormous fine tuning in CDM? In my mind, this represents a big problem for CDM..."
Well, for what it's worth, Kaplinghat and Turner published a paper on that topic a few years ago (ApJ Letters, 569, L19 [2002]), but I found it somewhat contrived and ultimately unpersuasive.
Bob
Aaron Bergman writes:
You're right. So far, anything that looks like a 'prediction' of Lorentz violation from loop quantum gravity involves extra assumptions that essentially stick this violation in by hand, with no ability to predict its magnitude. In particular, the beautiful derivation of κ-deformed special relativity from 3d quantum gravity is not something anyone knows how to mimic in 4 dimensions.
After all, the GR action is a spacetime curvature (rather than just the spatial curvature), where spacetime is a manifold with an indefinite metric - a fact coming from SR. IMHO, this is a crucial insight (apart from the Equivalence Principle) without which Gauss and Riemann could have never seen what Einstein/Hilbert saw.
Interestingly, some of the observations from which one could have been led to SR were available to Gauss. It's no longer widely known, but in his work on mechanics (with which Gauss would have been familiar) Euler was admirably clear about the fact that it's impossible to give any operational meaning to the notion of an absolute reference frame. But he erroneously thought that thought experiments showed that transformations between reference frames are Galilean. The interesting speculative question then becomes whether one could have hit on the idea of Lorentz transformations before the emergence of a theory of EM and the realization that EM isn't invariant under Galilean transformations.
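To spell out the contrast in modern notation (which neither Euler nor Gauss had): the Galilean boost Euler implicitly assumed is

$$x' = x - vt, \qquad t' = t,$$

while the transformations that actually leave electromagnetism invariant are the Lorentz boosts

$$x' = \gamma\,(x - vt), \qquad t' = \gamma \left( t - \frac{vx}{c^2} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

which reduce to the Galilean form as $c \to \infty$ - which is why mechanics alone, at everyday speeds, could not have distinguished them.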
Hm. My understanding is that most of "Trouble with Physics" has to do with the sociology of science, with the scientific coverage only being a small part. Are the science parts of The Trouble with Physics meaty enough to justify the book from the perspective of someone who's interested in some kind of introduction to LQG and what it means but less interested in a full book about the sociology/philosophy-of-science stuff?
Trouble with Physics does a fair bit of sociological conjecture, but a good half of the book is a concise history of recent theoretical physics, including a fair exposition of string theory and some of the alternative theories like LQG.
MOND is in pretty serious trouble, but the data it sought to explain is still there.
I would agree that cold dark matter has some more explaining to do; the predictions of vanilla CDM theories are wrong in detail, but it is not clear if the problem is the physics of the CDM itself, or the messy physics of what happens to CDM as it meets the rest of the universe. Could be something significant there, but hard to say.
Ok, now the hard stuff: one of the things that make me sympathetic to Smolin's point of view is that QLG seems to be held to a higher standard of proof than string theory - I don't know that it is fair to say that QLG should be less ambiguous in the results of calculations or make harder predictions than string theory...
Now, one of the things I'd like not to do in life is get caught simultaneously on the wrong side of an argument with Bergman and Baez - especially since I do not work in either theory - but... if you quantize geometry then the generic result seems to be that you lose Lorentz invariance; there are claims that there are hints the full theory might recover apparent Lorentz invariance (ie that the formal symmetry breaking is effectively suppressed), and I suspect that the theory could be fine tuned to avoid breaking Lorentz invariance, but it seems a bit contrived.
That 2+1 QLG meets Doubly Special Relativity in the middle is very suggestive, but not conclusive; then again, that is really what can be said about almost all string theory results - a calculation can be done for non-physical cases which is suggestive of a result that extends to physical cases. Very good, but non-predictive.
I give a slight edge there to QLG, they're closer to a reasonable physical test.
I am ambivalent about the landscape and anthropicism - it seems an ugly way to run a universe, even if we do it in the context of some eternal inflation model; but there has never been any guarantee that the real universe is aesthetically appealing to me.
But, I keep coming back to the GR analog - there we got a unique, no free parameter theory, by induction from broad general principles - the class of metric theories that could be good effective theories of gravity with a Newtonian limit is much larger, but we came to them later as toy models.
I suspect with string theory we have the other end of the stick.
I also suspect that the quantized geometry approach may have grabbed a side branch of the same stick - I keep getting the sense that if, for example, we looked at propagation in QLG with string-like restrictions - basically putting tension or a causal link between vertices, with either open or closed topologies - and then used an action principle to suppress non-locality, then we might have an interesting theory.
It would be fun to explore.
I don't know that it is fair to say that QLG should be less ambiguous in the results of calculations or make harder predictions than string theory...
Not to be too peevish, but what part of "we're all in the same boat" wasn't clear?
(technical response, maybe, in the morning -- g'nite)
The popularly perceived benefits of string over LQG are numerous and exciting: extra dimensions, supersymmetry, branes, a massive landscape of 10^500 solutions which includes every conceivable universe in the multiverse (and is therefore incapable of being falsified), plus a great direct connection to the twilight zone and science fiction films. It is beautiful and true, because you can't disprove it. In law, all are innocent until proved guilty.
Hey, it was Moshe who started the "ambiguous" thing.
Your "same boat" comment was like three comments ago, which is far too far back to remember at midnight - I was probably reacting to the "resembling prediction"; if you squint hard enough a lot of things resemble predictions...
Speaking of "sociology". . . . I'm starting to wonder if this "10^500" number isn't spreading like a meme. Quick, now, how many people here can tell why it's 10^500 and not 10^250 or 10^1000? (I can't.) It's just a nice round figure, said because other people say it. Heck, it may even be "right", in that it accurately reflects the best current understanding, but it spreads the same way that creationists spread that line about evolution being like a tornado making a 747 out of a junkyard, or the way we keep hearing that we only use 10% of our brains.
Blake,
Actually Linde now seems to be favoring 10^1000 in his public pronouncements (see his recent press release). The 10^500 is pure convention, based on what some early landscape exponents chose to adopt as an order of magnitude for the order of magnitude.
The actual number is probably infinity. There are known constructions that lead to an infinity of solutions (together with hopes that one can somehow restrict attention to a finite, but huge, number of these as "physically sensible").
There is a very precise technical point about the ambiguity thing, but I really don't intend to be drawn into the tiresome string vs. loop faux debate, so perhaps some other time.
Bob, I also read the K+T paper when it came out and found it very contrived.
Steinn, re "the predictions of vanilla CDM are wrong in detail",
Let us hope that strawberry CDM or chocolate CDM emerges soon.
Enough about MOND.
In a more serious vein, it does seem that the social dynamics of physics and astrophysics (at least) lead to a dominant paradigm that sometimes hangs on long after it is clearly in trouble, if not untenable. Usually the dominant paradigm accumulates "epicycles" during this phase before it finally collapses. Examples involving gamma ray bursts (GRBs) are discussed below:
In my view, the internal shock model of GRBs is looking to be getting into deeper and deeper water. (Even forgetting about the elephant in the room, explaining the central engine.) But it is still the dominant paradigm.
By the way, short GRB 070201 was possibly (probably?) in Andromeda. It's probably an SGR. If it were a classical short GRB, and hence perhaps a pair of merging compact objects, LIGO should have seen it easily. Furthermore, LIGO was operating in triple coincidence mode at the time, according to their logs. I was starting to get excited before I realized it was probably an SGR; SGRs emit much weaker gravitational waves. I think we have been expecting an SGR from Andromeda. Perhaps it is even overdue. This can be related to the dominant-paradigm sociology discussion, which is the main point of this post and this thread, by the fact that the dominant paradigm for GRBs was an SGR-like model up until about a decade ago. This paradigm held on for many years after Paczynski had shown it was impossible for this to be correct in the majority of cases.
Peter Woit's insightful comment: "The actual number is probably infinity. There are known constructions that lead to an infinity of solutions (together with hopes that one can somehow restrict attention to a finite, but huge, number of these as 'physically sensible')." is fascinating meta-theory.
In a naive sense, there are several different models of the differences between individual universes in a metaverse, i.e. the topology of the Landscape.
"The American philosopher Charles Sanders Peirce somewhere remarked that unfortunately universes are not as plentiful as blackberries. One of the most astonishing of recent trends in science is that many top physicists and cosmologists now defend the wild notion that not only are universes as common as blackberries, but even more common.... at every instant when a quantum measurement is made that has more than one possible outcome, the number specified by what is called the Schrodinger equation, the universe splits into two or more universes, each corresponding to a possible future. Everything that can happen at each juncture happens. Time is no longer linear. It is a rapidly branching tree."
"Obviously the number of separate universes increases at a prodigious rate. If all these countless billions of parallel universes are taken as no more than abstract mathematical entities - worlds that could have formed but didn't - then the only 'real' world is the one we are in. Those holding what I call the realist view actually believe that the endlessly sprouting new universes are 'out there,' in some sort of vast super-space-time, just as 'real' as the universe we know! Of course at every instant a split occurs, each of us becomes one or more close duplicates, each traveling a new universe. We have no awareness of this happening because the many universes are not causally connected. We simply travel along the endless branches of time's monstrous tree in a series of universes, never aware that billions upon billions of our replicas are springing into existence somewhere out there."
"When you come to a fork in the road," Yogi Berra once said, "take it...." In an article on "Quantum Mechanics and Reality" (in Physics Today, September 1970), DeWitt wrote with vast understatement about his first reaction to Everett's thesis: "I still recall vividly the shock I experienced on first encountering the multiworld concept. The idea of 10^100+ slightly imperfect copies of oneself all constantly splitting into further copies, which ultimately become unrecognizable, is not easy to reconcile with common sense. This is schizophrenia with a vengeance!" [Martin Gardner]
OK. Now for the technical response.
As I understand it, LQG -- by which I refer to canonical LQG, not spin foam LQG -- should not be thought of as a latticization of spacetime. The quantized geometry is instead a derived property of the model. One works with connections and holonomies. A vector is defined by embedding certain graphs into space. In order to get a Hilbert space, one defines an inner product by making all these vectors orthogonal. You end up with a nonseparable Hilbert space and discontinuous representations of operators on this Hilbert space. Nonetheless, spacetime was never quantized at any point.
What you can do at this point is define an area operator and see that it has discrete eigenvalues. I believe you can do something similar for volume. However, as I understand it, there is no such thing for distance; it does not appear to be quantized. All this is my understanding as of a few years ago, the last time I looked into this stuff -- if I've misstated something, I hope someone will correct me. Regardless, I think that this shows that it is not at all obvious that Lorentz violation should be a generic prediction of LQG.
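For reference, the discrete area spectrum referred to here is usually quoted as

$$A = 8\pi \gamma\, \ell_{\rm Pl}^2 \sum_i \sqrt{j_i (j_i + 1)},$$

where the sum runs over the spin-network edges crossing the surface, the $j_i$ are half-integer labels, and $\gamma$ is the free Barbero-Immirzi parameter. The discreteness comes out of the quantization rather than being a lattice put in by hand, which is the point being made above.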
As I stated earlier, I do not know of a theory that takes as its input a quantization of geometry. Lattice models (CDT is the only one I know of that doesn't end up with some horribly scrunched up monstrosity) should probably be thought of as analogous to lattice gauge theory, where one sees that Lorentz invariance is recovered in the continuum limit. I've seen various handwavy statements for other directions, and that is what I meant by gedankentheory -- take Wolfram's ramblings, for example.
Hi Steinn,
you know, over the last 6 months I've read quite a lot of reviews of that book. Some worse, some better, some completely off the hook. But I like yours the best. I totally agree on "The system is flawed, and possibly broken, but the fix is not as simple as Smolin suggests."
Just curious, when you write "but we missed the deep guiding principle which tells us how to select the true theory (such as it is), and where the exact theory came from and leads to" - do you have anything specific in mind? I share this perception, but I haven't met it often.
Best,
B.
Given the 10^1000 figure, if 1/10^120 of the vacuum states have the correct lambda and 1/10^19 match a given one of the ~20 free SM parameters to 15 significant figures, there would still be 10^500 distinct string vacua answering to the description of our universe.
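The arithmetic behind that estimate, taking the quoted fractions at face value:

$$10^{1000} \times 10^{-120} \times \left(10^{-19}\right)^{20} = 10^{1000 - 120 - 380} = 10^{500}.$$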
Even with the anthropic principle, this seems a bit excessive.
If Aaron Bergman is correct that in canonical LQG "distance... does not appear to be quantized", then how can this be reconciled as a physical theory with the Planck length? If area is quantized, does that relate in some way to the holographic models?
This is not my area of expertise. I know what a Hilbert space is. I am good friends with people who do lattice gauge theory, and have a second-hand notion of that; I did know Wolfram back when he was at Caltech and at conferences since then.
It was my impression, from sitting in on some seminars, that there is a lot of work currently on "quantization of geometry" - but the lectures I've seen were typically by mathematicians who admitted not being well-grounded in Physics. At one such seminar, I amused the participants with my impression of what my mentor Feynman might say about Math without Physical Intuition.
Thank you for trying to more formally describe what my Martin Gardner quotation did elegantly but verbally, and from back in the day when a mere "10^100+" was the number being kicked around.
The topology of the landscape could be VERY different if that number were infinite. The topology could be, in the Math sense, pathological, and in some very interesting ways.
By the way, since there are only 26^3 of the 3-letter abbreviations, LQG is ambiguous. It also means "linear-quadratic-Gaussian (LQG)" as in
http://arxiv.org/pdf/math-ph/0702079
Title: Eventum Mechanics of Quantum Trajectories: Continual Measurements, Quantum Predictions and Feedback Control
Authors: V. P. Belavkin
Comments: 39 pages. Review paper
Subj-class: Mathematical Physics
Quantum mechanical systems exhibit an inherently probabilistic nature upon measurement which excludes in principle the singular direct observability continual case. Quantum theory of time continuous measurements and quantum prediction theory, developed by the author on the basis of an independent-increment model for quantum noise and nondemolition causality principle in the 80's, solves this problem allowing continual quantum predictions and reducing many quantum information problems like problems of quantum feedback control to the classical stochastic ones. Using explicit indirect observation models for diffusive and counting measurements we derive quantum filtering (prediction) equations to describe the stochastic evolution of the open quantum system under the continuous partial observation. Working in parallel with classical indeterministic control theory, we show the Markov Bellman equations for optimal feedback control of the a posteriori stochastic quantum states conditioned upon these two kinds of measurements. The resulting filtering and Bellman equation for the diffusive observation is then applied to the explicitly solvable quantum linear-quadratic-Gaussian (LQG) problem which emphasizes many similarities and differences with the corresponding classical nonlinear filtering and control problems and demonstrates microduality between quantum filtering and classical control.
Good post and good comments too. The CDM, now LCDM model has been gathering a lot of epicycles lately. The sociology of groupthink is indeed an obstacle to innovation. GLAST may find AGN formed at very high redshifts, an indicator of "Lorentz invariance." Roger Blandford highlighted Lorentz Invariance in his GLAST talk, which didn't mention "dark energy" at all.
I mostly buy locally grown Michigan apples. They're a bit smaller than the big red waxed ones, but I buy them by weight, which means I get more of them. Does that mean that I should be the one to pick which experiments get funded?
If the task is to decide how to spend research money, clearly, there should be some big experiments, and some cheap ones. One assumes that one only gets a new big experiment every three years. Should the next one be a space based gravitational wave detector? Or should it be a particle smasher? Surely it isn't the case that every researcher has to work on the same thing.
I'm curious about some of the comments here regarding the often-mentioned "10^500 possible string theories". Someone mentioned that it was really just conventional, and someone else said the real number might be infinite. I'm prepared to believe either (or both) of those comments, but I have trouble reconciling them with Smolin's account. He says
"A question we CAN answer is how many string theories pass these [reasonableness] tests... The answer depends on what value of cosmological constant we want... If we want a positive value, there are a finite number [of theories]; at present there is evidence for about 10^500 or so such theories."
So, not only does he suggest that this number (10^500) is supported by evidence (and hence not entirely conventional), but he also specifically states that the number is FINITE for a positive cosmological constant. Maybe the comments above were referring to a larger class of theories?
But here's what really puzzles me about Smolin's summary: He says, in full,
"If we want to get a negative or zero cosmological constant, there are an INFINITE number of distinct theories. If we want the theory to give a positive value for the cosmological constant, so as to agree with observation, there is a finite number..."
I'm not bothered by the two different levels of the word "theory" in that passage (well, maybe a little), but I'm surprised that there are an infinite number of theories compatible with a negative or zero cosmological constant. The whole thrust of the narrative was to say that things were going along fine until 1998, when the accelerating expansion was detected; this required a positive cosmological constant, which then led to the landscape of 10^500 possible universes, thereby forfeiting all predictive power. But if there were already an INFINITE number of viable theories compatible with negative or zero cosmological constant, why wasn't the "landscape" already an issue prior to 1998?
I completely agree with the apple policy. Unfortunately the actual policy is just the opposite. Look, blue, small, average, commie, politically correct, crying, conspiracy theorist, stupid crackpot who has never written a paper that makes any sense and probably never will. Let's take him to the Perimeter Institute, print his book, and allow him to decide where science goes.
Smolin is the ultimate symbol of most things that are so terribly wrong with the Academia.
Blue Apples?
You didn't google before you came up with that one, eh?
I would have thought the Perimeter Institute was a rather admirable market solution to right-thinking people - private funding, aggressive recruitment of scientists, well paid and lots of resources. Shouldn't it be a better indicator than stodgy old Ivy League universities of what is hot?
Hi, I just saw this thread which has lots of interesting comments, let me first, however, insist on accuracy: Aaron states "As I recall, Smolin lets this implication lie around, but never actually states it. Lorentz violation is not a prediction of LQG..."
Please look at the last paragraph on p 237, which closes chapter 14. The text is, I hope all will agree, unambiguous and accurate. I would like, not for the first time, to ask Aaron and others to please check the text of a book before making false statements about what it says. Here is the text; after discussing what the implications would be for string theory of an experimental result that contradicted Lorentz invariance, I say:
"What about other approaches to quantum gravity? Have any predicted a breakdown of special relativity? In a background-independent theory, the situation is very different, because the geometry of spacetime is not specified by choosing the background. That geometry must emerge as a consequence of solving the theory. A background-independent approach to quantum gravity must make a genuine prediction about the symmetry of space and time. As I discussed earlier, if the world had two dimensions of space, we know the answer. There is no freedom; the calculations show that particles behave according to DSR. Might the same be true in the real world, with three dimensions of space? My intuition is that it would, and we have results in loop quantum gravity that provide evidence, but not yet proof, for this idea. My fondest hope is that this question can be settled quickly, before the observations tell us what is true. It would be wonderful to get a real prediction out of a quantum theory of gravity and then have it shown to be false by an unambiguous observation. The only thing better would be if experiment confirmed the prediction."
This makes it very clear, I believe, that no prediction is being claimed for the time being, but that work is in progress that I hope will lead to that.
To Amos: Your point is well taken. The results showing an infinite number of string vacua for zero or negative cc were from the last year, after the positive cc results. But as I discuss on p 124 of TTWP, the landscape was an issue since 1986, known to anyone who read the conclusion of a paper by Strominger on "Superstrings with Torsion", where he showed that there were vastly many more solutions to 4d string compactifications than the CY manifolds. Several of us were discussing this in the late 80s and early 90s. My own work on the landscape (summarized in my first book) was published in 1992. The question is why it took most string theorists so long to recognize the problem when it was evident from 1986.
In TTWP, Smolin mentions that in 3-dimensional QG there can be no "soccer balls"--no composite objects with m > m_pl. Is this connected to the mathematical tractability of 3d QG in comparison to the 4d varieties?
And yet, Lee, people seem to come away from your work with the impression that falsifiability for LQG is right around the corner. Funny that.