With last week's announcement of new constraints on dark energy models, the blogosphere chatter on dark energy is inching up; in particular, The Babe in the Universe is irate and is throwing some harsh comments around.
So... let me give a quick perspective on "dark energy" from the astrophysics side:
When analysing cosmological data on large enough scales, the default model for most scientists is that the dynamics follow General Relativity, and that the universe is well approximated as homogeneous and isotropic - that is, it looks the same on average from any spot and when looking in any direction.
In General Relativity, there is a simple solution for all of space-time, the FRW metric. In its basic form it describes the evolution of the universe as a whole, containing matter and radiation.
An interesting feature of the solution is that the general static solution is unstable - static solutions tend to diverge into either expanding or collapsing solutions when perturbed.
This led Einstein to propose an additional term, the "cosmological constant", which acts as a negative-pressure term and was introduced ad hoc in order to provide stable static solutions.
Shortly afterwards it was discovered observationally that the universe was actually expanding, and the cosmological constant term was discarded as a mistake - only of relevance as a mathematical curiosity, a "what if".
As data got better, the universe seemed to fit the FRW model very well, with minor discrepancies, one of which was solved by introducing "inflation": a period of very rapid expansion at early times, driven by a term with interesting parallels to the cosmological constant, but one which was not constant and presumably became zero at some very early time.
Minor puzzles remained. One was that some aspects of the observations, and theoretical prejudices, required that the universe be "flat" - that it have just enough mass-energy to exactly offset gravitational attraction on average. This is a special solution of the FRW model, but one that is in a sense unstable: if the universe is close to flat but deviates slightly from the exact solution, it tends to move rapidly into strongly expanding or collapsing regimes, which would be observationally obvious - unless we lived at a special time, just as the universe was no longer exactly flat but not yet far from flat.
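To see how touchy this flatness instability is, here is a toy sketch (my own illustration, not from the post, using the standard result that in a matter-dominated FRW universe the fractional deviation from flatness, 1/Omega - 1, grows in proportion to the scale factor a):

```python
# Toy illustration of the flatness instability: start the density
# parameter Omega just 1% away from flat at an early epoch a0, and watch
# the deviation grow as the universe expands (matter domination assumed,
# where (1/Omega - 1) scales linearly with the scale factor a).

def omega(a, omega_early=0.99, a_early=1e-4):
    """Density parameter at scale factor a, given Omega(a_early)."""
    dev_early = 1.0 / omega_early - 1.0   # deviation from flat at a_early
    dev = dev_early * (a / a_early)       # deviation grows with a
    return 1.0 / (1.0 + dev)

for a in (1e-4, 1e-2, 1.0):
    print(a, omega(a))
# A 1% deviation early on becomes an almost empty (Omega << 1) universe
# by a = 1, which is why near-flatness today demands near-exact flatness
# at early times.
```

The constants here (a 1% initial deviation, an early epoch of a = 1e-4) are arbitrary choices for illustration.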
The other problem is that the inventory of mass, including dark matter, did not add up to enough mass to make the universe flat. This was a problem, but one where systematic uncertainties were large enough that people didn't really worry about it too much, most of the time. Cosmologists were acutely conscious of it though, and in retrospect some who insisted this was something to look into were right.
Finally, the age of the universe didn't quite add up: the age predicted by a flat FRW model was a little bit too low, in particular lower than the model ages for the oldest stars observed. So people worried about the accuracy of stellar evolution models.
This was all resolved, in a particular way, when observations of high-redshift type Ia supernovae were made, and they showed that the distance (the "luminosity distance", which is a particular metric measure) to the supernovae was too large compared to standard FRW solutions.
IF type Ia supernovae were well calibrated standard candles (their luminosity varies with time, and there are different subtypes with different absolute peak luminosities, but these are calibrated spectroscopically), then the simplest resolution of the distance problem was that the metric in fact was "more stretched" than predicted, which is most simply done by reintroducing the cosmological constant with a particular value (about 0.7 in relative units). This provides for an expanding universe, one in which the expansion initially decelerated under gravity but is now accelerating under the growing dominance of the cosmological constant.
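A minimal numerical sketch (not the actual supernova analysis, just the textbook flat-FRW formula) shows the effect: the luminosity distance is D_L = (1+z) * (c/H0) * Integral_0^z dz'/E(z'), with E(z) = sqrt(Om*(1+z)^3 + OL), and adding a cosmological constant makes D_L larger at fixed z - i.e. the supernovae look fainter:

```python
# Compare luminosity distances at z = 0.5 for a flat matter-only
# universe versus a flat universe with a 0.7 cosmological constant term.
from math import sqrt

C_OVER_H0 = 3000.0  # Hubble distance in Mpc/h (c/H0 for H0 = 100h km/s/Mpc)

def lum_distance(z, om, ol, steps=10000):
    """Luminosity distance in Mpc/h for a flat FRW universe (om + ol = 1),
    via midpoint-rule integration of dz'/E(z')."""
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        zi = (i + 0.5) * dz
        integral += dz / sqrt(om * (1 + zi) ** 3 + ol)
    return (1 + z) * C_OVER_H0 * integral

d_matter = lum_distance(0.5, 1.0, 0.0)   # Einstein-de Sitter, no Lambda
d_lambda = lum_distance(0.5, 0.3, 0.7)   # concordance-like model
print(d_matter, d_lambda)  # the Lambda model gives the larger distance
```

The 0.3/0.7 split is the now-standard concordance value; the point is only the sign of the difference, which is what the supernova data pinned down.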
There are criticisms of the type Ia calibration, but most of the obvious errors that might be made make the problem worse - they'd make the Ia intrinsically more luminous and therefore further away than they appear. Proposed schemes for making them fainter are quite contrived and mostly thrown out as tests of the theory, not as serious alternatives to the basic concept.
Now, the cosmological constant is a particular instance of a broader class of negative-pressure stuff; in particular it has "maximal negative pressure", W=-1 in the vernacular (W can be more negative than -1, but that makes for rather unpleasant physics, and such solutions are mostly ruled out observationally), and it is constant - dW/dz=0, where "z" is the cosmological redshift, a measure of distance (larger z is further away).
So... physicists would like something more fun and dynamic than a "constant", so there are lots of models proposing alternative "dark energies" - stuff with -1 < W < 0, and in general dW/dz not equal to zero. So a lot of proposed measurements of dark energy start off by looking for non-zero dW/dz, or at least putting an upper bound on it. If we find it to be non-zero, then there is some immediate physics to be done and hypotheses to be ruled out.
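The distinction is easy to make concrete. For a constant equation of state W, the standard scaling is that the dark-energy density evolves as rho(z) = rho0 * (1+z)^(3*(1+W)): a toy illustration, not tied to any particular model in the post:

```python
# How dark-energy density evolves with redshift for a constant W.
# W = -1 (cosmological constant): density never changes.
# W > -1 ("quintessence-like"): density was higher in the past.

def de_density(z, w, rho0=1.0):
    """Dark-energy density at redshift z, relative to today, for constant w."""
    return rho0 * (1.0 + z) ** (3.0 * (1.0 + w))

print(de_density(1.0, -1.0))   # 1.0 -- Lambda-like, constant
print(de_density(1.0, -0.8))   # > 1 -- evolving dark energy
```

Measuring dW/dz then amounts to testing whether one constant W fits the density evolution at all redshifts, or whether W itself has to drift.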
This is all in the context of the FRW model! It is a successful model, and modifying it by adding the "W" term is the simplest extension of the model. It is both physically motivated, in that there are reasons to worry about negative-pressure stuff, and it is parametrically testable - measurements can rule out, for example, simple cosmological constant models (dW/dz not zero would rule them out) and some more exotic alternatives (eg if W << -1/2 then some models are ruled out).
Everyone knows that this is not the unique model or approach; there are many other approaches being considered, including different geometries for the universe, different laws of gravity which violate general relativity on large scales, and variable speed of light theories, which also violate general relativity.
Such models are interesting in so far as they are testable - make quantitative predictions distinct from a simple cosmological constant model - and in so far as they are consistent with ALL other physics.
A model which solves the type Ia luminosity distance problem, but which changes the weak coupling constant at redshift > 1, or which shifts silicon emission lines at high redshift in a way inconsistent with other data, cannot be a correct explanation.
Coming up with such models is hard, and people who want them to succeed should not make philosophical objections to the "dark energy" model, which is primarily a parametric phenomenological model based on the FRW equations. Rather, people need to show how their model predicts something different from and better than the naive cosmological constant model.
That is hard work.
This is all in the context of the FRW model! It is a successful model, and modifying it by adding the "W" term is the simplest extension of the model.
Indeed, it's not even much of a modification. It's there from the beginning, as you say, in the form of the cosmological constant. Yeah, nowadays we think about stress-energy tensors with negative-pressure things other than just vacuum energy, but it's been in there and worked with the models all along.
So it's not really a modification of the model so much as looking at some of the parameter space of the model that had previously been ignored.
-Rob
One other point that I would add is that the recent CMB work, like WMAP, shows that the universe really is flat. My first thought when I saw the WMAP year 1 data was, "those type Ia people were right!"
Yeah, the fact that the SNe Ia, CMB, and Cluster work all yield consistent results is a big part of the reason so many astronomers are willing to go along with the standard picture.
In 1998-1999, lots of astronomers were dubious about Dark Energy. Then we got Boomerang and Maxima, and we already sort of had the cluster stuff... it all started to hang together.
The SN Ia doesn't really say much about the geometry of the Universe, even now. There are big error bars in the Ω_M+Ω_Λ direction. But any two of (SNIa, clusters, CMB) predicts the third, and they all get it right.
I kinda fell into observational cosmology research when a professor here had a little extra money for a "junior scientist". Now I'm a graduate student, still working on the project, and I'm just now catching up on my understanding of the science, so articles like this are very helpful. Incidentally, the Maxima cryostat is not only still sitting in our lab, but is obnoxiously in my way right now.
The comments from you, Rob Knop and others are most helpful. The weak-coupling constant is usually considered a multiple of the fine-structure constant. Since in this model the product hc is constant, emission lines should not be affected.
Concerning WMAP data, Roger Penrose wrote "In my opinion, we must be exceedingly cautious about claims of this kind--even if seemingly supported by high-quality experimental results. They are frequently analysed from the perspective of some currently fashionable theory."
As Rob commented in his blog, during 2002-2003 any proposal involving dark energy was "surfing the wave" of fashion. The 2003 announcement convinced a lot of smart people that DE existed, leading to the mess we are in now. If you look at the low-l data, which the WMAP team blithely ignores, the universe is not flat. Data also indicates there is curvature of radius R = ct.
Theory predicts a changing speed of light, which could be found in the lab someday. Theory also predicts that baryonic matter is 4.507034%, which provides another indication of curvature.
As Rob commented in his blog, during 2002-2003 any proposal involving dark energy was "surfing the wave" of fashion.
You've used this quote before, and when you use it you imply it to mean something different from what I really meant.
Dark Energy was -- and still is! -- surfing the wave of fashion. With supernovae, less so now, simply because any committee evaluating it will say, "waidaminnit, ESSENCE and SNLS are huge surveys netting gigantic numbers of SNe Ia at z<=1, so what are you going to add?" That's not that the science is out of fashion, it's just that it's harder for smaller teams to fully contribute. At z>1, Adam Riess is the dominant researcher together with Lou Strolger, and they *are* a small team. It's all still in fashion.
The point I was trying to make is: that kind of stuff is currently very fashionable and sexy. Gamma Ray Bursters are similar. Young stars and disks and planetary formation are similar. Other things are less so. This does not mean that anybody thinks that the other things are not good science, or that they aren't worth doing-- it's just less in the consciousness of all scientists that this is a "big topic" right now, so if it's out of your field it may sound random.
We're not in any kind of mess right now. The accelerating Universe and dark energy explanation was pretty solid before 2003; indeed, it was already fairly solid even before WMAP. (WMAP often gets the credit for discovering the Universe was flat, but BOOMERANG and MAXIMA did it a year or two before WMAP announced it. WMAP added precision, whole-sky coverage, and a lot of other things.)
It's a vast, vast, vast misrepresentation of what's going on to say that scientists have all bought into dark energy because that's what everybody else does and everybody is afraid or unwilling to challenge the paradigm. We all buy into it because it's working very well with a whole lot of different measurements.
There is no serious low-multipole problem with WMAP. There are outstanding questions that may lead somewhere interesting, but cosmic variance is very large at low multipoles.
Re: your GM=tc^3 equation and how it "fits" the supernova data: it's very curious that you seem to think you could be doing anything as simple as what you do, considering that the Chandrasekhar mass is a balance between gravity and electron degeneracy pressure for relativistic electrons. If you change c without changing G, the physics of those explosions is going to change in more than a simple "rescaling the energy of photons" way... so any comparison of supernova data with your GM=tc^3 model the way you've done it is utterly meaningless.
-Rob
Hello Dr. Rob: Your blog and posts are most appreciated. I look at fashion magazines and know that DE is the fashion. Not all scientists have bought into it; not completely out of fear, but because they have not been presented with alternatives. Whether there is a low-multipole problem is a matter of friendly disagreement, though Krauss and Magueijo claim that there is.
The Chandrasekhar mass is dependent upon G, hc, and proton mass m_p. Since all three of these quantities (including product hc) are considered constant, the physics of SN explosions are not seriously changed.
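[The dependence under discussion is the standard textbook scaling, M_Ch ~ (hbar*c/G)^(3/2) / (mu_e*m_H)^2, which one can check numerically - a sketch of my own, not from either commenter, with the usual Lane-Emden prefactor of about 2.018 and mu_e = 2 for a carbon/oxygen white dwarf:]

```python
# Rough numerical check of the Chandrasekhar mass scaling:
# M_Ch = (omega * sqrt(3*pi) / 2) * (hbar*c / G)**1.5 / (mu_e * m_H)**2.
# Note it depends on G and on the *product* hbar*c, so a model that
# holds both fixed leaves M_Ch unchanged.
from math import pi, sqrt

HBAR = 1.0546e-34   # J s
C    = 2.998e8      # m/s
G    = 6.674e-11    # m^3 kg^-1 s^-2
M_H  = 1.6726e-27   # kg, hydrogen (proton) mass
MSUN = 1.989e30     # kg

def chandrasekhar_mass(mu_e=2.0, omega=2.018):
    """Chandrasekhar mass in kg; omega is the Lane-Emden prefactor."""
    return (omega * sqrt(3 * pi) / 2) * (HBAR * C / G) ** 1.5 / (mu_e * M_H) ** 2

print(chandrasekhar_mass() / MSUN)  # roughly 1.4 solar masses
```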
Again I appreciate all your writings. Hopefully my posts don't sound too defensive, but it is a welcome change that people are interested.
Sometime between the COBE results coming out and the dark matter crisis in clusters being firmed up, it was clear something was wrong.
A lot of data pointed to a flat universe with Omega=1, but the matter inventory wouldn't add up to more than 0.3, and there were persistent problems with both the age of the universe vs its constituents and details of structure formation in a pure cold dark matter universe.
A cosmological constant did in fact solve that, and I first heard it floated as something to seriously reconsider around 1995 (I was conservative and objected to it as being redundant and premature to add as a model term - a bad mistake). As so often, data drove theory rather than vice versa.
A cosmological constant is still unsatisfactory, for many reasons too extensive to fit in this comment box, but it works; and if dW/dz=0 by measurement, then that is the default null hypothesis to be tested.