A reader emailed me with a few questions regarding How to Teach Physics to Your Dog, one of which is too good not to turn into a blog post:

What is a photon from an experimental perspective?... Could you perhaps provide me with a reference that discusses some experiments and these definitional issues?

The short form of the experimental answer is "A photon is the smallest amount of light that will cause a detector to 'click.'" (For some reason, hypothetical light detector technology has never really advanced past the Geiger counter stage-- even though it's all electrical pulses these days, we still talk about light detectors as if they made audible sounds.)

That seems more like a statement of limited detector efficiency, but in fact you can show that light really does come in discrete "packets" of energy. Even if the energy of a single photon is significantly greater than the energy required to trigger a "click" in your detector, for a given frequency of light, you will only ever find the energy arriving in single-photon amounts.

How do we know this is the case? There are three great historical experiments that mark the key steps on the way to full acceptance of the photon model: the photoelectric effect, the Compton effect, and photon anti-bunching.

The **photoelectric effect**, as you might guess from the name, involves using light to knock electrons loose from some substance (essentially every photon detector made uses some form of the photoelectric effect). This was discovered in passing by Heinrich Hertz during his experiments to demonstrate that light is an electromagnetic wave, and posed quite a problem for the wave model of light. You can make a number of predictions from the wave model, and only one of them is borne out in the experiments (as described in the pre-ScienceBlogs version of this blog).

One of Einstein's 1905 papers was "On a Heuristic Viewpoint Concerning the Production and Transformation of Light," which proposed a simple model to explain the photoelectric effect using quantized light. A single light-quantum comes along, hits an electron inside a metal, and provides all the energy needed to knock it loose. This model is a radical change in the way we look at light-- Einstein referred to it as the only truly revolutionary thing he did in his career-- but it reproduces all of the observed results. It even stands up to hostile investigation-- the American physicist Robert Millikan set out to disprove Einstein's model, and wound up confirming it in every detail (which didn't prevent him from getting a little pissy in his paper). Einstein and Millikan each got Nobel Prizes out of the deal.
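Einstein's model is simple enough to put into a few lines of code. A minimal sketch, in which the work function value (roughly that of sodium) is just an illustrative number:

```python
# Einstein's photoelectric relation: K_max = h*f - phi.
# The work function (2.28 eV, roughly sodium) is an illustrative value.
H_EV_S = 4.135667e-15  # Planck's constant in eV*s
C = 2.998e8            # speed of light in m/s

def max_kinetic_energy_eV(wavelength_nm, work_function_eV=2.28):
    """Maximum kinetic energy of an ejected electron, in eV.
    Returns 0.0 when the photon energy is below the work function:
    no emission at all, no matter how intense the light."""
    photon_energy = H_EV_S * C / (wavelength_nm * 1e-9)
    return max(photon_energy - work_function_eV, 0.0)

# Blue light (400 nm, ~3.1 eV photons) knocks electrons loose; red light
# (700 nm, ~1.8 eV) does not, which is exactly the frequency threshold
# the wave model fails to predict.
print(max_kinetic_energy_eV(400))
print(max_kinetic_energy_eV(700))
```

The key qualitative point is the threshold: below it, turning up the intensity gets you more light but no photoelectrons.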

The photoelectric effect by itself wasn't enough, though, and it turns out that you can reproduce all the important aspects using a classical model of light with quantized matter (though this wasn't worked out in detail until the 1960's). The experiment that really got people to take the photon picture seriously was the **Compton effect**, an experiment on the scattering of x-rays by electrons, carried out by the American physicist Arthur Holly Compton in the early 1920's.

The Compton effect involves what is essentially a collision between two particles: an electron that is more or less at rest, and a photon of light. In the photon model of light, each quantum of light carries a small amount of energy, and thus necessarily a small amount of momentum, both of which depend on the wavelength of the light. When the photon hits the electron, it will transfer some of its energy and momentum to the electron, which means that after the collision, the photon leaving the area has a different wavelength than the photon that entered. The change in the wavelength is related to the angle between the direction of the exiting photon and the entering photon in a very simple way.
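That "very simple way" is Compton's formula: the wavelength shift is (h/m_e c)(1 - cos theta), depending only on the scattering angle. A quick sketch:

```python
import math

# Compton's relation: the wavelength shift depends only on the scattering
# angle, not on the target material or the intensity of the x-rays:
#     delta_lambda = (h / (m_e * c)) * (1 - cos(theta))
COMPTON_WAVELENGTH_PM = 2.426  # h/(m_e c) in picometers

def compton_shift_pm(theta_deg):
    """Wavelength shift (in picometers) for x-rays scattered at theta degrees."""
    return COMPTON_WAVELENGTH_PM * (1 - math.cos(math.radians(theta_deg)))

# The shift grows from zero at forward scattering to twice the Compton
# wavelength for backscattering:
for angle in (0, 90, 180):
    print(angle, round(compton_shift_pm(angle), 3))
```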

Compton fired x-rays of known energy at a metal target (which contains lots and lots of electrons, whose energies are much smaller than the energy of the photon), and looked at the energy of x-rays leaving the target at different angles. Though he went through the usual series of misinterpretations of his data, he eventually found that his observations agreed perfectly with the predictions of the photon model.

This convinced most people that the notion of light as a particle needed to be taken seriously, but it didn't completely seal the deal. I have seen it asserted that the Compton effect can also be explained using a wave model of light, though I've never read a good explanation of how that works. The experiment generally agreed to absolutely nail the existence of photons, though, is the **photon anti-bunching experiment by Kimble, Dagenais, and Mandel** in 1977 (more than 70 years after Einstein's paper explaining the photoelectric effect in terms of photons). Anti-bunching, as you might expect from the name, involves showing that photons are discrete objects that, under the right circumstances, will be spaced out in time. The goal of the experiment is to show that photons of light are emitted one at a time, and that the detector "clicks" we see really represent single photons.

The way they did this was to take a very weak beam of sodium atoms, and illuminate them with light as they passed near a detector. The light would excite some of the atoms to a higher-energy state, and they would drop back down to the ground state a few nanoseconds later, emitting a photon. These photons were then picked up by the detector.

The beam of atoms was chosen to be very weak, so that there was generally only a single atom present in front of the detector at any time. In this case, you expect to see a particular pattern in the arrival time of the photons at the detector-- specifically, you expect to see a delay of a few nanoseconds between the detection of one photon and the detection of another, due to the fact that the single atom will need to be excited again, and decay again in order to produce a second photon that can be detected.
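A toy Monte Carlo makes the expected pattern concrete; the excitation and decay times below are purely illustrative numbers, not the values from the actual experiment:

```python
import random

# Toy model of the single-atom source: after each emitted photon the atom
# must be re-excited (mean time T_EXC) and then decay (mean lifetime
# T_DECAY) before it can emit again, so a second photon can never follow
# immediately. Timescales in nanoseconds, chosen purely for illustration.
random.seed(1)
T_EXC, T_DECAY = 5.0, 16.0

t, arrivals = 0.0, []
for _ in range(10000):
    t += random.expovariate(1 / T_EXC) + random.expovariate(1 / T_DECAY)
    arrivals.append(t)

gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
frac_short = sum(g < 2.0 for g in gaps) / len(gaps)
# For a random (Poissonian) stream at the same average rate, the fraction
# of gaps under 2 ns would be about 1 - exp(-2/21), roughly 0.09; here it
# is several times smaller, because short gaps are suppressed.
print(round(frac_short, 3))
```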

This anti-bunching effect is something that cannot be explained using a classical picture of light as a wave. Using a wave model, in which light is emitted as a continuous sinusoidal wave, you would expect some probability of a detector "click" even at very short times. In fact, you can easily show that any wave-like source of light must have a probability of recording a second click immediately after the first one that is *at least* as big as the probability of recording a second click after a long delay. Most of the time, the probability is actually higher at short times, not lower. A decrease in the probability of a second detection at short times is something that can only be explained by the photon model.
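That claim can be checked numerically: for any classical (non-negative) intensity record, the normalized correlation g2(tau) satisfies g2(0) >= g2(tau). A sketch with an arbitrary fluctuating intensity:

```python
import random

# For any classical, non-negative intensity record I(t), the normalized
# correlation g2(tau) = <I(t)I(t+tau)> / <I>^2 obeys g2(0) >= g2(tau),
# a consequence of the Cauchy-Schwarz inequality. The fluctuating
# intensity below is an arbitrary illustration, not a model of any
# particular source.
def g2(intensity, tau):
    n = len(intensity) - tau
    corr = sum(intensity[t] * intensity[t + tau] for t in range(n)) / n
    mean = sum(intensity) / len(intensity)
    return corr / mean**2

random.seed(0)
I = [abs(random.gauss(1.0, 0.5)) for _ in range(20000)]  # always >= 0

print(g2(I, 0) >= g2(I, 10))  # True: classical light can bunch, never anti-bunch
```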

Kimble, Dagenais, and Mandel measured the time required to see a second photon after one photon was detected, and found exactly the behavior they expected: they saw almost no photons within the first few nanoseconds after the detection of one photon, and the few counts they did see could be attributed to the tiny fraction of cases where they had two or more atoms in front of the detector at the same time.

These antibunching experiments have been repeated many times with different systems, all with the same result. A particularly nice version was done by Alain Aspect in the early 1980's using an atomic cascade source to produce single photons, which went on to also demonstrate the interference of single photons. Later groups have used nonlinear crystals to produce single photons in huge numbers, allowing all sorts of nifty experiments, to the point where Kiko Galvez at Colgate uses single-photon experiments in undergrad labs.

As for references to this stuff, the best collection of material on the particle nature of light that I have seen is The Quantum Challenge by Greenstein and Zajonc, which goes through the whole question of how we know photons exist in an approachable way without skimping on detail. It's got a bit of math, but nothing too awful. There's also a centenary review article by Anton Zeilinger in Nature (you probably need a subscription to access it) that gives a good but brief summary of a lot of the cool things that have been done with photons in the hundred years since Einstein suggested they were real.


In the photon model of light, each quantum of light carries a small amount of momentum, and thus necessarily a small amount of momentum

I think you wanted the first "momentum" to be "energy", no?

On the other hand, tautologies are pretty sweet.

Where would pair production fall into this? At first sight a wave-model of light wouldn't predict the kinetic energies and momenta of the particles produced.

@1: Sweet tautologies are tautologies that are sweet


I think you wanted the first "momentum" to be "energy", no?

That's what I get for writing blog posts in the waiting room at the car dealer (getting a taillight replaced). It's fixed now.

"It's fixed now."

The post, the taillight, or both? :o)

The "anti-bunching" stuff is cool, I had no idea folks were still testing the photon nature of light in the 70's.

Your explanation is excellent, but I have always wondered if the term 'photons' can be applied to radio waves generated by an electronic oscillator? They appear to be unquantised when seen on an oscilloscope screen. Are quanta only a function of atomic energy level changes and not of electronic oscillators?

Terry

Kiko's undergrad quantum labs are some of my favorite moments in my physics education. Getting to see the violation of the Bell Inequality yourself, and understanding where it comes from, is a truly wonderful experience for an undergrad to have.

If somebody said a photon from the long wave radio part of the spectrum had a wavelength of 10 million meters, what does that mean exactly? I assume that such a photon would arrive at a detector as one "click", as a particle-like thing, rather than waiting for the back-end of the photon (if you know what I mean?) to arrive several seconds later. What is the wavelength really, especially when considering photons of monster wavelength?

Your explanation is excellent, but I have always wondered if the term 'photons' can be applied to radio waves generated by an electronic oscillator? They appear to be unquantised when seen on an oscilloscope screen. Are quanta only a function of atomic energy level changes and not of electronic oscillators?

They're still quantized. They appear not to be for the same reason that light from a laser appears like a continuous wave in interference experiments: the number of photons is so incredibly huge that one photon more or less doesn't register. A cheap red laser pointer, with a power output of a milliwatt or so, sends out 10^15 photons every second. A radio-frequency source, with a frequency seven orders of magnitude smaller (50 MHz, say, instead of the 5x10^14 Hz of red light) will have 10^22 photons/s.
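For concreteness, the flux is just the power divided by the energy per photon; a quick sketch, assuming a 1 mW output for both sources so the comparison is apples to apples:

```python
# Photon flux = power / (h * frequency). Both sources are assumed to run
# at 1 mW purely for the sake of comparison.
H = 6.626e-34  # Planck's constant, J*s

def photons_per_second(power_watts, frequency_hz):
    return power_watts / (H * frequency_hz)

print(f"{photons_per_second(1e-3, 5e14):.1e}")  # red laser pointer: ~3e15/s
print(f"{photons_per_second(1e-3, 5e7):.1e}")   # 50 MHz radio source: ~3e22/s
```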

You can, however, see the effects of this quantization. There have been a number of "cavity QED" experiments done in the microwave region of the spectrum, which clearly show the effects of single photons of light. I wrote one up a couple years ago, and you can find lots more examples from the Haroche group. You need to use superconducting mirrors and cool the whole thing to really low temperatures to eliminate the thermal background, but radio-frequency light is quantized just like everything else.

If somebody said a photon from the long wave radio part of the spectrum had a wavelength of 10 million meters, what does that mean exactly? I assume that such a photon would arrive at a detector as one "click", as a particle-like thing, rather than waiting for the back-end of the photon (if you know what I mean?) to arrive several seconds later. What is the wavelength really, especially when considering photons of monster wavelength?

That's a good question, and a tricky one to answer. I'm not sure it even makes sense to try to assign a position to a photon with resolution better than the wavelength of the associated light. I remember talking to Luis Orozco about this at one point, but not what conclusion he reached. I'll dig around and see if I can find something that will let me give a better answer.

Well, most of the literature on localizing single molecules works from models where you have a camera with an array of pixels that are smaller than the wavelength of the light (albeit not as ridiculous a ratio as a hand-held detector and an ultra-low frequency radio photon), and in these models the point spread function (basically, the profile of the light beam) is treated as giving a probability distribution for photon detection. Detect enough photons and you can infer the center of the beam profile (with precision that depends on noise and how many photons you detect). In that sense, a direction (and hence source) can be assigned to the beam as a whole, while a position can be assigned to each individual detection event.

Of course, all of this work is done by people who are thinking about biophysics and statistics, and a lot of us do not have a significant background in quantum optics. I don't know that the more subtle issues of quantum optics always factor into this work.

My crack at this interesting question, for what it is worth:

I think that speaking of single photons (or any definite number thereof) is an approximation, in which they are completely delocalized objects. In order to discuss quantization of energy, you'd have to single out a frequency (which the experiments you describe do efficiently, by being responsive to only narrow range of frequencies). Otherwise, energy would not be quantized if you allow photons of any frequency. If you know the frequency, you know the momentum, and your wavepacket is delocalized in space (and time). In order to localize photons you'd need to construct wavepackets superposing photons with different frequencies, and you'd need the machinery of QFT to do that correctly. In particular, if you want a wavepacket localized below your central wavelength, you'd need a wide distribution of wavelengths around that central value.
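This trade-off is just the Fourier bandwidth theorem, and it's easy to illustrate numerically; the grid, carrier wavenumber, and pulse widths below are arbitrary choices:

```python
import numpy as np

# Narrowing a pulse in space broadens its spread of wavenumbers: to build
# a wavepacket localized below the central wavelength you need a wide
# distribution of wavelengths. The grid, carrier wavenumber (k0 = 5), and
# pulse widths here are arbitrary illustrative choices.
x = np.linspace(-200, 200, 4096)
k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])

spreads = []
for width in (10.0, 2.0, 0.5):
    pulse = np.exp(-x**2 / (2 * width**2)) * np.cos(5 * x)
    weight = np.abs(np.fft.fft(pulse))**2
    weight /= weight.sum()
    k_mean = (np.abs(k) * weight).sum()
    spreads.append(float(np.sqrt((((np.abs(k) - k_mean)**2) * weight).sum())))
    print(f"pulse width {width:5.1f} -> wavenumber spread {spreads[-1]:.3f}")
# The narrower the pulse, the broader the band of wavenumbers it contains.
```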

If somebody said a photon from the long wave radio part of the spectrum had a wavelength of 10 million meters, what does that mean exactly? I assume that such a photon would arrive at a detector as one "click", as a particle-like thing, rather than waiting for the back-end of the photon (if you know what I mean?) to arrive several seconds later. What is the wavelength really, especially when considering photons of monster wavelength?

That's a good question, and a tricky one to answer.

This was a question I asked pretty much every prof I had in grad school. (Plus I think I asked it here a few times.) The wave theory of light seemed so intuitive to me, and the photon theory just didn't come with a physical picture in my head at all.

The story I eventually told myself, based on my profs' responses, goes something like this.

You can describe an electric field in lots of different ways. As a sum of plane waves, for instance, but also in terms of spherical harmonics (I believe) and even in terms of localized pulses. Pick a basis in which to describe your electric field. Let's say it's plane waves in this case. Now consider one "mode" -- a plane wave with frequency "f" let's say. In a classical theory of light, that plane wave could have whatever amplitude it wanted and whatever phase it wanted. In quantum mechanics, though, it doesn't work that way. There is an uncertainty relationship that says that we cannot know both the amplitude and the phase. So what you have to picture now is not a sine wave but a blurry sine wave, with the amplitude and the phase both uncertain.

Now we ought to be able to have states of definite energy, but in classical E&M the energy depends on both the frequency (which we know for the mode we have chosen) and the amplitude (which we don't.) What does a state of definite energy look like? Well, it turns my blurry sine wave into just a *blur*. The phase is now completely uncertain, which means that I don't know where the minima and maxima of my field are at all (though I still know they repeat with frequency f, wherever they are...). I don't really know the amplitude either (no more than I know the exact position of a quantum harmonic oscillator which is in any energy eigenstate) but I know a wavefunction to describe it, and if I calculate the energy using that wavefunction, I get a definite answer. This is a photon number state: a state of a particular mode of light with uncertain amplitude and phase, but definite energy.

Does the one photon state have a location? Not if you have chosen to describe your field in terms of plane waves. Plane waves don't have locations. The one photon state in this case is a state of a plane wave and so neither does it.

However, had you chosen to describe your field in terms of some other, more localizable modes, you would find the same observations true of the amplitudes and phases of those modes. You can have a single photon state for a pulse just as well as you can for a plane wave (although it makes the most sense if that pulse is a basis function in a good orthogonal basis for describing an arbitrary field). That single-photon pulse would not (unlike the single-photon plane wave) have a definite wavelength, but it would have a better defined position. Its electric field amplitude and phase would again be described by wavefunctions which give definite values for the energy, but not for the amplitude and phase themselves.

So -- does a photon have a location? Does it always have a definite wavelength? That depends on what brand of photon you've got. It never has both, anyway.

Physicists tend to casually assume their photons have locations when it is convenient to picture them as having locations, and as being quantized plane waves the rest of the time, and don't distinguish very well when they switch between those pictures.

(If I'm wrong, I almost don't want to be told, because that understanding was very, very hard-won.)

As you noted, Compton didn't just look at the momentum transfer in a billiard-ball like collision, he looked at the angular distribution predicted by the two competing models. I would assume an alternative wave approach would go beyond the dipole model Compton used.

However, the challenge anyone faces when doing this is not just to explain the rather crude data Compton had, but data for a wide range of energies - some of exceptional quality - that include polarization as well as simple scattering distributions. Better yet, they should do what all good science requires: predict where a wave model would disagree with QED and propose the relevant experiment.

Of course photons exist! I've got a picture of one....

I missed this yesterday, probably due to being on UK time.

When I try to explain about photons my standard line is that the only safe definition of a photon is that it is an elementary excitation of the electromagnetic field. I recognise that this is not much of a definition for the nonspecialist. The single detector click explanation works for all practical purposes, unless you have a perverse detector which, for example, includes a nonlinear element to downconvert first so that you get two clicks per photon.

Both wave and particle interpretations of light can be undermined by experiment, and neither seems to fit well with Hanbury-Brown and Twiss, or Hong-Ou-Mandel 2-photon interference (predicted first I think by Loudon).

Anyway, an excellent article about photons has been reprinted, which should be pretty readable to undergraduate physicists: "Take a Photon" by Frisch, Vol. 50, No. 1, January–February 2009, 59–67.

Chad: "This anti-bunching effect is something that cannot be explained using a classical picture of light as a wave."

The "we couldn't do it so it has to be impossible" attitude so common in QM.

I can actually see a possible explanation right away as long as we use both retarded and advanced waves of Wheeler–Feynman absorber theory. The emitter and absorber both emit continuous waves, once the retarded wave from the emitter reaches the absorber and the advanced wave from the absorber reaches the emitter they cancel each other and energy transfer between them is concluded, the emitter is no longer excited and so before it can emit another wave it has to be excited again by the source.

Now this is just a sketch and Wheeler–Feynman absorber theory has its problems, but it shows there is nothing inherently impossible to explain using classical waves in this experiment. We may simply lack imagination.

Actually I won't be surprised if one day all of the supposedly impossible-to-explain-classically effects of QM are in fact explained classically.

Dear Paul,

Anti-bunching can't be explained classically. Classical intensities are essentially real numbers (with attached units, watts per square metre). Numbers have to obey the mathematical rules for their particular type of numbers. One of these is the Cauchy-Schwarz inequality. Classical intensities must obey this and this leads to the bunching condition. Antibunched intensities are therefore not explainable within classical physics.
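In symbols, a sketch of the standard argument, writing g2(tau) for the normalized intensity correlation:

```latex
% Classical intensities are non-negative real numbers, so the
% Cauchy-Schwarz inequality applies to their time averages
% (assuming a stationary field):
\langle I(t)\, I(t+\tau) \rangle
  \le \sqrt{\langle I(t)^2 \rangle \, \langle I(t+\tau)^2 \rangle}
  = \langle I^2 \rangle .
% Dividing by \langle I \rangle^2 gives the bunching condition:
g^{(2)}(\tau) \equiv
  \frac{\langle I(t)\, I(t+\tau) \rangle}{\langle I \rangle^2}
  \le g^{(2)}(0) ,
\qquad
g^{(2)}(0) = \frac{\langle I^2 \rangle}{\langle I \rangle^2} \ge 1 .
```

Antibunching means g2(0) is *smaller* than g2(tau), which no non-negative classical intensity can produce.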

Unless your explanation can get around this...

Dear lane ranger,

Thanks for your reply. I am not convinced by your argument, however, because what is being observed, and therefore needs explanation, is photon antibunching, not antibunched intensities. Photons are not intensities, they are detection events, and I don't see why a classical theory should have a problem explaining the fact that detection events cannot be arbitrarily close in time.

For a trivial toy model, imagine a bunch of atoms, each emitting light in the form of a sine wave with the same period T and some random phase (different for each atom); all those waves overlap in space. Now imagine we have a detector at some point which registers events (= our photons) when the first derivative of the wave amplitude at that point is zero. The detector, however, is not perfect and only reacts to a certain percentage P of events meeting the criteria (just to make it slightly more realistic).

This setup will naturally lead to photon anti-bunching in that, no matter how many atoms are emitting, the detector will never register two photons one just after another; they will always be at least half the wave period apart (the sum of sine waves with period T and differing phases is another sine wave with period T). If P is 100% and the detector registers all the events meeting the criteria, it will simply click once every half period; if P < 100%, it will click less often.

So this simple toy model shows that classical models can have photon antibunching.

The sentence broken in half near the end should read:

If P is 100% and the detector registers all the events meeting the criteria, it will simply click once every half a period; if P is less than 100%, it will click less often, but another click will never follow sooner than half a period.

And one last thing, I am not saying this model correctly captures reality of course it's just an illustration that a classical model can experience photon antibunching.

Dear Paul,

Photons and intensities: You are correct, photocounts are not the same as intensities, but I can relate them. Within a detector the average ionisation rate produced by an em signal is essentially proportional to the energy density summed over the detector volume. Using some simple maths we can show that this is proportional to a quantity known as the Poynting vector at the entry surface of the detector. The PV is essentially the intensity with the direction of energy flow attached. Now for narrow-frequency light travelling in a beam (think laser), the PV can be written instead as the propagating energy (strictly power) of a classical harmonic wave. I can therefore relate intensities to the excitation of harmonic waves. The intensity must obey the bunching condition, as it is just a number.

When I quantise the system the harmonic wave excitations come in lumps which we call photons. These photons can cause counts at the detector at times which violate the bunching condition. The reason is that in quantisation I have to change the numbers such as intensities, electric fields, into quantum operators, for which the order of writing matters (gives different possible experimental results). The ordering which corresponds to the detection process gives rise to antibunching. Experiment bears this out.

Note that none of this proves that quantum physics is right, or that photons even exist - it merely says that classical intensities cannot be the whole story. To prove that photons "exist" we must work harder.

Your model: If the detector registers when the first derivative is zero (why in any case?) then there will be many places where this occurs in one period T. The sum of the sine waves with random phases will ensure this (for enough sine waves it will give you a pretty flat curve with small wiggles).

Actually my last sentence is not quite right. The curve will not be pretty flat. It will be pretty bumpy, but with plenty of places within T where the first derivative vanishes.

Dear lane ranger,

I agree that intensity is related to photon count, but as you say it only means that intensity alone cannot be the whole story and that any hypothetical classical explanation needs to go deeper.

As for the model I originally imagined it in one space and one time dimension and there it works as stated, in two or three space dimensions (+ 1 time) it needs one more condition for photon detection - the time derivative needs to be zero and the signal has to be nonzero at that instant. The reason is that in two or three space dimensions there are some directions in which waves exactly cancel and the signal stays zero for the whole period. So if you talk about this effect when you say there are plenty of places within T where the first derivative vanishes, you are right. But other then that the model works fine.

The point of the derivative is to detect local extrema; the model uses extrema since the amplitude changes as additional sine waves are added but the frequency does not. The whole model depends on this latter fact -- that adding sine waves with arbitrary phases but the same frequency preserves the frequency -- and so no matter how many sines there are, there will only be two local non-zero extrema per period, and therefore the time derivative will also be zero only twice per period.

In case this is still not clear, I'll describe an example of a 2-dimensional (1 space + 1 time) model in more detail: we have atoms at, say, x0=0, x1=1 and x2=3.5, and each one of them emits a sine wave given by sin(k(x-xn)-t/T+pn), where xn is the position of the n-th atom, pn is the phase of the n-th atom, and T and k are the wave period and wave vector, which are the same for all atoms. All those waves interfere -- they are summed to produce the combined amplitude at each point. We also have a detector at, say, x=10 which differentiates the combined wave amplitude with respect to time and, if the derivative is zero, registers a photon with some efficiency, for example 30% of the time.

The result is that this detector will never click more often than twice per period T since, as I already explained, adding shifted sines preserves the period. At the detector each sine has the form Sin(t/T+constant), and so the sum of them has the form

Sin(t/T+const1)+Sin(t/T+const2)+Sin(t/T+const3)

which is equal to

(Cos(const1)+Cos(const2)+Cos(const3))Sin(t/T)+(Sin(const1)+Sin(const2)+Sin(const3))Cos(t/T)

=Const4*Sin(t/T)+const5*Cos(t/T)

which has the same period T.
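A few lines of code confirm the identity for arbitrary phases (the phases below are arbitrary):

```python
import math

# Check of the identity used above: a sum of sines with the same period
# but arbitrary phases is again a sinusoid with the same period,
#     sum_n sin(w*t + p_n) = A*sin(w*t) + B*cos(w*t),
# so its derivative vanishes only twice per period.
phases = [0.3, 1.7, 2.9]  # arbitrary phases
A = sum(math.cos(p) for p in phases)
B = sum(math.sin(p) for p in phases)

for t in (0.0, 0.4, 1.1, 2.5):
    direct = sum(math.sin(t + p) for p in phases)
    combined = A * math.sin(t) + B * math.cos(t)
    assert abs(direct - combined) < 1e-12

print("resultant amplitude:", round(math.hypot(A, B), 3))
```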

I went over the model in such detail because I also wanted to make sure myself that I haven't overlooked something.

I don't have a physics background outside 101, but was looking up some claims about quantum entanglement that sounded suspicious to me when I found a website called

www.the-phoney-photon.com

The premise is radical: that photons simply do not exist, that they are fictitious particles that can be dispensed with using classical physics. The arguments seem to be clear and compelling. My inability to visualize or "get my head around" wave/particle duality had always troubled me; that is to say, if this is how reality works, why can't I understand it?

Intrigued, I tried to find more dialogue on the subject, and here I am.

Any expert insights would be appreciated. Specifically problems with the material presented on the-phoney-photon website.

Thank you

I gave the phoney-photon site a cursory look, but while I'm sympathetic to attempts to classically explain quantum phenomena, the physics there doesn't look solid.

For example I checked the Compton effect and it's explained by substituting photo-electrons for photons, but the argument fails for scattering from an electron beam.

The calculation of photon density in the case of a green 1 mW laser is interesting, but it also is not as easy to interpret as stated on the site. For one, there will be on average 4 photons per wavelength in the direction of travel. It's only the distance orthogonal to the direction of propagation that will be much larger than the wavelength, but this distance depends on the assumed aperture and distribution, and I am not sure they are right; even if they are, the size of a photon in this direction is not well defined.

I do agree that photon density seems strangely low if one wants to see light as a stream of photons in flight. But I already see photons as quantized detection events anyway (so I agree to some extent with the premise of the site that it's only interactions of light with matter that are *certain* to be quantized).

Finally there are also some completely wild ideas like a speculative attempt to dismiss wave-like properties of matter with some "just so" story without anything to show it works.

All in all I wouldn't take things you read there very seriously unless you can verify them yourself.

Thanks!

Since I have neither the resources nor the knowledge base to verify these things for myself, I suppose it will be an open ended question for me. However, being philosophically minded, I feel that positing the existence of something that can't be detected (without circular argument) is only valid if it is an 'inference to the best possible explanation.' It seems that there are some, at least to me, reasonable explanations for these phenomena that do not require the kind of leap that Einstein took in creating the photon.

I guess i'll just leave it to the experts!

I don't understand why the anti-bunching experiment eliminates the classical wave model either, but that's probably because I don't know enough about physics. Here's my understanding of what you wrote though (as I say I am not an expert in physics, so please expect some misconceptions):

You excite an electron in a sodium atom with a light beam, and then when that electron decays to its ground state it emits a photon (or at least some electromagnetic radiation) that is then detected. These electrons in the sodium jump up and down emitting more photons, and the photon model then predicts that you will measure some time delay between the detected photons which corresponds to the time it takes these electrons to jump up and down between different energy states.

Please humor me for a moment. Suppose you have a large pond. Assuming it's deep enough, the surface of the pond moves up and down according to a "classical wave model" (yes?). Now suppose you drop a series of stones in the middle of the pond. What you will see are a series of expanding circular wavefronts on the surface of the pond. If you had a detector in the pond it would measure a "click" each time a wave front went by. With my crude understanding this seems similar to the anti-bunching experiment as I've described it (the stones corresponding to the electrons decaying to their ground states). So my question is how is it different?

I am interested in this at least in part because I am a mathematician currently studying asymptotic solutions of hyperbolic equations (i.e. Gaussian beams, curvelets, wave packets, although not in the context of electromagnetism), and I don't understand what separates quantum photons from wave-packet-type solutions of Maxwell's equations (although I gather there is a real difference). Thanks.

Since I've had several requests for it, here's the explanation (with math) of why anti-bunching cannot be explained classically.
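For readers who don't want to chase the link, the core of the standard classical argument (a textbook sketch, not necessarily the linked derivation word for word) fits in one line. A classical detector clicks at a rate proportional to the field intensity $I(t) \ge 0$, and the zero-delay coincidence rate between two detectors is governed by the second-order coherence $g^{(2)}(0)$:

```latex
\[
  g^{(2)}(0)
  \;=\; \frac{\langle I^2 \rangle}{\langle I \rangle^2}
  \;=\; 1 \;+\; \frac{\bigl\langle \bigl( I - \langle I \rangle \bigr)^{2} \bigr\rangle}{\langle I \rangle^2}
  \;\ge\; 1 .
\]
```

Since the variance of any real, non-negative intensity cannot be negative, every classical field, pond ripples included, gives $g^{(2)}(0) \ge 1$: clicks bunch together or arrive at random, but never avoid one another. Anti-bunching experiments measure $g^{(2)}(0) < 1$ (ideally 0 for a single atom), which no classical wave model, however clever, can reproduce.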

Do you happen to know if that includes the lack of a delay before electron emission found by Meyer and Gerlach in their 1914 experiments with metallic dust? (There's a brief description of them on p12 of this).

Do you happen to know if that includes the lack of a delay before electron emission found by Meyer and Gerlach in their 1914 experiments with metallic dust?

Yes, it does. It's a Fermi Golden Rule argument, essentially, and you wind up predicting a transition rate that is constant in time, which means there's a nonzero probability of essentially instantaneous emission.

There's a bit of a dodge there, in that the Golden Rule requires some time averaging, but the time scale for that is set by the optical period, and is thus a whole lot shorter than can be tested by most experiments.
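The Golden Rule argument above can be made concrete with a small simulation: a constant transition rate R implies exponentially distributed waiting times, so the fraction of electrons emitted within any short window tau is about R*tau, with no minimum delay. (A minimal sketch; the rate R here is an arbitrary illustrative value, not anything measured in the experiments under discussion.)

```python
import numpy as np

# A constant transition rate R (the Fermi Golden Rule result) means the
# waiting time before an electron is emitted follows an exponential
# distribution: P(emitted before tau) = 1 - exp(-R * tau).
# There is no minimum delay: for small tau this probability is ~ R * tau.
rng = np.random.default_rng(0)
R = 1.0e9                                # transition rate in 1/s (hypothetical)
times = rng.exponential(1.0 / R, size=1_000_000)

for tau in (1e-12, 1e-11, 1e-10):
    frac = np.mean(times < tau)
    print(f"tau = {tau:.0e} s: fraction emitted = {frac:.2e}"
          f" (1 - exp(-R*tau) = {1 - np.exp(-R * tau):.2e})")
```

However short you make the window, some emissions land inside it, which is consistent with Meyer and Gerlach seeing no delay down to their experimental resolution.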

There was a paper mentioned somewhere recently that seemed from the press release to be reporting an observed delay in the emission of electrons hit by ultrafast laser pulses. I'm not sure whether that's relevant to discussions of the photoelectric effect, though, and I haven't had time to read the paper in detail.

Great! Thanks.

Please google "The phoney photon", where you will see that, using a series of carefully described classical scientific experiments, EMR is shown to be (a) a wave and (b) not a stream of particles. There is no need for the concept of a "photon".

I will be more than willing to reply to any criticisms.

Geoff Harries

I am the author of "The-phoney-photon.com". I am a retired aerospace electronics engineer, and the final argument in the debate must surely be that engineers and scientists have been looking for this magical particle, this "photon", for years now. Just imagine: it would enable the construction of a parallel computer and the transmission of information at faster than the speed of light. Every now and then some article appears in "Nature" announcing its discovery (again). But there is nothing on the market.

Geoff Harries