A Confusing Light OPERA: How Does a Loose Fiber Optic Cable Cause a Signal Delay?

So, the infamous OPERA result for neutrino speeds seems to be conclusively disproven, the product of a problem with a timing signal. Matt Strassler has a very nice explanation of the tests showing that the whole thing can almost certainly be traced to a timing error that cropped up in 2008. This problem is generally described as resulting from a "loose fiber optic cable," and Matthew Francis's reaction is fairly typical:

The main culprit was a fiber optic cable that was slightly out of alignment. This is not quite a "loose wire", as it sometimes has been described: it's far more subtle and harder to check than that, but it's still fundamentally a simple technical problem. (My prediction that the effect was due to something really subtle turns out not to be correct!)

As a professional Optics Guy, I would beg to differ a little. Assuming that this hasn't been garbled by some sort of translation issue, this really is something subtle and surprising (albeit in a technical way, not a new-physics way). You wouldn't generally expect to get a significant timing delay from a loose fiber optic connector, because of the way fiber optics work, which is fundamentally different from the way ordinary electrical cables work.

In an ordinary wire, the signal is carried by a voltage change propagating along a piece of copper (typically), with electrons shifting around very slightly to make the change that's then registered by some voltage sensor. The speed at which these electrons can move around depends on the configuration of the wire, so you will never see a pulse that "turns on" instantaneously (that is, going from 0 to 1 in an infinitesimal time). Any real configuration of cables will, in circuit terms, have some electrical resistance and some inherent capacitance, so the actual voltage vs. time graph will rise quickly at first, then slow down and approach the maximum value following a curve like that traced out by the black dots in this figure:

[Figure: RC delay -- voltage vs. time curves for an electrical pulse]

If you loosen a connection somewhere, you can dramatically increase the capacitance. When you do that, the rise to the maximum value is much slower, giving you something like the white circles in the figure.

When you're using pulses for timing, you need to establish some criterion for when you say a given pulse has arrived. This often takes the form of a threshold value, illustrated by the horizontal line on the graph, which is at 75% of the maximum. If you look at the time when each signal crosses that line, you'll see that there's a significant delay in the time when the white circles cross compared to the black circles.
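To put rough numbers on that picture, here's a minimal sketch of the threshold-crossing math. Both time constants are made up for illustration; nothing here is taken from the actual experiment:

```python
import math

# Toy model of the RC rise in the figure: V(t) = V_MAX * (1 - exp(-t / tau)).
V_MAX = 1.0          # normalized pulse amplitude
THRESHOLD = 0.75     # arrival criterion: 75% of maximum, as in the figure
TAU_GOOD = 10e-9     # assumed RC time constant of a well-seated connection, s
TAU_LOOSE = 30e-9    # assumed larger RC constant of a loose connection, s

def crossing_time(tau):
    """Time at which V_MAX * (1 - exp(-t / tau)) first reaches THRESHOLD."""
    return -tau * math.log(1.0 - THRESHOLD / V_MAX)

t_good = crossing_time(TAU_GOOD)
t_loose = crossing_time(TAU_LOOSE)
print(f"good connection triggers at  {t_good * 1e9:5.1f} ns")
print(f"loose connection triggers at {t_loose * 1e9:5.1f} ns")
print(f"apparent extra delay:        {(t_loose - t_good) * 1e9:5.1f} ns")
```

Same pulse, same threshold; the slower rise alone pushes the apparent arrival back by tens of ns.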

When people think of loose cables messing up signal timing, that's the sort of mechanism they generally have in mind, and it's the first thing you check when you're trying to track down a timing issue in an experiment using electrical signals.

So, what's different about fiber optics? Well, as the name suggests, fiber optics carry optical signals, that is, pulses of light, which propagate down a very thin (100 microns or less, about the thickness of a human hair), flexible strand of glass at, well, the speed of light in that type of glass.

This is a fundamentally different operating principle than an electrical cable, and should not produce the same sort of capacitive delay from simply loosening the cable. If you loosen a fiber optic cable, opening an air gap, the signal pulse will propagate across that air gap at, well, the speed of light in air, which is typically a few tens of percent greater than the speed of light in glass. It doesn't change the shape of the signal significantly, or delay its arrival, unless you've "loosened" the cable by inserting about one foot (30cm) of empty space between the end of the cable and the socket for every nanosecond of delay that you're trying to produce. That's the sort of thing you would notice right away.
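A quick back-of-the-envelope check on those numbers (round values, nothing specific to OPERA's hardware):

```python
C_AIR = 3.0e8      # speed of light in air, m/s (close enough to vacuum)
N_GLASS = 1.5      # typical refractive index of a fiber core

# Light in glass is slower than in air, but both are roughly "a foot per
# nanosecond" to within tens of percent:
print(f"speed in glass: {C_AIR / N_GLASS:.1e} m/s, in air: {C_AIR:.1e} m/s")

# Extra travel time from an air gap between fiber end and socket:
gap = 0.30         # meters of empty space
print(f"a {gap * 100:.0f} cm gap adds {gap / C_AIR * 1e9:.1f} ns")

# Gap you'd need to fake the whole OPERA discrepancy this way:
print(f"a 70 ns delay needs about {70e-9 * C_AIR:.0f} m of empty space")
```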

So, how do you get a 70ns delay from a loose fiber optic cable? I'm really not sure, but I suspect it involves some subtle problems in both the cable alignment and the electronics of the detector. The only thing that definitely happens when you misalign a fiber optic connection is a decrease in the amplitude of the signal. That can produce a delay, but only through a much less direct mechanism.

How does this work? Well, the incoming pulse of light will have some rise time (which I've faked as a Gaussian in the graph below, if you're wondering). As with the electrical signal, you need to establish some criterion for when you say the pulse has arrived, say, for the sake of argument, the same 75% of the maximum that we used before:

[Figure: fiber optic delay -- Gaussian pulse rise at full amplitude (black circles) and 80% amplitude (white circles), with the 75% threshold marked]

If you loosen a fiber optic connection, the most likely effect is to point the light in a slightly different direction, and possibly defocus the beam a bit. That means whatever you're using to detect the light and convert it back to a voltage pulse will see a smaller-amplitude signal than when the connector is properly attached. If you're using a threshold value to determine the pulse arrival, and you incorrectly assume the pulse has the same amplitude as in the properly-attached case, you get a delay in the apparent arrival time, as you can see by comparing the black circles to the white ones, which represent a final signal amplitude of 80% of the maximum.

Of course, to get a substantial delay this way is really pretty difficult. You need optical pulses that rise slowly-- that's a Gaussian with a 50ns rise time, which produces about a 2ns difference between the two cases-- and you need somebody to have set the threshold value once, and never checked it ever again. That's a bad combination of factors, so I doubt very much that it's as straightforward as this.
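For the curious, here's a toy version of that calculation. The Gaussian edge and its width are stand-ins chosen to land in the same couple-of-ns range; they are not OPERA's actual pulse parameters:

```python
import math

# Rising edge modeled as the leading half of a Gaussian peaking at t = 0:
# s(t) = A * exp(-t^2 / (2 * SIGMA^2)) for t <= 0. Parameters are invented.
SIGMA = 5e-9        # width of the Gaussian edge, seconds
THRESHOLD = 0.75    # fixed threshold, set assuming a full-amplitude pulse

def lead_time(amplitude):
    """How long before the peak a pulse of this amplitude crosses THRESHOLD."""
    if amplitude <= THRESHOLD:
        raise ValueError("pulse never reaches the threshold at all")
    return SIGMA * math.sqrt(2.0 * math.log(amplitude / THRESHOLD))

delay = lead_time(1.0) - lead_time(0.8)   # properly seated vs. 80% amplitude
print(f"apparent extra delay: {delay * 1e9:.1f} ns")
```

Note the failure mode hiding in there, too: drop the amplitude below the threshold and the pulse never registers at all.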

To get a 70ns delay out of a loose fiber optic would require... I'm not sure what, exactly. Probably some sort of detector nonlinearity, such that the reduced intensity from a misaligned fiber would give an electrical output pulse that rises more slowly for a low-amplitude optical input than a high-amplitude one. That's a possibility, particularly if they're dealing with fairly weak pulses to begin with, but it's not in the top ten of things I'd think to check when looking for a timing issue.

(I also suspect that "loose" is the wrong word here, because while I know intellectually that it doesn't make any difference, I reflexively check the tightness of fiber optic connections all the time, as if they were regular wires subject to capacitive issues. I suspect that "misaligned" is a better word-- somebody put the connector on slightly wrong, so that it was to all appearances screwed down tight, but was slightly crooked. This would also explain how it managed to stay consistently wrong for a period of more than three years.)

So, while the news reports make this sound incredibly stupid and obvious, it's actually much more subtle and unexpected than that. If you know how fiber optics are supposed to work, the idea of a loose connection creating a 70ns delay is a real head-scratcher.

(Disclaimer: The above is 100% pure speculation on my part, based on zero inside information. If there's a good, detailed technical explanation of exactly how the bad fiber connection created a delay, I haven't seen it (in English, at least-- it's possible that there's one in one of the Italian-language stories floating around, but I can't read Italian). Pointers would be welcome in the comments.)

This was written yesterday, and scheduled to post today, lest it be lost in a flood of April Foolishness. In the interim, I see that Matt Strassler has provided the technical explanation I mention in the final disclaimer. It's pretty much what I thought: something screwy and non-linear with their detector (they refer to "photodiode capacitance," which doesn't quite parse). If you look at the oscilloscope trace he provides, you see that the signal they're looking at with the bad connection is both lower in amplitude and has a longer rise time than the signal with the correct connection, leading to the delay.

You'll also see that the signal from their amplifier has a rise time of around 100 ns in the best configuration, which is kind of crazy for a system designed to do timing at the nanosecond level, but whatever.
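Putting those two effects together in the same toy threshold model as above (all numbers invented, chosen only to show the scales involved) gets you delays of the right order:

```python
import math

# The bad connection gives a pulse that is BOTH lower in amplitude and slower
# to rise. Model: V(t) = A * (1 - exp(-t / tau)) with a fixed trigger level.
THRESHOLD = 0.5   # trigger level, in units of the good pulse's amplitude

def trigger_time(amplitude, tau):
    """Time at which amplitude * (1 - exp(-t / tau)) first reaches THRESHOLD."""
    if amplitude <= THRESHOLD:
        raise ValueError("pulse never triggers")
    return -tau * math.log(1.0 - THRESHOLD / amplitude)

t_good = trigger_time(amplitude=1.0, tau=45e-9)   # ~100 ns 10-90% rise
t_bad = trigger_time(amplitude=0.75, tau=68e-9)   # dimmer, ~150 ns rise
print(f"apparent extra delay: {(t_bad - t_good) * 1e9:.0f} ns")
```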

That was puzzling me too.

A couple of days ago I was arguing about OPERA's neutrinos with someone who had a conspiracy theory involving GPS and the Pentagon, but the one good point he made was exactly this: how does a bad optical cable connection cause a delay like that? I sure didn't know.

He's since been convinced by Strassler's explanations.

...my own best guess for what was causing the apparent superluminal neutrinos was a completely different hypothesis having to do with GPS time transfer, but months ago I did the math and figured out it couldn't be right...

After reading Matt Strassler's description, I'm much more pissed off at OPERA than I was before.

Not having a cable plugged in properly is an honest mistake. Everyone screws up. It's a hard screwup to find. But to learn that they claimed a violation of relativity based on a delay in the tens of ns -- when the 10%-90% rise time of their calibrating signal was on the order of 200 ns -- is outrageous. I apologize for the Rumsfeldian language, but it's one thing for them to miss an unknown unknown (a screwed-up optical connection), and entirely something else to miss a known unknown: possible errors associated with the trigger threshold on a multi-100-ns pulse.

What makes this all the more outrageous is that there are readily available commercial photodetectors with much faster rise times. Using wider pulses is fine if they're not claiming a relativity violation: they don't need ns-wide pulses for their day-to-day operation / central mission. But if they want to claim a relativity violation, it's insane that they wouldn't have first switched to better timing technology.

By Anonymous Coward (not verified) on 02 Apr 2012 #permalink

AC@4: I agree that claiming a timing precision an order of magnitude smaller than the rise time of your detector signal is unwise. But it's not so obvious to me that they could use something with a much faster rise time. I'm not in this field, but to me it makes no sense to have a detector circuit with a rise time much faster than the light transit time across the detector. That's not an issue for the sort of work I do, but when you need a huge tank under a mountain to get a high enough count rate, as neutrino experimenters do, the finite size of the detector will become an issue.

Of course, that makes it all the more important that they make sure the connections are right in the first place. I wouldn't have known enough to spot this particular bug (I work with copper, not glass, for signal connections), but somebody on the OPERA team should have known.

By Eric Lund (not verified) on 02 Apr 2012 #permalink

EL@5

I agree the timing issue for the big neutrino detector is tricky. That's the place where most people would go looking for a screwup.

But that's not where this long rise time is coming from. If I read the article correctly, it's coming from the time sync pulses, which don't have any of the "big detector" problems!

By Anonymous Coward (not verified) on 02 Apr 2012 #permalink

I'd think that in your electrical example a loose connection would be more likely to produce an increase in series resistance than an increase in capacitance. This would also increase the rise time of the signal since the inevitable capacitance at the receiving end would take longer to charge.

If I'm not mistaken, photodiodes essentially act as capacitors charged by photoelectrons. Thus a decrease in signal strength caused by a connection issue increases the charge time in much the same way.
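A minimal numerical version of that picture, treating the detector as a capacitor charged by a constant photocurrent (all component values invented for illustration):

```python
CAP = 10e-12      # assumed detector capacitance, farads
V_TRIG = 0.5      # assumed trigger voltage, volts

def time_to_trigger(photocurrent):
    """Time for a constant photocurrent to charge CAP up to V_TRIG."""
    return CAP * V_TRIG / photocurrent

t_bright = time_to_trigger(100e-6)   # well-aligned connector
t_dim = time_to_trigger(80e-6)       # misaligned connector, 80% of the light
print(f"extra delay from the dimmer signal: {(t_dim - t_bright) * 1e9:.1f} ns")
```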

If the experimenters were good, and I think they were, they would have had a timing loop with a known and stable loop response at both ends.

Example: I send a signal to you. You read the signal and 5 nanoseconds later you send a signal back to me. Reverse the loop, and you should both know how much time delay is in the loop. Depending on your equipment this time may not match exactly; as long as it repeats, that can be acceptable.
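In code, that loop looks something like this (a toy sketch with invented numbers, assuming the link delay is symmetric):

```python
TURNAROUND = 5e-9   # the "5 nanoseconds later" reply delay, seconds

def one_way_delay(round_trip, turnaround=TURNAROUND):
    """Infer the one-way link delay from a measured round trip, assuming
    the delay is the same in both directions."""
    return (round_trip - turnaround) / 2.0

measured = 105e-9   # invented round-trip measurement, seconds
print(f"inferred one-way delay: {one_way_delay(measured) * 1e9:.1f} ns")
```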

Using the measured loop time, you can generate the time stamp for any neutrino detections. I believe that they had to have a pair of detections to consider it a non-noise detection. The neutrinos were modulated with a pattern so they could be pulled out of the background.

The measurement should have ignored the bad connection as long as the rise time was stable and didn't change.

The clocks on the surface could have been another problem. If the Earth, Solar System, and Milky Way Galaxy are traveling faster than 2 million meters per second (relative to fixed space), the clocks could be slowed by relativistic velocity effects.

The path of the neutrinos through the Earth may have been along a path that was not experiencing relativistic velocity effects.

This experiment could prove that sufficient mass would shield the interior of an object from relativistic velocity effects. Another test of this: do stars traveling at relativistic velocities have a lower rate of energy production, with relativistic velocity slowing their atomic reactions? Why wouldn't highly relativistic stars collapse, or at a minimum get dimmer? The relativistic effects at a star's surface are certainly visible, but is relativistic time dilation affecting the nuclear reactions at the core?

Just a thought.

By Darth_Loki (not verified) on 02 Apr 2012 #permalink