You probably haven’t been able to avoid seeing the televised bombs AT&T and Verizon have been throwing at each other over the maps of their coverage. Both sets of commercials (to differing degrees) fail to make it clear just what their maps mean to the consumer. For instance: Verizon’s commercial brags of overwhelming coverage while showing AT&T’s comparatively sad little sparse map. AT&T fires back with a different map and a claim that it covers 97% of the US population. Both sets of claims are correct – if you pay close attention. Verizon’s map shows 3G coverage specifically, while AT&T’s maps show their voice coverage.
So to the extent that Verizon makes you think that AT&T’s voice and non-3G data coverage is spotty, it’s misleading. And to the extent AT&T makes you think its 3G coverage is just as omnipresent, it’s misleading. Sadly, I don’t have any reason to care. None of this makes any difference to me, since my low-budget phone and plan can’t do anything more sophisticated than text and picture messages anyway.
What does interest me is the science behind these maps and this news story:
AT&T wants its iPhone users to use less wireless data, and it plans to introduce new pricing models to curb users’ data usage as it tries to keep up with growing demand.
At an investor conference in New York on Wednesday, Ralph de la Vega, AT&T’s head of wireless, announced that the wireless operator plans to introduce new pricing for heavy wireless-data users.
AT&T seems to realize that this is not a long-term solution. And not only is the carrier upgrading its network, but it’s also asking the Federal Communications Commission to find more spectrum to auction off that can be used for wireless-data services. Jim Cicconi, senior executive vice president of external and legislative affairs for AT&T, said in a separate interview with CNET on Wednesday that something needs to be done to deal with the flood of wireless-data traffic.
Cicconi and AT&T’s CEO Randall Stephenson met with FCC staff members earlier this week to discuss the spectrum issue.
“Clearly, there is a looming crisis that needs to be addressed when it comes to spectrum availability,” Cicconi said in an interview at his office in Washington, D.C. “Wireless-data usage is growing far faster than anyone had expected. And if we don’t do something soon, we will run out very fast. And then we will have to start telling wireless customers that they can’t do all the things they want to do with their devices.”
FCC Chairman Julius Genachowski has made freeing up more spectrum a top priority. And he has already proposed that the FCC look into taking some spectrum away from TV broadcasters to give to wireless operators to deliver more wireless-broadband services.
Spectrum. Now there’s a word near and dear to my heart. But what’s it have to do with iPhone jockeys siphoning in torrents of data from the internet?
As we know, wireless communication of pretty much every stripe uses radio waves to send data from the transmitter. The receiver measures those waves and converts them to data. We hear that data on the radio, see it on the TV, and read it on our mobile devices. Let’s take a closer look at this process and make up a wave to our specifications:
It’s a nondescript wave with a frequency of 100 Hz (if we take the x-axis in seconds), and short of a very simple sort of “on if by land, off if by sea” message we can’t actually do anything with it. We need a more complex wave, along with a scheme for translating that complexity into a message. Transmit a 110 Hz wave on top of our previous one and zoom the graph out a bit to see how the waves look added on top of each other:
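If you want to verify the superposition numerically, here’s a quick numpy sketch (the 100 Hz and 110 Hz values come from the setup above; everything else is my own illustration):

```python
import numpy as np

t = np.linspace(0, 1, 100_000)  # one second, finely sampled
both = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 110 * t)

# Sum-to-product identity: sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2).
# So the pair is really a 105 Hz wave inside a slow 5 Hz cosine envelope:
# the amplitude pulses 10 times per second.
pulses = 2 * np.sin(2 * np.pi * 105 * t) * np.cos(2 * np.pi * 5 * t)
assert np.allclose(both, pulses)
```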
The two waves coalesce into a train of pulses. We’re getting somewhere. What if we add a 105 Hz wave on top of that?
Little pulses and larger pulses, in a row. You can see that we’re getting complexity, and that might suggest a way of sending messages. For instance, what if we write down our message, express it in binary, and vary the amplitude of the 105 Hz wave over time so that the relative size of the small pulses changes? A small pulse with tiny amplitude might represent a "0" and one with larger amplitude a "1". This works, though unfortunately the very process of varying the amplitude of the 105 Hz wave requires you to add in waves of other frequencies. The total slice of the spectrum that you use is called the bandwidth.
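Here’s a toy numpy sketch of that amplitude scheme (my own illustration, not any real radio protocol): hold the 100 Hz and 110 Hz waves fixed, switch the 105 Hz component between a small and a large amplitude once per bit, and recover the bits by correlating each symbol window against a 105 Hz reference.

```python
import numpy as np

RATE = 10_000            # samples per second (made-up value)
SYMBOL = 2_000           # samples per bit: 0.2 s, a whole number of cycles of every tone
AMPS = {0: 0.2, 1: 1.0}  # tiny 105 Hz pulse means "0", larger pulse means "1"

def modulate(bits):
    t = np.arange(len(bits) * SYMBOL) / RATE
    fixed = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 110 * t)
    amp = np.repeat([AMPS[b] for b in bits], SYMBOL)
    return t, fixed + amp * np.sin(2 * np.pi * 105 * t)

def demodulate(t, signal, n_bits):
    ref = np.sin(2 * np.pi * 105 * t)
    bits = []
    for i in range(n_bits):
        w = slice(i * SYMBOL, (i + 1) * SYMBOL)
        # Correlating with the 105 Hz reference averages the 100 and
        # 110 Hz components away and leaves the 105 Hz amplitude.
        est = 2 * np.mean(signal[w] * ref[w])
        bits.append(1 if est > (AMPS[0] + AMPS[1]) / 2 else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
t, sig = modulate(message)
assert demodulate(t, sig, len(message)) == message
```

Note the price: switching the amplitude faster means shorter symbol windows, which only works if the sidebands those switches create (the extra frequencies mentioned above) fit inside your allotted slice of spectrum.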
In a perfect world there would be no limit to the data transmission rate, no matter how small the bandwidth. You could imagine a 100 Hz wave on top of a 100.11000010001011000000… Hz wave, where the decimal places corresponded to the bits of the data you wanted to send. That’s fine if you can measure to arbitrary accuracy, but if a source of external noise or error starts cropping up around (say) the 6th decimal place, you’re stuck. If you want to send data faster, you must have more bandwidth, so the wave carrying the signal is more complex and can fit more modulations into a smaller time window.
Thanks to the brilliant scientists Claude Shannon and Ralph Hartley, we have an equation that tells us the maximum data rate we can squeeze out of a given bandwidth:

C = B log_2(1 + S/N)

C is the data rate in bits per second, B is the bandwidth in Hz, and S/N is the signal power divided by the noise power. Here log_2 is the base-2 logarithm.
So the stronger the signal is in relation to the noise, the higher the rate of data you can send. But it’s a logarithmic improvement, so improving your signal strength enormously doesn’t actually buy you all that much. Bandwidth is the key.
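A quick numerical sketch makes the asymmetry concrete (the channel width and signal-to-noise numbers here are illustrative, not anyone’s real link budget):

```python
import math

def capacity_bps(bandwidth_hz, snr):
    # Shannon-Hartley: C = B * log2(1 + S/N)
    return bandwidth_hz * math.log2(1 + snr)

base = capacity_bps(1e6, 100)        # 1 MHz channel at S/N = 100
more_power = capacity_bps(1e6, 200)  # double the signal power
more_band = capacity_bps(2e6, 100)   # double the bandwidth instead

assert more_band == 2 * base     # bandwidth scales capacity linearly
assert more_power < 1.16 * base  # doubling power buys under 15% more
```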
The actual spectral allocation for this sort of thing is always a matter of political wrangling with the FCC, but typically there are a few blocks of spectrum around 50 MHz wide allocated to this kind of mobile data transfer. The signal/noise ratio can’t be improved indefinitely, because what’s signal to my phone will be noise to yours. As such, with this kind of frequency allocation there’s somewhere in the vicinity of 50 Mbps available for all the phones communicating with a given tower. The finite nature of the available spectrum and the iron laws of information theory prohibit you from doing much better.
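Plugging in rough numbers (the 50 MHz figure from above, plus a made-up signal-to-noise ratio and user count), the arithmetic looks like:

```python
import math

# A whole 50 MHz block at an assumed signal-to-noise ratio of 1:
# C = B * log2(1 + S/N) = 50e6 * log2(2) = 50 Mbps for the entire cell.
tower_bps = 50e6 * math.log2(1 + 1)
assert tower_bps == 50e6

# Every phone on the tower splits that pool; with, say, 200 active
# users, each one's ceiling is a quarter of a megabit per second.
assert tower_bps / 200 == 250e3
```

That per-user share is why a few heavy iPhone users streaming data can noticeably squeeze everyone else on the same tower.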
The way around this is to make more spectrum available. If you look at the FCC’s spectrum allocation, you’ll see that huge bands are taken up by broadcast TV (the chart predates the DTV transition, but the overall allocation is not all that different today). “But doesn’t everyone have cable or satellite by now? Isn’t our application more pressing for society?”, quoth the telecoms. Well, it’s not a bad point, but it’s not one the TV broadcasters are likely to accept without a fight. It’s a pickle for sure. I personally lean toward the “internet data is more important” school of thought, but I’m no policy wonk.
You might wonder why not just use higher frequencies, since there’s a heck of a lot of MHz between (for instance) 1 THz and 1.1 THz. Unfortunately physics limits the practicality of higher frequencies with respect to mobile communications. High frequencies perform poorly when asked to pass through or around obstacles like walls and buildings, which is not acceptable for cellular communications.
What do I propose for keeping everyone happy? Beats me. I don’t envy the FCC’s job at all.