All right, now that you know my conclusion, let’s see how to get there with data. First, some background.

Let me give a very quick overview of Bitcoin in this context. (There are many comprehensive overviews elsewhere.) Bitcoin is an ongoing ledger of transactions along the lines of “This guy had 5 Bitcoins, and he sent 2 of them to that guy. Now he has 3 Bitcoins.” The transaction ledger is public, which prevents people from spending coins they already spent – the ledger rejects invalid transactions. Everyone on the network has a copy of the ledger. New transactions are appended to the ledger in the form of blocks containing a bunch of transactions. The ledger is formally called the blockchain, because it’s a chain of blocks which contain descriptions of transactions. There’s a bunch of cryptography which prevents people from making transactions involving coins they don’t control, but we don’t care about that for now. What counts here is how to add new blocks to the blockchain.

Anybody can add new blocks, but you don’t want any one person to have full control over this. The process of adding new blocks is called mining because the person who adds a block is currently rewarded with 25 Bitcoins. Each miner puts a valid block together and attempts to append it to the end of the chain. They do this by running a random number generator on their computer which spits out random numbers at a colossal rate, and the first miner whose random number generator spits out an appropriate number is the miner whose block is appended to the chain. Then everybody starts work on the next block, hoping to be the one to win the RNG lottery this time. (The details of this process are a bit involved, but the required output of the RNG is tuned such that a block will be appended about every 10 minutes on average. The RNG involves a cryptographic hash, which is why the number-generation process is called hashing.)

The “selfish miner” attack, proposed by Ittay Eyal and Emin Gun Sirer of Cornell, is a way that a dishonest miner could finesse the protocol and win more blocks than their percentage of the overall hash rate would indicate. In this attack, a miner finds a new block but doesn’t immediately distribute it to the network so everyone can get to work on the next one. Instead, the miner begins work on the block which would follow their unreleased block. If they find the next block in the chain, they keep going. When they see that the rest of the miners have found a block, the selfish miners quickly release all the work they’ve done – which might be several blocks, which means their blocks get added to the chain because in the event of a conflict the longest chain wins. The rest of the miners never had a shot at those previously hidden blocks, so some of their hashing power was wasted. The selfish miners could thus generate blocks disproportionately faster than their percentage of the total hashing power would indicate. If the selfish miners can generate more than one block before the next fair-mined block is found, they always win because they have the longer chain.

Selfish miners can do even better if they can manage to win more of the “ties” where they find the first block but the fair miners post a block before the selfish miners find their second block. If the selfish miners are able to quickly release their single block before the honest miner’s block can propagate through the network, they’ll always be at an advantage relative to the honest miners. This takes some work, because they have to have lots of nodes which can react quicker than honest blocks can propagate. But even if the selfish miners can only get their single blocks accepted half the time, they’ll still come out ahead if they have more than 1/4 of the hashing power. And in fact even if the selfish single blocks never beat the honest blocks in their distribution throughout the network, the selfish miners will still come out ahead if they have more than 1/3 of the total hashing power. A salient question is thus how to detect whether or not such an attack is in progress.

Note that mining is essentially a lottery. The miners generate a ton of random hashes, and when one of them happens to win, that block is valid and may be posted to the blockchain. Each hash is essentially a ticket to a lottery with 1 in a quadrillion odds. The creation of each new block is a new lottery, but the odds are the same and there’s no pause between blocks. Thus, each win is an independent event. The number of independent random events in a given time interval is Poisson distributed, and the time between events in a Poisson process is exponentially distributed.
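Here’s a quick sketch (my own toy simulation, not the measurement script) demonstrating that claim: treat every second as an independent, low-probability lottery drawing, and the time between wins comes out exponentially distributed.

```python
import random

random.seed(42)

# Each trial is one "lottery drawing"; p is tuned so wins come once per
# 600 trials on average (i.e., one block per 600 seconds).
p = 1.0 / 600.0
intervals = []
elapsed = 0
while len(intervals) < 5000:
    elapsed += 1
    if random.random() < p:
        intervals.append(elapsed)
        elapsed = 0

mean = sum(intervals) / len(intervals)
frac_short = sum(1 for t in intervals if t < mean / 5) / len(intervals)
print(f"mean interval: {mean:.1f} trials (expect ~600)")
print(f"fraction below mean/5: {frac_short:.3f} (exponential predicts ~0.181)")
```

The second printed number anticipates the measurement below: under independent mining, about 18% of inter-block gaps should fall in the first fifth of the average gap.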

But dishonest mining relies on quickly publishing as soon as one of those honestly-generated blocks is found. This means the creation of the next block is no longer uncorrelated with the creation of the block before. Block creation would not be independently distributed. Thus a dishonest miner will appear in the statistics of new blocks. If blocks are being generated in clusters, there’ll be a spike in the distribution of time-between-blocks near t = 0, in comparison to the smooth falloff of the exponential distribution. This t = 0 spike would be a characteristic signature of the presence of dishonest mining.

So let’s take a look at the exponential distribution for a rate of 1 per 10 minutes (the nominal block creation rate):

λ is the average rate at which events happen, here 0.1 per minute. Of course with independent events something is just as likely to happen in minute number 35 as it is minute 1, but the exponential distribution measures the probability of when the *first* event will happen after you start your timer. This happening in minute 35 means it didn’t happen in any of the previous minutes, which is pretty unlikely, so the exponential distribution trails off as you’d expect. Honest mining should follow this distribution. Dishonest mining involves releases of blocks immediately subsequent to a previous block, so more events will happen in the first minute than the exponential distribution suggests.

To test this against the actual blockchain as it’s created, I wrote a Python script to monitor the timing of blocks as they’re released into the blockchain. I let this script run for 202 blocks (a little over a day), and if you want to review the data yourself here’s the CSV file. (Note the ‘messiness’ paragraph below for caveats.) I have binned out this data into one-minute increments and plotted a histogram. (The technically knowledgeable will note that Bitcoin block rate is not strictly constant because of the growth of the hash rate. It varies by up to about 10% over 2-week intervals, though not nearly by so much over the shorter interval I measured. In this spike-detection context it doesn’t matter terribly much either way.) Here’s the histogram of the actual data, with a bin size of 1 minute:

Now let’s figure out the exponential distribution. To be as clean as possible, I’m converting the actual measured time-between-blocks into a fraction of the average time between blocks. I.e., “2” on the axis means “2 times the average time between blocks over the interval I measured”. I’m overlaying this with an exponential distribution of rate 1, and scaling the whole thing such that the areas under the curve and the histogram are both equal to the total number of blocks in the sample. The bins are 1/5 of the average block time. The result is below:

What do we see? The histogram matches the exponential distribution pretty well. Most crucially, the first bin is not notably higher than the distribution predicts. According to the exponential distribution, some 18.12% of blocks should be found in the first 1/5 of the average block time. In fact, we see that 31 out of 202 blocks, or about 15.3%, are actually found in that period of time. Given the small sample size, we can’t really be that precise with the percentage. Using the binomial confidence interval, we can only say that with 95% confidence something between 11% and 21% blocks are actually being created in that first 1/5 of the average block time. But the expected 18.12% is comfortably within that range. Given that we expect about 18% in a fair mining scenario and we can rule out anything greater than about 21% with pretty high confidence, concerned Bitcoiners can perhaps breathe a little easier.
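If you want to check my interval, here’s a sketch using the simple normal approximation to the binomial (a different interval method may give slightly different endpoints, but they agree to within a percent or so for these numbers):

```python
import math

n, k = 202, 31                 # blocks observed; blocks within 1/5 of the mean gap
p_hat = k / n
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the proportion
z = 1.96                        # 95% confidence
lo, hi = p_hat - z * se, p_hat + z * se

expected = 1 - math.exp(-0.2)   # exponential prediction for the first 1/5 bin

print(f"observed fraction: {p_hat:.1%}")
print(f"95% confidence interval: [{lo:.1%}, {hi:.1%}]")
print(f"fair-mining prediction: {expected:.2%}")
```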

Some messiness: I calculated the times based on the arrival time of the blocks at my computer, with blocks being received through the blockchain.info websockets API. This assumes that the arrival time of blocks at my computer is the same as the posting of blocks to the chain, which is a possible source of systematic error. However, the interface is said to be low-latency, and provided the latency is significantly lower than 1 minute the effect should be minimal given our bin size. (Timing data is built into the blocks, but it’s wildly inconsistent for reasons which are not clear to me. I have not used it.) Additionally, there are a few cases where I apparently either missed blocks or received duplicates. My programming skill is likely at fault. This only happened a few times, and I rejected any time intervals involving non-consecutive blocks. Finally, the sample size is not terribly large, so this measurement is not terribly sensitive and could potentially miss small-scale selfish mining efforts.

Nonetheless, provided the potential sources of error in this measurement are not causing spurious results, we can thus currently conclude that the timing behavior of newly mined blocks is consistent with a blockchain that is being mined with fair methods.

I encourage interested parties to freely repeat and refine this test in the future to see if the situation changes. If you’re interested enough to want to keep a continuous watch, one easy method would be to keep a running average of the time between the last ~1000 blocks, and see if the number of blocks separated by less than 1/5 of that average exceeds about 225. If it did, it would not be a definite proof of selfish mining, but it would be a >99% statistical anomaly.
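Where does ~225 come from? With 1000 blocks and a fair-mining short-gap probability of 1 – e^{-0.2} ≈ 18.1%, a count of 225 sits several standard deviations above expectation. A sketch of the arithmetic:

```python
import math

n = 1000                          # window of recent blocks
p = 1 - math.exp(-0.2)            # fair-mining chance of a "short" gap
expected = n * p                  # expected count of short gaps: ~181
sd = math.sqrt(n * p * (1 - p))   # binomial standard deviation: ~12
sigmas = (225 - expected) / sd    # how anomalous 225 short gaps would be

print(f"expected short gaps: {expected:.0f} +/- {sd:.0f}")
print(f"225 short gaps would be a {sigmas:.1f}-sigma excess")
```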

[DISCLAIMER: I am not involved with Bitcoin, I don’t own any Bitcoin, and I am not interested in changing either of these things at the moment. Proselytization of the “Bitcoin is awesome!” or “Bitcoin is horrible!” varieties is going to be wasted, so try to avoid it in the comments. But I’m fascinated by the Bitcoin system from a mathematical and computational perspective, and am relatively well-versed in its technical details. Whether it’s a good or bad idea from a practical or economic perspective I leave to the early adopter crowd to find out.]

Is Bitcoin Currently Experiencing a Selfish Miner Attack? by Matthew Springer is licensed under a Creative Commons Attribution 4.0 International License.

All of you know about the experiments going on at the LHC, where particles are accelerated to an energy which is equivalent to an electron being accelerated through a potential difference of trillions of volts (which is what a “trillion electron volts” – a TeV – is). During the ensuing collisions between particles, high-energy TeV photons are produced. Of course everything is emitting light in the form of blackbody radiation all the time. Human beings emit mostly long-wavelength infrared, hot stoves emit shorter-wavelength infrared and red light, and hotter objects like the sun emit across a broad range of wavelengths which includes the entire visible spectrum. Here, from Wikipedia, is the spectrum of the sun:

This graph is given in terms of wavelength. For light, energy corresponds to frequency, and frequency is inversely proportional to wavelength. Longer wavelength, lower frequency. A TeV is a gigantic amount of energy, which corresponds to a gigantically high frequency and thus a wavelength that would be pegged way the heck off the left end of this chart, almost but not quite exactly at 0 on the x axis. Let me reproduce the same blackbody as the Wikipedia diagram, but cast in terms of frequency:

Here the x-axis is in hertz, and the y-axis is spectral irradiance in terms of watts per square meter *per hertz*. (That makes a difference – it’s not just the Wikipedia graph with the x-axis relabeled although it gives the same watts-per-square-meter value when integrated over the same bandwidth region.)

Ok, so what’s the frequency of a 1 TeV photon? Well, photon energy is given by E = hf, where h is Planck’s constant and f is the frequency. Plugging in, a 1 TeV photon has a frequency of about 2.4 x 10^{26} Hz. That’s way off the right end of the graph. Thus you might think the answer is zero – the sun never emits such high-energy photons. But then again that tail never quite reaches zero, and there’s a lot of TeVs per watt, and there’s a lot of square meters on the sun…
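That number is just E = hf rearranged; checking it in a few lines:

```python
h = 6.626e-34     # Planck's constant, J*s
eV = 1.602e-19    # joules per electron volt
E = 1e12 * eV     # 1 TeV expressed in joules
f = E / h         # E = hf, solved for the frequency
print(f"1 TeV photon frequency: {f:.2e} Hz")
```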

So to find out more exactly, let’s take a look at the actual equation which gave us that chart, Planck’s law for blackbody radiation (written per unit frequency, as spectral irradiance):

I(f, T) = (2πhf^{3}/c^{2}) · 1/(e^{hf/kT} - 1)

So you’d integrate that from 2.4 x 10^{26} Hz to infinity if you wanted to find how many watts per square meter the sun emits at those huge frequencies. (Here k is Boltzmann’s constant, which is effectively the scale factor that converts from temperature to energy.) That’s kind of an ugly integral, but we can simplify it. That e^{hf/kT} term? It’s indescribably big. The hf term is 1 TeV, and the kT is about 0.45 eV (which is a “typical” photon energy emitted by the sun), so the exponential is on the order of e^{2200000000000}. (The number of particles in the observable universe is maybe 10^{80} or so, for comparison.) Subtracting 1 from that gigantic number is absolutely meaningless, so we can drop it and end up with:

(2πh/c^{2}) f^{3} e^{-hf/kT}
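That e^{2200000000000} is nothing more than the ratio of the two energy scales:

```python
hf = 1e12    # photon energy in eV (1 TeV)
kT = 0.45    # typical solar thermal energy scale in eV
exponent = hf / kT
print(f"hf/kT = {exponent:.2e}")  # the Planck exponential is e raised to this power
```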

which means the answer in watts per square meter is

∫_{a}^{∞} (2πh/c^{2}) f^{3} e^{-hf/kT} df

where “a” is the 1 TeV lower cutoff (in Hz). That exponential term now has a negative sign, so it’s on the order of e^{-2200000000000}. I’d say this is a safe place to stop and say “The answer is zero, the sun has never and will never emit photons of that energy through blackbody processes.” But let’s press on just to be safe.

That expression above can be integrated pretty straightforwardly. I let Mathematica do it for me:

So that’s an exponential term multiplying a bunch of stuff. That bunch of stuff is a big number, because “a” is a big number and h is a tiny number in the denominator. I plug in the numbers and get that the stuff term is about 10^{93} watts per square meter, and you have to multiply that by the 10^{18} or so square meters on the surface of the sun. That’s a very big number, but it’s not even in the same sport as that e^{-2200000000000} term. Multiplying those terms together doesn’t even dent the e^{-2200000000000} term. It’s still zero for all practical purposes.

Which is a lot of work to say that our initial intuition was correct. 1 TeV from blackbody processes in the sun? Forget it.

Now blackbody processes aren’t the only things going on in the sun. I don’t think there are too many TeV scale processes of other types, but stars can be weird things sometimes. I’d be curious to know if astrophysicists would know of other processes which might bump the TeV rate to something higher.

[*Personal note: I’ve been absent on ScienceBlogs since April, I think. Why? Writing my dissertation, defending, and summer interning. The upshot of all that is those things are done and I’m now Dr. Springer, and I have a potentially permanent position lined up next year. And now I might even have time to write some more!*]

It got reposted by a bunch of people and provoked a tremendous amount of discussion (for a math topic, anyway), much of which was somewhere in the continuum between merely wrong and psychedelically incoherent. It’s not a new subject – a version of the image got discussed on Stack Exchange last year – but it’s an interesting one and hey, it’s not all that often that the subtle properties of the set of real numbers get press on Facebook. Let’s do a taxonomy of the real numbers and see what we can figure out about pi and whether or not it has the properties stated in the picture.

These are the counting numbers: 0, 1, 2, 3, 4… There’s an infinity of them, but there are gaps. If you have 5 dollars and you give half of them to your friend, you’re stuck. The number you need is not a natural number. If we want to be able to deal with ratios of natural numbers, we need more numbers so we can deal with those gaps between the natural numbers. We can include the set of natural numbers with negative signs in front of them, and we have what’s called the integers: …-3, -2, -1, 0, 1, 2, 3… Later on I won’t worry about explicitly discussing negative numbers, but of course all of the subsequent sets include negative numbers.

These are the ratios of integers, or fractions. Divide 1 by 4 and you get the rational number 1/4. We can write it in decimal notation as 0.25. Divide 1 by 3 and you have the rational number 1/3 = 0.333… All rational numbers have a decimal representation that either terminates or repeats infinitely. In fact, it’s better to say that all rational numbers have a decimal representation that repeats infinitely: 1/4 = 0.25000000… and we just happen to have a notation that suppresses trailing zeros. Sometimes you have to go out quite a ways before the repeat happens, but it always does. 115/151= 0.761589403973509933774834437086092715231788079470198675496688741721854304635761589… All rationals have repeating decimal representations, and all repeating decimals represent rational numbers.
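The “it always repeats” claim comes straight out of long division: there are only finitely many possible remainders, so one must eventually recur. Here’s a sketch that finds the length of the repeating block (the helper name is my own):

```python
from math import gcd

def decimal_period(numerator, denominator):
    """Length of the repeating block in the decimal expansion of a fraction."""
    d = denominator // gcd(numerator, denominator)
    # Factors of 2 and 5 only contribute a non-repeating prefix; strip them.
    for f in (2, 5):
        while d % f == 0:
            d //= f
    if d == 1:
        return 0  # "terminating" decimal, i.e. repeating zeros
    # Long division repeats when a remainder recurs; equivalently, the period
    # is the smallest k such that 10^k leaves remainder 1 when divided by d.
    k, power = 1, 10 % d
    while power != 1:
        power = (power * 10) % d
        k += 1
    return k

print(decimal_period(1, 4))      # 0 (0.25, then repeating zeros)
print(decimal_period(1, 3))      # 1 (0.333...)
print(decimal_period(115, 151))  # 75, matching the long example above
```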

The rational numbers are *dense*. Between any two rational numbers, there is another rational number. Which immediately implies that between any two rational numbers, there are an infinite number of rational numbers. Pick any point on the number line, and you’re guaranteed that you can find a rational number as close as you want to it. But alas, you’re not guaranteed that every point on the number line is a rational number. Some of them aren’t.

The square root of 2 is the most famous example of an irrational number. It’s the number which, when squared, gives exactly 2. It’s equal to 1.41421356237…, but the decimal representation never repeats. This is because there are no two integers A and B such that (A/B)^{2} = 2. You can get as close as you want: 7/5 = 1.4 is kind of close, and 3363/2378 is much closer still, but you’ll never find a rational number whose square is exactly 2. This can be rigorously proven, and it means that the square root of 2 is irrational: its decimal expansion never repeats.

The square root of two is a solution to the equation x^2 - 2 = 0. This is an example of a polynomial with integer coefficients. Another random example is a messier such polynomial which happens to have the irrational number x = 1.84302… as one of its solutions. Numbers which are solutions to these kinds of polynomials are the algebraic numbers.

Does all this mean the decimal expansion of the square root of 2 includes any and every combination of digits? Maybe. Maybe not.

Not all irrational numbers can be written in terms of the solutions of polynomials with integer coefficients. The ones that can’t are called transcendental numbers. Pi is one of them. So is Euler’s number e = 2.71828… Transcendental numbers are all irrational.

In a precise but somewhat technical mathematical sense, “almost all” real numbers are irrational. Throw a dart at the real number line and you will hit an irrational number with probability 1. This makes some intuitive sense. If you just start mashing random digits after a decimal point, it seems reasonable that you won’t just happen to make an infinitely repeating sequence. It turns out that the same thing is true of the transcendental numbers. “Almost all” real numbers are transcendental. But at the present time, even with hundreds of years of brilliant mathematicians pouring unfathomable effort into the problem, our toolkit for dealing with transcendental numbers is pretty sparse. It’s very difficult to prove that specific numbers are transcendental, even if they pretty obviously seem to be. Is transcendental? Almost certainly, but nobody has proved it.

Here’s a number called Liouville’s constant which is proven to be transcendental: 0.110001000000000000000001000000… (It has 1s at positions corresponding to factorials, 0s elsewhere.) It was among the first numbers known to be transcendental and was in fact explicitly constructed as an example of a transcendental number. It’s irrational, of course. It is an “infinite, nonrepeating decimal”, as the Facebook picture puts it. But is my DNA in it? Heck no, my phone number’s not even in it. Infinite and nonrepeating is *not* synonymous with “contains everything”.

A normal number is one whose decimal representation contains every string of digits on average as often as you’d expect them to occur by chance. So the digit 4 occurs 1/10th of the time, the digit string 39 occurs 1/100th of the time, the digit string 721 occurs 1/1000th of the time, and so on. All normal numbers are irrational. Normal numbers satisfy Takei’s criteria. Any finite string of digits occurs in the decimal representation of a normal number with probability 1.

Is pi a normal number? Nobody knows. If our toolkit is sparse for proving things about transcendental numbers, it’s almost completely empty for proving anything about normal numbers. There are a few contrived examples. The number 0.123456789101112131415… is normal in base 10 at least, and in fact it contains every finite string of digits, because it was constructed so that it would. It also satisfies the properties which Takei’s image ascribes to pi, though it also shows that these criteria aren’t especially profound. A string that contains all numbers turns out to contain all numbers, which is true but not all that impressive.
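That constructed number (it’s called Champernowne’s constant) is easy to generate and probe; here’s a sketch confirming that a digit string of your choice really does appear:

```python
# Build the first 100,000 digits of 0.123456789101112... (Champernowne's
# constant) by concatenating the positive integers.
chunks = []
total = 0
n = 0
while total < 100_000:
    n += 1
    chunks.append(str(n))
    total += len(str(n))
s = "".join(chunks)[:100_000]

# Any finite digit string must eventually appear. "2718" actually shows up
# at the boundary between 1827 and 1828 ("...18271828..."), long before the
# number 2718 itself is reached.
print("'2718' first appears at digit index", s.find("2718"))
print("all ten digits present:", all(str(d) in s for d in range(10)))
```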

But is this specific number normal in other bases? Nobody knows. Are there numbers that are normal in every base? Yes – again, “almost all” of them. Can I actually write out the first few digits of one? Nope. As far as I can tell, while examples of absolutely normal numbers have been given in terms of algorithms, there’s not yet been anyone who’s been able to start generating the digits of a provably absolutely normal number. [Edit: I think in the comments we’ve found in the literature an example of the first few digits of a provably absolutely normal number.]

Mathematicians love proof. I’m a physicist. I love proof too, but I’m a lot more willing to work with intuition and experiment. Do the billions of digits of pi that we’ve calculated act as though they’re distributed in the “random” way that the digits of an absolutely normal number ought to be distributed? Yes. Just about everyone suspects pi is absolutely normal. Same for e and the square root of 2 and the rest of the famous irrationals of math other than the ones that are obviously not normal. Numerical evidence is not dispositive though, and has misled mathematicians before.

If pi is absolutely normal, then Takei’s image is true. If you can prove this conjecture, you will have boldly gone where no one has gone before.

Let’s see if we can find some insight into a similar question: why are clouds white?

Clouds are made of water droplets. Pour yourself a glass of water. You’ll notice that the glass of water is not white. It is in fact perfectly clear. Well, glasses of water are big. Maybe tiny droplets are different. Dip a toothpick into the water and get the smallest droplet you can, and put it on a hard surface. You’ll be able to see the surface through the water with equal clarity. Even small drops of water are themselves clear.

So if clouds are water, and water is clear, why aren’t clouds clear? They look to us a lot like they reflect light. Light that shines on them bounces off, and when we’re looking at the bottom of clouds on a stormy day we see that most of the sunlight doesn’t make it through the clouds because it has reflected off the tops of the clouds.

Here’s a short New York Times piece attempting to answer the question. The answer given there is that droplets do scatter light through a process called Mie scattering, which is essentially just refraction. The direction of the incoming light gets bent and changed just as in the photograph of the droplet above. Crucially, Mie scattering is more or less independent of wavelength. If droplets scatter all colors of light, “all colors” is basically white.

That’s true, but not complete. Why should this make clouds *reflect* white? How is it that randomly-directed scattering can preferentially send the light back in the direction it came from?

Let’s look at the process in a little more detail. When light hits a drop, it gets redirected through refraction and scattering – mostly in the forward direction, but after this redirected light hits more drops the randomness of the orientations of the light and the drops washes out all information about the original direction of the incoming light. At this point, the direction of the light is random and unrelated to the direction of the incoming light. It seems paradoxical that this would end up causing the light to leave the cloud in the same direction it came in. The answer is the *random walk.*

Imagine you’re walking down the sidewalk and you flip a coin. Heads you take a step forward, tails you take a step back. You keep up this process for many flips of a coin. Your position over a thousand steps might look like this:

The point of an unbiased random walk is that you’re equally likely to go one way or another. Should you end up ten steps forward of where you started (call it y = 10), you’re equally likely to end up at y = 0 or y = 20 after any given number of further steps. If the end of the sidewalk is at y = 10,000, and you’re sitting at y = 10, it is much, much more probable for you to end up back at y = 0 before you wobble your way to y = 10,000. For instance, here’s a 100,000 step random walk:

It returns to 0 several times in the first 20,000 steps, and then by the 100,000th step has only managed to wander off to around y = 500. Should we keep stepping, we’re still much more likely to wobble back down the 500 steps to 0 than we are to wander the 9,500 steps up to y = 10,000.
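“Which end do you reach first” is the classic gambler’s-ruin problem: starting at y = s between absorbing ends at 0 and N, a fair walk hits 0 first with probability (N - s)/N. A scaled-down simulation (toy numbers of my own choosing, so it runs quickly):

```python
import random

random.seed(1)

def hits_zero_first(start, top):
    """Fair walk from `start`; step +/-1 until hitting 0 or `top`."""
    y = start
    while 0 < y < top:
        y += random.choice((-1, 1))
    return y == 0

# Scaled-down sidewalk: start at y = 10 with the far end at y = 100.
# Gambler's-ruin theory predicts returning to 0 first with
# probability (100 - 10) / 100 = 0.9.
start, top, trials = 10, 100, 2000
frac = sum(hits_zero_first(start, top) for _ in range(trials)) / trials
print(f"returned to 0 first in {frac:.3f} of walks (theory: 0.9)")
```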

And *that’s* why clouds are white. Each drop, or at least every several drops, is essentially a step of a random walk for the incoming light waves[2]. If the cloud is very large compared to the size of the step (which is on the order of a few times the distance between drops), then the light is much more likely to wander back out of the same side of the cloud it came in than it is to wander all the way to the other side.

This concept generalizes to all kinds of light scattering objects. White sand is mostly clear silica particles, but this same scattering process bounces the light back diffusely. In an astrophysical context, reflection nebulae work in similar ways:

Sometimes there’s a lot of insight to be gained by asking kid-style questions, even if the path to the answer is kind of random.

[1] You’re not alone if you think these look like stock market charts. There’s a pretty large body of academic literature that models financial markets as random walks with varying levels of bias.

[2] There’s a temptation to talk in terms of photons bouncing around. This temptation ought to be resisted. The process of scattering is entirely a classical wave phenomenon.

Pick up a comb, rub it with your hair and you have got some electric charge. Now shake it and you are generating an electromagnetic wave. Am I right?

Yes indeed. So why don’t we see light emitted when we brush our hair? Let’s run some numbers. If you wiggle around an electric point charge, electromagnetic radiation is emitted. The power carried by this radiation is given by the Larmor formula:

P = e^2 a^2 / (6π ε0 c^3)

Well, a comb isn’t a point charge. But if we’re just interested in an order-of-magnitude estimate, we can pretend it is. How much charge is on a comb? It’s probably a substantial overestimate, but the human body has a capacitance of around a hundred picofarads, which is part of why you can get slightly shocked when you rub your feet on carpet and touch a doorknob. For a purpose-built capacitor in an electronic circuit that’s pretty small, but a comb isn’t a purpose-built capacitor either so it’s not unreasonable to say that as an order of magnitude it has a capacitance of 100 picofarads. That doesn’t tell us how much charge it holds though, we also need to know the voltage. The voltage required to zap you with static electricity when you touch a doorknob is a surprisingly high number on the order of 10,000 volts, so we’ll say the comb is charged to that potential since a comb can hold enough charge to produce a spark. 100 picofarads multiplied by 10kV gives 1 microcoulomb.

That gives us the charge *e* for the Larmor formula. How about the acceleration *a*? Earth’s surface gravity is about 9.8 m/s^2, and I think a person can probably fling a comb around faster than that. Let’s be generous and call it 100 m/s^2. Plug all that into the formula:

So around a trillionth of a trillionth of a watt. That’s why combs don’t glow when you shake them. Well, the first of two reasons.
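Checking the estimate numerically, with the SI form of the Larmor formula, P = q^2 a^2 / (6π ε0 c^3):

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
c = 3.0e8          # speed of light, m/s

def larmor_power(q, a):
    """Radiated power in watts for charge q (C) with acceleration a (m/s^2)."""
    return q**2 * a**2 / (6 * math.pi * eps0 * c**3)

print(f"1 microcoulomb comb: {larmor_power(1e-6, 100):.1e} W")   # ~2e-24 W
print(f"million-coulomb comb: {larmor_power(1e6, 100):.1f} W")   # ~2.2 W
```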

Say you then went to Wal-Mart, bought a bucket of electrons, and dumped a million coulombs worth of charge on your comb. (This would in fact blast the comb to bits, but let’s pretend.) Now you’ve got a pretty bright 2.2 watts, but it would in fact still be invisible. You’re waving the comb around a few times per second, and the resulting electromagnetic wave will tend to have a similar frequency. These are extremely long-wavelength radio electromagnetic waves, which are invisible to our eyes.

Nonetheless, this is more or less how actual radio devices work. Waves are generated by moving electrons back and forth within the metal wire antenna. However, the quantity of charge that’s being moved around is large (the bazillion conduction electrons in the metal are all moving back and forth), and the acceleration that can be produced on an individual conduction electron by an applied electric field is pretty large as well.

A car driver going at some speed v suddenly finds a wide wall at a distance r. Should he apply brakes or turn the car in a circle of radius r to avoid hitting the wall?

My first thought was that surely the question wasn’t doable without more information, but it turns out that we do have enough to give a straightforward answer. Let’s take the “turns in a circle” and “slams on brakes” scenarios one at a time.

Velocity is a vector whose magnitude is the speed and whose direction is the direction of travel. If you turn, your speed remains the same but your direction of travel changes. So the velocity is changing even if the speed isn’t. A changing velocity is by definition an acceleration, and one of the key equations of first semester physics is the acceleration required to produce uniform circular motion. It turns out to be a function of the speed and the radius of the circle:

a = v^2 / r

Since we don’t have any numbers to plug in or really anywhere else to go with this, we’re done with this part. The required acceleration to avoid the wall is equal to the square of the speed divided by the radius of the circle, which is just the initial distance to the wall.

This one is a little more involved. The direction of the velocity is not changing, but the speed is. Another of the key equations of freshman physics is the formula for position in uniformly accelerated motion. It’s:

x = x0 + v0 t + (1/2) a t^2

where *a* is the acceleration, *v0* is the initial velocity, *x0* is the initial position, and *t* is the elapsed time. In this case we’d like to solve for a at the point where x = r (we define our coordinates such that x0 = 0). But we don’t know how much time has elapsed by the time the car reaches the wall, so we need the formula for velocity in uniformly accelerated motion, which we might write from memory or find by differentiating the position equation if we know calculus:

v = v0 + a t

Now I’ll use a subscript f to denote the specific time when the car reaches the wall. We know that we’ve come to a stop at that time, so we have:

$$ 0 = v_0 + a t_f $$

Which means

$$ t_f = -\frac{v_0}{a} $$

Don’t worry about the negative sign. *a* is itself negative (we’re decelerating), so *tf* will come out positive. Now that we know how much time has elapsed when the motion is complete, we can plug that into our position formula:

$$ x = x_0 + v_0 t_f + \frac{1}{2} a t_f^2 $$

Remember that at the wall *x* = *r*, and that we defined *x0* = 0. You can do the algebra to solve for *a*, and you’ll find that

$$ a = -\frac{v_0^2}{2r} $$

Which is (ignoring the minus sign that just tells us which way the acceleration is pointed) just half the acceleration we found for the turning scenario. So purely from a standpoint of the acceleration car tires can produce, braking works better than swerving.
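To put numbers on it, here’s a quick sanity check in Python. The speed and distance are arbitrary example values I made up; the factor of two between the scenarios doesn’t depend on them:

```python
# Required tire acceleration in the two scenarios, for an arbitrary
# example speed and distance (the factor of two is independent of both).
v = 30.0   # initial speed, m/s (roughly 67 mph)
r = 50.0   # initial distance to the wall, m

a_turn = v**2 / r          # uniform circular motion: a = v^2 / r
a_brake = v**2 / (2 * r)   # uniform deceleration to a stop over distance r

print(a_turn, a_brake)     # 18.0 9.0 -- braking needs exactly half the acceleration
```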

After writing this post, I came across a post on the ScienceBlogs site Dot Physics from a few years ago on the same subject. He approaches the problem in a different way, and I think it’s well worth reading both solution methods.

The Theoretical Minimum: What You Need to Know to Start Doing Physics

When this book appeared in my mailbox I judged it by its cover and was a little concerned. The problem with the cover is the name of one of the authors: Leonard Susskind. He’s an extremely talented physicist and writer, to be sure, but he’s a string theorist. Worse, he’s one of the major names behind the string theory landscape idea. Though I’m not a high-energy physicist and thus not terribly qualified to judge, I tend to classify the string theory landscape as somewhere between speculative and pseudoscience.

Beyond the cover, I am happy to report that my initial worries were absolutely incorrect. This is a charming and erudite instance of a genre with very few members – a pop-physics book with partial differential equations on a good fraction of the pages. The goal of the book, according to the foreword by Susskind (a physicist) and Hrabovsky (an engineer), is to give a substantive but not-textbook-detailed introduction to physics. Not just to teach *about* physics, as is the typical pop-physics book’s goal, but to actually teach physics.

The title refers to a slightly notorious requirement the great Soviet physicist Lev Landau put on his students before they could join his group. There was a level of knowledge of physics he called the “theoretical minimum”, which for him meant exhaustive mastery of theoretical physics. In the more limited goal of this book, the theoretical minimum is to understand physics as it actually works mathematically – beyond just the Scientific American level. Not to the level where you’re actually solving graduate textbook problems, but to the level where you know what the concept of a Lagrangian actually entails.

More impressive still is that the book entirely resists the temptation to skip to the good stuff – quantum mechanics and so on. This is a book which is purely about classical mechanics. More volumes are planned on electromagnetism and quantum mechanics, but for now this is the true basics. These basics of course turn out to be built into the fabric of electrodynamics and quantum mechanics, aside from the minor fact of the vast importance of classical mechanics in the world of practical problems.

The book succeeds admirably in its goal. It presents classical mechanics in all its glory, from forces to Hamiltonians to symmetry and conservation laws, in a casual but detailed style.

Hawking famously suggested that each equation halved the sales of a book, so the question here is whether or not you might be interested in reading The Theoretical Minimum if you haven’t learned calculus or don’t remember it. It’s a judgement call. I suspect you won’t get the whole experience if you haven’t at least seen calculus at some point in your life. But even a half-remembered course years ago is probably good enough – there’s a pretty substantial bit of mathematical refresher material presented in a visual and intuitive way. If in doubt, give it a try. On the other hand, a reader without any calculus background could probably pick up some of the flavor of the physics, but I wouldn’t recommend starting with this book.

I’m looking forward to the rest of the books in this series. They address a niche that sees very few solid attempts to fill.

*[Standard disclosure: the publisher sent me a free copy of the book to review. I am not otherwise compensated for this review.]*

Here’s the intensity (formally: power per area per unit solid angle per unit wavelength – whew!) of the radiation emitted by an object with the temperature of the sun, plotted as a function of wavelength in nanometers according to Planck’s law:

You’ll notice it also peaks around the same place as the spectral response of the human eye. Optimization!

Or is it? That previous equation was how much light the sun dumps out *per nanometer of bandwidth* at a given wavelength. But nothing stops us from plotting Planck’s law in terms of the frequency of the light:

In this case what’s on the y axis is power per area per unit solid angle *per frequency*. Ok, great. But notice it’s *not* just the previous graph with f given by c/λ. It’s a different graph, with different units. To see the difference, let’s see this radiance per frequency graph with the x-axis labeled in terms of wavelength:

Well. This is manifestly not the same graph as the radiance per nanometer. Its peak is lower, in the near infrared and outside the sensitivity curve of the human eye. This makes some sense – there’s not much frequency difference between light with a wavelength of 1 kilometer and light with a wavelength of 1 kilometer + 1 nanometer. But light of 100 nanometer wavelength has a frequency about 3 × 10^13 Hz more than light with wavelength 101 nanometers.

So what gives? Is the eye most sensitive where the sun emits the most light or not? The simple fact of the matter is there’s no such thing as an equation that just gives “how much light the sun puts out at a given wavelength”. That’s simply not a well-defined quantity. What is well defined is how much light the sun puts out *per nanometer* or *per hertz*. In this sense our eye isn’t optimized so that its response peak matches the sun’s emission peak, because “the sun’s peak” isn’t really a coherent concept. The sensitivity of our eyes is probably more strongly determined by the available chemistry – long-wavelength infrared light doesn’t have the energy to excite most molecular energy levels, and short-wavelength ultraviolet light is energetic enough to risk destroying the photosensitive molecules completely.
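A short numerical check makes the point concrete. The sketch below evaluates Planck’s law both per unit wavelength and per unit frequency for a blackbody at roughly the sun’s surface temperature (T = 5778 K is an assumed round figure) and locates the two peaks:

```python
import numpy as np

# Planck's law per unit wavelength and per unit frequency, for a
# blackbody at (assumed) solar surface temperature T = 5778 K.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K
T = 5778.0       # K

def B_lam(lam):
    """Spectral radiance per unit wavelength, W / (m^2 sr m)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

def B_nu(nu):
    """Spectral radiance per unit frequency, W / (m^2 sr Hz)."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

lam = np.linspace(100e-9, 3000e-9, 100_000)    # 100 nm to 3000 nm
peak_lam = lam[np.argmax(B_lam(lam))]          # peak of the per-wavelength curve

nu = c / lam                                   # the same spectral points, as frequencies
peak_lam_from_nu = c / nu[np.argmax(B_nu(nu))] # wavelength at the per-frequency peak

print(peak_lam * 1e9)          # ~501 nm: green, near the eye's peak sensitivity
print(peak_lam_from_nu * 1e9)  # ~882 nm: near infrared
```

Same physical spectrum, two different distribution functions: the per-wavelength peak lands in the green near 500 nm, while the per-frequency peak corresponds to a wavelength near 880 nm.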

This wavelength/frequency distribution function issue isn’t just a trivial point – it’s one of those things that actually gets physicists in trouble when they forget that one isn’t the same thing as the other. For a detailed discussion, I can’t think of a better one than this AJP article by Soffer and Lynch. Enjoy, and be careful out there with your units!

Mark and I have been conducting a debate/discussion over gun control in the United States. For the first round, here’s his post and my response. Here’s his second round post, and this post is my response.

First, let me summarize where the debate stands. We have four main topics as set forth in Mark’s posts: gun violence in “ordinary” crime, gun violence in the context of mass shootings, suggestions for gun control, and miscellaneous ancillary arguments. Most of the points in the ancillary category were fairly comprehensively covered, and I think both of us are pretty satisfied with what has been said. The exception is the “good guys with guns” argument, which we’ll continue.

Mark classifies my responses to the ordinary crime and mass shooting topics as “no problem” arguments. This is incorrect. I am trying to quantify the problem, and to quantify the impact of the proposed solutions. If it turns out that both these quantities are so small as to be classified as “no problem” in the mind of the reader, well, the numbers are what they are. I myself reject the idea that there is no problem. But I also reject the idea that argument from anecdote is an effective guide to the truth. We want to ask whether or not there is a problem which is caused by the prevalence of guns, and if so whether or not gun control could do anything to ameliorate it.

Let’s dive right in to the general gun crime topic.

Mark quotes an Institute of Medicine study which, in comparing the US to similar industrialized countries in terms of life expectancy, found that our homicide rate is far in excess of comparable OECD countries and significantly affects our life expectancy. The IOM study found our homicide rate to be 6.9 times higher than the other OECD countries and our gun homicide rate 19.5 times higher, and of the 23 countries in the study, the US was responsible for 80% of all firearm deaths.

There are two obvious questions. First, is the US comparable to those other OECD countries? Second, how much does gun control actually have to do with this?

The answer to the first question is an obvious no, and to demonstrate this we need look no farther than the very study linked. The US has higher than average death rates in almost every category from car accidents to disease, the highest rates of adolescent pregnancy, sexually transmitted diseases, diabetes, and so forth. (But not suicide, incidentally.) In fact, in the words of the study,

*On nearly all indicators of mortality, survival, and life expectancy, the United States ranks at or near the bottom among high-income countries.*

I’m not trying to insult my country – it’s a great place, much better in most of these categories than most of the rest of the world. However, comparisons to these 16 other top OECD nations are untenable. We aren’t comparable. We are different in almost every measurable respect involving health and mortality.

Well ok, guns obviously don’t give people diabetes or make teens pregnant, but “lots of guns, lots of violence” vs “not many guns, not much violence” might look less like correlation and more like causation. (At least relative to the not-very-comparable top of the OECD.) This conclusion is unwarranted and probably false. Here are some reasons, some of which I have mentioned in my last post.

1. US vs. OECD entirely aside, we can’t even easily compare US vs. US over time without running into extreme confounding variables. Our murder rate has been precipitously falling over the last few decades even as gun laws have become much looser (I do not claim a causal relationship). The last time our murder rate was as low as it is now, we were literally in the Leave It To Beaver era.

2. Murder rates vary wildly within the US under identical gun control regimes. White Americans, for instance, kill each other at roughly OECD rates (albeit on the high end), and well below the rates of eastern Europe and the Baltics. I shouldn’t have to point out that epidermis reflectivity doesn’t have squat to do with this. It does, however, show that socioeconomic and cultural variables overwhelmingly determine rates of violence.

3. Sharp changes in gun laws haven’t done anything significant to the homicide rates of other countries. The best-studied case is post-Port Arthur Australia. The effect on overall homicide rates was somewhere between negligible and nonexistent. The effect on gun homicide rates was similar. Let’s take a look at the study Mark cites:

*Additional research, readily available suggests a significant drop in the rate of gun violence after the ban. This suggests to me, both in the specific intervention, and overall given their tight regulation of handguns, that Australia is quite a strong example of gun control working.*

I will reproduce a few of the graphs from this paper, unedited. First, gun homicides and non-gun homicides:

The statisticians in the audience who have not died of heart attacks at the statistical illiteracy of the pre- and post- trend lines will of course notice that the overall decline in violence and gun violence continued just as it was doing before the gun control was implemented. In fact, the rate of *non*-gun violence displays a much more dramatic (though also statistically spurious) change. And this is Australia, the best possible scenario for the success of gun control. Gun control did nothing to the overall homicide rate. It didn’t even do anything to the gun homicide rate. (More graphs from the paper here, about accidental deaths and suicides, if you’re curious.)

4. Trying to account for confounding variables is extraordinarily difficult in this context, but a number of studies have attempted to do so. One study compares the prairie provinces of Canada with their bordering US states. In this case,

*Patterns of homicide in the United States and Canada were examined with a view to finding out whether the availability of firearms affects the homicide rate independently of the other social, demographic and economic factors in play. If this is the case, then low-homicide areas, which generally have fewer social and economic problems but the same access to firearms, should have a higher proportion of their homicides by firearms. This is not the case for the four border states.*

Other studies (commenter LH pointed out these two) have come to similar conclusions. Now I strongly suggest that you not read too much into these results – while if they are accurate they support my point, attempts to disentangle confounding variables are fraught with danger even when the result happens to land on my side.

In short, there is no good evidence that gun availability causes increased crime rates. There is extremely good evidence that socioeconomic variables are far and away the primary drivers of crime rates. Violence in general and gun violence in particular are real problems in the US, but gun control as a solution is so ill-supported as to verge on superstition.

While Mark and I are mostly focused on numerical metrics as to what effects gun control actually produces, it’s probably worth looking briefly at the practical problems of implementing it as well. Mark quotes former Australian prime minister John Howard writing on the Port Arthur gun control measures:

*In the end, we won the battle to change gun laws because there was majority support across Australia for banning certain weapons.*

Howard is right. In Australia, gun control was implemented with the overwhelming support of the population. This is not the case in the US. The change in support for gun control after Sandy Hook is marginal[1], and those opposed to it are *very* opposed to it and are voting with their wallets. The single week of December 17-23 likely saw almost a million new guns sold. Over the last month I’ve had occasion to be in five gun stores, and every one of them was completely sold out of every AR-15, every semi-automatic rifle of any description for that matter, every magazine holding >10 rounds, and every box of .223 ammo. Every online retailer I’ve checked is in the same boat. I personally have an outstanding parts order with Rock River Arms, and they’re backordered so badly they won’t even provide estimated lead times.

On to mass shootings. Both Mark and I as scientists run into some trouble here in that there is very little available systematic data of any kind. Trying to disentangle ordinary crime statistics from their confounding variables is hard enough, but the small-N statistics of mass murder are much harder still. We have noted that the Wikipedia lists of mass killings are similar in size in the US and Europe, and the US’s is slightly larger (119 vs. 100). The comparison is worse for the US on a per-capita basis, because Europe has a larger population. But it is clear that confounding cultural and socioeconomic factors are in play as well. Mexico, for instance, has a homicide rate about 4 times that of the US but as far as I can tell has never had a school shooting. (There have been a few “ordinary” murders at schools, but I have not been able to find any examples of a school shooting of the crazed-gunman variety.) Australia seems to have had some success with their gun control regime in the specific case of mass violence, but their success is probably not replicable in the US, which is (as I have pointed out) a very different place with 10 times the population and historically much higher levels of violence (gun and non-gun alike), to say nothing of the fact that we’re starting with gun ownership rates which are higher by a factor of 10.
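The per-capita arithmetic is easy to sketch. The incident counts are the ones from the Wikipedia lists discussed above; the populations are rough round-number assumptions on my part, not figures from those lists:

```python
# Counts from the Wikipedia lists discussed above; the populations are
# rough round-number assumptions, not figures from those lists.
us_killings, eu_killings = 119, 100
us_pop, eu_pop = 315e6, 740e6

us_rate = us_killings / us_pop * 1e6   # incidents per million people
eu_rate = eu_killings / eu_pop * 1e6

print(round(us_rate, 3), round(eu_rate, 3))  # 0.378 0.135 -- roughly 2.8x higher in the US
```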

We have a problem with mass violence. It’s a staggeringly rare problem, rarer than lightning strikes, but a dramatic and tragic one and one that deserves our best efforts to fix. The place to start is not a massive and likely completely ineffective reconstruction of a fundamental right exercised by nearly half the population of the country. As Mark and I have both pointed out, government overreactions to tragedy tend not to turn out well in this country. We know for a fact that the last iteration of the assault weapons ban failed to prevent Columbine or to do anything significant to either ordinary or mass violence during the ten years it was in effect.

Instead, we should start with the obvious basics. Physical security of the entrances to schools would be my focus if I were a principal. Improved accessibility of mental health treatment is also a good idea (though this is a tall order and the verdict on its effectiveness is still out). The occasional presence of resource officers and/or the elimination of the silly “gun free zone” designation could also be a good deterrent. This last point we’ll discuss separately at the end of the post, as it’s quite controversial.

Mark makes a few suggestions for tighter gun laws. His primary suggestion is:

…since magazine-fed semi-automatic weapons are the weapons of choice in the last few dozen of these shootings that before sale the purchaser should get a bit more eyeball by authorities. Specifically in regards to the VT shooter, the Aurora Shooter, or the Giffords shooter, I suggested increased scrutiny for these purchases, law-enforcement taught training and competence testing for their use, and I also suggested the Canadian voucher system (as did Kristof immediately after Sandy Hook), which would require two other people to stand up for you and say you are responsible enough to possess such a machine.

As I pointed out last time, “magazine-fed semi-automatic weapons” is a near-synonym for “all guns”. Most shootings involve semi-automatic firearms because most firearms are semi-automatic. But that’s a side point, and doesn’t really affect his argument too much. (He’s not advocating a ban, but more on this later.)

Let’s start with the idea of a voucher system. If I want to buy a gun, I have to find two people who are willing to put their name to paper asserting that I’m not an obvious nut. Let me give three reasons I think this might be a bad idea, and two reasons I think it might work. First, even the most scuzzy two-bit crooks can round up two scuzzy two-bit friends to sign for them. Second, anyone’s good-faith assessment of another’s character could prove to be wrong. Third, it could be prone to abuse – are there exorbitant filing fees involved? Can New York decide a person needs twenty signatures? I would suggest that if you object to, say, voter ID laws then you can see how such a voucher system might be problematic. But there are a few reasons it might work in some cases. While Bugsy Siegel wouldn’t have a problem getting signatures, obvious dead-eyed psychopaths like James Eagan Holmes or Seung-Hui Cho might have found it a hurdle. Secondly, the second amendment does talk in terms of civic purpose. While the right to bear arms is obviously an individual right[2] and US law defines the militia as all able-bodied males between 17 and 45, the civic purpose of the second amendment suggests that something like a voucher system in an otherwise permissive regulatory regime might fit the bill. I’d have to chew on the voucher idea for a while longer before deciding if I really think it’s a good idea, but on its face it seems much more in the spirit of the reason behind the right to keep and bear arms than do some other gun control suggestions.

Mark has also suggested greater scrutiny such as background checks for the private sale of guns. I’m much less sanguine about this. It would certainly accomplish nothing to prevent mass shootings – these weapons are usually purchased legally or stolen – but in the context of keeping guns out of the hands of crooks it seems like a reasonable place to start thinking. So we should ask ourselves what we might gain by implementing such a scheme. It’s an old staple of this debate to assert that criminals inherently aren’t inclined to have a lot of respect for gun laws. This can be countered by asserting that their respect for the law is irrelevant if there were no guns in the first place, but in terms of doing paperwork on transfers this response doesn’t work so well. Bugsy buys a gun for convicted felon Mugsy, cops trace the serial and ask Bugsy how Mugsy got the gun: “I dunno officer, he musta stole it”. In the meantime law-abiding gun owners are effectively forced into a registry and have to deal with the expensive bureaucratic morass of the FFL system. Maybe this could be sidestepped by some clever way of opening NICS to private parties other than FFLs, and such proposals ought to be heard out. Once somebody proposes one, anyway.

Training and competence testing? I’m all for people being trained and competent, but that has nothing to do with crime and violence and formal training is pretty expensive. I’d hate to see it made into an effective “no poor people need apply” restriction. Safe storage? Fantastic, especially for people with kids, but the same caveats apply.

Finally, we should discuss the ban vs. paperwork hoops issue:

*Every time you talk gun regulation at all it seems to become a ban in the pro-gun side’s mind. However, at no point, for any currently available weapon, have I suggested a ban. Just paperwork. It’s not the end of the world people.*

This is true, and fair enough as it goes. We gun-rights types are justifiably a bit jumpy about this sort of thing. It would be nice if Mark were the one writing the various laws being proposed in congress and various state legislatures. Unfortunately it’s people like Dianne “Turn ‘em all in” Feinstein and Carolyn “Shoulder thing that goes up” McCarthy and Andrew “Confiscation could be an option” Cuomo. It’s great for the two of us to discuss our Platonic ideals of the way things ought to be, but we also have to remember that we’re dealing with members of the world’s second oldest (and least reputable) profession. Since their stated intent is to take a mile, I’m not very willing to give them any free inches without an airtight case as to effectiveness and respect for the rights of the law-abiding.

Finally let’s return to the idea of stopping shootings via “good guys with guns”. Quoting Mark:

*In the vast majority of cases, mass shootings are stopped when the perpetrator is shot…by themselves. Do we have evidence of police or armed citizens interrupting even one of the mass shootings in the last 20 years? Do we have any evidence of good guys with guns making a dent except after the shooting is done? Nope.*

The “Nope” is a link to a Mother Jones article which actually lists five cases in which good guys with guns did just that. Mother Jones’ point is that each of the five cases listed magically doesn’t count because the citizens involved were current or former law enforcement or military, not (say) some dentist who just decided to get a concealed carry permit. I’m not sure that this tells us much more than that people with experience are more likely to get permits and that ordinary citizens’ permits are not generally valid in the places where mass shootings occur, but in any case it kills the argument that armed citizens can’t possibly accomplish anything positive. While active uniformed police haven’t actually shot many mass killers, it is probably more than coincidence that the perpetrators tend to shoot themselves right when police arrive (Lanza and Cho are prominent examples). This is also alleged to have happened in the Clackamas shooting when a citizen with a concealed carry permit drew his weapon, but as this is not independently verifiable Mark (not unreasonably) dismisses it and I won’t try to build a case around it. Mark also mentions the fact that an officer was present at the initial stage of the Columbine attack but failed to stop the shooting. This is roughly as out-of-date as insisting that passenger resistance to hijackers is futile because it failed to stop 9/11 – at the time it was generally believed that these were hostage situations, and that the proper response was to wait until it was all sorted out much later. This mistake is no longer made.

It is possible, and it has happened, that in the process of trying to stop a mass killer a person carrying could get themselves killed. As Mark says

*It’s not as easy as it looks in the movies, and the usual creepy fantasist gun lover who buys into this myth is not John McCain, he’s Walter Mitty.*

Ok, ok, I can’t resist: Walter Mitty would probably fantasize about being Die Hard hero John McClane, not the senior senator from Arizona. But I’m at a loss to see how this is an argument against resistance. Am I extra-dead if I get killed while trying and failing to resist? All that’s being asked is that the situation be an improvement on an unopposed mass shooter, who is at any rate hardly Hans Gruber either. (Neither are ordinary criminals. See here for an example which is simultaneously horrifying and hilarious.) Same thing for the Mother Jones hysterics here:

*They also make it more difficult for law enforcement officers to do their jobs. “In a scenario like that,” McMenomy told me recently, “they wouldn’t know who was good or who was bad, and it would divert them from the real threat.”*

In the billions of man hours that millions of permit holders spend carrying every year, this has literally never happened. This should not be a surprise. Defensive shootings almost exclusively take place at very short ranges and are over in seconds. As I said in the last post, it’s not possible for me to claim that police or armed citizens are a panacea. The statistical data is badly inadequate. But what data we do have indicates that the concept is plausible in principle.

All right, it’s about time to conclude this Part 2. In two-sentence summary: Gun violence is bad. Gun *laws* have very little to do with it.

[1] A policy is not automatically good or bad based on how it polls, of course. And sometimes public opinion doesn’t make a lot of internal sense anyway. The assault weapons ban polls rather poorly (sub 50% in the Gallup poll), but universal background checks poll very well even though none of the mass shooters in recent years acquired their weapons through private sale. Go figure.

[2] Even the four dissenting justices in DC v. Heller agree. They disagree as to the scope of this right, but agree that it is an individual right. The first lines of the dissent:

*The question presented by this case is not whether the Second Amendment protects a “collective right” or an “individual right.” Surely it protects a right that can be enforced by individuals. But a conclusion that the Second Amendment protects an individual right does not tell us anything about the scope of that right.*

One of the major reasons for this is the fact that toys are pretty simple. They have just a few moving parts for the computer to keep track of during the rendering process. People have many more. Every hair on a person’s head really consists of many moving parts, since it can bend anywhere along its length to various degrees. And there are around a hundred thousand hairs which interact with each other, applying force to the other hairs they touch. Since it’s pretty much impossible to place every computer-animated hair by hand, if you want convincing hair in your animated films you need a good model for how the physics of hair works.

By the time Toy Story 3 came around, faster computers and better knowledge of how hair behaves made the problem of animating humans a lot more tractable:

Now we’re one step farther in our understanding of hair, with an interesting research article in Physical Review Letters:

Phys. Rev. Lett. 108, 078101 (2012)

Raymond E. Goldstein, Patrick B. Warren, and Robin C. Ball

Shape of a Ponytail and the Statistical Physics of Hair Fiber Bundles

They’ve attempted to create a model of why ponytails have the shape they do. They postulate that the energy of a bundle of hair is a function of the average curvature of the hairs (i.e., the overall shape of the ponytail), the potential energy due to the gravitational field of the earth, and an average force per length due to the statistical properties of the individual hairs – their points of contact, waviness, split ends, whatever. The equation from the paper is:

Where the κ term is the average curvature, the φ term is the gravitational potential, and the u term is an average over the statistics of the individual hairs. How well does it work? Pretty well:

Pretty good, especially for a model which is effectively two-dimensional.

You might wonder why anyone would bother researching this. (This particular work even won an Ig Nobel prize.) Like many seemingly weird bits of science, there’s actually quite a bit of practical point to it. Many-body problems are hard, and any advance that allows you to avoid having to do a full-blown simulation has the potential to be extremely useful. I mentioned computer animation as a flashy example, but this research is useful in any kind of bundled-fiber system from fiber-optics telecommunications to the medical treatment of the fiber bundles in your body.

For a five-second hairstyle, that’s not a bad day’s work.
