mspringer https://scienceblogs.com/ en Is Bitcoin Currently Experiencing a Selfish Miner Attack? https://scienceblogs.com/builtonfacts/2014/01/11/is-bitcoin-currently-experiencing-a-selfish-miner-attack <span>Is Bitcoin Currently Experiencing a Selfish Miner Attack?</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Probably not.</p> <p>All right, now that you know my conclusion, let's see how to get there with data. First, some background.</p> <p>Let me give a very quick overview of Bitcoin in this context. (There are <a href="https://en.bitcoin.it/wiki/Introduction">many comprehensive overviews elsewhere</a>.) Bitcoin is an ongoing ledger of transactions along the lines of "This guy had 5 Bitcoins, and he sent 2 of them to that guy. Now he has 3 Bitcoins." The transaction ledger is public, which prevents people from sending coins they already sent - the ledger rejects invalid transactions. Everyone on the network has a copy of the ledger. New transactions are appended to the ledger in the form of blocks containing a bunch of transactions. The ledger is formally called the blockchain, because it's a chain of blocks which contain descriptions of transactions. There's a bunch of cryptography which prevents people from making transactions involving coins they don't control, but we don't care about that for now. What counts here is how to add new blocks to the blockchain.</p> <p>Anybody can add new blocks, but you don't want any one person to have full control over this. The process of adding new blocks is called mining because the person who adds a block is currently rewarded with 25 Bitcoins. Each miner puts a valid block together and attempts to append it to the end of the chain. They do this by running a random number generator on their computer which spits out random numbers at a colossal rate, and the first miner whose random number generator spits out an appropriate number is the miner whose block is appended to the chain. Then everybody starts work on the next block, hoping to be the one to win the RNG lottery this time. (The details of this process are a bit involved, but the required output of the RNG is tuned such that a block will be appended about every 10 minutes on average. The RNG involves a cryptographic hash, which is why the number-generation process is called hashing.)</p> <p>The "selfish miner" attack, <a href="http://arxiv.org/abs/1311.0243">proposed by Ittay Eyal and Emin Gun Sirer of Cornell</a>, is a way that a dishonest miner could finesse the protocol and win more blocks than their percentage of the overall hash rate would indicate. In this attack, a miner finds a new block but doesn't immediately distribute it to the network so everyone can get to work on the next one. Instead, the miner begins work on the block which would follow their unreleased block. If they find the next block in the chain, they keep going. When they see that the rest of the miners have found a block, the selfish miners quickly release all the work they've done - which might be several blocks, which means their blocks get added to the chain because in the event of a conflict the longest chain wins. The rest of the miners never had a shot at those previously hidden blocks, so some of their hashing power was wasted. The selfish miners could thus generate blocks disproportionately faster than their percentage of the total hashing power would indicate.
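</p>
<p>To get a feel for how large this effect can be, here is a rough Monte Carlo sketch of the strategy as I read it from the Eyal-Sirer paper (my own toy code, not theirs), under the pessimistic-for-the-attacker assumption that the selfish pool loses every equal-length race:</p>
<pre>
import random

def selfish_share(alpha, blocks=200000, seed=0):
    """Crude simulation of the Eyal-Sirer selfish-mining strategy.

    alpha is the selfish pool's fraction of the total hash rate. Ties are
    always assumed lost by the pool (the paper's gamma = 0 case). Returns
    the fraction of main-chain blocks that end up credited to the pool.
    """
    rng = random.Random(seed)
    lead = 0                      # private blocks the pool is withholding
    pool, others = 0, 0
    for _ in range(blocks):
        if rng.random() < alpha:  # pool finds the next block...
            lead += 1             # ...and keeps it private
        else:                     # the rest of the network finds one
            if lead == 0:
                others += 1       # ordinary honest block, accepted normally
            elif lead == 1:
                # Pool publishes its one block; equal-length race, decided
                # here by whoever finds the next block (pool loses all ties).
                if rng.random() < alpha:
                    pool += 2
                else:
                    others += 2
                lead = 0
            elif lead == 2:
                pool += 2         # publish both, orphaning the honest block
                lead = 0
            else:                 # lead of 3 or more: release one block
                pool += 1
                lead -= 1
    return pool / (pool + others)

for alpha in (0.25, 0.30, 0.35, 0.40, 0.45):
    print(f"hash share {alpha:.2f} -> revenue share {selfish_share(alpha):.3f}")
</pre>
<p>Run over a few hundred thousand simulated blocks, the pool's revenue share comes out below its hash share for small pools and climbs above it once the pool controls more than about a third of the network, which lines up with the thresholds discussed next.</p>
<p>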
If the selfish miners can generate more than one block before the next fair-mined block is found, they always win because they have the longer chain.</p> <p>Selfish miners can do even better if they can manage to win more of the "ties" where they find the first block but the fair miners post a block before the selfish miners find their second block. If the selfish miners are able to quickly release their single block before the honest miner's block can propagate through the network, they'll always be at an advantage relative to the honest miners. This takes some work, because they have to have lots of nodes which can react quicker than honest blocks can propagate. But even if the selfish miners can only get their single blocks accepted half the time, they'll still come out ahead if they have more than 1/4 of the hashing power. And in fact even if the selfish single blocks never beat the honest blocks in their distribution throughout the network, the selfish miners will still come out ahead if they have more than 1/3 of the total hashing power. A salient question is thus how to detect whether or not such an attack is in progress.</p> <p>Note that mining is essentially a lottery. The miners generate a ton of random hashes, and when one of them happens to win, that block is valid and may be posted to the blockchain. Each hash is essentially a ticket to a lottery with 1 in a quadrillion odds. The creation of each new block is a new lottery, but the odds are the same and there's no pause between blocks. Thus, each win is an independent event. The number of independent random events in a given time interval is <a href="https://en.wikipedia.org/wiki/Poisson_distribution">Poisson distributed</a>, and the time between events in a Poisson process is <a href="https://en.wikipedia.org/wiki/Exponential_distribution">exponentially distributed</a>.</p> <p>But dishonest mining relies on quickly publishing as soon as one of those honestly-generated blocks is found. This means the creation of the next block is no longer uncorrelated with the creation of the block before. Block creation would not be independently distributed. Thus a dishonest miner will appear in the statistics of new blocks. If blocks are being generated in clusters, there'll be a spike in the distribution of time-between-blocks near t = 0, in comparison to the smooth falloff of the exponential distribution. This t = 0 spike would be a characteristic signature of the presence of dishonest mining.</p> <p>So let's take a look at the exponential distribution for a rate of 1 per 10 minutes (the nominal block creation rate):</p> <p style="text-align: center;">$latex \displaystyle f(x;\lambda) = \lambda e^{-\lambda x} &amp;s=2$</p> <div style="width: 370px;display:block;margin:0 auto;"><a href="/files/builtonfacts/files/2014/01/bitcoin3.png"><img class="size-full wp-image-2018" alt="Exponential distribution for a rate of 0.1 per minute, x-axis in minutes" src="/files/builtonfacts/files/2014/01/bitcoin3.png" width="360" height="227" /></a> Exponential distribution for a rate of 0.1 per minute, x-axis in minutes </div> <p>λ is the average rate at which events happen, here 0.1 per minute. Of course with independent events something is just as likely to happen in minute number 35 as it is minute 1, but the exponential distribution measures the probability of when the <em>first</em> event will happen after you start your timer.
This happening in minute 35 means it didn't happen in any of the previous minutes, which is pretty unlikely, so the exponential distribution trails off as you'd expect. Honest mining should follow this distribution. Dishonest mining involves releases of blocks immediately subsequent to a previous block, so more events will happen in the first minute than the exponential distribution suggests.</p> <p>To test this against the actual blockchain as it's created, I wrote a Python script to monitor the timing of blocks as they're released into the blockchain. I let this script run for 202 blocks (a little over a day), and if you want to review the data yourself <a href="http://pastebin.com/0PWzrgSG">here's the CSV file</a>. (Note the 'messiness' paragraph below for caveats.) I have binned out this data into one-minute increments and plotted a histogram. (The technically knowledgeable will note that Bitcoin block rate is not strictly constant because of the growth of the hash rate. It varies by up to about 10% over 2-week intervals, though not nearly by so much over the shorter interval I measured. In this spike-detection context it doesn't matter terribly much either way.) Here's the histogram of the actual data, with a bin size of 1 minute:</p> <div style="width: 370px;display:block;margin:0 auto;"><a href="/files/builtonfacts/files/2014/01/bitcoin2.png"><img class="size-full wp-image-2016" alt="Histogram of times between consecutive blocks, in seconds" src="/files/builtonfacts/files/2014/01/bitcoin2.png" width="360" height="230" /></a> Histogram of times between consecutive blocks, in minutes </div> <p>Now let's figure out the exponential distribution. To be as clean as possible, I'm converting the actual measured time-between-blocks into a fraction of the average time between blocks. Ie, "2" on the axis means "2 times the average time between blocks over the interval I measured". I'm overlaying this with an exponential distribution of rate 1, and scaling the whole thing such that the areas under the curve and the histogram are both equal to the total number of blocks in the sample. The bins are 1/5 of the average block time. The result is below:</p> <div style="width: 370px;display:block;margin:0 auto;"><a href="/files/builtonfacts/files/2014/01/bitcoin1.png"><img class="size-full wp-image-2017" alt="Histogram and exponential distribution, x-axis in minutes" src="/files/builtonfacts/files/2014/01/bitcoin1.png" width="360" height="230" /></a> Histogram and exponential distribution, x-axis in fractions of average block time </div> <p>What do we see? The histogram matches the exponential distribution pretty well. Most crucially, the first bin is not notably higher than the distribution predicts. According to the exponential distribution, some 18.12% of blocks should be found in the first 1/5 of the average block time. In fact, we see that 31 out of 202 blocks, or about 15.3%, are actually found in that period of time. Given the small sample size, we can't really be that precise with the percentage. Using the <a href="http://www.measuringusability.com/wald.htm">binomial confidence interval</a>, we can only say that with 95% confidence something between 11% and 21% blocks are actually being created in that first 1/5 of the average block time. But the expected 18.12% is comfortably within that range. 
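</p>
<p>For anyone who wants to redo the arithmetic, here is a minimal sketch of the check (it uses the plain normal-approximation interval, so the endpoints come out a percentage point or so different from the calculator linked above):</p>
<pre>
import math

blocks = 202        # intervals in the sample
observed = 31       # intervals shorter than 1/5 of the average block time

# For an exponential distribution, P(t < 0.2 * mean) = 1 - exp(-0.2)
expected = 1 - math.exp(-0.2)

p_hat = observed / blocks
# 95% normal-approximation (Wald) interval for the true fraction
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / blocks)

print(f"expected under fair mining: {expected:.1%}")
print(f"observed: {p_hat:.1%}, 95% CI about {p_hat - half_width:.1%} to {p_hat + half_width:.1%}")
</pre>
<p>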
Given that we expect about 18% in a fair mining scenario and we can rule out anything greater than about 21% with pretty high confidence, concerned Bitcoiners can perhaps breathe a little easier.</p> <p>Some messiness: I calculated the times based on the arrival time of the blocks at my computer, with blocks being received through the <a href="https://blockchain.info/api/api_websocket">blockchain.info websockets API</a>. This assumes that the arrival time of blocks at my computer is the same as the posting of blocks to the chain, which is a possible source of systematic error. However, the interface is said to be low-latency, and provided the latency is significantly lower than 1 minute the effect should be minimal given our bin size. (Timing data is built into the blocks, but it's wildly inconsistent for reasons which are not clear to me. I have not used it.) Additionally, there are a few cases where I apparently either missed blocks or received duplicates. My programming skill is likely at fault. This only happened a few times, and I rejected any time intervals involving non-consecutive blocks. Finally, the sample size is not terribly large, so this measurement is not terribly sensitive and could potentially miss small-scale selfish mining efforts.</p> <p>Nonetheless, provided the potential sources of error in this measurement are not causing spurious results, we can thus currently conclude that the timing behavior of newly mined blocks is consistent with a blockchain that is being mined with fair methods.</p> <p>I encourage interested parties to freely repeat and refine this test in the future to see if the situation changes. If you're interested enough to want to keep a continuous watch, one easy method would be to keep a running average of the time between the last ~1000 blocks, and see if the number of blocks separated by less than 1/5 of that average exceeds about 225. If it did, it would not be a definite proof of selfish mining, but it would be a &gt;99% statistical anomaly.</p> <p>[DISCLAIMER: I am not involved with Bitcoin, I don't own any Bitcoin, and I am not interested in changing either of these things at the moment. Proselytization of the "Bitcoin is awesome!" or "Bitcoin is horrible!" varieties is going to be wasted, so try to avoid it in the comments. But I'm fascinated by the Bitcoin system from a mathematical and computational perspective, and am relatively well-versed in its technical details. Whether it's a good or bad idea from a practical or economic perspective I leave to the early adopter crowd to find out.]</p> <p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img style="border-width: 0;" alt="Creative Commons License" src="http://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /> Is Bitcoin Currently Experiencing a Selfish Miner Attack? by <a href="http://scienceblogs.com/builtonfacts" rel="cc:attributionURL">Matthew Springer</a> is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>.</p> </div> <span><a title="View user profile." href="/author/mspringer" lang="" about="/author/mspringer" typeof="schema:Person" property="schema:name" datatype="">mspringer</a></span> <span>Sat, 01/11/2014 - 04:58</span> Sat, 11 Jan 2014 09:58:59 +0000 mspringer 121052 at https://scienceblogs.com How often does the sun emit 1 TeV photons? 
https://scienceblogs.com/builtonfacts/2013/11/27/how-often-does-the-sun-emit-1-tev-photons <span>How often does the sun emit 1 TeV photons?</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>I had an interesting question posed to me recently: how frequently does the sun emit photons with an energy greater than 1 TeV?</p> <p>All of you know about the experiments going on at the LHC, where particles are accelerated to an energy which is equivalent to an electron being accelerated through a potential difference of trillions of volts (which is what a "trillion electron volts" - a TeV - is). During the ensuing collisions between particles, high-energy TeV photons are produced. Of course everything is emitting light in the form of blackbody radiation all the time. Human beings emit mostly long-wavelength infrared, hot stoves emit shorter-wavelength infrared and red light, hotter objects like the sun emit across a broad range of wavelengths which include the entire visible spectrum. Here, from Wikipedia, is the spectrum of the sun:</p> <p> </p> <div style="width: 570px;display:block;margin:0 auto;"><a href="https://en.wikipedia.org/wiki/File:Solar_Spectrum.png"><img class=" wp-image-1972 " alt="Solar spectrum." src="/files/builtonfacts/files/2013/11/Solar_Spectrum.png" width="560" height="417" /></a> Solar spectrum. </div> <p>This graph is given in terms of wavelength. For light, energy corresponds to frequency, and frequency is inversely proportional to wavelength. Longer wavelength, lower frequency. A TeV is a gigantic amount of energy, which corresponds to a gigantically high frequency and thus a wavelength that would be way the heck off the left end of this chart, pegged almost but not quite exactly at 0 on the x axis. Let me reproduce the same blackbody as the Wikipedia diagram, but cast in terms of frequency:</p> <div style="width: 370px;display:block;margin:0 auto;"><a href="/files/builtonfacts/files/2013/11/spectrum.png"><img class="size-full wp-image-1975" alt="spectrum" src="/files/builtonfacts/files/2013/11/spectrum.png" width="360" height="208" /></a> Same spectrum, in terms of frequency </div> <p>Here the x-axis is in hertz, and the y-axis is spectral irradiance in terms of watts per square meter <em>per hertz</em>. (That <a href="/node/121045">makes a difference</a> - it's not just the Wikipedia graph with the x-axis relabeled although it gives the same watts-per-square-meter value when integrated over the same bandwidth region.)</p> <p>Ok, so what's the frequency of a 1 TeV photon? Well, photon energy is given by E = hf, where h is Planck's constant and f is the frequency. Plugging in, a 1 TeV photon has a frequency of about 2.4 x 10<sup>26</sup> Hz. That's way off the right end of the graph. Thus you might think the answer is zero - the sun never emits such high-energy photons. But then again that tail never quite reaches zero, and there's a lot of TeVs per watt, and there's a lot of square meters on the sun...</p> <p>So to find out more exactly, let's take a look at the actual equation which gave us that chart: Planck's law for blackbody radiation:</p> <p style="text-align: center;">$latex \displaystyle B(f) = \frac{ 2 h f^3}{c^2} \frac{1}{e^\frac{h f}{kT} - 1}&amp;s=2$</p> <p style="text-align: left;">So you'd integrate that from 2.4 x 10<sup>26</sup> Hz to infinity if you wanted to find how many watts per square meter the sun emits at those huge frequencies.
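</p>
<p style="text-align: left;">Before doing that, here is a quick back-of-the-envelope check of those numbers in a few lines of Python. The 5800 K surface temperature is my own round figure, not a number from the post, and the exponent computed at the end is the hf/kT ratio that shows up in the next step:</p>
<pre>
import math

h = 6.626e-34    # Planck's constant, J s
k = 1.381e-23    # Boltzmann's constant, J/K
eV = 1.602e-19   # joules per electron volt

E = 1e12 * eV                 # 1 TeV in joules
print(f"1 TeV photon frequency: {E / h:.2e} Hz")        # roughly 2.4e26 Hz

T = 5800.0                    # rounded solar surface temperature, K
print(f"kT at the solar surface: {k * T / eV:.2f} eV")  # about half an eV
print(f"hf/kT for a 1 TeV photon: {E / (k * T):.1e}")   # a couple of trillion
</pre>
<p style="text-align: left;">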
(Here k is Boltzmann's constant, which is effectively the scale factor that converts from temperature to energy.) That's kind of an ugly integral though, but we can simplify it. That $latex e^\frac{h f}{kT}$ term? It's indescribably big. The hf term is 1 TeV, and the kT is about 0.45 eV (which is a "typical" photon energy emitted by the sun), so the exponential is on the order of e<sup>2200000000000</sup>. (The number of particles in the observable universe is maybe 10<sup>80</sup> or so, for comparison.) Subtracting 1 from that gigantic number is absolutely meaningless, so we can drop it and end up with:</p> <p style="text-align: center;">$latex \displaystyle B(f) = \frac{ 2 h f^3}{c^2} e^{-\frac{h f}{kT}}&amp;s=2$</p> <p style="text-align: left;">which means the answer in watts per square meter is</p> <p style="text-align: center;">$latex \displaystyle I = \int_{a}^{\infty}\frac{ 2 h f^3}{c^2} e^{-\frac{h f}{kT}} \, df&amp;s=2$</p> <p style="text-align: left;">where "a" is the 1  TeV lower cutoff (in Hz). That exponential term now has a negative sign, so it's on the order of e<sup>-2200000000000</sup>. I'd say this is a safe place to stop and say "The answer is zero, the sun has never and will never emit photons of that energy through blackbody processes." But let's press on just to be safe.</p> <p style="text-align: left;">That expression above can be integrated pretty straightforwardly. I let Mathematica do it for me:</p> <p style="text-align: center;">$latex \displaystyle I = e^{-\frac{a h}{k T}}\frac{2 k T (a^3 h^3+3 a^2 h^2 k T+6 a h k^2 T^2+6 k^3 T^3 )}{c^2 h^3}&amp;s=2$</p> <p style="text-align: left;">So that's an exponential term multiplying a bunch of stuff.  That bunch of stuff is a big number, because "a" is a big number and h is a tiny number in the denominator. I plug in the numbers and get that the stuff term is about 10<sup>93</sup> watts per square meter, and you have to multiply that by the 10<sup>18</sup> or so square meters on the surface of the sun. That's a very big number, but it's not even in the same sport as that e<sup>-2200000000000</sup> term. Multiplying those terms together doesn't even dent the  e<sup>-2200000000000</sup> term. It's still zero for all practical purposes</p> <p style="text-align: left;">Which is a lot of work to say that our initial intuition was correct. 1 TeV from blackbody processes in the sun? Forget it.</p> <p style="text-align: left;">Now blackbody processes aren't the only things going on in the sun. I don't think there are too many TeV scale processes of other types, but stars can be weird things sometimes. I'd be curious to know if astrophysicists would know of other processes which might bump the TeV rate to something higher.</p> <p style="text-align: left;">[<em>Personal note: I've been absent on ScienceBlogs since April, I think. Why? Writing my dissertation, defending, and summer interning. The upshot of all that is those things are done and I'm now Dr. Springer, and I have a potentially permanent position lined up next year. And now I might even have time to write some more!</em>]</p> </div> <span><a title="View user profile." href="/author/mspringer" lang="" about="/author/mspringer" typeof="schema:Person" property="schema:name" datatype="">mspringer</a></span> <span>Wed, 11/27/2013 - 09:46</span> Wed, 27 Nov 2013 14:46:14 +0000 mspringer 121051 at https://scienceblogs.com Everything in Pi... maybe. https://scienceblogs.com/builtonfacts/2013/04/12/everything-in-pi-maybe <span>Everything in Pi... 
maybe.</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>George Takei posted the following thing to Facebook recently:</p> <p><a href="/files/builtonfacts/files/2013/04/pi.jpg"><img class="aligncenter size-full wp-image-1952" alt="pi" src="/files/builtonfacts/files/2013/04/pi.jpg" width="500" height="750" /></a></p> <p>It got reposted by a bunch of people and provoked a tremendous amount of discussion (for a math topic, anyway), much of which was somewhere in the continuum between merely wrong and psychedelically incoherent. It's not a new subject - a version of the image got discussed on <a href="http://math.stackexchange.com/questions/216343/does-pi-contain-all-possible-number-combinations">Stack Exchange</a> last year - but it's an interesting one and hey, it's not all that often that the subtle properties of the set of real numbers get press on Facebook. Let's do a taxonomy of the real numbers and see what we can figure out about pi and whether or not it has the properties stated in the picture.</p> <h3>The Natural Numbers and Integers</h3> <p>These are the counting numbers: 0, 1, 2, 3, 4... There's an infinity of them, but there are gaps. If you have 5 dollars and you give half of them to your friend, you're stuck. The number you need is not a natural number. If we want to be able to deal with ratios of natural numbers, we need more numbers so we can deal with those gaps between the natural numbers. We can include the set of natural numbers with negative signs in front of them, and we have what's called the integers: ...-3, -2, -1, 0, 1, 2, 3... Later on I won't worry about explicitly discussing negative numbers, but of course all of the subsequent sets include negative numbers.</p> <h3>The Rational Numbers</h3> <p>These are the ratios of integers, or fractions. Divide 1 by 4 and you get the rational number 1/4. We can write it in decimal notation as 0.25. Divide 1 by 3 and you have the rational number 1/3 = 0.333... All rational numbers have a decimal representation that either terminates or repeats infinitely. In fact, it's better to say that all rational numbers have a decimal representation that repeats infinitely: 1/4 = 0.25000000... and we just happen to have a notation that suppresses trailing zeros. Sometimes you have to go out quite a ways before the repeat happens, but it always does. 115/151= 0.761589403973509933774834437086092715231788079470198675496688741721854304635761589... All rationals have repeating decimal representations, and all repeating decimals represent rational numbers.</p> <p>The rational numbers are <em>dense</em>. Between any two rational numbers, there is another rational number. Which immediately implies that between any two rational numbers, there are an infinite number of rational numbers. Pick any point on the number line, and you're guaranteed that you can find a rational number as close as you want to it. But alas, you're not guaranteed that every point on the number line is a rational number. Some of them aren't.</p> <h3>The Irrational Numbers Part 1: The Algebraic Numbers</h3> <p>The square root of 2 is the most famous example of an irrational number. It's the number which, when squared, gives exactly 2. It's equal to 1.41421356237..., but the decimal representation never repeats. This is because there are no two integers A and B such that (A/B)<sup>2</sup> = 2.
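</p>
<p>A small sketch (mine, not from the post) makes this concrete: the classic Pell-recurrence approximations to the square root of 2 get arbitrarily close, but p<sup>2</sup> - 2q<sup>2</sup> is always plus or minus 1 and never zero, so no p/q ever squares to exactly 2. The 7/5 and 3363/2378 mentioned next are two of these approximations:</p>
<pre>
from fractions import Fraction

# Successive rational approximations to the square root of 2 from the
# Pell-equation recurrence (p, q) -> (p + 2q, p + q).
p, q = 1, 1
for _ in range(10):
    print(f"{p}/{q}: square = {float(Fraction(p, q)**2):.10f}, p^2 - 2q^2 = {p*p - 2*q*q}")
    p, q = p + 2*q, p + q
</pre>
<p>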
You can get as close as you want: 7/5 = 1.4 is kind of close, and 3363/2378 is much closer still, but you'll never find a rational number whose square is exactly 2. This can be rigorously proven and means that the square root of 2 is irrational, and its decimal expansion never repeats.</p> <p>The square root of two is the solution to the equation $latex x^2 - 2 = 0$. This is an example of a polynomial with integer coefficients. Another random example is $latex x^6 - 3x^2 -29 = 0$, which happens to have the irrational number x = 1.84302... as one of its solutions. Numbers which are solutions to these kinds of polynomials are the algebraic numbers.</p> <p>Does all this mean the decimal expansion of the square root of 2 includes any and every combination of digits?  Maybe. Maybe not.</p> <h3>The Irrational Numbers Part 2: The Transcendental Numbers</h3> <p>Not all irrational numbers can be written in terms of the solutions of polynomials with integer coefficients. The ones that can't are called transcendental numbers. Pi is one of them. So is Euler's number e = 2.71828... Transcendental numbers are all irrational<em>.</em></p> <p>In a precise but somewhat technical mathematical sense, "almost all" real numbers are irrational. Throw a dart at the real number line and you will hit an irrational number with probability 1. This makes some intuitive sense. If you just start mashing random digits after a decimal point, it seems reasonable that you won't just happen to make an infinitely repeating sequence. It turns out that the same thing is true of the transcendental numbers. "Almost all" real numbers are transcendental. But at the present time, even with hundreds of years of brilliant mathematicians pouring unfathomable effort into the problem, our toolkit for dealing with transcendental numbers is pretty sparse. It's very difficult to prove that specific numbers are transcendental, even if they pretty obviously seem to be. Is $latex \pi + e $ transcendental? Almost certainly, but nobody has proved it.</p> <p>Here's a number called Liouville's constant which is proven to be transcendental: 0.110001000000000000000001000000... (It has 1s at positions corresponding to factorials, 0s elsewhere.) It was among the first numbers known to be transcendental and was in fact explicitly constructed as an example of a transcendental number. It's irrational, of course. It is an "infinite, nonrepeating decimal", as the Facebook picture puts it. But is my DNA in it? Heck no, my phone number's not even in it. Infinite and nonrepeating is <em>not</em> synonymous with "contains everything".</p> <h3>The Normal Numbers</h3> <p>A normal number is one whose decimal representation contains every string of digits on average as often as you'd expect them to occur by chance. So the digit 4 occurs 1/10th of the time, the digit string 39 occurs 1/100th of the time, the digit string 721 occurs 1/1000th of the time, and so on. All normal numbers are irrational. Normal numbers satisfy Takei's criteria. Any finite string of digits occurs in the decimal representation of a normal number with probability 1.</p> <p>Is pi a normal number? Nobody knows. If our toolkit is sparse for proving things about transcendental numbers, it's almost completely empty for proving anything about normal numbers. There are a few contrived examples. The number 0.123456789101112131415... is normal in base 10 at least, and in fact it contains every finite string of digits, because it was constructed so that it would.
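</p>
<p>Here is a toy sketch (my own, with an arbitrarily chosen target string) that builds a long prefix of that constructed number and searches it:</p>
<pre>
# Build a long prefix of 0.123456789101112131415... by concatenating the
# positive integers, then search it for a digit string.
digits = "".join(str(n) for n in range(1, 200001))   # about 1.1 million digits

target = "31415"   # any short string; this one is guaranteed to appear because
                   # the integer 31415 itself eventually gets concatenated
position = digits.find(target)
print(f"'{target}' first shows up {position + 1} digits after the decimal point")
</pre>
<p>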
This constructed number also satisfies the properties which Takei's image ascribes to pi, though it also shows that these criteria aren't especially profound. A string that contains all numbers turns out to contain all numbers, which is true but not all that impressive.</p> <p>But is this specific number normal in other bases? Nobody knows. Are there numbers that are normal in every base? Yes - again, "almost all" of them. Can I actually write out the first few digits of one? Nope. As far as I can tell, while <a href="http://www.glyc.dc.uba.ar/santiago/papers/absnor.pdf">examples of absolutely normal numbers</a> have been given in terms of algorithms, there's not yet been anyone who's been able to start generating the digits of a provably absolutely normal number. [Edit: I think in the comments we've found in the literature an example of the first few digits of a provably absolutely normal number.]</p> <p>Mathematicians love proof. I'm a physicist. I love proof too, but I'm a lot more willing to work with intuition and experiment. Do the billions of digits of pi that we've calculated act as though they're distributed in the "random" way that the digits of an absolutely normal number ought to be distributed? Yes. Just about everyone suspects pi is absolutely normal. Same for e and the square root of 2 and the rest of the famous irrationals of math other than the ones that are obviously not normal. Numerical evidence is not dispositive though, and has misled mathematicians before.</p> <p>If pi is absolutely normal, then Takei's image is true. If you can prove this conjecture, you will have boldly gone where no one has gone before.</p> </div> <span><a title="View user profile." href="/author/mspringer" lang="" about="/author/mspringer" typeof="schema:Person" property="schema:name" datatype="">mspringer</a></span> <span>Fri, 04/12/2013 - 02:09</span> Fri, 12 Apr 2013 06:09:26 +0000 mspringer 121050 at https://scienceblogs.com Why are clouds white? https://scienceblogs.com/builtonfacts/2013/04/01/why-are-clouds-white <span>Why are clouds white?</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Why is the sky blue? It's a classic question - probably <em>the</em> classic question of the genre of explanatory popular physics. The famous short version of the answer is that Rayleigh scattering by air molecules affects short-wavelength light more than long-wavelength light, and so blue light tends to get scattered in random directions to create the diffuse blue we know and love. But like almost every answer physics can give, the answer leads to more questions. Why does Rayleigh scattering scatter short-wavelength light more strongly? This is a fairly involved question to answer from first principles, and your average physics major won't study the issue in depth until probably the junior year. The process of digging deeper and deeper into questions is how physics advances, and even thinking about old solved problems can lead to new insight.</p> <p>Let's see if we can find some insight into a similar question: why are clouds white?</p> <p style="text-align: left;"><a href="https://en.wikipedia.org/wiki/File:Cumulus_clouds_panorama.jpg"><img class="aligncenter size-full wp-image-1934" alt="640px-Cumulus_clouds_panorama" src="/files/builtonfacts/files/2013/03/640px-Cumulus_clouds_panorama.jpg" width="640" height="191" /></a></p> <p style="text-align: left;">Clouds are made of water droplets. Pour yourself a glass of water.
You'll notice that the glass of water is not white. It is in fact perfectly clear. Well, glasses of water are big. Maybe tiny droplets are different. Dip a toothpick into the water and get the smallest droplet you can, and put it on a hard surface. You'll be able to see the surface through the water with equal clarity. Even small drops of water are themselves clear.</p> <p style="text-align: center;"><a href="https://en.wikipedia.org/wiki/File:Raindrop_on_a_fern_frond.jpg"><img class="aligncenter size-full wp-image-1935" alt="303px-Raindrop_on_a_fern_frond" src="/files/builtonfacts/files/2013/03/303px-Raindrop_on_a_fern_frond.jpg" width="303" height="240" /></a></p> <p style="text-align: left;">So if clouds are water, and water is clear, why aren't clouds clear? They look to us a lot like they reflect light. Light that shines on them bounces off, and when we're looking at the bottom of clouds on a stormy day we see that most of the sunlight doesn't make it through the clouds because it has reflected off the tops of the clouds.</p> <p style="text-align: left;">Here's a <a href="https://www.nytimes.com/2013/03/12/science/why-do-clouds-appear-white.html">short New York Times piece</a> attempting to answer the question. The answer given there is that droplets do scatter light through a process called Mie scattering, which is essentially just refraction. The direction of the incoming light gets bent and changed just as in the photograph of the droplet above. Crucially, Mie scattering is more or less independent of wavelength. If droplets scatter all colors of light, "all colors" is basically white.</p> <p style="text-align: left;">That's true, but not complete. Why should this make clouds <em>reflect</em> white? How is it that randomly-directed scattering can preferentially send the light back in the direction it came from?</p> <p style="text-align: left;">Let's look at the process in a little more detail. When light hits a drop, it gets redirected through refraction and scattering - mostly in the forward direction, but after this redirected light hits more drops the randomness of the orientations of the light and the drops washes out all information about the original direction of the incoming light. At this point, the direction of the light is random and unrelated to the direction of the incoming light. It seems paradoxical that this would end up causing the light to leave the cloud in the same direction it came in. The answer is the <em>random walk.</em></p> <p style="text-align: left;">Imagine you're walking down the sidewalk and you flip a coin. Heads you take a step forward, tails you take a step back. You keep up this process for many flips of a coin. Your position over a thousand steps might look like this:</p> <p style="text-align: left;"><a href="/files/builtonfacts/files/2013/03/random1.png"><img class="aligncenter size-full wp-image-1938" alt="random1" src="/files/builtonfacts/files/2013/03/random1.png" width="360" height="226" /></a></p> <p style="text-align: left;">Or like this[1]:<a href="/files/builtonfacts/files/2013/03/random2.png"><img class="aligncenter size-large wp-image-1939" alt="random2" src="/files/builtonfacts/files/2013/03/random2.png" width="360" height="213" /></a></p> <p style="text-align: left;">The point of an unbiased random walk is that you're equally likely to go one way or another. Should you end up ten steps forward of where you started (call it y = 10), you're equally likely to end up at position y = 0 and y = 20 at a given number of steps in the future.
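</p>
<p style="text-align: left;">A minimal simulation sketch (my own, not the code behind the plots above) shows how lopsided this gets when one boundary is much farther away than the other, which is the point of the next paragraph:</p>
<pre>
import random

def hits_zero_first(start, far_wall, rng):
    """Unbiased +1/-1 walk; True if it reaches 0 before far_wall."""
    y = start
    while 0 < y < far_wall:
        y += 1 if rng.random() < 0.5 else -1
    return y == 0

rng = random.Random(1)
start, far_wall, trials = 10, 1000, 500
wins = sum(hits_zero_first(start, far_wall, rng) for _ in range(trials))
print(f"returned to 0 first in {wins} of {trials} walks "
      f"(gambler's-ruin prediction: {1 - start / far_wall:.1%})")
</pre>
<p style="text-align: left;">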
If the end of the sidewalk is at y = 10,000, and you're sitting at y = 10, it is much, much more probable for you to end up back at y = 0 before you wobble your way to y = 10,000. For instance, here's a 100,000 step random walk:</p> <p style="text-align: left;"><a href="/files/builtonfacts/files/2013/03/100k.png"><img class="aligncenter size-full wp-image-1941" alt="100k" src="/files/builtonfacts/files/2013/03/100k.png" width="360" height="211" /></a>It returns to 0 several times in the first 20,000 steps, and then by the 100,000th step has only managed to wander off to around y = 500. Should we keep stepping, we're still much more likely to wobble back down the 500 steps to 0 than we are to wander the 9,500 steps up to y = 10,000.</p> <p style="text-align: left;">And <em>that's</em> why clouds are white. Each drop, or at least each several drops, are essentially a step of a random walk for the incoming light waves[2]. If the cloud is very large compared to the size of the step (which is on the order of a few times the distance between drops), then the light is much more likely to wander back out to the same side of the cloud it came in than it is to wander all the way to the other side.</p> <p style="text-align: left;">This concept generalizes to all kinds of light scattering objects. White sand is mostly clear silica particles, but this same scattering process bounces the light back diffusely. In an astrophysical context, reflection nebula work in similar ways:</p> <p style="text-align: center;"><a href="/files/builtonfacts/files/2013/03/reflection.jpg"><img class="aligncenter wp-image-1943" alt="reflection" src="/files/builtonfacts/files/2013/03/reflection.jpg" width="255" height="336" /></a></p> <p style="text-align: left;">Sometimes there's a lot of insight to be gained by asking kid-style questions, even if the path to the answer is kind of random.</p> <p style="text-align: left;">[1] You're not alone if you think these look like stock market charts. There's a pretty large body of academic literature that models financial markets as random walks with varying levels of bias.</p> <p style="text-align: left;">[2] There's a temptation to talk in terms of photons bouncing around. This temptation ought to be resisted. The process of scattering is entirely a classical wave phenomenon.</p> </div> <span><a title="View user profile." href="/author/mspringer" lang="" about="/author/mspringer" typeof="schema:Person" property="schema:name" datatype="">mspringer</a></span> <span>Mon, 04/01/2013 - 02:53</span> <div class="field field--name-field-blog-categories field--type-entity-reference field--label-inline"> <div class="field--label">Categories</div> <div class="field--items"> <div class="field--item"><a href="/channel/physical-sciences" hreflang="en">Physical Sciences</a></div> </div> </div> Mon, 01 Apr 2013 06:53:24 +0000 mspringer 121049 at https://scienceblogs.com Light from a Hairbrush https://scienceblogs.com/builtonfacts/2013/03/15/1913 <span>Light from a Hairbrush</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Question from a reader:</p> <blockquote><p>Pick up a comb, rub it with your hair and you have got some electric charge. Now shake it and you are generating an electromagnetic wave. Am I right?</p></blockquote> <p>Yes indeed. So why don't we see light emitted when we brush our hair? Let's run some numbers. If you wiggle around an electric point charge, electromagnetic radiation is emitted. 
The power carried by this radiation is given by the <a href="https://en.wikipedia.org/wiki/Larmor_formula">Larmor formula</a>:</p> <p style="text-align: center;">$latex \displaystyle P = \frac{e^2 a^2}{6 \pi \epsilon_0 c^3}&amp;s=1$</p> <p>Well, a comb isn't a point charge. But if we're just interested in an order-of-magnitude estimate, we can pretend it is. How much charge is on a comb? It's probably a substantial overestimate, but the <a href="https://en.wikipedia.org/wiki/Body_capacitance">human body has a capacitance</a> of around a hundred picofarads, which is part of why you can get slightly shocked when you rub your feet on carpet and touch a doorknob. For a purpose-built capacitor in an electronic circuit that's pretty small, but a comb isn't a purpose-built capacitor either so it's not unreasonable to say that as an order of magnitude it has a capacitance of 100 picofarads. That doesn't tell us how much charge it holds though, we also need to know the voltage. The voltage required to zap you with static electricity when you touch a doorknob is a surprisingly high number on the order of 10,000 volts, so we'll say the comb is charged to that potential since a comb can hold enough charge to produce a spark. 100 picofarads multiplied by 10kV gives 1 microcoulomb.</p> <p>That gives us the charge <em>e</em> for the Larmor formula. How about the acceleration <em>a</em>? Earth's surface gravity is about 9.8 m/s^2, and I think a person can probably fling a comb around faster than that. Let's be generous and call it 100 m/s^2. Plug all that into the formula:</p> <p style="text-align: center;">$latex P = 2.2 \times 10^{-24}\, \mathrm{W}&amp;s=1$</p> <p style="text-align: left;">So around a trillionth of a trillionth of a watt. That's why combs don't glow when you shake them. Well, the first of two reasons.</p> <p>Say you then went to Wal-Mart, bought a bucket of electrons, and dumped a million coulombs worth of charge on your comb. (This would in fact blast the comb to bits, but let's pretend.) Now you've got a pretty bright 2.2 watts, but it would in fact still be invisible. You're waving the comb around a few times per second, and the resulting electromagnetic wave will tend to have a similar frequency. These are extremely long-wavelength radio electromagnetic waves, which are invisible to our eyes.</p> <p>Nonetheless, this is more or less how actual radio devices work. Waves are generated by moving electrons back and forth within the metal wire antenna. However, the quantity of charge that's being moved around is large (the bazillion conduction electrons in the metal are all moving back and forth), and the acceleration that can be produced on an individual conduction electron by an applied electric field is pretty large as well.</p> </div> <span><a title="View user profile." href="/author/mspringer" lang="" about="/author/mspringer" typeof="schema:Person" property="schema:name" datatype="">mspringer</a></span> <span>Fri, 03/15/2013 - 05:20</span> Fri, 15 Mar 2013 09:20:48 +0000 mspringer 121048 at https://scienceblogs.com Quick, hit the brakes! https://scienceblogs.com/builtonfacts/2013/02/27/quick-hit-the-brakes <span>Quick, hit the brakes!</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>A reader emailed me a fun question from a physics exam he took, along these lines:</p> <blockquote><p>A car driver going at some speed v suddenly finds a wide wall at a distance r. 
Should he apply brakes or turn the car in a circle of radius r to avoid hitting the wall?</p></blockquote> <p>My first thought was that surely the question wasn't doable without more information, but it turns out that we do have enough to give a straightforward answer. Let's take the "turns in a circle" and "slams on brakes" scenarios one at a time.</p> <h4>Turns in a circle:</h4> <p>Velocity is a vector whose magnitude is the speed and whose direction is the direction of travel. If you turn, your speed remains the same but your direction of travel changes. So the velocity is changing even if the speed isn't. A changing velocity is by definition an acceleration, and one of the key equations of first semester physics is the acceleration required to produce uniform circular motion. It turns out to be a function of the speed and the radius of the circle:</p> <p style="text-align: center;">$latex \displaystyle a = \frac{v^2}{r} &amp;s=1$</p> <p>Since we don't have any numbers to plug in or really anywhere else to go with this, we're done with this part. The required acceleration to avoid the wall is equal to the square of the speed divided by the radius of the circle, which is just the initial distance to the wall.</p> <h4>Slams on brakes:</h4> <p>This one is a little more involved. The direction of the velocity is not changing, but the speed is. Another of the key equations of freshman physics is the formula for position in uniformly accelerated motion. It's:</p> <p style="text-align: center;">$latex \displaystyle x = \frac{1}{2}a t^2 + v_0 t + x_0 &amp;s=1$</p> <p>where <em>a</em> is the acceleration, <em>v0</em> is the initial velocity, <em>x0</em> is the initial position, and <em>t</em> is the elapsed time. In this case we'd like to solve for a at the point where x = r (we define our coordinates such that x0 = 0). But we don't know how much time has elapsed by the time the car reaches the wall, so we need the formula for velocity in uniformly accelerated motion, which we might write from memory or find by differentiating the position equation if we know calculus:</p> <p style="text-align: center;">$latex \displaystyle v = at + v_0 &amp;s=1$</p> <p>Now I'll start subscripting the letter f on the specific time when the car reaches the wall. We know that we've come to a stop at at that time, so we have:</p> <p style="text-align: center;">$latex \displaystyle 0 = at_f + v_0 &amp;s=1$</p> <p>Which means</p> <p style="text-align: center;">$latex \displaystyle t_f = -\frac{v_0}{a} &amp;s=1$</p> <p>Don't worry about the negative sign. <em>a</em> is itself negative (we're decelerating), so <em>tf</em> will be positive as well. Now that we know how much time has elapsed when the motion is complete, we can plug that into our position formula:</p> <p style="text-align: center;">$latex \displaystyle r = \frac{1}{2}a(-\frac{v_0}{a})^2 + v_0 (-\frac{v_0}{a}) &amp;s=1$</p> <p>Remembering that at the wall, <em>x</em> = <em>r</em> and that we defined <em>x0</em> = 0. You can do the algebra to solve for <em>a</em>, and you'll find that</p> <p style="text-align: center;">$latex \displaystyle a = -\frac{v_{0}^{2}}{2r} &amp;s=1$</p> <p>Which is (ignoring the minus sign that just tells us which way the acceleration is pointed) just half the acceleration we found for the turning scenario. 
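</p>
<p>A quick numerical check of that algebra, with made-up numbers for the speed and the distance to the wall:</p>
<pre>
# Made-up numbers: 30 m/s (about 67 mph) with the wall 60 m away.
v, r = 30.0, 60.0

a_turn = v**2 / r         # acceleration needed to turn in a circle of radius r
a_brake = v**2 / (2 * r)  # acceleration needed to stop in a distance r
print(f"turning needs {a_turn:.1f} m/s^2, braking needs {a_brake:.1f} m/s^2")

# Step-by-step check that the braking acceleration stops the car at the wall
dt, x, u = 1e-4, 0.0, v
while u > 0:
    x += u * dt
    u -= a_brake * dt
print(f"stopping distance by direct integration: {x:.2f} m")
</pre>
<p>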
So purely from a standpoint of the acceleration car tires can produce, braking works better than swerving.</p> <div style="width: 441px;display:block;margin:0 auto;"><a href="http://www.imcdb.org/vehicle_99079-Chevrolet-4100-1947.html"><img class=" wp-image-1906 " alt="From Back to the Future - Biff shoulda braked..." src="/files/builtonfacts/files/2013/02/manure.jpg" width="431" height="234" /></a> From Back to the Future - Biff shoulda braked... </div> <p>After writing this post, I came across a <a href="http://scienceblogs.com/dotphysics/2010/08/05/turn-or-go-straight-quick/">ScienceBlogs post on Dot Physics</a> a few years ago on the same subject. He approaches the problem in a different way, and I think it's well worth reading both solution methods.</p> </div> <span><a title="View user profile." href="/author/mspringer" lang="" about="/author/mspringer" typeof="schema:Person" property="schema:name" datatype="">mspringer</a></span> <span>Wed, 02/27/2013 - 04:47</span> Wed, 27 Feb 2013 09:47:50 +0000 mspringer 121047 at https://scienceblogs.com The Theoretical Minimum, by Susskind &amp; Hrabovsky https://scienceblogs.com/builtonfacts/2013/02/12/the-theoretical-minimum-by-susskind-hrabovsky <span>The Theoretical Minimum, by Susskind &amp; Hrabovsky</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p><a href="http://www.amazon.com/gp/product/046502811X/ref=as_li_ss_il?ie=UTF8&amp;camp=1789&amp;creative=390957&amp;creativeASIN=046502811X&amp;linkCode=as2&amp;tag=buionfac-20"><img src="http://ecx.images-amazon.com/images/I/41MCv963n1L._BO2,204,203,200_PIsitb-sticker-arrow-click,TopRight,35,-76_AA300_SH20_OU01_.jpg" /></a></p> <p><a href="http://www.amazon.com/gp/product/046502811X/ref=as_li_ss_il?ie=UTF8&amp;camp=1789&amp;creative=390957&amp;creativeASIN=046502811X&amp;linkCode=as2&amp;tag=buionfac-20"><img border="0" src="http://ws.assoc-amazon.com/widgets/q?_encoding=UTF8&amp;ASIN=046502811X&amp;Format=_SL160_&amp;ID=AsinImage&amp;MarketPlace=US&amp;ServiceVersion=20070822&amp;WS=1&amp;tag=buionfac-20" /></a></p> <p><a href="http://www.amazon.com/gp/product/046502811X/ref=as_li_ss_tl?ie=UTF8&amp;camp=1789&amp;creative=390957&amp;creativeASIN=046502811X&amp;linkCode=as2&amp;tag=buionfac-20">The Theoretical Minimum: What You Need to Know to Start Doing Physics</a></p> <p>When this book appeared in my mailbox I judged it by its cover and was a little concerned. The problem with the cover is the name of one of the authors: Leonard Susskind. He's an extremely talented physicist and writer, to be sure, but he's a string theorist. Worse, he's one of the major names behind the string theory landscape idea. Though not a high-energy physicist myself and thus not really being terribly qualified to judge, I tend to classify the string theory landscape as somewhere between speculative and pseudoscience.</p> <p>Beyond the cover, I am happy to report that my initial worries were absolutely incorrect. This is a charming and erudite instance of a genre with very few members - a pop-physics book with partial differential equations on a good fraction of the pages. The goal of the book according to the foreword by Susskind (a physicist) and Hrabovsky (an engineer) is to give a substantive but not-textbook-detailed introduction to physics.
Not just to teach <em>about</em> physics, as is the typical pop-physics book's goal, but to actually teach physics.</p> <p>The title refers to a slightly notorious requirement the great Soviet physicist Lev Landau put on his students before they could join his group. There was a level of knowledge of physics he called the "theoretical minimum", which for him meant exhaustive mastery of theoretical physics. In the more limited goal of this book, the theoretical minimum is to understand physics as it actually works mathematically - beyond just the Scientific American level. Not to the level where you're actually solving graduate textbook problems, but to the level where you know what the concept of a Lagrangian actually entails.</p> <p>More impressive still is that the book entirely resists the temptation to skip to the good stuff - quantum mechanics and so on. This is a book which is purely about classical mechanics. More volumes are planned on electromagnetism and quantum mechanics, but for now this is the true basics. These basics of course turn out to be built into the fabric of electrodynamics and quantum mechanics, aside from the minor fact of the vast importance of classical mechanics in the world of practical problems.</p> <p>The book succeeds admirably in its goal. It presents classical mechanics in all its glory, from forces to Hamiltonians to symmetry and conservation laws, in a casual but detailed style.</p> <p>Hawking famously suggested that each equation halved the sales of a book, so the question here is whether or not you might be interested in reading The Theoretical Minimum if you haven't learned calculus or don't remember it. It's a judgement call. I suspect you won't get the whole experience if you haven't at least seen calculus at some point in your life. But even a half-remembered course years ago is probably good enough - there's a pretty substantial bit of mathematical refresher material presented in a visual and intuitive way. If in doubt, give it a try. On the other hand, a reader without any calculus background could probably pick up some of the flavor of the physics, but I don't think I'd recommend starting with this book.</p> <p>I'm looking forward to the rest of the books in this series. They address a niche that sees very few solid attempts to fill it.</p> <p> </p> <p><em>[Standard disclosure: the publisher sent me a free copy of the book to review. I am not otherwise compensated for this review.]</em></p> </div> <span><a title="View user profile." href="/author/mspringer" lang="" about="/author/mspringer" typeof="schema:Person" property="schema:name" datatype="">mspringer</a></span> <span>Tue, 02/12/2013 - 06:58</span> Tue, 12 Feb 2013 11:58:00 +0000 mspringer 121046 at https://scienceblogs.com The Human Eye, Optimized For Sunlight.
Maybe.</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>The human eye is sensitive to a portion of the electromagnetic spectrum that we call visible light, which extends from around 400 to 700 nanometer wavelength, peaking in the general vicinity of greenish light at 560 nanometers:</p> <p style="text-align: left;"><a href="https://en.wikipedia.org/wiki/File:Eyesensitivity.png"><img class="aligncenter size-full wp-image-1833" title="Eyesensitivity2" src="/files/builtonfacts/files/2013/01/Eyesensitivity2.png" alt="" width="335" height="294" /></a>Here's the intensity (formally: power per area per unit solid angle per unit wavelength - whew!) of the radiation emitted by an object with the temperature of the sun, plotted as a function of wavelength in nanometers according to Planck's law:</p> <p style="text-align: center;">$latex B_\lambda(T) =\frac{2 hc^2}{\lambda^5}\frac{1}{ e^{\frac{hc}{\lambda k_\mathrm{B}T}} - 1}&amp;s=1$</p> <div style="width: 370px;display:block;margin:0 auto;"><a href="/files/builtonfacts/files/2013/01/wavelengthGraph.png"><img class="size-full wp-image-1834" title="wavelengthGraph" src="/files/builtonfacts/files/2013/01/wavelengthGraph.png" alt="" width="360" height="207" /></a> <p>Spectral radiance (W/(sr m^3)) vs. wavelength (nm)</p> </div> <p>You'll notice it also peaks around the same place as the spectral response of the human eye. Optimization!</p> <p>Or is it? That previous equation was how much light the sun dumps out <em>per nanometer of bandwidth</em> at a given wavelength. But nothing stops us from plotting Planck's law in terms of the frequency of the light:</p> <p style="text-align: center;">$latex B_\nu(T) = \frac{ 2 h \nu^3}{c^2} \frac{1}{e^\frac{h\nu}{k_\mathrm{B}T} - 1}&amp;s=1$</p> <div style="width: 370px;display:block;margin:0 auto;"><a href="/files/builtonfacts/files/2013/01/frequencyGraph.png"><img class="size-full wp-image-1837" title="frequencyGraph" src="/files/builtonfacts/files/2013/01/frequencyGraph.png" alt="" width="360" height="203" /></a> <p>Spectral radiance (W/(sr m^2 Hz)) vs. frequency (Hz)</p> </div> <p>In this case what's on the y axis is power per area per unit solid angle <em>per frequency</em>. Ok, great. But notice it's <em>not</em> just the previous graph with f given by c/λ. It's a different graph, with different units. To see the difference, let's see this radiance per frequency graph with the x-axis labeled in terms of wavelength:</p> <div style="width: 370px;display:block;margin:0 auto;"><a href="/files/builtonfacts/files/2013/01/frequencyGraph2.png"><img class="size-full wp-image-1838" title="frequencyGraph2" src="/files/builtonfacts/files/2013/01/frequencyGraph2.png" alt="" width="360" height="206" /></a> <p>Spectral radiance (W/(sr m^2 Hz)) vs. wavelength (nm)</p> </div> <p>Well. This is manifestly not the same graph as the radiance per nanometer. Its peak is lower, in the near infrared and outside the sensitivity curve of the human eye. This makes some sense - there's not much frequency difference between light with wavelength of 1 kilometer and light with wavelength of 1 kilometer + 1 nanometer. But light of 100 nanometer wavelength has a frequency about 3 x 10<sup>13</sup> Hz more than light with wavelength 101 nanometers.</p> <p>So what gives? Is the eye most sensitive where the sun emits the most light or not? The simple fact of the matter is there's no such thing as an equation that just gives "how much light the sun puts out at a given wavelength". 
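</p>
<p>To make the ambiguity concrete, here is a minimal sketch (assuming a solar surface temperature of about 5778 K) that numerically locates the peak of each curve above. The per-wavelength curve peaks near 500 nm, while the per-frequency curve peaks at a frequency corresponding to light of roughly 880 nm, so "where does the sun emit the most light?" gets two different answers depending on which bookkeeping you use:</p>
<pre>
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
T = 5778.0   # approximate solar surface temperature, K

def B_wavelength(lam):   # spectral radiance per unit wavelength
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * k * T)) - 1)

def B_frequency(nu):     # spectral radiance per unit frequency
    return (2 * h * nu**3 / c**2) / (math.exp(h * nu / (k * T)) - 1)

# Crude grid searches for the two peaks
peak_lam = max((i * 1e-9 for i in range(100, 3000)), key=B_wavelength)
peak_nu = max((i * 1e12 for i in range(10, 3000)), key=B_frequency)

print(f"B_lambda peaks near {peak_lam * 1e9:.0f} nm")
print(f"B_nu peaks near {peak_nu:.2e} Hz, i.e. light of about {c / peak_nu * 1e9:.0f} nm")
</pre>
<p>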
That's simply not a well-defined quantity. What is well defined is how much light the sun puts out <em>per nanometer</em> or <em>per hertz</em>. In this sense our eye isn't optimized so that its response peak matches the sun's emission peak, because "the sun's peak" isn't really a coherent concept. The sensitivity of our eyes is probably more strongly determined by the available chemistry - long-wavelength infrared light doesn't have the energy to excite most molecular energy levels, and short-wavelength ultraviolet light is energetic enough to risk destroying the photosensitive molecules completely.</p> <p>This wavelength/frequency distribution function issue isn't just a trivial point - it's one of those things that actually gets physicists in trouble when they forget that one isn't the same thing as the other. For a detailed discussion, I can't think of a better one than <a href="http://www.phys.ufl.edu/~hagen/phz4710/readings/AJPSofferLynch.pdf">this AJP article by Soffer and Lynch</a>. Enjoy, and be careful out there with your units!</p> </div> <span><a title="View user profile." href="/author/mspringer" lang="" about="/author/mspringer" typeof="schema:Person" property="schema:name" datatype="">mspringer</a></span> <span>Tue, 01/29/2013 - 03:05</span> Tue, 29 Jan 2013 08:05:51 +0000 mspringer 121045 at https://scienceblogs.com Gun Control Debate with Mark, Pt. 2 https://scienceblogs.com/builtonfacts/2013/01/28/gun-control-debate-with-mark-pt-2 <span>Gun Control Debate with Mark, Pt. 2</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>This post is political. As always, physics readers who don't care about politics are encouraged to skip it. I've got an actual physics post going up tomorrow.</p> <p>Mark and I have been conducting a debate/discussion over gun control in the United States. For the first round, here's <a href="http://scienceblogs.com/denialism/2013/01/09/a-gun-control-debate-with-matt-springer/">his post</a> and <a href="http://scienceblogs.com/builtonfacts/2013/01/14/gun-control-debate-with-mark-hoofnagle-pt-1/">my response</a>. Here's his <a href="http://scienceblogs.com/denialism/2013/01/18/gun-control-part-ii-my-response-to-matt-springer/">second round post</a>, and this post is my response.</p> <p>First, let me summarize where the debate stands. We have four main topics as set forth in Mark's posts: gun violence in "ordinary" crime, gun violence in the context of mass shootings, suggestions for gun control, and miscellaneous ancillary arguments. Most of the points in the ancillary category were fairly comprehensively covered, and I think both of us are pretty satisfied with what has been said. The exception is the "good guys with guns" argument, which we'll continue.</p> <p>Mark classifies my responses to the ordinary crime and mass shooting topics as "no problem" arguments. This is incorrect. I am trying to quantify the problem, and to quantify the impact of the proposed solutions. If it turns out that both these quantities are so small as to be classified as "no problem" in the mind of the reader, well, the numbers are what they are. I myself reject the idea that there is no problem. But I also reject the idea that argument from anecdote is an effective guide to the truth. 
We want to ask whether or not there is a problem which is caused by the prevalence of guns, and if so whether or not gun control could do anything to ameliorate it.</p> <p>Let's dive right into the general gun crime topic.</p> <p>Mark cites the <a href="http://www.nap.edu/catalog.php?record_id=13497">Institute of Medicine's comparison of the US to similar industrialized countries</a> in terms of life expectancy, which found that our homicide rate is far in excess of comparable OECD countries and significantly affects our life expectancy. The IOM study found our homicide rate to be 6.9 times higher than the other OECD countries, our gun homicide rate 19.5 times higher, and of the 23 countries in the study, the US was responsible for 80% of all firearm deaths.</p> <p>There are two obvious questions. First, is the US comparable to those other OECD countries? Second, how much does gun control actually have to do with this?</p> <p>The answer to the first question is an obvious no, and to demonstrate this we need look no farther than the very study linked. The US has higher than average death rates in almost every category from car accidents to disease, the highest rates of adolescent pregnancy, sexually transmitted diseases, diabetes, and so forth. (But not suicide, incidentally.) In fact, in the words of the study,</p> <p style="padding-left: 30px;"><em>On nearly all indicators of mortality, survival, and life expectancy, the United States ranks at or near the bottom among high-income countries.</em></p> <p>I'm not trying to insult my country - it's a great place, much better in most of these categories than most of the rest of the world. However, comparisons to these 16 other top OECD nations are untenable. We aren't comparable. We are different in almost every measurable respect involving health and mortality.</p> <p>Well ok, guns obviously don't give people diabetes or make teens pregnant, but "lots of guns, lots of violence" vs "not many guns, not much violence" might look less like correlation and more like causation. (At least relative to the not-very-comparable top of the OECD.) This conclusion is unwarranted and probably false. Here are some reasons, some of which I have mentioned in my last post.</p> <p>1. US vs. OECD entirely aside, we can't even easily compare US vs. US over time without running into extreme confounding variables. Our murder rate has been precipitously falling over the last few decades even as gun laws have become much looser (I do not claim a causal relationship). The last time our murder rate was as low as it is now, <a href="http://articles.washingtonpost.com/2012-12-19/lifestyle/35929227_1_homicide-rate-randolph-roth-gun-control">we were literally in the Leave It To Beaver era</a>.</p> <p>2. <a href="https://www.fbi.gov/about-us/cjis/ucr/crime-in-the-u.s/2011/crime-in-the-u.s.-2011/offenses-known-to-law-enforcement/expanded-offense-data">Murder rates vary wildly within the US</a> under identical gun control regimes. White Americans, for instance, kill each other at roughly OECD rates (albeit on the high end), and well below the rates of eastern Europe and the Baltics. I shouldn't have to point out that epidermis reflectivity doesn't have squat to do with this. It does, however, show that socioeconomic and cultural variables overwhelmingly determine rates of violence.</p> <p>3. Sharp changes in gun laws haven't done anything significant to the homicide rates of other countries. The best-studied case is post-Port Arthur Australia. 
The effect on overall homicide rates was somewhere between negligible and nonexistent. The effect on gun homicide rates was similar. Let's take a look at the study Mark cites:</p> <p style="text-align: left; padding-left: 30px;"><em>Additional research, <a href="http://www.ncbi.nlm.nih.gov/pubmed/17170183">readily available</a> suggests a significant drop in the rate of gun violence after the ban. This suggests to me, both in the specific intervention, and overall given their tight regulation of handguns, that Australia is quite a strong example of gun control working.</em></p> <p style="text-align: left;">I will reproduce a few of the graphs from this paper, unedited. First, gun homicides and non-gun homicides:</p> <p style="text-align: left;"><a href="/files/builtonfacts/files/2013/01/gungraph3.png"><img class="aligncenter size-full wp-image-1826" title="gungraph3" src="/files/builtonfacts/files/2013/01/gungraph3.png" alt="" width="515" height="202" /></a></p> <p style="text-align: left;">The statisticians in the audience who have not died of heart attacks at the statistical illiteracy of the pre- and post-ban trend lines will of course notice that the overall decline in violence and gun violence continued just as it was doing before the gun control was implemented. In fact, the rate of <em>non</em>-gun violence displays a much more dramatic (though also statistically spurious) change. And this is Australia, the best possible scenario for the success of gun control. Gun control did nothing to the overall homicide rate. It didn't even do anything to the gun homicide rate. (More graphs <a href="/files/builtonfacts/files/2013/01/gungraph3.png">from the paper here</a>, about accidental deaths and suicides, if you're curious.)</p> <p>4. Trying to account for confounding variables is extraordinarily difficult in this context, but a number of studies have attempted to do so. <a href="https://encrypted.google.com/url?sa=t&amp;rct=j&amp;q=a%20comparison%20of%20violent%20and%20firearm%20crime%20rates%20in%20the%20canadian%20prairie%20provinces&amp;source=web&amp;cd=1&amp;ved=0CDYQFjAA&amp;url=http%3A%2F%2Fwww.garrybreitkreuz.com%2Fpublications%2FLibraryReport_PrairieCrimeRates_2005_03_07.doc&amp;ei=_q8FUenjB6LE2gXa4IGABQ&amp;usg=AFQjCNGY12pQlmHhJuGIg6nyZ72mhZ0shQ&amp;sig2=OqL9tYUo1Wpvu5pgFxta2A&amp;cad=rja">One study</a> compares the prairie provinces of Canada with their bordering US states. In this case,</p> <p style="padding-left: 30px;"><em>Patterns of homicide in the United States and Canada were examined with a view to finding out whether the availability of firearms affects the homicide rate independently of the other social, demographic and economic factors in play.  If this is the case, then low-homicide areas, which generally have fewer social and economic problems but the same access to firearms, should have a higher proportion of their homicides by firearms.  This is not the case for the four border states.</em></p> <p><a href="ftp://psyftp.mcmaster.ca/dalywilson/sshrc2004/wilkinsonCrime.pdf">Other</a> <a href="http://psych.mcmaster.ca/dalywilson/iiahr2001.pdf">studies</a> (commenter LH pointed out these two) have come to similar conclusions. Now I strongly suggest that you not read too much into these results - while if they are accurate they support my point, attempts to disentangle confounding variables are fraught with danger even when the result happens to land on my side.</p> <p>In short, there is no good evidence that gun availability causes increased crime rates. 
There is extremely good evidence that socioeconomic variables are far and away the primary drivers of crime rates. Violence in general and gun violence in particular are real problems in the US, but gun control as a solution is so ill-supported as to verge on superstition.</p> <p>While Mark and I are mostly focused on numerical metrics as to what effects gun control actually produces, it's probably worth looking briefly at the practical problems of implementing it as well. Mark quotes former Australian prime minister John Howard writing on the Port Arthur gun control measures:</p> <p style="padding-left: 30px;"><em>In the end, we won the battle to change gun laws because there was majority support across Australia for banning certain weapons.</em></p> <p>Howard is right. In Australia, gun control was implemented with the overwhelming support of the population. This is not the case in the US. The change in support for gun control after Sandy Hook <a href="http://www.gallup.com/poll/1645/guns.aspx">is marginal</a>[1], and those opposed to it are <em>very</em> opposed to it and are voting with their wallets. The single week of December 17-23 likely saw almost a <a href="https://www.fbi.gov/about-us/cjis/nics/nics-firearms-checks-top-10-highest-days-weeks">million new guns sold</a>. Over the last month I've had occasion to be in five gun stores, and every one of them was completely sold out of every AR-15, every semi-automatic rifle of any description for that matter, every magazine holding &gt;10 rounds, and every box of .223 ammo. Every online retailer I've checked is in the same boat. I personally have an outstanding parts order with <a href="http://rockriverarms.com/">Rock River Arms</a>, and they're backordered so badly they won't even provide estimated lead times.</p> <p>On to mass shootings. Both Mark and I as scientists run into some trouble here in that there is very little available systematic data of any kind. Trying to disentangle ordinary crime statistics from their confounding variables is hard enough, but the small-N statistics of mass murder are much harder still. We have noted that the Wikipedia lists of mass killings are similar in size in the US and Europe, and the US's is slightly larger (119 vs. 100). On a per-capita basis the comparison is even less favorable to the US, since Europe has a considerably larger population. But it is clear that confounding cultural and socioeconomic factors are in play as well. Mexico, for instance, has a homicide rate about 4 times that of the US but as far as I can tell has apparently never had a school shooting. (There have been a few "ordinary" murders at schools, but I have not been able to find any examples of a school shooting of the crazed-gunman variety.) Australia seems to have had some success with its gun control regime in the specific case of mass violence, but that success is probably not replicable in the US, which is (as I have pointed out) a very different place with 10 times the population and historically much higher levels of violence (gun and non-gun alike), to say nothing of the fact that we're starting with gun ownership rates which are higher by a factor of 10.</p> <p>We have a problem with mass violence. It's a staggeringly rare problem, rarer than lightning strikes, but a dramatic and tragic one and one that deserves our best efforts to fix. The place to start is not a massive and likely completely ineffective reconstruction of a fundamental right exercised by nearly half the population of the country. 
As Mark and I have both pointed out, government overreactions to tragedy tend not to turn out well in this country. We know for a fact that the last iteration of the assault weapons ban failed to prevent Columbine or to do anything significant to either ordinary or mass violence during the ten years it was in effect.</p> <p>Instead, we should start with the obvious basics. Physical security of the entrances to schools would be my focus if I were a principal. Improved accessibility of mental health treatment is also a good idea (though this is a tall order and the verdict on its effectiveness is still out). The occasional presence of resource officers and/or the elimination of the silly "gun free zone" designation could also be a good deterrent. This last point we'll discuss separately at the end of the post, as it's quite controversial.</p> <p>Mark makes a few suggestions for tighter gun laws. His primary suggestion is:</p> <blockquote><p><em>...since magazine-fed semi-automatic weapons are the weapons of choice in the last few dozen of these shootings that before sale the purchaser should get a bit more eyeball by authorities. Specifically in regards to the VT shooter, the Aurora Shooter, or the Giffords shooter, I suggested increased scrutiny for these purchases, law-enforcement taught training and competence testing for their use, and I also suggested the Canadian voucher system (as did Kristof immediately after Sandy Hook), which would require two other people to stand up for you and say you are responsible enough to possess such a machine.</em></p></blockquote> <p>As I pointed out last time, "magazine-fed semi-automatic weapons" is a near-synonym for "all guns". Most shootings involve semi-automatic firearms because most firearms are semi-automatic. But that's a side point, and doesn't really affect his argument too much. (He's not advocating a ban, but more on this later.)</p> <p>Let's start with the idea of a voucher system. If I want to buy a gun, I have to find two people who are willing to put their name to paper asserting that I'm not an obvious nut. Let me give three reasons I think this might be a bad idea, and two reasons I think it might work. First, even the most scuzzy two-bit crooks can round up two scuzzy two-bit friends to sign for them. Second, anyone's good-faith assessment of another's character could prove to be wrong. Third, it could be prone to abuse - are there exorbitant filing fees involved? Can New York decide a person needs twenty signatures? I would suggest that if you object to, say, voter ID laws then you can see how such a voucher system might be problematic. But there are a few reasons it might work in some cases. First, while Bugsy Siegel wouldn't have a problem getting signatures, obvious dead-eyed psychopaths like James Eagan Holmes or Seung-Hui Cho might have found it a hurdle. Second, the second amendment does talk in terms of civic purpose. While the right to bear arms is obviously an individual right[2] and <a href="http://www.law.cornell.edu/uscode/text/10/311">US law defines the militia</a> as all able-bodied males between 17 and 45, the civic purpose of the second amendment might suggest that something like a voucher system in an otherwise permissive regulatory regime might fit the bill. 
I'd have to chew on the voucher idea for a while longer before deciding if I really think it's a good idea, but on its face it seems much more in the spirit of the reason behind the right to keep and bear arms than do some other gun control suggestions.</p> <p>Mark has also suggested greater scrutiny such as background checks for the private sale of guns. I'm much less sanguine about this. It would certainly accomplish nothing to prevent mass shootings - these weapons are usually purchased legally or stolen - but in the context of keeping guns out of the hands of crooks it seems like a reasonable place to start thinking. So we should ask ourselves what we might gain by implementing such a scheme. It's an old staple of this debate to assert that criminals inherently aren't inclined to have a lot of respect for gun laws. This can be countered by asserting that their respect for the law is irrelevant if there were no guns in the first place, but in terms of doing paperwork on transfers this response doesn't work so well. Bugsy buys a gun for convicted felon Mugsy, cops trace the serial and ask Bugsy how Mugsy got the gun: "I dunno officer, he musta stole it". In the mean time law-abiding gun owners are effectively forced into a registry and have to deal with the expensive bureaucratic morass of the FFL system. Maybe this could be sidestepped by some clever way of opening NICS to private parties other than FFLs, and such proposals ought to be heard out. Once somebody proposes one, anyway.</p> <p>Training and competence testing? I'm all for people being trained and competent, but that has nothing to do with crime and violence and formal training is pretty expensive. I'd hate to see it made into an effective "no poor people need apply" restriction. Safe storage? Fantastic, especially for people with kids, but the same caveats apply.</p> <p>Finally, we should discuss the ban vs. paperwork hoops issue:</p> <p style="padding-left: 30px;"><em>Every time you talk gun regulation at all it seems to become a ban in the pro-gun side’s mind. However, at no point, for any currently available weapon, have I suggested a ban. Just paperwork. It’s not the end of the world people.</em></p> <p>This is true, and fair enough as it goes. We gun-rights types are justifiably a bit jumpy about this sort of thing. It would be nice if Mark were the one writing the various laws being proposed in congress and various state legislatures. Unfortunately it's people like Dianne "<a href="https://en.wikipedia.org/wiki/Political_positions_of_Dianne_Feinstein#Gun_politics">Turn 'em all in</a>" Feinstein and Carolyn "<a href="https://encrypted.google.com/url?sa=t&amp;rct=j&amp;q=shoulder%20thing%20that%20goes%20up&amp;source=web&amp;cd=1&amp;cad=rja&amp;sqi=2&amp;ved=0CDAQtwIwAA&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DospNRk2uM3U&amp;ei=McoCUeeiEMPi2QW8yYDIBA&amp;usg=AFQjCNElx1ct6HH_s2CtERqr3X-9tZV5Bg&amp;sig2=BhOjZB3ovHhrbxkk29IfOg">Shoulder thing that goes up</a>" McCarthy and Andrew "<a href="http://www.mediaite.com/online/new-york-gov-andrew-cuomo-on-gun-control-confiscation-could-be-an-option/">Confiscation could be an option</a>" Cuomo. It's great for the two of us to discuss our Platonic ideals of the way things ought to be, but we also have to remember that we're dealing with members of the world's second oldest (and least reputable) profession. 
Since their stated intent is to take a mile, I'm not very willing to give them any free inches without an airtight case as to effectiveness and respect for the rights of the law-abiding.</p> <p>Finally let's return to the idea of stopping shootings via "good guys with guns". Quoting Mark:</p> <blockquote><p><em>In the vast majority of cases, mass shootings are stopped when the perpetrator is shot…by themselves. Do we have evidence of police or armed citizens interrupting even one of the mass shootings in the last 20 years? Do we have any evidence of good guys with guns making a dent except after the shooting is done? Nope.</em></p></blockquote> <p>The "Nope" is <a href="http://www.motherjones.com/politics/2012/12/armed-civilians-do-not-stop-mass-shootings">a link to a Mother Jones article</a> which actually lists five cases in which good guys with guns did just that. Mother Jones' point is that each of the five cases listed magically doesn't count because the citizens involved were current or former law enforcement or military, not (say) some dentist who just decided to get a concealed carry permit. I'm not sure that this tells us much more than that people with experience are more likely to get permits and that ordinary citizens' permits are not generally valid in the places where mass shootings occur, but in any case it kills the argument that armed citizens can't possibly accomplish anything positive. While active uniformed police haven't actually shot many mass killers, it is probably more than coincidence that the perpetrators tend to shoot themselves right when police arrive (Lanza and Cho are prominent examples). This is also <a href="https://en.wikipedia.org/wiki/Clackamas_Town_Center_shooting">alleged to have happened in the Clackamas shooting</a> when a citizen with a concealed carry permit drew his weapon, but as this is not independently verifiable Mark (not unreasonably) dismisses it and I won't try to build a case around it. Mark also mentions the fact that an officer was present at the initial stage of the Columbine attack but failed to stop the shooting. This is roughly as out-of-date as insisting that passenger resistance to hijackers is futile because it failed to stop 9/11 - at the time it was generally believed that these were hostage situations, and that the proper response was to wait until it was all sorted out much later. This mistake is no longer made.</p> <p>It is possible, and it has happened, that in the process of trying to stop a mass killer a person carrying could get themselves killed. As Mark says:</p> <p style="padding-left: 30px;"><em>It’s not as easy as it looks in the movies, and the usual creepy fantasist gun lover who buys into this myth is not John McCain, he’s Walter Mitty.</em></p> <p>Ok, ok, I can't resist: Walter Mitty would probably fantasize about being Die Hard hero John McClane, not the senior senator from Arizona. But I'm at a loss to see how this is an argument against resistance. Am I extra-dead if I get killed while trying and failing to resist? All that's being asked is that the situation be an improvement on an unopposed mass shooter, who is at any rate hardly Hans Gruber either. (Neither are ordinary criminals. <a href="https://www.youtube.com/watch?v=dKvrbKgqkA8">See here</a> for an example which is simultaneously horrifying and hilarious.) Same thing for the Mother Jones hysterics here:</p> <p style="padding-left: 30px;"><em>They also make it more difficult for law enforcement officers to do their jobs. 
"In a scenario like that," McMenomy told me recently, "they wouldn't know who was good or who was bad, and it would divert them from the real threat."</em></p> <p>In the billions of man hours that millions of permit holders spend carrying ever year, this has literally never happened. This should not be a surprise. Defensive shootings almost exclusively<a href="http://www.theppsc.org/Staff_Views/Aveni/OIS.pdf"> take place at very short ranges and are over in seconds</a>. As I said in the last post, it's not possible for me to claim that police or armed citizens are a panacea. The statistical data is badly inadequate. But what data we do have indicates that the concept is plausible in principle.</p> <p>All right, it's about time to conclude this Part 2. In two-sentence summary: Gun violence is bad. Gun <em>laws</em> have very little to do with it.</p> <p> </p> <p>[1] A policy is not automatically good or bad based on how it polls, of course. And sometimes public opinion doesn't make a lot of internal sense anyway. The assault weapons ban polls rather poorly (sub 50% in the Gallup poll), but universal background checks poll very well even though none of the mass shooters in recent years acquired their weapons through private sale. Go figure.</p> <p>[2] Even the four dissenting justices in DC v. Heller agree. They disagree as to the scope of this right, but agree that it is an individual right. The <a href="http://www.law.cornell.edu/supct/html/07-290.ZD.html">first lines of the dissent</a>: <em></em></p> <p style="padding-left: 30px;"><em>The question presented by this case is not whether the Second Amendment protects a “collective right” or an “individual right.” Surely it protects a right that can be enforced by individuals. But a conclusion that the Second Amendment protects an individual right does not tell us anything about the scope of that right.</em></p> </div> <span><a title="View user profile." href="/author/mspringer" lang="" about="/author/mspringer" typeof="schema:Person" property="schema:name" datatype="">mspringer</a></span> <span>Mon, 01/28/2013 - 03:59</span> <div class="field field--name-field-blog-categories field--type-entity-reference field--label-inline"> <div class="field--label">Categories</div> <div class="field--items"> <div class="field--item"><a href="/channel/physical-sciences" hreflang="en">Physical Sciences</a></div> </div> </div> Mon, 28 Jan 2013 08:59:33 +0000 mspringer 121044 at https://scienceblogs.com The Physics of the Ponytail https://scienceblogs.com/builtonfacts/2013/01/22/the-physics-of-the-ponytail <span>The Physics of the Ponytail</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>The first major computer-animated film was Toy Story. It had a few human characters, most prominently Andy (who spends most of the film wearing a hat) and Sid (who sports a buzz cut). The focus of the film is on the plastic toys.</p> <p>One of the major reasons for this is the fact that toys are pretty simple. They have just a few moving parts for the computer to keep track of during the rendering process. People have many more. Every hair on a person's head really consists of many moving parts, since it can bend anywhere along its length to various degrees. And there are around a hundred thousand hairs which interact with each other, applying force to the other hairs they touch. 
Since it's pretty much impossible to place every computer-animated hair by hand, if you want convincing hair in your animated films you need a good model for how the physics of hair works.</p> <p>By the time Toy Story 3 came around, faster computers and better knowledge of how hair behaves made the problem of animating humans a lot more tractable:</p> <p style="text-align: center;"><img class="aligncenter" src="http://methodmagazine.com/wp-content/uploads/2010/06/toy_story_3_andy.jpg" alt="" width="465" height="236" /></p> <p style="text-align: left;">Now we're one step further in our understanding of hair, with an interesting research article in Physical Review Letters:</p> <p><a href="http://prl.aps.org/abstract/PRL/v108/i7/e078101">Phys. Rev. Lett. 108, 078101 (2012)<br /></a>Raymond E. Goldstein, Patrick B. Warren, and Robin C. Ball<br /> Shape of a Ponytail and the Statistical Physics of Hair Fiber Bundles</p> <p>They've attempted to create a model of why ponytails have the shape they do. They postulate that the energy of a bundle of hair is a function of the average curvature of the hairs (i.e., the overall shape of the ponytail), the potential energy due to the gravitational field of the earth, and an average force per length due to the statistical properties of the individual hairs - their points of contact, waviness, split ends, whatever. The equation from the paper is:</p> <p><a href="/files/builtonfacts/files/2013/01/hairEq.png"><img class="aligncenter size-full wp-image-1812" title="hairEq" src="/files/builtonfacts/files/2013/01/hairEq.png" alt="" width="390" height="75" /></a>where the κ term is the average curvature, the φ term is the gravitational potential, and the u term is an average over the statistics of the individual hairs. (I'll sketch a stripped-down numerical toy along these lines at the end of the post.) How well does it work?</p> <p><a href="/files/builtonfacts/files/2013/01/hair2.png"><img class="aligncenter size-full wp-image-1813" title="hair2" src="/files/builtonfacts/files/2013/01/hair2.png" alt="" width="373" height="303" /></a>Pretty well, especially for a model which is effectively two-dimensional.</p> <p>You might wonder why anyone would bother researching this. (This particular work even won an <a href="http://www.improbable.com/ig/winners/#ig2012">Ig Nobel prize</a>.) Like many seemingly weird bits of science, there's actually quite a bit of practical point to it. Many-body problems are hard, and any advance that allows you to avoid having to do a full-blown simulation has the potential to be extremely useful. I mentioned computer animation as a flashy example, but this research is useful in any kind of bundled-fiber system, from fiber-optic telecommunications to the medical treatment of the fiber bundles in your body.</p> <p>For a five-second hairstyle, that's not a bad day's work.</p> <p> </p> </div> <span><a title="View user profile." href="/author/mspringer" lang="" about="/author/mspringer" typeof="schema:Person" property="schema:name" datatype="">mspringer</a></span> <span>Tue, 01/22/2013 - 07:15</span> Tue, 22 Jan 2013 12:15:44 +0000 mspringer 121043 at https://scienceblogs.com
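<p>One more thing, as promised above: to give a flavor of what this kind of energy-minimization model involves, here is a toy sketch of my own. It is emphatically <em>not</em> the Goldstein-Warren-Ball model - their functional also contains the statistical "pressure" term coming from the random waviness of the individual hairs - just a stripped-down 2-D version that treats the bundle as a single elastic fiber clamped sideways at the top and balances a discrete bending energy against gravity. All parameter values are invented for illustration.</p> <pre>
# Toy 2-D elastica under gravity (Python with NumPy/SciPy).
# The fiber is N straight segments of length ds; theta[i] is the angle of
# segment i measured from the downward vertical (theta = 0 hangs straight
# down, theta = pi/2 points sideways). The first segment is clamped sideways.
import numpy as np
from scipy.optimize import minimize

N, L = 40, 0.25        # number of segments, total length in meters
ds   = L / N
A    = 8e-5            # effective bending stiffness (N m^2) - invented value
w    = 0.10            # weight per unit length (N/m) - invented value
theta_clamp = np.pi / 2

def energy(theta):
    th = np.concatenate(([theta_clamp], theta))       # prepend clamped angle
    bend = 0.5 * A * np.sum(np.diff(th)**2) / ds      # sum of (A/2) kappa^2 ds
    # depth of each segment's midpoint below the clamp (positive = lower)
    depth = np.cumsum(ds * np.cos(th)) - 0.5 * ds * np.cos(th)
    grav = -w * ds * np.sum(depth)                    # hanging lower costs less
    return bend + grav

res = minimize(energy, x0=np.full(N - 1, theta_clamp), method="BFGS")
th = np.concatenate(([theta_clamp], res.x))
x = np.cumsum(ds * np.sin(th))    # horizontal positions of segment ends
y = -np.cumsum(ds * np.cos(th))   # vertical positions (downward is negative)
print("free end of the fiber sits at x = %.3f m, y = %.3f m" % (x[-1], y[-1]))
</pre> <p>Minimizing over the segment angles produces the expected shape: the fiber leaves the clamp sideways and droops over toward vertical within a bending length set by the ratio of stiffness to weight per length. The interesting physics in the actual paper is in how the random structure of the individual hairs modifies this single-fiber picture for a real bundle.</p>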