I'm giving an exam at 9:00 this morning-- neither snow, nor more snow, nor blowing snow, nor single-digit temperatures shall stay the progress of shaping young minds. Anyway, to keep things lively while I'm proctoring the test, here's a poll question inspired by the exam:
What's your favorite calculational shortcut?
Today's test is on basic quantum mechanics-- photoelectric effect, Compton effect, the Bohr model of hydrogen, and simple solutions of the Schrödinger equation-- and as such, features a lot of problems that are made easier by knowing some shortcut or another. Sometimes, these are numerical facts-- for example, that Planck's constant times the speed of light is 1240 eV-nm-- and sometimes they're mathematical techniques-- such as knowing that an odd function integrated over a symmetric interval about the origin is zero-- but there are a whole host of little facts that can dramatically shorten a problem, and reduce your chances of making a mistake.
So, what's your favorite trick for making a long problem shorter?
For sheer labor-saving potential, I'd probably have to go with the odd/even function trick-- there are a number of really horrible-looking integrals that come up when you do the sort of quantum mechanics that deals with actual wavefunctions, and many of them can be eliminated entirely by looking at the symmetry of the functions involved. You can arrive at the same answer by actually doing out the integral, of course, but it's really easy to make a mistake-- none of the students used the symmetry trick on a recent homework problem about finding expectation values for a Gaussian wavepacket, and every one of them ended up getting a wrong answer.
I also get a lot of use out of hc=1240 eV-nm. It's not that it's all that difficult to remember the numerical values of Planck's constant and the speed of light, but it takes a lot of problems from things that require a scientific calculator to work out down to arithmetic that I can easily do in my head, at least as long as I don't have to convert back to joules...
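(For anyone who wants to see where 1240 comes from, here's a quick sanity check-- just a sketch in Python with rounded textbook values for the constants, and 500 nm as an arbitrary example wavelength:)

```python
# Check that h*c is about 1240 eV-nm, then use it on an example photon.
h = 6.626e-34     # Planck's constant, J*s
c = 2.998e8       # speed of light, m/s
e = 1.602e-19     # J per eV

hc_eV_nm = h * c / e * 1e9   # convert J*m to eV*nm
print(hc_eV_nm)              # ~1239.8, i.e. "1240 eV-nm"

# E = hc/lambda for a 500 nm photon: about 2.48 eV, no calculator needed
print(hc_eV_nm / 500)
```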
My absolute favorite is no longer accompanied by the brain cells that carried the memory for context. But with some time to kill at the end of the day, the professor who had been lecturing us on some introductory topic in solid state physics asked if we were interested in seeing the "ten minute solution to the hydrogen atom?"
Sure, sez we, and the chalk starts flying. Inconvenient terms arise and are cast aside without remorse: asymmetries? higher orders? bah! simplify! simplify! simplify!
And then suddenly... 3
Professor turns about, with a smile for the blank faces - in what's left of my memory, none saw what was coming next.
Turning back to the board, he struck out the 3 and replaced it with π ... and the solution fell out one or two lines later.
So 3 = π is my favorite. Wish I could remember how the actual argument went. Something starting from the Coulomb potential and uncertainty relations? Long gone.
Mine is, a year is pi * 10^7 seconds (roughly).
Yeah, that's a good one. Crucial for understanding Vernor Vinge's A Deepness in the Sky, too.
The way I prefer to put it is: '1 nanocentury = pi seconds'.
I've seen that attributed to Tom Duff at Bell Labs but I've never looked to see if anyone else noticed it first.
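(For the record, the arithmetic behind both versions of that trick-- a quick sketch, nothing fancier than a Julian year:)

```python
# A year is roughly pi * 10^7 seconds, so a nanocentury is roughly pi seconds.
year = 365.25 * 24 * 3600    # 3.15576e7 s
print(year / 1e7)            # ~3.156, within half a percent of pi
print(1e-9 * 100 * year)     # one nanocentury: ~3.156 s
```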
Energy at room temperature is 1/40 eV.
Energy of an "average" chemical bond is 100 kCal/mol.
Feynman's common trick of differentiating wrt a parameter in an integral. (Thereby leading to generating functions in quantum mechanics for those integrals that don't happen to be odd over a symmetric interval.)
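(The standard illustration of the parameter trick, for anyone who hasn't seen it: start from the Gaussian integral and differentiate both sides with respect to a--

\[
\int_{-\infty}^{\infty} e^{-a x^2}\,dx = \sqrt{\frac{\pi}{a}}
\quad\Rightarrow\quad
\int_{-\infty}^{\infty} x^2\,e^{-a x^2}\,dx = -\frac{d}{da}\sqrt{\frac{\pi}{a}} = \frac{1}{2}\sqrt{\frac{\pi}{a^3}},
\]

so all the <x^2>-type integrals come along for free once you know the basic one.)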
Check the units to figure out where the solution has gone awry. (Corollary: don't "throw out h-bar" and other constants just because "the equation is then in dimensionless units".)
BTW, my favorite shortcut is basically the entire concept of light refraction -- you can approach it at any level you need and have time for.
Depending on what level of detail you need, you can work from the 'thin lens' approximations through 'Snell's law ray tracing' on up through various diffraction behaviors and aberrations (and of course quickly past my level of knowledge) and so forth. It just depends what level of detail and performance you need to accomplish the task.
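(As a concrete example of the lowest rung on that ladder, the thin-lens equation alone covers a lot of homework problems--

\[
\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i},
\]

so, say, an object 20 cm from a 10 cm lens images 20 cm behind it at unit magnification, with no ray tracing required.)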
I also get a lot of use out of hc=1240 eV-nm
That's funny--in high-energy physics we use hbar c = 200 MeV-fm. Second favorite: if a charged particle in a magnetic field B curves at radius R, its momentum (in GeV/c) is 0.3 B R.
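(The 0.3 is pure unit conversion: p = qBR for a singly charged particle, and converting to GeV/c brings in a factor of c/10^9. A quick sketch with B = 1 T and R = 1 m as example values:)

```python
# p = q*B*R; dividing by e converts J to eV, and 1e9 converts eV to GeV.
e = 1.602e-19        # C
c = 2.998e8          # m/s
B, R = 1.0, 1.0      # tesla, meters (example values)
p_GeV = (e * B * R) * c / (e * 1e9)
print(p_GeV)         # ~0.2998 -- hence "0.3 B R"
```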
In any integral or large computation with trig functions, rewrite with z=e^{i*theta}. We are used to working with rational functions to a degree that we simply are not used to working with trig identities.
Particularly nice for integrals from 0 to 2*pi, of course.
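(A small worked example: with z = e^(i*theta), cos(theta) = (z + 1/z)/2 and d(theta) = dz/(iz), so the trig integral becomes a contour integral around the unit circle--

\[
\int_0^{2\pi}\cos^2\theta\,d\theta
= \oint_{|z|=1}\left(\frac{z+z^{-1}}{2}\right)^2\frac{dz}{iz}
= \frac{1}{4i}\oint\left(z+\frac{2}{z}+\frac{1}{z^3}\right)dz
= \frac{1}{4i}(2\pi i)(2) = \pi,
\]

with only the 1/z term contributing.)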
Mine is, a year is pi * 10^7 seconds (roughly).
An important corollary to this is that an experiment-year (I often hear it as an "accelerator year" but it's an upper limit for almost any experiment) is 10^7 seconds. No matter how hard you try, you'll never get a duty cycle for your experiment better than about 30%.
For my favorite computational trick from the experimental side, I'd have to go with fractional changes: if y=x^n then Delta y/y = n Delta x / x and that usually n is of order unity so we can more or less ignore it. This is such a basic method of thinking that it hardly feels like a trick, yet it's something I had to learn.
Let's say I want to stabilize the temperature of some device to 0.01 degrees. That tells me that I need to pick sensitive components to be good to 10^-5 (0.01 K / 300 K). Of course if I'm measuring a voltage across a semiconductor junction that has exponential dependence upon the temperature then these sorts of fractional sensitivities can be wrong.
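(The rule itself is just logarithmic differentiation--

\[
y = x^n \;\Rightarrow\; \ln y = n\ln x \;\Rightarrow\; \frac{\Delta y}{y} \approx n\,\frac{\Delta x}{x},
\]

so, for instance, a 1% error in the radius of a sphere is roughly a 3% error in its volume.)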
The charge on the electron is about 1/6 x 10^(-20), which can be handy.
When I was a student, rather a long time ago, I observed that a lot of the time that a professor asked for a numerical answer, it was either 0, 1 or singularity/infinity/etc, which reduced the search space somewhat.
This is my favorite, because it's funny...
This comes from Tim McCaskey, currently teaching high school somewhere in the Chicago area, even though he should instead be wasting his physics degree by being a musician. He says:
In fourteen hundred and ninety-two
Columbus sailed the ocean blue.
Divide that son-of-a-bitch in two
And that's how many watts are in a horsepower.
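And the mnemonic really does land within a tenth of a percent:

\[
\frac{1492}{2} = 746, \qquad 1\ \mathrm{hp} \approx 745.7\ \mathrm{W}.
\]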
8065 cm^-1 per eV. I used that all the time trying to get my chemist's brain to parse papers from the eV-loving gamma ray detection community.
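(That one is just the reciprocal of the hc = 1240 eV-nm number from the post:

\[
\frac{1}{hc} \approx \frac{10^{7}\ \mathrm{nm/cm}}{1240\ \mathrm{eV\,nm}} \approx 8065\ \mathrm{cm^{-1}\ per\ eV}.
\]

)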
And by 1/6 x 10^(-20), I did, of course, mean 1/6 x 10 ^(-18)
I like 0.6 ~ 1/1.6; it's quite useful for km-to-miles conversions.
I've always been fond of the oh-so-versatile Taylor expansion, a.k.a. "everything can be approximated as a harmonic oscillator" a.k.a. "everything's a Gaussian distribution."
Feynman's L2 circular approximation for the Gaussian probability integral.
Err, this is distinctly less sophisticated than the others above, but doing the digital root check on arithmetic is a wonder for me.
After that, the rule of 72.
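(For anyone who hasn't run into them: the digital-root check is just casting out nines, and the rule of 72 estimates doubling times for exponential growth. A quick sketch, with made-up numbers:)

```python
# Casting out nines: the digital root of a*b should match the digital
# root of digital_root(a) * digital_root(b).
def digital_root(n):
    return 1 + (n - 1) % 9 if n else 0

a, b = 478, 263
print(digital_root(a * b), digital_root(digital_root(a) * digital_root(b)))  # 2 and 2

# Rule of 72: growth at r percent per period doubles in about 72/r periods.
r = 6.0
print(72 / r)   # 12.0; the exact answer, ln(2)/ln(1.06), is about 11.9
```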
2^10 ~ 10^3
pi ~ sqrt(10)
e^(i*pi) = -1
Warp Speed = (velocity in units of C)^(1/3)
Age of universe ~ 0.5 x 10^18 seconds
speed of light ~ 299,792.458 kilometers / second ~ 300,000 km/sec
electron mass = 2.00827494 × 10^-30 pounds ~ 2 × 10^-30 pounds (just figured this one out a few seconds ago)
Mass of sun ~ 1.99 x 10^30 kg ~ 2 x 10^33 g
Earth Mass ~ 5.97 x 10^24 kg ~ 6 x 10^27 g
astronomical unit (AU) ~ 1.4959787 x 10^11 meters ~ 1.5 x 10^11 m
Earth's mean orbital speed ~ 30 kilometers per second
electron radius (classical) ~ 1.1094252 × 10^-13 inches ~ 0.1 picoinches
electron rest mass ~ 9.10956 x 10^-28 grams ~ 10^-27 g
nuclear magneton ~ 5.051 x 10^-27 joule per tesla = 5.051 x 10^-31 joule per gauss ~ 5 x 10^-31 joule per gauss
I've seen Feynman use most of these, back when I was coauthoring with him.
He loved asking handwaving theorists who had just filled a blackboard: "Have you put the numbers in?" he'd then do so, in his head, solving differential equations in his head if necessary.
He once sat silently for 2 hours in a PhD oral defense, then finally said, right at the end (like Columbo on his way out the door): "Have you put the numbers in?"
The PhD candidate protested that he didn't need to, since Equation #103 derived from #1202, etc.
Feynman put in the approximations for electron mass, speed of light, fine structure constant, radius of the universe, and the like.
"I see," he concluded. "By your final equation, the radius of the universe is roughly 1 centimeter."
He was sent back to work on his dissertation for a year or two.
I've wondered ever since. Suppose the radius of the universe really is roughly 1 centimeter? Sorry. Thinking like a Science Fiction author again...
Jonathan Vos Post:
[...]
Warp Speed = (velocity in units of C)^(1/3)
[...]
I've seen Feynman use most of these
I bet he did a lot of warp speed calculations. While playing the bongos, no doubt.
Well, yes, Chad. He liked Star Trek. It made him laugh. Gene Roddenberry and Majel and George Takei et al visited the Caltech campus. It was Caltech students who came up with the Warp Speed formula.
Feynman was a tremendously gifted self-taught bongo drummer. He'd say "pick two integers under 20."
Someone would say, for instance, "Okay, 7 and 17."
He'd then play a 7 against 17 beat by hand. Tape recorders and oscilloscopes confirmed it.
I've written about 3/4 of a novel, "Axiomatic Magic" set in a universe where both science and magic work, where he's the amateur sleuth trying to solve the magical murder of the world's top Group Theorist, with John Horton Conway a prime suspect.
He was the greatest of my mentors. Shows how much is lost in translation...
I tend to start big calculations with a check of the units, but that may be because I taught high school for a couple of years, and I had to protest about units about once a week.
My other favorite trick is to try to see if some conservation law (usually energy) will give a shortcut. It's a neat trick when it works.
I love the odd function integration one too, as well as a related one:
If you have some arbitrary integral in QM, often you can set bits of it proportional to Ylm's (Scienceblogs needs LaTeX). Since the Ylm's form a basis, it is easy to tell when integrals are zero. Last semester I used this to save a significant amount of time when I had to evaluate a bunch of integrals for the perturbation due to the Stark effect.
It pops up again in E&M when you are solving for the potential due to a sphere (symmetric around phi or theta, I don't remember which). Either way, your unsimplified answer is an infinite sum with some coefficients in front which you have to determine. When you do the integral to find the coefficients, if you are clever, you set your integral up to be a product of Legendre polynomials. Since the Legendre polynomials are orthogonal, only the integrals that have like polynomial orders are going to stick around. The rest are zero.
This means that what used to be an infinite sum is now a finite sum.
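(The orthogonality relation doing the work there, for anyone who wants it spelled out:

\[
\int_{-1}^{1} P_l(x)\,P_{l'}(x)\,dx = \frac{2}{2l+1}\,\delta_{ll'},
\]

so multiplying the boundary condition by a particular P_l'(cos theta) and integrating kills every term in the sum except l = l', and each coefficient comes out of a single integral.)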
I always liked c/g = 1 year (more or less).
Always good for working the twin paradox.
(Oops, that's a different entry :-) ).
Any number that is divisible by three has digits that sum to a number that is divisible by three. This comes in handy when you're trying to determine if a protein coding sequence has the appropriate number of nucleotides.
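(The reason it works is that 10 leaves remainder 1 on division by 3, so a number and its digit sum leave the same remainder mod 3. A trivial check, using a made-up sequence length:)

```python
# A hypothetical coding sequence 1233 nucleotides long: digits sum to 9,
# so the length is divisible by 3 -- exactly 411 codons.
n = 1233
print(sum(int(d) for d in str(n)) % 3, n % 3)   # both 0
```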
Here's one that shows my low temperature physics heritage. The Wiedemann-Franz law comes in really handy for designing experiments to run in dilution refrigerators. If we call the thermal resistance R_T and the electrical resistance R, then R_T times T = 6 (in SI units, K^2/W) when R = 150 nanoOhms.
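(For anyone curious where the 6 comes from: it's just the Lorenz number. A sketch, assuming the free-electron value L_0 ~ 2.44 x 10^-8 W Ohm/K^2:

\[
\frac{\kappa}{\sigma} = L_0 T
\;\Rightarrow\;
\frac{1}{R_T} = \frac{L_0 T}{R}
\;\Rightarrow\;
R_T\,T = \frac{R}{L_0} = \frac{150\times10^{-9}\ \Omega}{2.44\times10^{-8}\ \mathrm{W\,\Omega\,K^{-2}}} \approx 6\ \mathrm{K^2/W},
\]

with the geometry factor canceling between the thermal and electrical conductances.)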
So, what's your favorite trick for making a long problem shorter?
Assume 1 = 10.
That is, if you're walking through a problem on the back of a napkin, and it seems to be roughly around the correct order of magnitude, it's then worth digging in for a more formal solution.
Cheez Louise, Chad, you've made "most active" Seedbloggers with this poll. I wish I had something to add beyond "second you on the transform to complex numbers for Fourier analysis".
I am completely at your mercy. Do with me as you wish.
Kepler's 3rd law: (Period in years)^2 = (SMA in AU)^3
1 year = 10^7.5 seconds
c = 1 ft/nanosecond
Mass of earth = 6.6 sextillion tons (kind of rolls off the tongue)
x-tillion = 10^(3 (x+1) )
1 ton = 10^6 grams
hbar = 10^-27 erg-s
and (can't believe nobody has used this one yet) pi = 22/7
Oh yeah, I forgot one.
g = 10 m/s^2
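(Just for fun, a few of those checked against less-rounded numbers-- everything lands at the couple-percent level, which is the whole point:)

```python
import math

print(10**7.5, 365.25 * 24 * 3600)   # 3.16e7 vs 3.16e7 seconds in a year
print(2.998e8 * 1e-9 / 0.3048)       # c in feet per nanosecond: ~0.98
print(22 / 7, math.pi)               # 3.142857... vs 3.141593...
print(5.972e24 / 907.18)             # Earth's mass in short tons: ~6.6e21
```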
It's been a very long time since I took quantum, and someone already mentioned differentiating inside an integral wrt a parameter... However, there was one trick I felt clever for figuring out in QM -- when we were asked to solve the Schrödinger equation with a delta function potential, I realized you can represent the delta function in the equation using |x|: the second derivative d^2/dx^2(|x|) is 2*delta(x).
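(Spelled out, in case it helps anyone else: for V(x) = -alpha*delta(x), the bound state is psi ~ e^(-kappa|x|), and the kink at the origin is exactly where that |x| trick pays off--

\[
\psi(x) = \sqrt{\kappa}\,e^{-\kappa|x|}
\;\Rightarrow\;
\psi''(x) = \kappa^2\psi(x) - 2\kappa\,\delta(x)\,\psi(0),
\]

and matching the delta-function terms in \( -\frac{\hbar^2}{2m}\psi'' - \alpha\,\delta(x)\,\psi = E\psi \) gives \( \kappa = m\alpha/\hbar^2 \), hence \( E = -\hbar^2\kappa^2/2m = -m\alpha^2/2\hbar^2 \).)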
Danil, I encountered a hydrogen atom derivation along the lines of what you describe while studying for the GRE. I don't remember any factors of 3 turning into factors of pi, but that's most likely because the guy who showed it to me wasn't concerned with factors of 2, pi, etc. to begin with. He showed it to us as a way to deal with GRE favorites like "What's the equivalent of the Bohr radius for positronium?", i.e. what happens to the wave function, ground state energy, etc, if you change the proton mass, charge, etc. The idea was that you take some liberties with the uncertainty principle and say p=hbar/2r. Plug that into E=p^2/2m - k/r, and minimize with respect to r. That gives you the Bohr radius to within factors of pi, 2, etc. I don't remember if there's an easy way to get the excited state energies out of it or not. In any case it served me well on the GRE, and actually came in handy on a quantum midterm last week as well, so it's got my vote as a good shortcut to know. I've also always been fond of the trick of calculating Gamma(1/2) by taking the product of a Gaussian in x and another in y and switching to polar coordinates.
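(Sketching the minimization, with p ~ hbar/r so the numbers come out exactly; the hbar/2r version works the same way up to those factors of 2:

\[
E(r) = \frac{\hbar^2}{2m r^2} - \frac{e^2}{4\pi\epsilon_0 r},
\qquad
\frac{dE}{dr} = 0
\;\Rightarrow\;
r = \frac{4\pi\epsilon_0\hbar^2}{m e^2} = a_0 \approx 0.053\ \mathrm{nm},
\]

and plugging back in gives E(a_0) = -e^2/(8 pi epsilon_0 a_0) ~ -13.6 eV. For positronium you swap in the reduced mass m/2, which doubles the radius and halves the binding energy.)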
Lots of good ones here, but god bless raising and lowering operators! Algebra instead of calculus? BRILLIANT! Well, somebody has to calculate/measure the dipole matrix elements of, say, rubidium, and the calculation is hard since it's not hydrogen, but if you take those as parameters....
I have to second the suggestion for doing harder integrals by differentiating wrt a parameter inside an easier integral. I loved that the first time I ever saw it. (And this was of course one of Feynman's favorite tricks.)
My favorite single-use trick is how to find the definite integral for a gaussian from negative infinity to infinity: square the integral and then convert to polar coordinates. I never would have thought of that in a million years.
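(Written out, since it keeps coming up in this thread:

\[
I^2 = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy
= \int_0^{2\pi}\!\int_0^{\infty} e^{-r^2}\,r\,dr\,d\theta
= 2\pi\cdot\tfrac{1}{2} = \pi,
\]

so I = sqrt(pi), and Gamma(1/2) reduces to the same integral with the substitution t = x^2.)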
sin x = x
and the Mother of All Approximations...
f(x) = f(a) + (x-a) f'(a)
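(The version everybody in physics actually uses is the binomial special case, f(x) = (1+x)^n expanded about x = 0:

\[
(1+x)^n \approx 1 + nx \quad (|x|\ll 1),
\qquad\text{e.g.}\qquad
\frac{1}{\sqrt{1-v^2/c^2}} \approx 1 + \frac{v^2}{2c^2},
\]

which is where the nonrelativistic kinetic energy (1/2)mv^2 comes from.)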