We cleared a bunch of space in our deep storage area over the summer, and one of the things we found was a box full of old student theses from the 1950s and 1960s. The library already had copies of them, but I thought it was sort of cool to have a look into the past of the department, so we put them up on a shelf in the office. Yesterday, I was glancing over the shelf and spotted a thin volume, pictured in the "featured image" above: a Master's thesis from 1960 (when we used to give MS degrees in physics...) titled "A Monte Carlo Study of Neutron Scintillation Detection with a Hydrogenous Crystal" by Edward Lantz. I picked it up to take a look.
If you're not familiar with the jargon, it might not seem like something worth a look, but the title refers to the Monte Carlo method, which uses random numbers to predict the results of simulated experiments. You assign probabilities to the possible outcomes at each step of the simulation, then use a random number to pick one of those. You repeat this lots of times, and compile the results to get average trajectories and the like.
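To see the flavor of the thing, here's a minimal sketch in Python (my own toy example, nothing to do with the thesis): estimating pi by throwing random points at a unit square and tallying how many land inside the quarter circle. Random draw, outcome check, tally, repeat-- that's the whole method.

```python
import random

def estimate_pi(n_trials):
    """Toy Monte Carlo: the fraction of random points in the unit square
    that land inside the quarter circle approaches pi/4."""
    hits = 0
    for _ in range(n_trials):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1  # this trial's outcome: inside the circle
    return 4.0 * hits / n_trials

print(estimate_pi(30100))  # roughly 3.14, using the thesis's trial count
```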
This is, as you might imagine, somewhat computationally intensive. So when I saw "Monte Carlo simulation" and "1960" on the spine of this thesis, I said "What the hell? How did they do Monte Carlo simulations in 1960? With dice?"
I was being a little unfair to 1960-- they did have computers for this purpose, specifically an IBM 704 computer, capable of up to 4000 operations per second (it's not clear whether this was at Union or elsewhere-- the thesis has some gaps in its reporting of relevant information like that). They cite a 1956 publication from the Nuclear Division of the American Radiator and Standard Sanitary Corp. as the source of the code for their simulation. Reading this was a fun reminder of how different things used to be in the not-all-that-distant past-- the author was a NASA scientist during the Apollo era, and was about the age of my own grandparents.
While they did have access to a computer for this work, there were some major differences in the approach. Unlike theorists of today, they didn't write a specialized program to do the analysis, because:
In recent years, it has been discovered that the development of a program, such as this, which not only works satisfactorily but also gives correct answers, is a formidable and time consuming job even for an experienced digital computer programmer.
Thus if one is not an experienced programmer, and does not have the resources to have the theory programmed, he must resort to the second, and not wholly undesirable, method. This is to use a proven general program and to get the output in as close to the desired form as possible by making minor changes.
So, basically, they used the output of a more general Monte Carlo program and re-interpreted its results in terms of the properties of the neutrons. The program they were using "prints out the number of neutrons from each type of collision for each energy interval of the incident neutron and each material." They combine this with information about the position and direction of each individual neutron, "which can be obtained by converting the binary information which is stored on one of the tapes" (!!!) to determine the results.
They repeated this 30,100 times (well, 100 runs of a simulation involving 301 neutrons each), and compiled the results by hand to determine the number of neutrons they would be able to detect using this material as a scintillation detector. Most of the incident neutrons went through without hitting anything, but they still ended up analyzing the results of 2340 collisions between incident neutrons and hydrogen atoms in their simulated sample. This would be a ridiculously small sample for such a project today, but let me repeat, they analyzed the collisions BY HAND.
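Just to drive home how trivial that bookkeeping is now, here's a deliberately dumb toy version in Python. I'm making up a single collision probability per neutron-- the real calculation tracked positions, directions, and energies with measured cross sections-- but the tallying structure is the same, and the "by hand" step is one increment.

```python
import random

# Made-up toy model: one fixed collision probability per neutron. The
# real thesis calculation used measured cross sections and tracked each
# neutron's position, direction, and energy; this only mimics the tally.
P_COLLISION = 2340 / 30100  # picked to reproduce the thesis's totals

def count_collisions(n_runs=100, n_neutrons=301):
    collisions = 0
    for _ in range(n_runs):
        for _ in range(n_neutrons):
            if random.random() < P_COLLISION:
                collisions += 1  # in 1960, this was a pencil mark
    return collisions

print(count_collisions())  # about 2340, tallied in milliseconds
```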
In large part because of these historical computing limitations, the acknowledgements are a really interesting read:
M. S. Ketchum put the desired cross sections and energy intervals into the [General Monte Carlo] program. She laboriously key punched and set up numerous problems, deciphered computer memory stop locations and came up with corrections, merged output tapes, and reran problems.
My wife counted and sorted thousands of collisions, and typed and retyped rough drafts.
I'll try to keep this stuff in mind the next time I find myself cursing at flaky VPython simulations. Because, really, as irritating as a lot of modern computing can be, it's so much easier than it used to be that it's not even funny.
(Also, the obituary linked above suggested that he remained married to his collision-counting wife until his death in 2011, which is kind of nice to know...)
Another interesting factor has to do with the document preparation. Not only does he acknowledge his wife's typing, but:
Rose Kabalian typed the multilith masters so we would not have to read blurred carbons.
While Word can be maddening, I will try to remember to be thankful that I don't have to read blurred carbon copies of important documents, because Oh. My. God. Also, check out the hand-drawn data graph:
I'm just barely old enough to have been required to learn traditional drafting in woodshop in the 8th grade-- I think it was only a couple of years later that they discontinued that in favor of a "technology" course that didn't involve actually doing anything. So I have some idea of just how much that sort of plotting would suck. And that's one of at least 15 (sorry, XV-- they're labeled with Roman numerals for that extra touch of class) such figures.
So, there you go. A tangible relic from the days when non-experienced programmers walked uphill through the snow to the lab to convert binary information from tapes and sort collisions by hand. Kids these days, be grateful for what you've got...
In recent years, it has been discovered that the development of a program, such as this, which not only works satisfactorily but also gives correct answers, is a formidable and time consuming job even for an experienced digital computer programmer.
That's just as true today as it was in 1960. Most of my code writing efforts involve taking building blocks that other people wrote, and tweaking/adapting them to my specific purpose, rather than writing the code from scratch. Other people have established methods for computing a Fourier transform, or extracting data from a telemetry stream, or any of the hundreds of other things I do in the course of my work. So there is no need for me to reinvent the wheel.
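To pick an example of my own (not the commenter's): nobody analyzing data today writes their own FFT; you call the library routine and spend your effort on the part that's specific to your problem.

```python
import numpy as np

# The problem-specific part: a noisy 50 Hz signal I actually care about.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

# The solved-problem part: NumPy's FFT, a wheel nobody needs to reinvent.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(freqs[np.argmax(np.abs(spectrum))])  # ~50.0, the dominant frequency
```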
Even the people I know who specialize in simulations tend to build on earlier codes rather than writing them from scratch. They typically don't have time to write and (more importantly) debug that volume of code. Once in a while somebody will code an improvement to some core piece of the algorithm, but the goal is to get something that will elucidate the physics of the problem.
It's easy to forget how much difference modern computing equipment makes. I once came across a book of actuarial tables covering all kinds of life-contingent annuities, with myriad combinations of different ages of primary and secondary annuitants, etc. The publication date of this book was, as I recall, around 1780. It was staggering to contemplate how much human computational effort was involved in the production of this book.
You keep referring to "they." A team wrote this? I've never written a thesis, but isn't the work done by one person, or is the author referring to assistants?
Wes @3: The author of the thesis acknowledges assistance from at least three other people (and that's just in the portions Chad quoted). Also, while details vary by field, it is common in the sciences for a thesis student to perform the work in collaboration with her advisor--particularly in experimental work, where single-author papers are rare. Even for theoretical papers, at least in my field, the authors on the published papers are typically $STUDENT and $ADVISOR, or $STUDENT et al. (with $ADVISOR's name appearing somewhere in the author list). A few fields perversely insist on alphabetic author lists, but even there, $STUDENT and $ADVISOR will have their names in the appropriate positions. A bachelor's or master's thesis would typically be based on one or two papers; for a Ph.D. thesis (at least in my field) it's typically 3-6.
When I speak of punch cards, central processor rooms with refrigerator-size memory drives, and state-of-the-art modems with phone cradles working at the blinding speed of 300 bps, their eyes glaze over.
On the other hand, it is fun to watch a kid try to dial a rotary-dial phone by pushing the numbers. Interestingly, those rotary-dial units still work on many phone systems and, being genuine Ma Bell-produced devices, last forever. So tough they make a formidable weapon. Try that with an iPhone. I keep a "black-dial" around just for the laughs.
You will be equally surprised to know that in the 1960s they did Hartree-Fock calculations for many elements of the periodic table, and these still serve as a standard. I was surprised when I was coding the same in Python.
By the time I got to college both COBOL and FORTRAN were widely adopted but I discovered when I got to graduate school in 1970 that while quite a few graduate students could write FORTRAN code, almost none of the faculty could. This made for a somewhat anomalous situation - I found later - for those of us working in computational physics. The social contract between faculty and student was very different than it was in either experimental or theoretical (non-computational) physics.
Now that brought back some memories! Time to dust off the Marchant again.
The student was lucky that he could get the code. Sharing software was often considered to be like one student giving another student homework answers.
The Metropolis Monte Carlo algorithm was introduced in a 1953 paper:
http://en.wikipedia.org/wiki/Equation_of_State_Calculations_by_Fast_Com…
You can perhaps guess what they were doing (and why they had the computing time) if you know that Edward Teller was one of the authors!
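For anyone who hasn't seen it, the algorithm itself fits in a dozen lines. Here's a minimal sketch (my code, obviously not the 1953 paper's) sampling a harmonic well: propose a random move, always accept it if it lowers the energy, and accept it with probability exp(-dE/T) if it raises it.

```python
import math
import random

def metropolis(energy, x0, n_steps, temperature=1.0, step_size=0.5):
    """Minimal Metropolis sampler: propose a move, accept it with
    probability min(1, exp(-dE/T)), and record the chain."""
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + random.uniform(-step_size, step_size)
        dE = energy(x_new) - energy(x)
        if dE <= 0 or random.random() < math.exp(-dE / temperature):
            x = x_new  # accept; otherwise keep the old position
        samples.append(x)
    return samples

# Sample E(x) = x^2; the histogram of the chain approaches the
# Boltzmann distribution exp(-x^2 / T).
chain = metropolis(lambda x: x * x, x0=0.0, n_steps=100_000)
print(sum(chain) / len(chain))  # close to 0 by symmetry
```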