Academia
As promised, an answer to a question from a donor to this year's DonorsChoose Blogger Challenge. Sarah asks:
Chad, can I get a post about how you (or scientists in general) come up with ideas for experiments? You've covered some of the gory detail with the lab info posts, but I think it would be useful for your readers to see where the ideas come from.
The answer is obvious: Ideas come from Schenectady. Which, not coincidentally, is where I live...
More seriously, in my area of experimental physics, I think there are two main ways people come up with ideas for new experiments:…
Melissa at Confused at a Higher Level has a nice post on the tension between faculty research and teaching:
Malachowski writes, "We all know that working with undergraduates is time consuming and in some cases it slows down our research output, but work with undergraduates should be supported, celebrated, and compensated at a high level. For most of us, the process involved in research with students is as important as the product." If colleges adopt a narrow definition of scholarly productivity measured only by publications, they may unintentionally provide incentives for faculty not to…
Ok, not a bar, more like an information literacy class.
I thought I'd bring to everyone's attention a presentation by two of my York University Libraries colleagues, web librarian William Denton and instruction librarian Adam Taves.
It was at Access in Winnipeg a week or so ago:
After Launching Search and Discovery, Who Is Mission Control?
Reference librarians are whiny and demanding.
Systems librarians are arrogant and rude.
Users are clueless and uninformed.
A new discovery layer means that they need to collaborate to build it and then -- the next step -- integrate it into teaching and…
As a sort of follow-up to yesterday's post asking about incompetent teachers, a poll on what you might call the "Peter Threshold," after the Peter Principle. Exactly how many incompetent members can an organization tolerate?
The acceptable level of incompetence in any organization (that is, the fraction of employees who can't do their jobs) is:
This was prompted by one commenter's estimate that 30% of business managers are incompetent, which seems awfully high to be acceptable, particularly in the business world where, we're told, incompetents are regularly fired without…
As mentioned in the previous post, there has been a lot of interesting stuff written about education in the last week or so, much of it in response to the manifesto published in the Washington Post, which is the usual union-busting line about how it's too difficult to fire the incompetent teachers who are ruining our public schools. Harry at Crooked Timber has a good response, and links to some more good responses to this.
I'm curious about a slightly different question, though, which is in the post title. There's a lot of talk about how incompetent teachers are dragging the system down, but…
There have been a bunch of interesting things written about education recently that I've been too busy teaching to comment on. I was pulling them together this morning to do a sort of themed links dump, when the plot at the right, from Kevin Drum's post about school testing jumped out at me. This shows test scores for black students in various age groups over time, but more importantly, it demonstrates one of my pet peeves about Excel.
If you look at the horizontal axis of this plot, it shows regularly spaced intervals. If you actually read the labels, though, you'll see that they're anything…
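The pet peeve is easy to make concrete. Here's a minimal sketch with made-up assessment years (hypothetical numbers, not the actual data from the plot), showing why a categorical axis distorts trends when the underlying x-values are unevenly spaced:

```python
import numpy as np

# Hypothetical assessment years; the real survey years are similarly irregular.
years = np.array([1971, 1975, 1980, 1984, 1990, 1999, 2004, 2008])

# Excel's default line chart places these at evenly spaced category
# positions 0, 1, 2, ..., as if every gap were the same width.
category_positions = np.arange(len(years))

# The actual gaps between assessments are anything but uniform:
gaps = np.diff(years)
print(gaps.tolist())  # [4, 5, 4, 6, 9, 5, 4] -- unequal, so slopes get distorted

# A correct plot would use the years themselves as x-coordinates,
# e.g. matplotlib's plt.plot(years, scores), rather than a category axis.
```

Compressing a 9-year gap and a 4-year gap into the same horizontal distance makes the rate of change between points meaningless, which is exactly the problem with the plot in question.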
The latest Cites & Insights (v10i11) is out and in it Walt Crawford explores some of the recent developments in the blogging landscape in a section called The Zeitgeist: Blogging Groups and Ethics. It's a very good overview and analysis of what's going on both in the science and librarian blogospheres.
It's well worth checking out. Some highlights:
Blogging Groups and Ethics
Do you blame Roy Tennant when the Annoyed Librarian writes posts that undermine librarianship and libraries?
I'm guessing you don't. Whoever the Library Journal incarnation of the Annoyed Librarian might or might…
Melissa at Confused at a Higher Level offers some thoughts on the relative status of experimental vs. theoretical science, spinning off a comprehensive discussion of the issues at Academic Jungle. I flagged this to comment on over the weekend, but then was too busy with SteelyKid and football to get to it. Since I'm late to the party, I'll offer some slightly flippant arguments in favor of experiment or theory:
Argument 1: Experimentalists are better homeowners. At least in my world of low-energy experimental physics, many of the skills you are expected to have as an experimental physicist…
It's that time of year again, which is to say "October, when we raise money for DonorsChoose." As you may or may not know, DonorsChoose is an educational charity which has teachers propose projects that would make their classrooms better, and invites donors to contribute to the projects of their choice. Every October since ScienceBlogs launched, we have done a fundraiser for them here, and this year's entry is now live:
While the warm-fuzzy sensation of doing a good deed for school children in poor districts may be enough to get some people to donate, I'll also sweeten the pot a little with…
Via Tom, a site giving problem-solving advice for physics. While the general advice is good, and the friendly, Don't-Panic tone is great, I do have a problem with one of their steps, Step 7: Consider Your Formulas:
Some professors will require that you memorize relevant formulas, while others will give you a "cheat sheet." Either way, you have what you need. Memorization might sound horrible, but most physics subjects don't have that many equations to memorize. I remember taking an advanced electromagnetism course where I had to memorize about 20 different formulas. At first it seemed…
Thanks to Razib, I've managed to separate out Hispanic graduation rates in our new favorite graph (cf. and also):
I didn't put this on the graph, but immigration history does make a difference here. Hispanics born in the US have essentially the same high school graduation rate as everyone else, and go to college more often than those born elsewhere (at a rate somewhat higher than among African Americans); both groups attend grad school at comparable rates, slightly lower than African Americans.
And thanks to other suggestions in the comments, here're the same data…
In answer to requests from the previous post on graduation rates, here's the same data broken down by race. African Americans still lag whites in graduation rates, but have made impressive gains in high school graduation rates, though graduation appears more likely to be delayed. African Americans are making impressive gains in grad school, but only quite recently. I may try to look at income later, but that's trickier to handle. The number of people completing grad school in any subject is declining regardless. I haven't figured out how to separate non-Hispanic whites…
Via Steve Hsu, a GNXP post about the benefits of elite college educations, based largely on a graph of income vs. US News ranking. While the post text shows some of the dangers of taking social-science data too literally (the points on the graph in question are clearly binned, so I would not attribute too high a degree of reality to a statement like "The marginal benefit of getting into the next highest ranked school is actually higher the higher the rank of your current school. In other words, Yale grads should really really want to go to Harvard"), the apparent effect is pretty significant…
While answering a question for Science and Religion Today ("Is it of greater importance for America to have more scientific experts or less scientific illiteracy" – short answer: both, but if I must, I'd choose scientific literacy), I started toying around with these data on graduation rates in different generations:
Based on the General Social Survey, I plotted the percent saying they completed at least high school, college or junior college, and grad school against their birth year. The drop off for college and high school right at the end is probably just a sign that some people take…
So, what do we make of the NRC Rankings?
What drives the different rankings, and what are the issues and surprises?
First, the R-rankings really are reputational - they are a bit more elaborate than just asking straight up, but what they reduce to is direct evaluation by respondents without evaluating quantitative indicators.
Doug at nanoscale puts it well - the S-Rankings are generally the better indicators...
A new index W = R - S has been named the "hard work" index.
BTW - you can't take (R+S)/2 and call it a rank - you need to rank the resulting score and count the ordinal position…
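The point about averaging can be sketched with made-up R- and S-rank positions for four hypothetical programs (not the actual NRC numbers):

```python
# Hypothetical R- and S-rank positions for four programs.
r_rank = {"A": 1, "B": 2, "C": 3, "D": 4}
s_rank = {"A": 4, "B": 1, "C": 2, "D": 3}

# (R+S)/2 is a score, not a rank: the values need not be
# distinct integers running from 1 to N.
score = {p: (r_rank[p] + s_rank[p]) / 2 for p in r_rank}

# To get an actual rank, sort by score and count ordinal position.
ordered = sorted(score, key=score.get)
combined_rank = {p: i + 1 for i, p in enumerate(ordered)}

# The "hard work" index W = R - S, per the definition above.
w_index = {p: r_rank[p] - s_rank[p] for p in r_rank}
```

Here the averaged scores come out 2.5, 1.5, 2.5, 3.5 - not a ranking at all until you sort and count positions, which is the whole point.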
The Dog Zombie has an interesting post discussing women in vet med--and why there are so many. She notes that her school is only 12% male, versus a more even distribution in med schools, and points to the recent discussion of gender imbalance in science blogging. This is interesting to me, as my personal vet is male, as are almost all of the vets we collaborate with for our research. Of course, the gender distribution of veterinarians in academia may well be more gender-balanced (or even male-skewed) than that of those currently in vet school or recently graduated.
DZ posits some possible reasons for this…
The NRC rankings are out.
Penn State Astronomy is ranked #3 - behind Princeton and Caltech.
W00t!
PSU doing the mostest with the leastest.
The Data Based Assessment of Graduate Programs by the National Research Council, for 2010, is out, reporting on the 2005 state of the programs.
The full data set is here
EDIT: PhDs.org has a fast rank generator by field.
Click on the first option (NRC quality) to get R-rankings, next button ("Research Productivity") to get the S-rankings, or assign your own weights to get custom ranking.
Astronomy S-Rankings:
Princeton
Caltech
Penn State
Berkeley…
Two aspects of the NRC rankings are that a) it took so long that the results are dated, and people will selectively choose to use or ignore them as best suits them (and then rely on the 1995 rankings instead, I gather)
and, b) the process was so hard and unpleasant it will never be done again...
Hmm, that sounds familiar.
We can fix that.
See, the arduous part of the NRC was the data mining - gathering the metrics after they'd been defined.
It took a long time and required iterations and debate.
But, this is precisely the sort of thing that can be automated.
At least in large part.
eg. the…
With about 100,000 metrics collected on 5,000 or so programs, there are bound to be errors.
In particular, a lot of the metrics are of the form:
out of N people, how many, k, do or do not have the property we are measuring
This is then reported as a percentage.
These percentages must be of the form (k/N)*100 or (1 - k/N)*100,
where k and N are integers - that is, integer multiples of 100/N.
Yet many of them clearly are not.
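A sanity check along these lines is simple to sketch. Assuming a reported percentage and a known denominator N (the values below are hypothetical, not taken from the NRC tables), a consistent entry should be within rounding of k/N for some integer k:

```python
def consistent(pct, n, decimals=1):
    """Check whether a reported percentage could be (k/N)*100
    (equivalently (1 - k/N)*100) for some integer 0 <= k <= N,
    allowing for rounding to the reported number of decimals."""
    tol = 0.5 * 10 ** (-decimals)
    return any(abs(pct - 100.0 * k / n) <= tol for k in range(n + 1))

# Out of N = 12 faculty, any true percentage is a multiple of 100/12 = 8.33...
print(consistent(16.7, 12))  # 2/12 = 16.666... rounds to 16.7 -> True
print(consistent(30.0, 12))  # no integer k gives 30.0% of 12 -> False
```

Entries failing a check like this are exactly the ones worth flagging as transcription or rounding errors.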
There are several explanations for this, all of which are likely correct:
First of all, there are what look like clear transcription errors; reversed digits, or duplicate or omitted digits. Somebody entered these numbers by hand on…