Methodology

Toby Martin, The Cruciform Brooch and Anglo-Saxon England (2015). This is the definitive study of English cruciform brooches. Now and then a study comes along that is so comprehensive, and so well argued, that nobody is ever likely even to try to eclipse it. It is my firm belief that future work on English cruciform brooches will strictly be footnotes to Toby Martin. He has collected and presented a huge body of material, asked interesting questions of it, and dealt with it competently using state-of-the-art methods. I'd be happy to hand this book as a model to any archaeologist anywhere who…
At Uncertain Principles, Chad opines that "research methods" look different on the science-y side of campus than they do for his colleagues in the humanities and social sciences: When the college revised the general education requirements a few years ago, one of the new courses created had as one of its key goals to teach students the difference between primary and secondary sources. Which, again, left me feeling like it didn't really fit our program-- as far as I'm concerned, the "primary source" in physics is the universe. If you did the experiment yourself, then your data constitute a…
Here we continue our examination of the final report (PDF) of the Investigatory Committee at Penn State University charged with investigating an allegation of scientific misconduct against Dr. Michael E. Mann made in the wake of the ClimateGate media storm. The specific question before the Investigatory Committee was: "Did Dr. Michael Mann engage in, or participate in, directly or indirectly, any actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research or other scholarly activities?" In the last two posts, we…
When you're investigating charges that a scientist has seriously deviated from accepted practices for proposing, conducting, or reporting research, how do you establish what the accepted practices are? In the wake of ClimateGate, this was the task facing the Investigatory Committee at Penn State University investigating the allegation (which the earlier Inquiry Committee deemed worthy of an investigation) that Dr. Michael E. Mann "engage[d] in, or participate[d] in, directly or indirectly, ... actions that seriously deviated from accepted practices within the academic community for…
Way back in early February, we discussed the findings of the misconduct inquiry against Michael Mann, an inquiry that Penn State University mounted in the wake of "numerous communications (emails, phone calls, and letters) accusing Dr. Michael E. Mann of having engaged in acts that included manipulating data, destroying records and colluding to hamper the progress of scientific discourse around the issue of global warming from approximately 1998". Those numerous communications, of course, followed upon the well-publicized release of purloined email messages from the Climate Research Unit (…
Session description: Much of the science that goes out to the general public through books, newspapers, blogs and many other sources is not professionally fact checked. As a result, much of the public's understanding of science is based on factual errors. This discussion will focus on what scientists and journalists can do to fix that problem, and the importance of playing a pro-active role in the process. The session was led by Rebecca Skloot (@RebeccaSkloot), Sheril Kirshenbaum (@Sheril_), and David Dobbs (@David_Dobbs). Here's the session's wiki page. Getting the Science Right: Importance…
As I was driving home from work today, I was listening to Marketplace on public radio. In the middle of a story, reported by Nancy Marshall Genzer, about opponents of health care reform, there was an interesting comment that bears on the nature of economics as a scientific discipline. From the transcript of the story: The Chamber of Commerce is taking a bulldozer to the [health care reform] bill. Yesterday, the Washington Post reported the Chamber is hiring an economist to study the legislation. The goal: more ammunition to sink the bill. Uwe Reinhardt teaches economics at Princeton. He…
Back before I was sucked into the vortex of paper-grading, an eagle-eyed Mattababy pointed me to a very interesting post by astronomer Mike Brown. Brown details his efforts to collaborate with another team of scientists who were working on the same scientific question he was working on, what became of that attempted collaboration, and the bad feelings that followed when Brown and the other scientists ended up publishing separate papers on the question. Here's how Brown lays it out: You would think that two papers that examine the same set of pictures from the Cassini spacecraft to map…
Over at Starts with a Bang, Ethan Siegel expressed exasperation that Nature and New Scientist are paying attention to (and lending too much credibility to) an astronomical theory Ethan views as a non-starter, Modified Newtonian Dynamics (or MOND): [W]hy is Nature making a big deal out of a paper like this? Why are magazines like New Scientist declaring that there are cracks in dark matter theories? Because someone (my guess is HongSheng Zhao, one of the authors of this paper who's fond of press releases and modifying gravity) is pimping this piece of evidence like it tells us something.…
In a recent post, Candid Engineer raised some interesting questions about data and ethics: When I was a graduate student, I studied the effects of many different chemicals on a particular cell type. I usually had anywhere from n=4 to n=9. I would look at the data set as a whole, and throw out the outlying points. For example, if I had 4 data points with the values 4.3, 4.2, 4.4, and 5.5, I would throw out the 5.5. Now that I am older, wiser, and more inclined to believe that I am fully capable of acquiring reproducible data, I am more reluctant to throw away the outlying data points. Unless…
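One principled alternative to eyeballing which points to discard is a formal outlier test. Here is a minimal sketch using Grubbs' test on the example data from the post; the function name and the choice of significance level are mine, not Candid Engineer's, and with samples this small any such test should be interpreted cautiously.

```python
import numpy as np
from scipy import stats

def grubbs_outlier(data, alpha=0.05):
    """Two-sided Grubbs' test: identify the single most extreme point
    and report whether its studentized deviation exceeds the critical
    value at significance level alpha."""
    x = np.asarray(data, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)            # sample std. deviation
    idx = np.argmax(np.abs(x - mean))             # most extreme point
    G = abs(x[idx] - mean) / sd                   # Grubbs statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)   # t critical value
    G_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return x[idx], G, G_crit, G > G_crit

value, G, G_crit, is_outlier = grubbs_outlier([4.3, 4.2, 4.4, 5.5])
```

For the data above, the statistic for 5.5 only barely exceeds the critical value at alpha = 0.05, which is exactly why a stated, pre-registered rule beats an after-the-fact judgment call about which points "look" wrong.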
You may recall my examination earlier this month of a paper by Johnson and Stricker published in the Journal of Medical Ethics. In my view, it was not a terribly well-argued or coherent example of a paper on medical ethics. Now, judging from an eLetter to the journal from Anne Gershon, the president of the Infectious Diseases Society of America (IDSA), there is reason to question the factual accuracy of that paper, too. The Johnson and Stricker paper promised an exploration of ethical issues around an antitrust investigation launched by the Connecticut Attorney General examining the IDSA's…
There's an interesting article in the Telegraph by Eugenie Samuel Reich looking back at the curious case of Jan Hendrik Schön. In the late '90s and early '00s, the Bell Labs physicist was producing a string of impressive discoveries -- most of which, it turns out, were fabrications. Reich (who has published a book about Schön, Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World) considers how Schön's frauds fooled his fellow physicists. Her recounting of the Schön saga suggests clues that should have triggered more careful scrutiny, if not alarm bells. Of…
At White Coat Underground, PalMD considers an article from the Journal of Medical Ethics. The article (L. Johnson, R. B. Stricker, "Attorney General forces Infectious Diseases Society of America to redo Lyme guidelines due to flawed development process," Journal of Medical Ethics 2009; 35: 283-288. doi:10.1136/jme.2008.026526) is behind a paywall, but Pal was kind enough to send me a copy. Pal writes: I have a strong interest in medical ethics, although I'm not an ethicist myself. Still, I'm generally familiar with the jargon and the writing styles. This piece reads like no ethics article I…
In the wake of some recent deaths in Edmonton of teenagers who took Ecstasy, DrugMonkey gets irritated with a doctor who made some proclamation to the press: I'm particularly exercised over an article which quotes Charles Grob, M.D. (UCLA page): Charles Grob believes there is a strong chance that a deadly batch of adulterated pills is making the rounds in and around Edmonton, though health officials and law-enforcement groups have issued no such public warning. Dr. Grob, a professor of psychiatry at UCLA, was the first U. S. researcher to conduct human tests of methylenedioxymethamphetamine…
By email, following on the heels of my post about the Merck-commissioned, Elsevier-published fake journal Australasian Journal of Bone and Joint Medicine, a reader asked whether the Journal of American Physicians and Surgeons (JPandS) also counts as a fake journal. I have the distinct impression that folks around these parts do not hold JPandS in high esteem. However, it seems like there's an important distinction between a fake journal and a bad one. Kathleen Seidel of the Neurodiversity Weblog wrote a meticulous examination of JPandS and of the professional society, the Association of…
One arena in which members of the public seem to understand their interest in good and unbiased scientific research is drug testing. Yet a significant portion of the research on new drugs and their use in treating patients is funded by drug manufacturers -- parties that have an interest in more than just generating objective results to scientific questions. Given how much money goes to fund scientific research in which the public has a profound interest, how can we tell which reports of scientific research findings are biased? This is the question taken up by Bruce M. Psaty in a Commentary…
Some commenters on my last post seem to be of the view that it is perfectly fine for scientists to pull numbers out of thin air to bolster their claims, at least under some circumstances. I think it's a fair question to ask: In which circumstances are you comfortable giving scientists the go-ahead to make up their numbers? One suggestion was that scientists ought to be "permitted to use ordinary language meanings of words and colloquialisms in their non-peer reviewed discourse" -- or at least, to do so in discussions on blogs. I take it this assumes that "ordinary language meanings of words…
Sometimes scientists make claims with numbers they have clearly pulled out of thin air. For example: Ultimately, success is only about 0.01% based on 'strategies' like those espoused on this blog, and 99.99% on simply doing good science and explaining a good plan well. Is commenter Dave making a subtle joke here? Or what? Because the way science is supposed to be played, making up frequencies (or probabilities, for that matter) is Just Not Done.
Everyday Practice of Science: Where Intuition and Passion Meet Objectivity and Logic, by Frederick Grinnell (Oxford University Press, 2009). Scientists are not usually shy when it comes to voicing their frustration about the public's understanding of how science works, or about the deficits in that understanding. Some lay this at the feet of an educational system that makes it too easy for students to opt out of science coursework, while others blame the dearth of science coverage in our mass media. Rather than casting about for a villain, cell biologist Frederick Grinnell has written a book…
Orac takes issue with a pair of posts I wrote yesterday about the National Center on Complementary and Alternative Medicine (NCCAM). I gather he thinks I've been far too trusting as far as the information provided on the NCCAM website, and that I'm misrepresenting the issues the critics of NCCAM have with the center. If my posts communicated that they were giving the straight dope on NCCAM and the objections to it, then I blew it; that wasn't at all what was intended. Rather, I wanted to have a look at the ethical issues that arise from such an official effort to examine medical treatments…