Don't get sick in July?

Blogging on Peer-Reviewed Research

Dave Munger and others have been spearheading an effort to promote the acceptance of a specific logo that science bloggers (ScienceBloggers included) can use to let readers know that the topic of a blog post is a discussion of real, peer-reviewed research. Use of the logo, which I've used for this post, means that a blogger is not just commenting on research that's been reported in the media, but rather has gone, so to speak, straight to the horse's mouth to look up the original peer-reviewed journal article. It's a worthy effort, and I plan on going back through the last few months of blogging and tagging appropriate posts, such as this one, where I discussed a recent article showing that having a positive mental attitude probably does not impact cancer survival.

There's another peer-reviewed paper that I've been meaning to discuss for about a month and a half now, but somehow it's gotten buried or pushed aside. Just as I was going to write about it last week, for instance, other topics came up that interested me more, at least at the time. Yesterday's inauguration of the BPR3 effort tweaked me to finally dig this paper out of the stack of Things That I Should Really Blog About and actually, you know, blog about it.

There's a common saying in academic medical centers that you may have heard before: "Never get sick in July." The reason, of course, is that most residency programs start sometime between June 24 and July 1. This means that every July freshly minted interns who less than a month ago were in medical school are set loose on an unsuspecting patient population, while last year's interns, now junior residents, suddenly find themselves in charge for the first time. Actually, it shouldn't be as bad as that if the supervision is adequate, but the question is whether there really is an increase in complications in July and August, the earliest months of the academic year. It turns out that a group at my alma mater, the University of Michigan, looked at just this question for surgical patients. The article got a fair amount of publicity in September, when it first came out. It appeared in the Annals of Surgery and was entitled "Seasonal Variation in Surgical Outcomes as Measured by the American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP)." Its abstract follows:

Objective: We hypothesize that the systems of care within academic medical centers are sufficiently disrupted with the beginning of a new academic year to affect patient outcomes.

Methods: This observational multiinstitutional cohort study was conducted by analysis of the National Surgical Quality Improvement Program-Patient Safety in Surgery Study database. The 30-day morbidity and mortality rates were compared between 2 periods of care: the early group (July 1 to August 30) and the late group (April 15 to June 15). Patient baseline characteristics were first compared between the early and late periods. A prediction model was then constructed via a stepwise logistic regression model, with a significance level for entry and a significance level for selection of 0.05.

Results: There was an 18% higher risk of postoperative morbidity in the early group (n = 9941) versus the late group (n = 10,313) (OR = 1.18, 95% CI 1.07-1.29, P = 0.0005, c-index = 0.794). There was a 41% higher risk for mortality in the early group compared with the late group (OR = 1.41, 95% CI 1.11-1.80, P < 0.005, c-index = 0.938). No significant trends in patient risk over time were noted.

Conclusion: Our data suggests higher rates of postsurgical morbidity and mortality related to the time of the year. Further study is needed to fully describe the etiologies of the seasonal variation in outcomes.

A reasonable question to ask is whether any data before this study demonstrated seasonal variations in surgical complications that might be related to the new crop of interns that shows up every year. Apparently not, and the authors speculate that this is probably because existing quality metrics have not, until recently, been sufficiently standardized and adjusted for risk based on preexisting conditions to allow reliable month-to-month comparisons on a large scale. Recently, however, the American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP) has changed that. This system uses a set of defined comorbidities and endpoints, along with a much more rigorous risk adjustment system, to allow a valid comparison of results among hospitals, and it accounts for some seasonal variables that might confound an analysis and either mask or accentuate seasonal variations in outcomes. In this study, the authors analyzed data from over 60,000 patients in 14 academic medical centers and 4 large, private, community-based hospitals over three years. They conclude that there is indeed an 18% higher rate of complications "early" in the academic year (July and August) compared with "late" in the year (April 15 to June 15, most likely chosen because chief residents tend to start disappearing to fellowships during the last two weeks of the academic year) and a 41% higher chance of mortality. They also found statistically significant differences in mean OR time, mean time between getting the patient into the room and making the incision, and time under general anesthesia, all worse in the early group.
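
To make those headline numbers concrete, here's a minimal sketch of how an odds ratio and its 95% confidence interval fall out of a simple 2x2 table. The complication counts below are made up for illustration (only the group totals match the abstract), and the paper's actual estimates came from a risk-adjusted logistic regression, not a crude table like this:

```python
import math

# Hypothetical counts for illustration only -- NOT the study's actual data.
# Rows: early group (Jul 1-Aug 30) vs. late group (Apr 15-Jun 15);
# columns: postoperative complication yes/no.
early_yes, early_no = 280, 9661   # early group total n = 9941
late_yes,  late_no  = 250, 10063  # late group total n = 10,313

# Unadjusted odds ratio: odds of a complication early vs. late.
or_hat = (early_yes / early_no) / (late_yes / late_no)

# 95% Wald confidence interval, computed on the log odds ratio scale.
se = math.sqrt(1/early_yes + 1/early_no + 1/late_yes + 1/late_no)
ci_lo = math.exp(math.log(or_hat) - 1.96 * se)
ci_hi = math.exp(math.log(or_hat) + 1.96 * se)

print(f"OR = {or_hat:.2f}, 95% CI {ci_lo:.2f}-{ci_hi:.2f}")
```

The point is simply that "OR = 1.18" translates to "18% higher odds of a complication in the early group," which for relatively uncommon events approximates an 18% higher risk.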

Although the effort taken to do this study was impressive, and it's the first study I'm aware of that seems to support a "July effect," I'm not sure that it is as strong an indicator as the authors would lead you to believe. Certainly it's not as strong as some news reports played it, some of which in essence repeated the classic "don't get sick in July" warning. One reason for my skepticism is shown in Figure 2, which plots the mortality rate versus the month of the year:

[Figure 2: Mortality rate versus month of the academic year.]

Note that there are two large spikes, one in July and one in December (the latter even higher than the July spike), and a lesser spike in March. Given this variation, I'm not sure why they tried to perform a linear regression on the data; there's no reason to think that, even if there is a decrease in mortality as the year goes on, the relationship would necessarily be linear. Indeed, if I were to guess, I'd think it would probably approach a lower boundary asymptotically. The authors did the same regression in Figure 1, which graphs the morbidity rate over the course of the academic year, with even less convincing results, given that the apparent line is much flatter.
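
To illustrate the kind of comparison I have in mind, here's a quick sketch using scipy on made-up monthly rates (purely hypothetical numbers, not the data behind Figure 2): fit both a straight line and an exponential decay toward a floor, then compare the residual error of each.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up monthly mortality rates (%), July = month 0 -- these numbers
# are purely illustrative and are NOT the data from Figure 2.
months = np.arange(12)
mortality = np.array([2.9, 2.5, 2.4, 2.3, 2.2, 2.8,   # Jul-Dec (Dec spike)
                      2.2, 2.1, 2.4, 2.0, 2.0, 1.9])  # Jan-Jun (Mar bump)

def linear(t, a, b):
    return a + b * t

def asymptotic(t, floor, amp, rate):
    # Starts at floor + amp in July and decays toward a lower boundary.
    return floor + amp * np.exp(-rate * t)

for name, model, p0 in [("linear", linear, (2.5, -0.05)),
                        ("asymptotic", asymptotic, (2.0, 1.0, 0.3))]:
    params, _ = curve_fit(model, months, mortality, p0=p0)
    sse = float(np.sum((mortality - model(months, *params)) ** 2))
    print(f"{name:>10}: params = {np.round(params, 3)}, SSE = {sse:.3f}")
```

On data with a genuine early-year excess that levels off, the asymptotic model should leave noticeably smaller residuals than the straight line, which is exactly what a linear regression can't show you.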

All this means is that, as the authors acknowledge, the relationship, if one exists, is either (1) more complex than simply being due to seasonal variations in the experience of the residents or (2) not adequately documented by the present data, superior as it is to prior data. They are correct, however, that this data could be an indication that disruptions in hospital routine are the major cause of seasonal variations in morbidity and mortality rates. Lots of attending staff are on vacation in December for the Christmas and New Year holidays, and the most senior residents also tend to go on vacation then. Another factor is that patients tend not to want to have surgery around the holidays if it can be safely delayed, and the same may be true for the summer months. Obviously, big cancer operations and the like aren't going to be delayed, but it's usually fairly safe to delay having an inguinal hernia repaired or an elective cholecystectomy, for example. Consequently, it's not unreasonable to speculate that a higher proportion of urgent cases during these times of the year might lead to more complications, although one would hope that the robust risk adjustment in ACS-NSQIP would allow that relationship to be teased out. The problem is that the system has a very specific definition of what "urgent" means and doesn't necessarily capture "semiurgent" cases, in which the operation doesn't necessarily occur within 12 hours of the patient's admission.

Finally, the obvious control group is, as the authors also acknowledge, missing from this study: a group of hospitals without residency programs. The most difficult aspect of such a comparison is that community hospitals tend to do far fewer big cases, less high-risk surgery, and a lot more of the common, uncomplicated "bread and butter" surgical cases. Indeed, they usually refer the complex cases to the big academic medical centers, mainly because most community hospitals, aside from the really big ones (most of which, if big enough, are affiliated with a medical school and have residents), are simply not equipped to handle them. Even so, with enough cases entered into the database, it should become possible to do such a comparison. It will, however, be difficult and complex.

Fortunately, ACS-NSQIP is an ongoing project that continues to collect outcomes data. As the database grows, it should be possible to isolate single variables, such as resident experience, that are associated with differences in outcomes. One thing I can say for sure, though: My anal sphincter tone is definitely much tighter in July, when the new interns start, than it is in May and June.


Perhaps it's my engineering bias, but that data set in figure 2 seems more like it would fit a sine curve.

Did the authors do any comparison of elective versus non-elective surgery outcomes?

Regarding Figure 2, it seems that if you only looked at July through March, the trend would be relatively flat, indicating no change. The bulk of the effective change was in the latter months. This, to me, does not indicate a "learning curve", reflecting a gradual improvement in skills and knowledge. Instead, this suggests other factors are indeed involved.

I find it reassuring that the "allopathic establishment" is interested in evaluating real data and is honest about its certainties and uncertainties. Where are the woos? The alties?
Respectfully,
your resident alien reptile
Robert

By robert Estrada (not verified) on 30 Oct 2007

Cancel all holidays and see what happens...

By The Grinch (not verified) on 30 Oct 2007

DLC: It's not just engineers. I'm seeing a sine wave fit as well.

I read the post somewhat diagonally, so I may have missed the answers to some of my questions in the process, but here goes:

I'm seeing a single peak around Dec. with a couple of outliers, and possibly with somewhat asymmetric tails. But at this data resolution, that's just so much eyeballing. Do they have a weekly breakdown, and does it look prettier? What're the error bars on those data points? Are the values adjusted for known risk factors such as outside temperature? How many years and how many hospitals are included in the sampling? What, for that matter, is the theoretical reason for the choice of a linear regression?

- JS
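
For the curious, here's a quick sketch of what the sinusoidal fit DLC and JS are describing might look like, again with scipy on made-up monthly numbers rather than the actual data behind Figure 2:

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up monthly mortality rates (%), July = month 0 -- illustrative only,
# NOT the paper's actual data.
months = np.arange(12)
mortality = np.array([2.9, 2.5, 2.4, 2.3, 2.2, 2.8,
                      2.2, 2.1, 2.4, 2.0, 2.0, 1.9])

def sine_model(t, mean, amp, phase):
    # One full cycle per 12-month academic year.
    return mean + amp * np.sin(2 * np.pi * (t - phase) / 12.0)

params, _ = curve_fit(sine_model, months, mortality, p0=(2.3, 0.4, 0.0))
sse = float(np.sum((mortality - sine_model(months, *params)) ** 2))
print("mean, amp, phase =", np.round(params, 3), "| SSE =", round(sse, 3))
```

Whether a sine, a line, or an asymptotic decay fits best would, of course, have to be settled on the real numbers, ideally with error bars, as JS says.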

In the UK the equivalent month is August, as traditionally the programmes start on August 1st.

If you timed your transatlantic holiday really badly (in either direction) you could potentially hit both the UK and US "black months". Food for thought.

Hey, thanks for the link to the peer-reviewed icons! It's an excellent idea & I'll be using the icons from now on.

Attitude may not matter. What is clearly associated with survival in prostate cancer is marital status. You can see the data in the About:publications section. Is this about attitude, adult supervision, laundry detergent? What is it about marriage that means we'll live longer if we have cancer? God only knows!