Effect Measure

Back to benzene in soda

It’s been a while since we visited the FDA’s benzene-in-soft-drinks failure (see here, here, here and here). No time like the present.

Serious questions remain over how America’s food safety watchdog handled the presence of benzene residues in soft drinks, a senior ex-official has said, after tests showed some drinks still contained the chemical 15 years after the industry agreed to cut it.

The source told BeverageDaily.com it was “embarrassing” the Food and Drug Administration had failed to eradicate benzene residues from all drinks.

His comments come as newly released meeting memos show at least one soft drinks maker, Kraft Foods, called for more guidance on benzene in soft drinks from the US Food and Drug Administration (FDA) earlier this year.

The developments put pressure on the FDA nine months after a BeverageDaily.com investigation revealed the watchdog had again found some drinks containing more benzene than is allowed in US drinking water. (FoodNavigator)

I rather doubt it is embarrassing to the FDA. This administration has no shame, and failure to regulate is not an embarrassment; it is a badge of honor they wear proudly at all their fundraisers.

Now they are in danger of getting on the wrong side of the food industry, too, as the food industry entreats the FDA to give them guidance. The FDA can’t even do its corruption competently.

Meanwhile, both the FDA and the food industry are telling the public not to worry. You have to drink a lot of benzene-laced soda to have even a tiny risk of cancer. Which is true. For any particular individual. The problem is that with tens of billions of bottles consumed around the world, the risk doesn’t have to be large. A one-in-a-million risk is nothing for an individual, but it is 1000 cancers for a billion individuals. As the ex-FDA official said:

“There is a difference here between a small and unavoidable risk, and a small but avoidable risk,” the ex-agency source said.

“The FDA seems to think the risk is quite low here. On the other hand, it’s in a [non-essential] product nobody needs, and it doesn’t have to be there. They claim they can reformulate.”

We previously discussed the rats, cancer and testing issue here, here and here for those interested in why we worry about small amounts of a carcinogen in food and drink.
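The “one in a million times a billion” arithmetic above can be sketched in a few lines of Python. The numbers are the post’s own assumptions, not dose estimates:

```python
# Illustrative arithmetic only: a negligible individual risk, spread
# over a large enough exposed population, is a real expected case count.
individual_lifetime_risk = 1e-6      # assumed, "one in a million"
exposed_population = 1e9             # assumed, "a billion individuals"
expected_cases = individual_lifetime_risk * exposed_population
print(expected_cases)                # roughly 1000 expected cancers
```

The point is only about scale: nothing here depends on the particular numbers, just on multiplying a tiny per-person risk across a very large population.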


  1. #1 Lab Cat
    November 28, 2006

    Surely it is more embarrassing that the soft drink companies haven’t sorted out benzene in sodas. The FDA can’t do much, short of regulation, without their help.

  2. #2 revere
    November 28, 2006

    Lab Cat: The FDA needs to work with the food industry to establish the practices that will stop generating benzene in soda and other soft drinks. If some of the many companies solve the problem (which is solvable) and keep it a secret, only using it themselves, there will continue to be benzene in soda. The task is probably best solved in an open and collaborative way with the FDA being the linchpin.

    The food industry always asks for FDA help that stops short of regulation. That’s not unreasonable if it works, but in this case regulation may be required. And that will still force the FDA to work this out in order to frame a regulation.

  3. #3 Lindsay Beyerstein
    November 29, 2006

    How much is “a lot” of soda? I drank a two-liter torpedo of Diet Pepsi yesterday. I know that’s a lot, even for me, but I was on deadline. I’ve been drinking one or two cans a day for about ten years. Is that enough to be worried about cancer?

  4. #4 revere
    November 29, 2006

    Lindsay: There are a couple of issues here. One is that we know that benzene is a human carcinogen, not just an animal carcinogen. Another is that we don’t know the dose response relationship of benzene to cancer, i.e., on average, what is the risk for given exposures to benzene. We know that it is very small for the levels we are considering here, 100 ppb and below. That’s on the individual side.

    Then there is the public health side, the risk to the population. Since we have trouble estimating the risk (it is unmeasurable epidemiologically; too much noise), let’s say it is one in a million for a chronic heavy drinker of benzene-laced soda. Since dose responses at low levels are essentially linear (think of the DR function as an arbitrary polynomial; at very low dose the higher-order terms drop out, leaving only the constant and first-order terms), the dose from your one two-liter bottle contributes one day’s fraction of your lifetime risk, which we are assuming is one in a million but could be higher, say one in one hundred thousand. So at one in a million, with literally billions of soft drinks going down daily around the world (and the benzene problem is widespread in sodas containing ascorbic acid and sodium benzoate), that’s thousands of cancers. It’s like the lottery: your chance of winning is vanishingly small, but someone wins it.

    That’s not a very good answer to your good question, but the bottom line is that while individual risks from this are undoubtedly very small, aggregate risks are not. This is not a risk benefit trade-off problem. You got no benefit from the benzene in your soda (if your soda had benzene in it; not all do).
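    revere’s back-of-envelope logic above can be sketched as follows. Every number here (lifetime risk, exposure period, serving count) is a hypothetical stand-in, exactly as in the comment:

```python
# Sketch of the prorating argument: assume a lifetime risk for a chronic
# heavy drinker, treat the low-dose response as linear, and credit one
# day's consumption with one day's share of that risk. All numbers assumed.
lifetime_risk = 1e-6                  # assumed risk, chronic heavy drinker
drinking_years = 70                   # assumed lifetime exposure period
days = drinking_years * 365
one_day_share = lifetime_risk / days  # one two-liter bottle's share of the risk
print(f"{one_day_share:.2e}")

# The population side: tiny per-serving risks summed over billions of servings.
daily_servings = 2e9                  # assumed, "billions of soft drinks daily"
expected_per_day = one_day_share * daily_servings
print(f"{expected_per_day:.3f}")      # small daily fractions accumulate over years
```

Per serving the number is vanishingly small; summed over the world’s daily consumption and over decades, it is not.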

  5. #5 Lindsay Beyerstein
    November 30, 2006

    Thanks for explaining, Revere.

  6. #6 per
    December 2, 2006

    one of your premises is false. It is reckless to say that “since 90% – 99% of chemicals are not carcinogens”; this is unsupported by fact. We may not know whether 90-99% of chemicals are carcinogens; but the results of the NTP bioassay show that >50% of all chemicals tested (n>500) are carcinogenic.

    As you should also be aware, with the advent of modern detection methods, you can detect benzene in more or less all foods. I look forward to seeing whether you suggest we ban all sources of benzene.


  7. #7 revere
    December 2, 2006

    per: We test the high index of suspicion chemicals in the NTP program. No one knows the proportion of the 100,000 or so chemicals out there that are carcinogens, but most estimates I have seen are that it is less than 10%. Since most chemicals have never been tested it is impossible to know but on biological grounds, carcinogenesis is fairly specific and unusual so this isn’t an outlandish idea by any means. No one I know thinks most chemicals are carcinogens. It is also hard to understand why maintaining that most chemicals are not carcinogens is reckless, unless you object to the idea that controlling carcinogens doesn’t mean modern industrial civilization has to come to a screeching halt. I am guessing that you aren’t in favor of controlling carcinogens and I would be interested in your ideas of how or if we should regulate them.

    I don’t think there is evidence that there is benzene in all foods, but the issue is quantity, the dose-response relationship and whether it is avoidable or not. In sodas it is fairly easily avoidable by reformulation. Benzene is a naturally occurring compound. Radiation is also naturally found. Yet no one suggests that sources of radiation which have no benefit and are avoidable be ignored. For small exposures to benzene the risks are also small. Unfortunately, like the lottery, small risks aggregated over large populations produce dead bodies. When I buy a can of soda I am not also wishing to buy a cancer lottery ticket.

    BTW, I am not much of a fan of the peroxisome proliferator argument about mode of action for many chemicals. In my view it is an attempt to say the chemical causes cancer in animals but not in humans. I am aware of the scientific controversy and you no doubt are aware of Melnick’s and others’ objections.

  8. #8 per
    December 2, 2006

    I appreciate that you are responding to my argument.

    “No one knows the proportion of the 100,000 or so chemicals out there that are carcinogens, but most estimates I have seen are that it is less than 10%.”

    well, we are exposed to vast numbers of chemicals, once you start getting to low concentrations, and as you point out, almost all of these have not been tested.

    The only relevant data set I know of is the NTP database, which is currently running at >50% positive on >500 chemicals. Err, that is both a fact, and in the literature. If you are going to come up with “estimates” of 10%, I point out that I don’t know where these estimates come from, and this has been discussed in the literature. You still have to explain why our only large and robust publicly accessible database is >50% positive. The extrapolation down to 10% requires significant assumptions, or pure hand-waving.

    If you maintain that most chemicals are not carcinogens, you are asserting a fact when you do not have direct evidence, and in the teeth of the only direct evidence available. I consider that to be reckless.

    “Unfortunately, like the lottery, small risks aggregated over large populations produce dead bodies.”
    in context, this is an assertion without factual basis. You are assuming that for chemical carcinogens, we can extrapolate from high dose to low dose and see a linear response. You must know that this is untested at incidence levels of <1%, and that you are talking about incidence levels of 10^-4 %. Your statement is therefore speculation.

    I didn't raise peroxisome proliferators, but I would be interested if you could point me in the direction of some specific criticisms of the species-difference argument in this case.


  9. #9 per
    December 2, 2006

    sorry, I interfered with the html.

    “Unfortunately, like the lottery, small risks aggregated over large populations produce dead bodies.”
    in context, this is an assertion without factual basis. You are assuming that for chemical carcinogens, we can extrapolate from high dose to low dose and see a linear response. You must know that this is untested at incidence levels of less than 1%, and you are talking about incidence levels of 10^-6. Your statement about dead bodies is speculation.


  10. #10 revere
    December 2, 2006

    per: The risks at low dose are linear (technically, affine). Assume any shape dose response you want, expressed as a polynomial in dose. At low dose the only terms of significance will be the constant term and the first-order term, the higher-order terms being negligible. Non-linearity matters only at the higher doses where the testing occurs, and then only because of the extrapolation method.

    Yes, I am assuming benzene risks have no threshold. That is the conventional view, which you may dispute if you wish. As a public health person, however, I think prudence dictates assuming what the biology suggests. Since experimental and epidemiological confirmation of carcinogenicity at low dose is not feasible, we in public health adopt a precautionary approach on this (not synonymous with the precautionary principle, BTW). You are free to argue “let’s take a chance.” I laid out my arguments in the previous posts whose links I gave.

    You can find some arguments on peroxisome proliferators in papers by Ronald Melnick (NIEHS and NTP) in Environmental Health Perspectives at various times. Search with PubMed. You can also see the conclusions of the UN’s International Agency for Research on Cancer (IARC) in their monograph on TCE and PCE, two chemicals where the peroxisome proliferator argument has been advanced most strenuously by the chemical industry.

  11. #11 per
    December 2, 2006

    “Yes, I am assuming benzene risks have no threshold.”
    an assumption is different from the statement of fact that low dose risks will produce bodies.
    “The risks at low dose are linear”
    I believe this also to be an assumption, as opposed to an experimentally-defined fact.


  12. #12 per
    December 2, 2006

    just for convenience, some pdfs.
    That bioassays predict a high proportion of carcinogens:
    some of the many naturally occurring plant products that are carcinogenic, and found naturally, and in coffee, are listed in:

  13. #13 revere
    December 2, 2006

    per: I addressed the NTP results in an earlier comment. The low-dose linearity is not an assumption, for the reasons I gave you. At low doses the only things left are the constant and first-order terms of the polynomial. Since any smooth monotonic function on the real line we would be dealing with as a dose-response function can be approximated by a polynomial (its Taylor expansion), this covers all monotonic dose-response functions near the origin. Sometimes linearity is incorrectly used to mean the function crosses the x-axis at a point greater than zero. That is a threshold assumption, not a linearity assumption, although, as I say, it is often wrongly phrased that way. Usually context allows you to ascertain the meaning.
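    The linearization claim is easy to check numerically. Here is a small sketch with an arbitrary illustrative polynomial (the coefficients are made up; only the behavior near zero matters):

```python
# Compare a full polynomial dose response f(d) = a + b*d + c*d**2 + e*d**3
# with its affine truncation a + b*d; the higher-order terms become
# negligible as the dose shrinks. Coefficients are arbitrary illustrations.
def full(d):
    return 1e-4 + 2.0 * d + 50.0 * d**2 + 300.0 * d**3

def affine(d):
    return 1e-4 + 2.0 * d

for dose in (1e-2, 1e-4, 1e-6):
    rel_err = abs(full(dose) - affine(dose)) / full(dose)
    print(f"dose={dose:.0e}  relative error of affine part={rel_err:.1e}")
```

Whatever the higher-order coefficients, the relative error of the affine part goes to zero with the dose, which is all the low-dose linearity claim amounts to.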

    BTW, the Gold-Ames documents you cite are one pole of a polarized debate. I am not at the other pole but I am not near them either. Ames has said elsewhere that one of his main concerns is the US competitive place in the global economy. In my view, this seriously colors his science. I also have a bias. It is a public health bias and it colors how I view the science.

    Regarding the “no threshold assumption,” you were the one who pointed out that there can be no experimental determination of this, one way or the other. The reason it is the conventional view in public health is two fold: the biology of stochastic processes, and public health prudence.

  14. #14 per
    December 2, 2006

    It looks like we are agreed that there is no experimental data in this area. You are assuming that the response is monotonic; if it is not monotonic, your assumption of linearity collapses.

    I cited the Gold article since you asked in one of your articles for information about the carcinogens present in coffee. You have cited estimates that carcinogens are ~1-10% of all chemicals but have not given references or any better data; I gave you one example from the literature which says that a reliable estimate based on experimental data is >50%. However, if you are unable to provide any experimental data to justify your 1-10%, I will understand.

    I did not dispute your view of the conventional wisdom; I merely point out that asserting linearity as a statement of fact (that multiplying low doses of benzene by people exposed will give deaths) is wrong; you are making assumptions.

    If you say that the “biology of stochastic processes” justifies the assumption of linearity, or no threshold approaches, all I can say is that I am unconvinced. I will look forward to you providing a reference to an experiment which shows linearity of a biological response through a mere ten logs.

  15. #15 revere
    December 2, 2006

    per: Let me try to be clearer. The problem isn’t monotonicity. I gave you that. If it isn’t monotonic but U-shaped, then you are in much worse shape, as the risk at low doses is higher than the risk at higher ones. The Gold articles are not estimates of the proportion of chemicals in the everyday environment that are carcinogenic because they are not a random sample of those chemicals. They are a high-index-of-suspicion sample, and, by your own admission (including, I believe, mutagens with carcinogens), they still represent only about 50%. Even if every one of those were a carcinogen, and they include many natural substances, I don’t know what it would prove except that for chemicals, like radiation, we are better off with less exposure than more when the exposure is avoidable and carries no other benefit.

    You still don’t seem to grasp what is meant by linearity mathematically. A linear equation in dose would be f(x) = a + b*x + (higher-order terms which are negligible at low dose). Strictly speaking that’s affine, not linear, but it doesn’t matter for this discussion. Some people incorrectly use “linear” to mean there is a threshold, i.e., that the DR curve crosses the x-axis at a positive dose. Threshold DRs can still be affine, so even there it isn’t the issue. The “linear” issue is how you extrapolate from high dose to low dose. If you extrapolate with a straight line from some high dose to the origin, that’s truly a linear DR curve. But most extrapolation methods are either supralinear or sublinear, depending upon the extrapolation model. Thus the widely used Crump and Guess multistage model is usually sublinear, although, again, at low dose it is (locally) linear for the reasons I already gave. On the other hand, other biologically plausible models of extrapolation, e.g., a Weibull model, can give risks that are 10,000 times the multistage model for some chemicals (TCE is the example).
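    The model sensitivity described here can be illustrated by calibrating two simple forms to the same hypothetical high-dose bioassay point and extrapolating down. Both the forms and every number below are illustrative assumptions, not fits to real benzene or TCE data:

```python
import math

# Two dose-response models calibrated to the same assumed high-dose point,
# then extrapolated to a low dose; their low-dose predictions diverge widely.
high_dose, high_risk = 1.0, 0.5       # assumed bioassay observation
low_dose = 1e-4                       # assumed environmental dose

# One-hit model (locally linear at low dose): P(d) = 1 - exp(-b * d)
b = -math.log(1.0 - high_risk) / high_dose
one_hit = 1.0 - math.exp(-b * low_dose)

# Weibull model with shape k < 1 (supralinear): P(d) = 1 - exp(-b * d**k)
k = 0.5
bw = -math.log(1.0 - high_risk) / high_dose**k
weibull = 1.0 - math.exp(-bw * low_dose**k)

print(f"one-hit: {one_hit:.2e}")
print(f"weibull: {weibull:.2e}")
print(f"ratio:   {weibull / one_hit:.1f}")   # orders of magnitude apart
```

Both models agree perfectly at the observed dose; the entire disagreement lives in the unobservable low-dose region, which is the extrapolation problem in a nutshell.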

    These are not the only complexities here. There are PBPK versions and stochastic models, etc. I don’t think we need to get into those details since they don’t affect the principal point, which is that biologically, carcinogenesis is a very unusual biological response (not like toxicity, for example), and if you assume that the damage that leads to a transformation to malignant phenotype is reproduced with each cell division (there is plenty of biological evidence for this), then the no-threshold idea has a great deal of support. It is also prudent, from a public health perspective.

    You do not wish to give up on your objections, no matter what I say in return, so I think we are best ending this discussion at this point as we have each said as much as can be said in this context. I don’t mind at all having these kinds of exchanges and I thank you for the time and effort you put in to this one. But I think we have reached the stage of diminishing returns.

  16. #16 per
    December 2, 2006

    I remain very confused. You stated clearly, and without reservation, that “the risks at low dose are linear”, and in answer to my query as to whether linearity is an assumption or a proven fact, you stated,
    1) “The low dose linearity is not an assumption”
    2) “there can be no experimental determination of this”

    I can draw a relationship where dose-response is a sine wave; or where low doses reduce the risk of cancer. Both possibilities bust the idea of linearity, and since there is no data in this range, we cannot exclude these possibilities. But you have made the clear statement that the relationship is linear when there is no data. I fail to see how you can do that, and I think my understanding of linearity is just fine.

    I accept that you have a view that the NTP database is biased by the choice of chemicals in it; this is at least arguable. However, what I am missing is the substantive basis that allows you to claim that only 1-10% of chemicals cause cancer. It seems to me that you have not brought any facts, or references, to the discussion.

    I note that you simply assert that “carcinogenesis is a very unusual biological response”; it is often the case that people bring evidence in support of their contentions. Assertion and assumption do not prove anything; it is experimental evidence which enables the testing of hypotheses.


  17. #17 revere
    December 2, 2006

    per: I discussed this at length in the posts I linked. Sine waves are linear near the origin. All you have to do is look at their Taylor Series or get a magnifying glass and look at the section near the origin. Linearizing polynomials near an equilibrium point is the stock in trade of differential equations these days and with it toxicokinetics and pharmacodynamics. It’s the same concept. You need to look at a calculus book rather than argue with me here. This isn’t the result of experiment. It’s mathematics.
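    The sine-wave point is checkable in a couple of lines: near the origin sin(x) ≈ x (its Taylor series is x - x^3/6 + ...), so the relative error of the linear approximation shrinks like x^2/6:

```python
import math

# sin(x) is locally linear near zero: the relative error of the
# approximation sin(x) ~ x falls off like x**2 / 6 as x shrinks.
for x in (0.1, 0.01, 0.001):
    rel_err = abs(math.sin(x) - x) / x
    print(f"x={x:<6} relative error={rel_err:.2e}")
```

Even a curve as wiggly as a sine wave looks like a straight line under a strong enough magnifying glass at the origin, which is the whole content of the linearization argument.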

    The selection bias of NTP is not arguable. It is inarguable. It is intentional. It is designed that way.

    I have a day job and limited time, so I can’t take more time to carry this on. You’ve made your point and I’ve made mine. Let’s just agree to disagree.

  18. #18 per
    December 3, 2006

    “This isn’t the result of experiment. It’s mathematics.”
    the point I am making is that you didn’t make a statement about maths; you made a statement about the number of deaths at low dose from a chemical. This is not maths, and we agree that there is no data here. The assertion that any type of dose response here is a fact is an assumption. Amongst other things, a dose response shape where a chemical has a beneficial effect at low dose will bust your linear relationship between exposure to chemical and number of deaths.

    When you say, “the bottom line is that while individual risks from this are undoubtedly very small, aggregate risks are not”, this is an assumption, and not a statement of fact. You are making assumptions not only about the lack of threshold, but about the nature of the dose response curve.

    With regard to the NTP dataset, I have done the science thing. I have made a statement, and provided pdfs which refer to the original science, so that you can see what the data is. As with all data, there are limitations, and I accepted straightaway that you can argue that there is bias in the NTP dataset, though you cannot quantify what that bias is. By contrast, your estimate of 1-10% of chemicals being carcinogens is completely unsupported as far as I can see. At the very least, I cannot see your reasoning.


  19. #19 mc2
    December 7, 2006

    Per: Even if a chemical which is carcinogenic at high doses is beneficial at low doses, it will depend on how the body excretes (or metabolises) the chemical. If, for example, it stores the chemical, then lots of beneficial small doses can add up to a large dose (just think of, say, vitamin A poisoning).

    And the benefit does not necessarily determine the response at a given dose: what could be beneficial in one way could also be harmful in others.

    And as Revere points out, at low doses the DR mathematically becomes linear; this itself allows for a threshold in the risk (or even a negative risk, if you could define such a thing, maybe as a benefit).

New comments have been disabled.