Overall, a prediction accuracy of about 82% was achieved, only a little lower than that obtained during the validation of the local lymph node assay (LLNA) (NIH, 1999), although some might be concerned by the failure of this in vitro system to detect almost 20% of the sensitizers. The authors address this issue by proposing complementary assays, such as peptide binding or in silico methods. For example, aldehydes are a chemical class that is not well predicted, and for them the authors suggest employing an existing predictive quantitative structure-activity relationship (QSAR) model. Of course, such devices are most easily implemented for chemicals where we already know what the answer should be, whereas the most critical assessment is for new substances where we do not already know the answers.
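To see how a respectable overall accuracy can coexist with missing a fifth of the sensitizers, here is a minimal sketch with a hypothetical confusion matrix. The counts below are illustrative assumptions, not figures from the article; they are simply chosen so the two statistics land near the values quoted above.

```python
# Hypothetical counts (NOT from the article): 100 chemicals total,
# 50 true sensitizers and 50 non-sensitizers.
tp, fn = 40, 10   # sensitizers: 10 of 50 missed (20% false negatives)
tn, fp = 42, 8    # non-sensitizers: 8 of 50 wrongly flagged

# Overall accuracy counts every correct call, positive or negative.
accuracy = (tp + tn) / (tp + tn + fp + fn)

# Sensitivity counts only how many true sensitizers were caught.
sensitivity = tp / (tp + fn)

print(f"accuracy:    {accuracy:.0%}")     # 82%
print(f"sensitivity: {sensitivity:.0%}")  # 80%, i.e. 20% of sensitizers missed
```

The point of the sketch is that "82% accurate" and "misses ~20% of sensitizers" are answers to different questions, which is why the false-negative rate deserves its own scrutiny in a safety-testing context.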
Preach on. Without large and complete data sets, we are only training a system to catch the things we already know about. A reply (not by the authors of the critiqued article) published this month contains, among other arguments, one that boils down to, "Hey, it actually works nearly 100% of the time for the right chemical classes." Now, I get their point that the assay could be restricted to use within those classes, but that's not really where the need or the problem lies for alternatives to animal testing. We need something comprehensive. Beyond that, they say you could use other models, structure-activity models (QSAR), for the classes that don't work. But that's why they're trying to develop in vitro tests in the first place! Because QSAR doesn't work well enough! I'll end by saying, 'keep trying, guys and gals!' and echoing the statement of Dr Basketter in his reply to the reply:
I note an accusation of pessimism and I would simply respond that generally speaking, a pessimist is merely a well-informed optimist.