Via mt I find:

too much of our scientific code base lacks solid numerical software engineering foundations. That potential weakness puts the correctness and performance of code at risk when major renovation of the code is required, such as the disruptive effect of multicore nodes, or very large degrees of parallelism on upcoming supercomputers.
The only code I knew even vaguely well was HadCM3. It wasn’t amateurish, though it was written largely by “software amateurs”. In the present state of the world, this is inevitable and bad (I’m sure I’ve said this before). However, the quote above is wrong: the numerical analysis foundations of the code were OK, as far as I could tell. It was the software engineering that was lacking. From my new perspective this is painfully obvious.
[Update: thanks to Eli for pointing to http://www.cs.toronto.edu/~sme/papers/2008/Easterbrook-Johns-2008.pdf. While interesting, it does contain some glaring errors (to my eye) which I'll comment on -W]