Sad News: John Backus has died.

I just heard that John Backus died last Saturday.

John Backus was one of the most influential people in the development of what we now know as software engineering. In the early days of his career, there was no such thing as a programming language: there was just the raw machine language of the hardware. Backus was the first person
to come up with the idea of designing a different language, one which was easier for
humans to read and write than machine code, and having the machine do the translation.

For those of us in the field today, the idea of not having a human-readable language is almost unimaginable. But in the days when Backus got started, the idea was not just revolutionary - it was downright controversial. Even just using what we would call an assembler today was considered a bad idea by many programmers of the time.

Just to give you a sense of what that means: an assembler is a tool that lets you write programs in the basic instructions of the machine, except that instead of having to write the instructions in numeric form, it lets you write them as words - it's the difference between writing something like "2E 03 F1 79" and "ADD R3, (A1)+9". Many of the early programmers believed that writing the latter form was harmful, because when you're writing in non-numeric form, you aren't as aware of details like exactly how the instructions are aligned in memory. So the idea of moving even farther away from the machine code - to the point where you didn't even know exactly what sequence of instructions your code would be translated into - was considered absolutely crazy.

But Backus beat the so-called common wisdom of the time. He and his team developed Fortran, and showed that the compiler's output matched the performance of hand-written code on most systems - and often beat it for anything but extremely small, painstakingly hand-optimized code segments. It also created a dramatic improvement in productivity: programmers could get their work done much more quickly, and with fewer bugs, by writing in a higher-level language.

Just designing the first programming language wasn't enough for John Backus. Once the first Fortran compiler hit the world, he led his team into figuring out how to describe and process programming languages. With Peter Naur, he developed what we now call BNF grammars - that is, Backus-Naur Form grammars - a formal tool for describing the syntactic structure of a programming language; and he and his team worked out how to use those grammars as a guide to translating code from the human-readable syntax of languages like Fortran and Cobol into the pure binary used by the machine.
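Just to give a flavor of what BNF looks like, here's a toy grammar of my own for simple arithmetic expressions (it's an illustration, not something taken from Backus's papers):

```
<expr>   ::= <expr> "+" <term> | <expr> "-" <term> | <term>
<term>   ::= <term> "*" <factor> | <term> "/" <factor> | <factor>
<factor> ::= "(" <expr> ")" | <number>
<number> ::= <digit> | <digit> <number>
<digit>  ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
```

Each rule says how a syntactic category on the left can be built from the pieces on the right; a compiler can use exactly this kind of description to drive its parser.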

In later years, Backus devoted his time to trying to build better languages for expressing programs. He became an outspoken advocate for the idea of functional programming, designing his own programming language called FP. FP was never a huge success, but there are still a few people working on FP-derived languages, and they're based on a very interesting idea. I'll probably write a little something about FP later this week.

John Backus won the Turing award in 1977, and used his Turing Award lecture as an opportunity to push the idea of non-imperative programming. The written article that accompanied his lecture remains one of the classic documents describing why people should look at functional and non-imperative programming languages.

Backus is also the guy who hired Fran Allen, this year's Turing Award winner.

I never met him, which is something I'll always be disappointed about. I've known quite a few people who worked for him, but I'm too young to have had the opportunity to meet him.

Backus will be remembered in the Computer Science community. His name ranks with some of the true greats - I would go so far as to include him in a list with people like John von Neumann and Alan Turing as the greatest influences on the development of computer science as a discipline.



Sad news. He was, as you made clear, a giant of the field.

There are still a few functional programming languages around (Standard ML is being taught in Danish universities, the Microsoft research team developed F# for .NET), but how these relate to FP, as Backus described it, is a little unclear to me.

Speaking of Turing Award winners - has there ever been any news of Jim Gray? I haven't heard anything about him since they gave up on the search.

Kristjan, the short answer is that Backus's proposed function-level programming is points-free programming with a fixed set of combinators that can be analyzed algebraically. I personally suspect that FP was too much "like math" to be fun for most programmers, but maybe Fortran was simply too successful?
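To make "points-free with a fixed set of combinators" concrete, here's a rough Python sketch of the function-level style (the combinator names are loose stand-ins for Backus's α and / notation, not his actual FP syntax):

```python
from functools import reduce

# Hypothetical combinators in the spirit of Backus's FP: programs are
# built by combining functions; no data variables ever appear.

def compose(*fs):
    # Right-to-left composition: compose(f, g, h)(x) == f(g(h(x)))
    return lambda x: reduce(lambda v, f: f(v), reversed(fs), x)

def apply_to_all(f):
    # Backus's "alpha": apply f to every element of a sequence.
    return lambda xs: [f(x) for x in xs]

def insert(f):
    # Backus's "/": fold a binary function across a sequence.
    return lambda xs: reduce(f, xs)

transpose = lambda seqs: list(zip(*seqs))
add = lambda a, b: a + b
mul = lambda pair: pair[0] * pair[1]

# Backus's classic example: inner product, with no mention of the data.
inner_product = compose(insert(add), apply_to_all(mul), transpose)

print(inner_product([[1, 2, 3], [4, 5, 6]]))  # 1*4 + 2*5 + 3*6 = 32
```

The point of the fixed combinator set is that definitions like `inner_product` can be manipulated algebraically, the way you'd manipulate an equation.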

Sad. Those of us who are still good at mental binary to octal conversion (an invaluable skill in the old days) and who remember rows of switches on panels and wires hanging off plugboards are a vanishing breed.

has there ever been any news of Jim Gray?

Kristjan, not everyone gave up on the search at the same (official) point; certain people with access to high-quality imaging continued to use it to search for him--but sadly, even with those efforts, there have been no results so far.


There are still plenty of functional programming languages around. In particular, Haskell and Erlang are two fantastic and at least moderately successful serious functional programming languages.

But Backus had his own very unique idea of what a functional language should look like (which I'll try to write about in another post later today, if I feel up to it; I've been sick since the weekend) - and I don't know of any current serious attempts to build anything like a realization of exactly what Backus had in mind.

But going back and reading his 1977 Turing Award lecture is a fascinating thing to do. If you look at the evolution of Haskell, what you'll find is that Backus basically anticipated the Monad! He discusses functional languages as state transition systems, where the system is implemented as a sequence of purely functional computations that create state-to-state maps. In fact, that's exactly what monads do!
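The state-transition idea can be sketched in a few lines of Python (a hypothetical miniature just to illustrate the pattern, not Haskell's actual State monad):

```python
# Each step is a pure function from a state to (result, new_state);
# 'bind' chains steps together without any mutable state anywhere.

def unit(value):
    # Wrap a plain value as a step that leaves the state untouched.
    return lambda state: (value, state)

def bind(step, f):
    # Run 'step', feed its result to f, run the step f produces.
    def chained(state):
        result, new_state = step(state)
        return f(result)(new_state)
    return chained

# Two primitive steps: read the state, and replace the state.
get = lambda state: (state, state)
put = lambda new: lambda state: (None, new)

# A tiny "imperative-looking" program built from pure functions:
# read the counter, increment it, read it back.
program = bind(get, lambda n:
          bind(put(n + 1), lambda _:
          get))

print(program(41))  # (42, 42): result and final state
```

Every step is a pure state-to-state map, and the sequencing is just function composition - which is exactly the structure Backus was describing.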

I am a bit handicapped by having spent less time on computer science and programming than most IT people my age (I'm 32), since I started out studying economics and business management.
This means that I don't always have the historical knowledge that one could expect (I have a passing knowledge of the development of databases, but that's because it was a specific interest area of mine at some stage).

That doesn't mean that I don't enjoy reading the writing of the pioneers, even if I don't have the proper perspectives to understand every nuance.

Oh, and Backus' Turing Award lecture can be downloaded from here (there is a link on the page that links to it as a pdf).

Backus was the first person to come up with the idea of designing a different language, one which was easier for humans to read and write than machine code, and having the machine do the translation.

Though this is not to disparage Backus-- inventing FORTRAN and being the B in BNF is huge-- just for the record, I usually see the honor for that particular innovation going to Grace Hopper, for her work on the languages that eventually became COBOL. Checking Wikipedia, it looks like Backus' FORTRAN and Hopper's FLOW-MATIC were being developed practically at the same time, so I'm not sure which of the two (if either) gets credit.

Looking forward to the article about Backus' "idea of functional programming"!

An update on the search for Jim Gray can be found here.

By Kenneth Fair on 20 Mar 2007

You write:

But Backus beat the so-called common wisdom of the time. He and his
team developed Fortran, and showed that not only did the compiler
match the performance of hand-written code for most systems, but it
often beat the performance of hand-written code for anything but
extremely small hand-optimized code segments . . .

This is a TOTAL myth. I do not understand how or why this myth is propagated. Fortran code absolutely was not faster than hand-crafted assembly; the code the early Fortran compilers produced was direct-threaded code -- it was *interpreted*. It *HAD* to be. 4KB was considered a *luxury* back then, and had to support both the OS *AND* the application. No bloody way was it faster.

Even today's best compilers produce code that is demonstrably inferior to hand-crafted assembly language. I can demonstrate this easily by comparing the best output of, say, Intel's C compiler and a hand-crafted version of the same code that exploits fewer instructions, but is more fully aware of cache line prefetching. Go figure. Execution speed, at best, will match the hand-crafted assembly. But I can guarantee you, my code *WILL* be smaller. And with finite front-side buses and finite cache speeds, the more punch you pack per cache-line fetch, the better.

I really wish people would stop spreading this myth. True, most compilers have advanced to a point where they produce "damn good" code, and I welcome this. But, please don't lie outright about a compiler's capabilities. It lacks sentience and understanding of the hardware and of the application. It can grok only one of these, but it takes both to truly make a small, fast, and highly performant product.

The FORTRAN IV compiler on the IBM 1130 where I started programming in 1966 definitely made better Assembly Language code than I did. Of course, I was always rotten at Assembly, and have depended on high level languages for 41 years... So I'm very sad to hear about John Backus, who got a very respectful obit in the New York Times, which quoted (among others) the admirable Computer Science Historian J. A. N. Lee who was Chairman at UMass/Amherst CS department when I arrived, before he left and was replaced by a plagiarist.

[Mark: I've openly been saying this latter point for decades, and so have many others, so I don't think that I'm exposing you to defamation charges]

I can't comment on your assertions about early Fortran, but the fact is that compilers do produce better code than humans, for all but the most trivial applications.

Modern applications are too large, and modern architectures too complex, for a human to actually write good assembly for non-trivial tasks. There's only so much detail you can hold in your head at once; when the program requires something on the order of a million times more detail (give or take a few orders of magnitude), there's no way a human can actually produce code that beats an optimizing compiler. As well, architecture is rapidly growing more and more complex. It started with the switch to RISC processors; surely you don't think those are easier for humans to use than CISCs! But they're better for compilers.

It's not a question about human sentience vs. compiler 'cleverness'. On small tasks, skilled humans can almost always beat even the best compilers (though the number who can is always shrinking, and it's not just attrition doing it). It's about the sheer scale of things.

By Xanthir, FCD on 21 Mar 2007


It never ceases to astound me that people still debate this.

For most of the history of programming, it's been true that on small code segments, a human being who is working very hard can often generate code that beats a compiler.

For real programs - i.e., large code bases - that hasn't been true for decades, if in fact it ever was. Large machine-language programs written by human beings tend to perform poorly compared with code generated by a compiler. Sure, there have always been hotshots who can do all sorts of amazing hand-optimization - but they're rare, and even for those hotshots, their routine every-day code is not the amazing hand-optimized stuff. Normal code, even written by a super coder, isn't super-optimized.

Even back in the 50s, when the first Fortran compiler hit the scene, the quality of code generated by the compiler was better than the vast majority of human-generated code. Most of the time, humans simply do not manage to write code that's anything close to optimal.

In the modern world, I don't even believe that for small code segments that human beings outperform compilers. On modern processors, the complex interplay of pipelines, caches, memory fetches, branch prediction, and instruction prefetch are just too complicated for humans. I've heard lots of people brag about how they can beat the compiler - but I've yet to see anyone actually do it on an architecture with multi-level cache, multistep variable-length pipeline, and branch prediction for anything other than fairly trivial straight-line code.

I finished my PhD 11 years ago. Back then, I was working with people making code perform on massively parallel supercomputers. One of the things that was really striking was just how badly humans did at writing code for the parallel machines - especially the hotshots who thought they knew what they were doing. You'd look at their code, and find that their carefully crafted hand-optimized loops created unnecessary data dependencies that blocked parallelization. What we found was that the code that actually ran fastest wasn't the carefully hand-optimized code implemented by some superjock coder; it was the most naively written code after being put through an optimizing, parallelizing compiler.

The compiler was better at figuring out all of the data dependencies, and finding an ordering of loops and instructions that would take best advantage of all of the intricate properties that affected the execution of real code.

We've also recently lost Jean Ichbiah and Doug Ross. Jean was the leader of the Honeywell team that designed the GREEN language and subsequently the first version of Ada when GREEN was selected.

Doug was my mentor and my friend. While at MIT in the 50's and 60's, he invented the APT language for programming numerically-controlled machine tools. APT wasn't widely known in our business, as it wasn't a computer programming language, but it was huge in the field of robotics and automation, a dominant world standard for decades.

Doug also invented the AED language (ALGOL Extended for Design) in the early 60's, introducing many modern concepts of abstract data types and O-O modular programming structures. His co-inventor (and co-founder of SofTech) Clare Feldmann also died just recently.

By Bob Munck on 21 Mar 2007

RE: compilers vs. hand-coding.

I was a real cracker-jack IBM/360 assembler coder back in the Day, writing things like compilers and interpreters (the BRUIN language), word processors (Hypertext), and even the odd OS or two. Some time in the early 70's I went through the exercise of de-compiling/de-assembling a fairly large matrix manipulation package that had been written in FORTRAN IV and compiled with the IBM FORTRAN H compiler. I was completely and utterly blown away by the quality of the machine code; my impression was that it was code that would be produced by a programmer about twice as smart as I was, with a huge and perfect memory, and with powers of concentration far beyond anything human (see also focus in Vinge's Deepness in the Sky).

No, programmers can't write better code than compilers, any more than sprinters can outrun cars.

By Bob Munck on 21 Mar 2007

Bob... my experience is similar to yours. I still have my "green card" and loved writing S/360 assembler code. There's no way that I could match what an optimizing compiler can produce.

Samuel -- you first assert that the first FORTRAN code had to be slower than hand-written code because it was interpreted -- that's a non-sequitur. It's probably, but not definitely, true. If true, so what? I doubt that the condition prevailed for terribly long.

Second, you make the statement that your code "would be smaller" and at least as fast (or faster) than compiler-generated code. Can you substantiate that with a non-trivial example?