Records in Erlang

One of the things I discovered since writing part one of my Erlang introduction is that Erlang has grown a lot over the last few years. For example, the idiom of using tagged tuples as a way of creating record-like structures has been folded into the language. There is, unfortunately, a bit of a catch: records aren't really added to the language proper. Instead, there's a pre-processor in Erlang, and records
are defined by translation in the pre-processor. This, to me, typifies one of the less attractive
attributes of Erlang: much of Erlang has a very ad-hoc flavor to it. There are important high-level features - like record data and modules - which aren't really entirely part of the language. Instead, they're just sort of spliced in however it was easiest for the implementors to cram 'em in. And there are other things that were added to later versions of the language that, while first-class, are awkward - like the "fun" prefix for first-class functions.

The effect of that kind of thing is mainly syntactic. The implementation is good enough that
even though things like records are really pre-processor constructs, they feel almost as
if they're built-ins.

Anyway - let's take a look at records.

As I mentioned in the first Erlang post, the basic data type idiom of Erlang is
based on tuples. Typically, to represent a data type, you create a tuple whose first element
is an atom which identifies the type of the tuple.
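Just to make that concrete before we get to records, here's a tiny made-up example of the idiom - a "point" type encoded as a tagged tuple, with a function that dispatches on the tag by pattern matching:

%% A point is just a tuple tagged with the atom 'point'.
make_point(X, Y) -> {point, X, Y}.

%% Code that consumes a point matches on the tag:
scale({point, X, Y}, Factor) -> {point, X * Factor, Y * Factor}.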

That's exactly what the record facility does. You define a record type, and it's built using
a tuple whose first element is the atom which names the record type. The rest of the entries of a record tuple are the fields of the record, which are represented by alternating field names (as atoms) and field values. So, for example, suppose we
wanted to define a binary tree like the following Haskell:

data Tree a = Node (Tree a) (Tree a) | Leaf a

sample = Node (Node (Leaf 1) (Leaf 2)) (Leaf 3)

That would be implemented as something like the following, using tuples, in Erlang:

Sample = {node, 
           left, {node, 
                   left, {leaf, value, 1},
                   right, {leaf, value, 2}},
           right, {leaf, value, 3}}

That's obviously pretty ugly, and pretty laborious to code. The record feature in the meta-language makes it much more convenient:

-record(node, {left, right}).
-record(leaf, {value}).

Sample = #node{left=#node{left=#leaf{value=1},
                          right=#leaf{value=2}},
               right=#leaf{value=3}}

A record is declared with a pre-processor form "-record". The first parameter to the form is
the atom naming the record type; the second parameter is a tuple of the record field names. Once a record type is defined, you can create an instance of it with "#" followed by the record type name, and a set of field assignments inside of curly braces.
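One handy detail that I won't use in the examples below: the field list in a "-record" declaration can also give default values, and any field you leave out when constructing an instance falls back to its default (or to the atom "undefined" if you didn't declare one). A quick made-up sketch:

%% Hypothetical record with default field values.
-record(server_opts, {port = 8080, host = "localhost", debug}).

%% Fields left out of the construction get their defaults;
%% 'debug' has no declared default, so it comes out as 'undefined'.
default_opts() -> #server_opts{}.
custom_opts()  -> #server_opts{port = 9090}.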

Like everything in Erlang, the way to manipulate records is by pattern matching:

#node{left=L,right=R} = Sample

will assign "L" to "#node{left=#leaf{value=1},right=#leaf{value=2}}", and "R" to "#leaf{value=3}".

Aside from the weirdness of the meta-language division, the sequential part of Erlang is very
straightforward. It's a classic eager functional language. Programs in Erlang are quite similar to
programs in a language like Scheme, except for syntax, and the syntax is OK; sort-of a bastard offspring
of Lisp and Prolog, which isn't an awful thing.

To give you a bit of flavor, here's a simple bit of binary search tree code in
Erlang:

-module(bt).
-export([insert/3, fetch/2]).

%% A simple key/value binary search tree; an empty tree is the atom 'none'.
-record(node, {key, value, left, right}).

%% Insert a key/value pair, rebuilding nodes with the full record
%% construction syntax.
insert(#node{key=K, value=V, left=L, right=R}, Key, Value) when K > Key ->
   #node{key=K, value=V, left=insert(L, Key, Value), right=R};
insert(#node{key=K, value=V, left=L, right=R}, Key, Value) ->
   #node{key=K, value=V, left=L, right=insert(R, Key, Value)};
insert(none, Key, Value) ->
   #node{key=Key, value=Value, left=none, right=none}.

%% Look up a key; return the associated value, or 'none' if it's absent.
fetch(#node{key=K, value=V}, K) ->
   V;
fetch(#node{key=K, left=L}, Target) when K > Target ->
   fetch(L, Target);
fetch(#node{right=R}, Target) ->
   fetch(R, Target);
fetch(none, _Target) ->
   none.
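Just to show it in action, here's roughly what using the module from the Erlang shell might look like (my example, not part of the original code):

T0 = bt:insert(none, 5, five),
T1 = bt:insert(bt:insert(T0, 3, three), 8, eight),
eight = bt:fetch(T1, 8),
none  = bt:fetch(T1, 42).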

It's very straightforward functional code. It's almost exactly the same way that it would be written
in ML (my favorite eager functional language), and very close to how it would be written in Haskell. It is a bit verbose - but that's because I'm not using Erlang to its full advantage. The code can be
simplified a bit by using record field access and update syntax.

If you only want to access one field of a record, there's a shorthand syntax that's more convenient
than writing out a full pattern match to get at one field: if Sample is a variable
containing a record, then Sample#node.left gets the value of the field named "left" of the
record type "node". (If you use this syntax with the wrong record type, you'll get a run-time error.)

Similarly, there's a syntax for updating a record - that is, creating a new record that's the same
as an old record except for a couple of modified fields: Sample#node{right=#leaf{value=4}} will
create a new tuple of type "node", with the same left value as Sample, but with a different right
value.

So, using that, we can modify "insert", so that we don't need to write the full record
construction syntax. Instead, we could write insert the following way (called "sinsert" for "simpler insert"):

sinsert(none, Newkey, Newval) ->
   #node{key=Newkey, value=Newval, left=none, right=none};
sinsert(Node, Newkey, Newval) when Node#node.key > Newkey ->
   Node#node{left=sinsert(Node#node.left, Newkey, Newval)};
sinsert(Node, Newkey, Newval) ->
   Node#node{right=sinsert(Node#node.right, Newkey, Newval)}.

For this kind of straightforward data structure programming, Erlang is a really pleasant language.
The record system is very nice, flexible, and easy to work with. The strong pattern matching facility of the language is flexible enough that for an impressively large number of functions, you don't need to write any control structure code beyond patterns and "when" guards. I just wish it wasn't in the
meta-syntax - when I get around to explaining the way that a large Erlang program is broken into multiple files, you'll see a bit of why I'm unhappy that it's meta.

In general, my impression thus far playing with Erlang (which I've used for a bit more complicated
stuff than I've included in this post) is that it's very easy to read and write, and it behaves in a
very intuitive fashion. I definitely found it easier to pick up as a beginner than I did Haskell, but I think that's mainly because I find eager evaluation to be easier to predict than lazy. For this kind of basic data structure programming, I think it's a toss-up with Haskell: I prefer Haskell's
strong types, but Erlang's eager semantics make it a whole lot easier to understand the efficiency
tradeoffs between different implementation strategies.

Before moving on, I'll toss in some information on the bit of Erlang that I really hate.
Aside from using a preprocessor for implementing meta-language features like records, Erlang's
preprocessor also has a macro facility that's modeled on the macro system from the C preprocessor.
Like cpp, it's ugly as all get-out, and it's bloody dangerous to use. It's a ghastly
thing, but it's there, and a fair bit of Erlang code uses it. (Gag.) One minor redeeming feature of the
macro system is that uses of macros are at least clearly marked - unlike in C or C++, you can always tell a macro invocation from a function invocation.

To define a preprocessor macro, you use:

-define(name, replacement).

Or, for a macro with parameters:

-define(name(p1, p2, p3), replacement).

For example:

-define(TIMEOUT, 2500).
-define(Double(X), 2*X).

Then you can use the macros by prefixing the name with a "?": ?Double(10).
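And, to show why I say it's dangerous: like cpp, the expansion is purely textual, so a macro like the ?Double above can silently do the wrong thing depending on what you hand it:

?Double(10).      %% expands to 2*10    -> 20
?Double(3 + 1).   %% expands to 2*3 + 1 -> 7, not 8

The usual defensive habit, exactly as in C, is to parenthesize everything in the definition: -define(Double(X), (2*(X))).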

Naturally, if you've got a hideous preprocessor, you can't escape using it for conditional compilation. So the Erlang preprocessor includes "-undef", "-ifdef", "-ifndef", "-else", "-endif", each
of which means pretty much the same thing as the corresponding C preprocessor directive.
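The typical use you'll see in the wild is something like a debug-logging macro that compiles down to nothing unless a flag is set. A made-up sketch (the flag name is arbitrary):

-ifdef(debug).
-define(DBG(Msg), io:format("DEBUG: ~p~n", [Msg])).
-else.
-define(DBG(Msg), ok).
-endif.

%% The 'debug' flag can be turned on from the command line with: erlc -Ddebug foo.erl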

Once again, gag.

Anyway - next post, I'll go into more depth about control structures in Erlang, and introduce one of the more unusual features of Erlang, called bit syntax. Bit syntax is an amazingly powerful and flexible pattern-matching syntax for data encoded in bit streams. Given Erlang's origins, it's fairly obvious why this is part of the language - and it's one of the things that make it so suitable for high performance distributed systems: it's got deep support for the kind of bit-tweaking that's necessary for
many communication protocols, and it's done in a way that allows the compiler to optimize the living daylights out of it. But that's the next post.


Have you used the Common Lisp macro system? It may be non-trivial, but the ability to do Turing-complete computation is a huge advantage.

Is there an Erlang-compiler that produces native executables? One of the things that puts me off with Erlang is that I haven't found one. I checked out how Wings3D managed their installation and even they install an Erlang runtime.

No, there is no native compiler for Erlang. Figuring out how to do hot-loading of code on the fly in a compiled system would be quite an unpleasant task to try to pull off...

evgen: I don't see how compilation would prevent hot-swapping, or even make it that much harder, but I acknowledge that it doesn't quite fit the idea of monolithic immutable executables. Obviously such a feature would be of little value to anyone wishing to make stand-alone applications. Although, it could be kind of fun to be able to update applications, without restarting them (though, I'd be happy if a certain operating system could do that).

ch:

Yes, I've used Common Lisp macros. I'm a lisper going way back; I even briefly got to play admin on a Symbolics lisp machine. And I *hate* CL macros. I just started writing a long response about macro systems, but I realized that it would make a decent top-level post - so expect to see it later this week.

Flaky:

The reason that dynamic loading doesn't co-exist well with native compilation is pretty straightforward. On modern machines, true native compilation is done for performance: you want to optimize the crap out of the code, and obviously that entails eliminating anything like an interpretive step.

The catch is, to really optimize the code, you do an awful lot of transformations of it. Lots of things get inlined, parameters get added, calls get re-ordered, functions get combined, or split into specializations, etc.

To be able to dynamically splice a modification in to the running system, you need to know what you're splicing into with a great deal of accuracy. If you have separately compiled modules, and you want to re-load a changed one, you need to know exactly how it's going to be interacting with the code around it. But if that code was potentially transformed significantly during compilation, you can't know that well enough.

Optimization also relies on information. If you know that a module could change, you can't do any cross-module optimization, because you can't rely on any information from the other module staying the same. That eliminates a huge number of important optimizations, from constant propagation to inlining to alias analysis.

If you compile down to a good intermediate code, and you rely on a runtime that includes dynamic native compilation, then for a long-running system (which is the kind of system where hot-swapping code makes sense), you get the best of both worlds. You can optimize anything you want, so long as you retain information about what you did. Then when you need to hot-swap, you can basically rewind the optimizations, splice in the new code, and then re-optimize.

It can make a huge difference. On some Java code I've written for an SCM system, we found that we paid a startup cost of about 5 seconds of CPU time for loading (which is reflective of the really lousy way that Java does code loading), and around 30 seconds to 1 minute of dynamic compilation time each time we started our system. Once that cost was paid, the overall performance of the system was *better* than corresponding C++ code. Since an SCM server can go weeks at a time without a restart, even if the cost were a hundred times higher - if it took an hour and a half of dynamic compilation time to stabilize on optimized code,
it would *still* have been a net positive to use the dynamic compilation. With the tradeoff where it is, dynamic compilation is the sensible way to go.

It makes even more sense when you realize how different the performance characteristics of some of the different CPUs are. An Intel Celeron, an Intel Core Duo, a Core 2 quad,
an Opteron, etc., have very different characteristics in instruction dispatch, caching, memory access, I/O bandwidth, etc. With dynamic compilation, you can optimize for whatever specific target you're running on. With static compilation, you're stuck picking one optimal target.

MarkCC: Quite right. Demands of performance would require the use of JIT-compilation for hot-swapping. However, I don't see any major hurdles in implementing straightforward native-code hot-swapping, if performance is not the top-most issue. My desire for a native compiler for Erlang has more to do with ease of use, both for me and any potential users of whatever I might write. If the runtime and the program were packaged into one native executable, that would be quite fine. In fact, I'm quite tired of having to install programs at all, when, as a programmer, I know that it would be quite possible to package an entire application into one file that contains all the code and data the application needs, and to run it without delay with just one click, not unlike JAR-packages sans the horrible startup times.

Besides targeting various processors, JIT can also, at least in principle, do an improved recompilation based on on-line profiling, which sounds cool. And of course, if you wish to do a bit of compile-time scheduling of parallel programs, doing it on the target system is a good option, since then you know how many cores you have. Incidentally, Java and CLI bytecodes are far from the best possible intermediate languages, in terms of compilation time and efficiency of resulting code. (LLVM looks quite interesting.) I wonder why they don't phase those bytecodes out in favour of better alternatives.

Erlang does have a native code compiler. It's called HiPE, and it's in the documentation.

Just pass +native on the erlc command line, or the native option in the call to compile:file/2.

Currently it supports SPARC and x86. What it gives you is native code. What it does not give you is a.out/.exe files.
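Concretely, compiling the bt module from the post that way would look something like this (the command-line equivalent being erlc +native bt.erl):

%% From code or the shell:
compile:file(bt, [native]).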

Concerning Erlang syntax and macros, there is a Lisp-Flavoured Erlang package, offering Common-Lisp-ish syntax and also Common-Lisp-ish macros.

Pluggable "transformations" are the "native" Erlang way to do macro-like things.

The Erlang preprocessor exists due to customer demand; I wish it didn't exist.