Today, I have something really fun and twisted for you. It's my favorite variation on
BrainF**k, called "BFM", which is short for "BrainFunct-Macro". It's a doubly-Turing-equivalent language - it's got two complete and distinct Turing-equivalent computing systems in it: it's
got regular BF on the bottom... But before BF gets going to run the program,
the program is pre-processed by a complete lambda-calculus macro expander.
Basically, you start with a BF interpreter, and you observe, quite logically, that there's a
ton of repetition in a typical BF program - stuff that could be nicely extracted. An
obvious example is: suppose you want to add a + b, where they're stored in neighboring cells, and
store the result where a was. In BF, that's "[-<+>]<" (with the pointer starting on b).
So, you add a way of defining that as a macro: &add=[-<+>]<;
Next, you notice that gosh, there's a lot of similar macros, which you could make the
same if you could only parameterize them. For instance, what if you wanted to add "a" and "b", but
they were actually three cells apart? The program would be: "
[-<<<+>>>]<<<". That's almost the same as the original adder. But wouldn't it be easier to be able to write
repetitions like that in a smarter way? Like, say, "3(x)" to be the equivalent of "x x x"? So
then, as a macro, that program would be: &addThreeApart=[-3(<)+3(>)]3(<);
And then, you might notice that "add" and "addThreeApart" are pretty much the same thing - the only difference is really a parameter, for how many times to repeat the "<"s and ">"s. So you write:
&addApart(n)=[-n(<)+n(>)]n(<); &add=addApart(1); &addThreeApart=addApart(3);
Now, you notice that the "n"s are really macros themselves, and what you're doing is allowing
macros to take other macros as parameters. So just stop worrying about it, and
let macros be parameters to other macros. And bingo - you've just created a macro system
which is pretty much lambda calculus.
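To make the higher-order trick concrete, here's an illustrative sketch in Python (the names are mine, not part of BFM): a macro is just a function that produces BF text, and a parameter like "n" is itself such a function - a repeater:

```python
def rep(k):
    """The macro written "k(x)" above: repeat its argument k times.
    Note that rep(k) is itself a function, passed around as a value."""
    return lambda body: body * k

def add_apart(n):
    """&addApart(n)=[-n(<)+n(>)]n(<) -- here n is a macro, not a number."""
    return "[-" + n("<") + "+" + n(">") + "]" + n("<")

add = add_apart(rep(1))              # the one-cell adder
add_three_apart = add_apart(rep(3))  # the three-apart adder
```

Passing rep(3) instead of rep(1) regenerates the three-apart adder "[-<<<+>>>]<<<" from the single addApart definition - one definition, different repeater macros as arguments.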
So, you've got a lambda calculus macro system. Well, in Scheme, which is also based on lambda
calculus, they make code a lot easier to read by allowing you to write local function
definitions. So why not local macros?
&addNApart(n) = &left=n(<); &right=n(>); [- left + right ] left;
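The local-macro idea has an obvious analogue in ordinary languages. A purely illustrative Python rendering (the name is mine): the locals are just bindings visible only inside the definition:

```python
def add_n_apart(n):
    # &left and &right play the role of the local macros:
    # they exist only inside this one definition.
    left = "<" * n
    right = ">" * n
    return "[-" + left + "+" + right + "]" + left
```

add_n_apart(3) regenerates the three-apart adder "[-<<<+>>>]<<<", and neither "left" nor "right" leaks out to pollute anything else.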
Of course, we'd also like to be able to use more than one parameter, so that we don't need
to curry everything out, right? No problem. Just separate the parameters by "|". We'll use this
to build something really crazy; let's treat the BF tape as if it were a stack, and write
some stack-oriented code so that we can build a multiply operator.
&zero=[-]; &lefttwo = <<; &righttwo = >>;
# copy the nth value on the stack to the top of the stack.
&rotn(n)= &leftn=n(<); &rightn=n(>);
  # Push two zeros onto the stack
  2(> zero) 2(<)
  # copy the nth value down the stack to the two new stack cells.
  leftn [ - rightn >+>+ lefttwo leftn ] rightn
  # copy the second new value back to the nth cell down
  righttwo [- lefttwo leftn + righttwo rightn ]
  # move the stack pointer to the first copy of the value.
  <;
&pushzero = > zero;
&mult= pushzero lefttwo [- righttwo rotn(1) add lefttwo] righttwo [- lefttwo + righttwo ] lefttwo >
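For the skeptical, here's a sanity check: a Python rendering of these macros as string concatenation (all the names are mine, and the pointer bookkeeping reflects my reading of the code above), plus a bare-bones BF interpreter, multiplying 3 by 4 on a tape used as a stack:

```python
def bf_run(code, tape, p=0):
    """Minimal BF interpreter (no I/O): run `code` on a preloaded `tape`,
    data pointer starting at cell p. Mutates tape; returns final pointer."""
    jump, stack = {}, []
    for i, c in enumerate(code):          # match brackets up front
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    ip = 0
    while ip < len(code):
        c = code[ip]
        if c == ">": p += 1
        elif c == "<": p -= 1
        elif c == "+": tape[p] += 1
        elif c == "-": tape[p] -= 1
        elif c == "[" and tape[p] == 0: ip = jump[ip]
        elif c == "]" and tape[p] != 0: ip = jump[ip]
        ip += 1
    return p

# The macros, expanded by hand:
zero = "[-]"
lefttwo, righttwo = "<<", ">>"
rep = lambda k, s: s * k                  # the k(x) repetition macro
add = "[-<+>]<"
pushzero = ">" + zero

def rotn(n):
    leftn, rightn = rep(n, "<"), rep(n, ">")
    return (rep(2, ">" + zero) + rep(2, "<")
            + leftn + "[-" + rightn + ">+>+" + lefttwo + leftn + "]" + rightn
            + righttwo + "[-" + lefttwo + leftn + "+" + righttwo + rightn + "]"
            + "<")

mult = (pushzero + lefttwo
        + "[-" + righttwo + rotn(1) + add + lefttwo + "]"
        + righttwo + "[-" + lefttwo + "+" + righttwo + "]"
        + lefttwo + ">")

tape = [3, 4, 0, 0, 0, 0]   # stack holds a=3, b=4; pointer starts on b
bf_run(mult, tape, p=1)
# tape[0] is now 12: the product landed where a was
```

Six spare cells is enough scratch space here; rotn(1) pushes its two temporaries, drains them, and leaves b intact so the outer loop can copy it again on each pass.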
Gosh, BF starts to look downright legible!
I find myself suddenly wondering about the possibility of an optimizing compiler for this...
Gosh darn it, do I hear echoes of FORTH somewhere here... a dual-stack PDA ticking away.
http://www.swapped.cc/bf/ <-- optimizing compiler for straight BF (link stolen from wikipedia).
So ... if you had a compiler for BFM that produced BF as output (reasonably simple, right?), you could get optimized BFM.
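As a toy proof of concept for the expansion half (heavily hedged: this is my own sketch, handling only unparameterized "&name=body;" definitions - no parameters, no locals, no repetition), macro expansion down to plain BF is a small recursive substitution:

```python
import re

def expand(src):
    """Toy BFM-to-BF expander: collect unparameterized &name=body;
    definitions, then rewrite macro names recursively until only
    the eight BF characters remain."""
    defs = dict(re.findall(r"&(\w+)\s*=\s*([^;]*);", src))
    program = re.sub(r"&\w+\s*=[^;]*;", "", src)

    def subst(text, depth=0):
        assert depth < 100, "runaway macro recursion"
        out = []
        for tok in re.findall(r"\w+|\S", text):   # names vs. punctuation
            if tok in defs:
                out.append(subst(defs[tok], depth + 1))
            else:
                out.append(tok)
        return "".join(out)

    return "".join(c for c in subst(program) if c in "<>+-[],.")

expand("&zero=[-]; &cleartwo=zero > zero; cleartwo < cleartwo")
# -> "[-]>[-]<[-]>[-]"
```

The output of this pass is ordinary BF, so any BF optimizer (like the one linked above) could take it from there.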
Of course you lose some information you might otherwise keep, but hey.
somebody'd better kidnap whoever hacked up this macro system, or else before long BF will end up actually used in a commercial product somewhere. i just know it.
this reminds me of how people have discovered that the template notation in C++ is Turing complete and use it to do compile-time computation. (It seems crazy to bother using the preprocessor to do computations, but maybe it is useful somehow.)
Our tutorial on COLA provides insight into how programming languages can be implemented using COLA's combined abstractions, together with an implementation of parsing expression grammars in COLA. The "esoteric" programming language brain**** was chosen for its simplicity, which allows for concentrating on COLA's features.