For your amusement and edification, the following is a very simple interpreter for Fractran
programs which, in addition to running the program to generate its result, also produces a trace
showing how the program executed.
;; A Trivial Fractran Interpreter
;;
;; Copyright 2006 Mark C. Chu-Carroll
;; http://scienceblogs.com/goodmath
;;
;; You're welcome to do anything you want with this code as long
;; as you keep a copy of this header to identify the original source.
;;
;; This program runs a fractran program. A fractran program is a list
;; of fractions. The fractions are represented by a list of two integers:
;; the numerator, and the denominator. For example, the classic fractran
;; multiplication program could be written:
;; ((385 13) (13 21) (1 7) (3 11) (7 2) (1 3))
;; or:
;; (((* 7 11 5) 13) (13 (* 3 7)) (1 7) (3 11) (7 2) (1 3))
;;
;;
;; To run a program until it terminates, call (run-fractran program input).
;; This will return a list; the car of the list will be the result of
;; running the program, and the cdr will be a trace of the executions in the
;; form of a list of the fractions that ran at each step.
;;
;; To run a program for a specific maximum number of steps, call
;; (run-fractran-bounded program input maxsteps)
;;
(define (step-fractran fract int)
  (if (null? fract)
      int
      (let ((fr (car fract)))
        (if (= (remainder int (cadr fr)) 0)
            (cons (/ (* int (car fr)) (cadr fr))
                  (list fr))
            (step-fractran (cdr fract) int)))))
(define (run-fractran fract int)
  (let ((step-result (step-fractran fract int)))
    (if (list? step-result)
        (let ((new-int (car step-result))
              (last-step (cadr step-result)))
          (cons step-result (run-fractran fract new-int)))
        (list int))))
(define (run-fractran-bounded fract int bound)
  (if (> bound 0)
      (let ((step-result (step-fractran fract int)))
        (if (list? step-result)
            (let ((new-int (car step-result))
                  (last-step (cadr step-result)))
              (cons step-result (run-fractran-bounded fract new-int (- bound 1))))
            (list int)))
      (list int)))
;; The mult program.
(define mult '((385 13) (13 21) (1 7) (3 11) (7 2) (1 3)))
;;
;; (run-fractran mult 432)
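;; For example, (run-fractran mult 432) feeds in 2^4 * 3^3; the run ends at
;; 5^(4*3) = 244140625, and the trace shows which fraction fired at each step.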
;; The primes program
(define primes '((17 91) (78 85) (19 51) (23 38) (29 33) (77 29) (95 23)
                 (77 19) (1 17) (11 13) (13 11) (15 2) (1 7) (55 1)))
;; (run-fractran-bounded primes 2 1000)
----------
Commenter Pseudonym has kindly provided a Haskell version in the comments, which was mangled by MT's comment formatting, so I'm adding a properly formatted version here. I think it's a really interesting comparison to the Scheme code above. The Haskell code is very nice; cleaner than my rather slapdash Scheme version. But overall, I think it's a pretty good comparison - it gives you a sense of what the same basic code looks like in the two languages. Personally, I think the Haskell is clearer than the Scheme, even though the Scheme is my own code.
module Fractran where

import Ratio
import Data.Maybe
import Control.Monad.Fix

type Program = [Rational]

runFractran :: [()] -> Program -> Integer -> [Integer]
runFractran bound prog l
    = step bound prog l
    where
      step _ [] l = []
      step [] (f:fs) l
          = []
      step (_:bound) (f:fs) l
          = let p = f * fromIntegral l
            in case denominator p of
                 1 -> let pi = numerator p
                      in pi : step bound prog pi
                 _ -> step bound fs l

fractran :: Program -> Integer -> [Integer]
fractran prog l
    = runFractran (fix (():)) prog l

fractranBounded :: Int -> Program -> Integer -> [Integer]
fractranBounded b prog l
    = runFractran (take b $ fix (():)) prog l

mult = [385%13, 13%21, 1%7, 3%11, 7%2, 1%3]
primes = [17%91, 78%85, 19%51, 23%38, 29%33, 77%29, 95%23,
          77%19, 1%17, 11%13, 13%11, 15%2, 1%7, 55%1]
-- fractran mult (2^4 * 3^3)
-- fractranBounded 1000 primes 2
And since Haskell vs Scheme is somewhat topical...
This may be only remotely related to Fractran.
I just stumbled upon this Conway paper:
The Free Will Theorem [11 Apr 2006]
Authors: John Conway, Simon Kochen
Comments: 31 pages, 6 figures
Abstract:
On the basis of three physical axioms, we prove that if the choice of a particular type of spin 1 experiment is not a function of the information accessible to the experimenters, then its outcome is equally not a function of the information accessible to the particles. We show that this result is robust, and deduce that neither hidden variable theories nor mechanisms of the GRW type for wave function collapse can be made relativistic. We also establish the consistency of our axioms and discuss the philosophical implications.
http://arxiv.org/abs/quant-ph/0604079
Since we want to do some Haskell vs Scheme, here is my Scheme version:
(define (fractran fractions input)
  (let loop ((current fractions))
    (if (null? current)
        input
        (if (integer? (* input (car current)))
            (fractran fractions (* input (car current)))
            (loop (cdr current))))))
;;
;; and here comes Mark's example:
(fractran '(385/13 13/21 1/7 3/11 7/2 1/3) 432)
It computes 244140625 == 5^(4*3)
Thanks, Mark.
Had I thought for more than five minutes about my version, I would have realised that a bounded evaluation version is unnecessary in Haskell. You just generate a possibly infinite stream and let the consumer decide how much computation to do. I was a bit too fixated on doing it the same way, instead of doing the same thing. Final code here.
I also find the Haskell version cleaner (I'm so biased it's not funny, though), but the one thing that most programmers would find distracting is the explicit conversions between Integer and Rational. Yes, it's distracting, especially in a program this small. However, it's the price you pay for strong typing.
Here's a shorter and more idiomatic Haskell version:
fractran :: Program -> Integer -> [Integer]
fractran fs n = gotIntegers . filter ((1 ==) . denominator) . map (* fromIntegral n) $ fs
    where gotIntegers []    = []
          gotIntegers (x:_) = let next = numerator x in next : fractran fs next
I came here via Reddit. There's another Haskell implementation in the comments thread. Link:
Short Haskell impl. of Fractran.
(I enjoy the forays into pathological languages quite a bit. Thanks for blogging about them.)
Now, here's the crucial question: Is it tractable to write a Fractran interpreter in Fractran?
Of course, you'd have to do something tricky to encode the input...
And just for fun, here's my Lisp implementation. Lisp handles rationals just fine, so I'm just using them directly. It's likely less efficient than one using integer pairs, like yours does, Mark.
(defun execute-fractran-program (i instructions &key (index 0)
(bound nil) (trace nil))
(if (or (>= index (length instructions)) (and bound (zerop bound)))
i
(let* ((newi (* i nil)
(newi (if trace (cons newi i) newi)))
(if (integerp newi)
(execute-fractran-program newi instructions :index 0
:bound bound :trace trace)
(execute-fractran-program i instructions :index
(1+ index) :bound bound :trace trace))))))
Using the same setup as Mark's, it would be run thusly:
(setf mult '(385/13 13/21 1/7 3/11 7/2 1/3))
(execute-fractran-program 432 mult)
Or
(execute-fractran-program 432 mult :bound 200 :trace t)
Or something like that.
And, of course, because I'm a moron, I made a mistake in the program. In the first line of the second if:
"(execute-fractran-program newi instructions :index 0 :bound bound :trace trace)"
The "bound" should be replaced by "(1- bound)". This only affects it if you are bounding the execution - this error obviously prevents it from being bounded at all!
Coin:
I was thinking about that too. ^_^ You'd probably Godel-encode it using integer pairs, and load that into a single register. The numbers involved would be humongous, of course.
Also, I just realized another, larger error that messes everything up. I don't know *how* it happened, because I definitely had it right earlier in my code. On the line that has the let*, where it says (* i nil), the nil should be replaced by (nth index instructions).
Also, I think Mark did a good thing putting the program before the initial value. You can swap that if you like.
Blarg I suck! The suggested fix for the bounding problem doesn't work, because if you *aren't* bounding it then it craps on you.
Here's an updated, fixed version that works now. Rather than mess with nbsp again, I've just removed all unnecessary whitespace. You can pretty-print it at your leisure.
(defun execute-fractran-program (i instructions &key (index 0) (bound nil) (trace nil))
  (if (or (>= index (length instructions)) (and bound (zerop bound)))
      i
      (let ((i (if (atom i) (list i) i)))
        (let ((newi (* (car i) (nth index instructions))))
          (if (integerp newi)
              (execute-fractran-program (if trace (cons newi i) newi) instructions
                                        :index 0 :bound (if bound (1- bound) nil) :trace trace)
              (execute-fractran-program i instructions
                                        :index (1+ index) :bound bound :trace trace))))))
Coin, good question about Fractran-in-Fractran.
What you're looking for is a universal Fractran machine. For that, you need a representation of the program. That's a list of pairs of integers. It's not as easy as it sounds.
But at the very least you'll need greatest common divisor as a building block. Here it is:
2/7,3/11,13/17,19/23,51/65,1/13,46/95,1/19,5/6,91/2,209/3
Give it 2^a * 3^b, and it will return 5^gcd(a,b).
Interestingly, it's not much longer than multiply. And I didn't try optimising it very much.
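If you want to watch it run with Mark's interpreter above, something like this should do, treating each fraction as a (numerator denominator) pair (the gcd-prog name is just for illustration):

(define gcd-prog '((2 7) (3 11) (13 17) (19 23) (51 65) (1 13) (46 95)
                   (1 19) (5 6) (91 2) (209 3)))
;; 2^6 * 3^4 = 5184; per the claim above, the run should end at 5^gcd(6,4) = 25.
;; (run-fractran gcd-prog (* (expt 2 6) (expt 3 4)))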
I worked out a marvellous way to compile a small high-level language into Fractran mechanically by hand, but unfortunately this comment box is too small to contain it.
No, I haven't written a compiler. Yet.
Pseudonym:
If you email me a copy of your description of how to compile into fractran, I'll post it on the main page. (I might even implement it.)
Pseudonym:
Mind if I break down your GCD program for the benefit of the viewers at home? It's wonderfully simple, and an excellent example of exactly how to do subroutines in fractran.
Creating a subroutine in fractran is a simple matter of establishing an address register and a restore register. All subroutines begin with A/B, end with 1/A, and have a series of program fractions in the middle, all multiplied by B/A.
You 'call' a Fractran subroutine by simply putting its address register in the numerator of a fraction. It continues execution until none of its body fractions return an int (just like the program as a whole), and then clears the address register, allowing the program to proceed as normal. All subroutines must occur *before* the main body of the program.
You can use the same concept, obviously, to test for a value without permanently decrementing it. If the tested value is A and the restore register is B, you simply put B in the numerator of the fraction that's doing the testing, then put A/B at the beginning of your program (before the subroutines), and you'll immediately get that register value back.
Both of these are used to good effect in this program. You can see two subroutines in the program, of a single fraction each. These are "19/23 2*23/5*19 1/19" and "13/17 3*17/5*13 1/13". Obviously, 19, 23, 13, and 17 are simply subroutine control registers, so we can ignore them for purposes of understanding the program. The point is that whenever a 19 appears in the numerator of a fraction, the program will attempt to run 2/5 until it can't anymore, and whenever a 13 appears, it will attempt to run 3/5 until it can't anymore. When it can no longer run those lines, program execution returns to the main body.
The main body is the following fractions, "5/2*3 7*13/2 11*19/3". Some quick inspection shows that the 13 and 19 are subroutine dispatches. The 7 and 11, though, are actually restore registers, and will trigger the "2/7 3/11" in the beginning. This allows the program to test for the presence of 2s and 3s without actually removing them from I.
The program boils down to only a few statements, when simplified:
(subroutine 19) 2/5 (end 19)
(subroutine 13) 3/5 (end 13)
5/2*3
2*13/2
3*19/3
So, final program execution runs thusly:
First, remove as many pairs of 2 and 3 as you can, and add a five for each one.
When you run out of pairs, if there are 2s left, change all the 5s into 3s. Vice versa if there are 3s left.
Repeat!
In other words, this is precisely what GCD is supposed to do. It repeatedly subtracts the smaller number from the larger until it can't do so completely (you've found the remainder), then goes at it again with the roles switched. When you finally find a pair of numbers that divide evenly, the 2s and 3s will shrink to nothing (after a few runs). With nothing left unpaired to trigger the "convert 5s" subroutines, the exponent left on 5 is the number that divides both, i.e. the GCD.
Pretty straightforward implementation, really. I still find it beautiful that such a thing can be done with nothing more than fractions. ^_^
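For comparison, here's the same subtract-the-smaller-from-the-larger idea written directly in Scheme (just an illustrative sketch, not part of any of the programs above):

(define (slow-gcd a b)
  (cond ((= a 0) b)
        ((= b 0) a)
        ((> a b) (slow-gcd (- a b) b))
        (else (slow-gcd a (- b a)))))
;; (slow-gcd 6 4) => 2, which is what the Fractran version leaves as the exponent on 5.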
All right, thinking on the universal fractran machine. I think it's doable, and actually fairly easily. First, the fractran interpreter, in pseudocode:
Data:
  I = I
  instr = list of instructions, as integer pairs

Code:
  Set Index to 0
  LOOP
    Increment Index
    If instr[Index] == 0
      Then Goto END
    Set Num to instr[Index]
    Increment Index
    Set Den to instr[Index]
    If GCD(I, Den) != Den
      Then Goto LOOP
    Set I to Divide(I, Den)
    Set I to Multiply(I, Num)
    Set Index to 0
    Goto LOOP
  END
  Return I
This pseudocode should be fairly easily convertible into Fractran. The hard part, of course, is that pesky list access. I recommend using Godel-numbering to store the whole thing in a single register. That is, if you only have the fractions 5/7 7/11, and you want to store it in register 3, you would code it as 3^(2^5 * 3^7 * 5^7 * 7^11).
To access a list entry, you need a way of getting the nth prime. I don't know of any easy next-prime functions off the top of my head, so currently I'm leaning toward just plodding through the integers and doing division tests on them until you find a prime. Incrementing Index would be a matter of finding the next prime.
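To make that concrete, here's a rough Scheme sketch of the encoding and the plodding next-prime search (the helper names are just illustrative, not part of the machine itself):

(define (prime? n)            ; assumes n >= 2; plain trial division
  (let loop ((d 2))
    (cond ((> (* d d) n) #t)
          ((= (remainder n d) 0) #f)
          (else (loop (+ d 1))))))

(define (next-prime n)        ; plod through the integers until we hit a prime
  (if (prime? (+ n 1)) (+ n 1) (next-prime (+ n 1))))

(define (godel-encode prog)   ; prog is a list of (num den) pairs
  (let loop ((prog prog) (p 2) (acc 1))
    (if (null? prog)
        acc
        (let ((q (next-prime p)))
          (loop (cdr prog)
                (next-prime q)
                (* acc (expt p (caar prog)) (expt q (cadar prog))))))))

;; (godel-encode '((5 7) (7 11))) => 2^5 * 3^7 * 5^7 * 7^11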
Advice? Anything you notice that can be improved on in the pseudocode?
Xanthir, that's not precisely the way that I designed it (I used loops rather than subroutines), but it's the right general idea.
The key thing is that 19 and 13 are "program counter" locations (I designed them as loop labels, you called them subroutines; same difference), and 23 and 17 are temporaries to avoid the problem that you can't both increment and decrement the same prime power in the same instruction.
I'll see if I can write up the general method this weekend.
Ah, yeah, they can be loops just as much. Fractran is naturally a looping language, so I just considered it a normal part of the thing.
However, considering it a loop is probably better, actually. It prevents you from thinking you can do recursion, which you can't in this simple model. Recursion requires stacks, which are a lot more effort to implement.
Can't this be implemented totally without multiplication or division, via arrays of integers, once the program and its input have been factored (although, of course, factoring can be problematic at times for large numbers)?
Each program position and input for a program would be an array of integers. index 0 would represent the exponent on 2 (+ -> in numerator, - in denominator, 0 not present). index 1 would represent the exponent on 3, index 2 would be 5 and so on. Once the arrays have been initialized, there is no need to reference prime numbers until you want to read an output integer. If the output is in an exponent, it's right there in front of you.
Then the program would run as a series of (possibly sparse) vector additions with the branching rule determined by the existence or not of a negative value in the current sum.
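A rough Scheme sketch of that idea, treating each fraction and the current state as lists of exponents over 2, 3, 5, 7, 11, 13 (the helper names are just illustrative):

(define (vec-add v w) (map + v w))

(define (all-non-negative? v)
  (if (null? v) #t (and (>= (car v) 0) (all-non-negative? (cdr v)))))

(define (step-vector prog state)      ; prog is a list of exponent vectors
  (cond ((null? prog) state)
        ((all-non-negative? (vec-add (car prog) state)) (vec-add (car prog) state))
        (else (step-vector (cdr prog) state))))

;; 7/2 is (-1 0 0 1 0 0) and 432 = 2^4 * 3^3 is (4 3 0 0 0 0):
;; (step-vector '((-1 0 0 1 0 0)) '(4 3 0 0 0 0)) => (3 3 0 1 0 0), i.e. 2^3 * 3^3 * 7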
Xanthir, simple recursion (primitive recursion, perhaps?) isn't much harder to implement than a loop. Once you've got the idea that a flag register can be a program counter, you just need to realise that the register doesn't have to be a flag, but a counter.
I haven't quite worked out mutual recursion yet.
Z. Sesqui: Yes. Indeed, it's easier to write programs if you think of it that way.
Pseudonym:
Hmm, the problem with recursion, I would have thought, is not the program counter, but the local variables. You can't reuse them. You have to implement a stack before you can do that, and that's getting into some higher-level stuff (though I can see how you would do so - it's related to how I believe you would do a list).
Z Sesqui: Are you talking about implementing Fractran in a normal language, or in itself? In a normal language, using mult/div is pretty darned easy. In Fractran, using negative numbers is non-trivial - it requires that you invent a method of coding negative numbers in a purely positive form (like computers do now), and thus come up with more complex add/sub/mult/div/etc routines. This would be necessary eventually, of course, but why go to that length when a Fractran interpreter in Fractran is easier to simply write directly using mult/div?
In addition, using an array still requires you to be able to reference memory locations. This can either be done manually (in which case it can't create arrays at runtime), or programmatically (in which case it *does* still need to work with prime numbers, in order to know what prime corresponds with what array location).
Fractran is prime-based. There's no getting away from it until you build something higher-level on it which allows you to work with normal integers at run time.
Multiplicative encoding uses a transformation akin to Godel numbering to flatten 2-D arrays (and datastructures with rows of varying length) into a 1-D sequence of integers.
Examples on OEIS (Sloane's Online Encyclopedia of Integer Sequences, hosted by AT&T Research, with a distinguished editorial board):
http://www.research.att.com/~njas/sequences/A007188
A007188 Multiplicative encoding of Pascal triangle: Product p(i+1)^C(n,i).
http://www.research.att.com/~njas/sequences/A007189
A007189 Multiplicative encoding of Stirling numbers of first kind.
http://www.research.att.com/~njas/sequences/A007190
A007190 Multiplicative encoding of Stirling numbers of 2nd kind.
http://www.research.att.com/~njas/sequences/A007280
A007280 Multiplicative encoding of partition triangle.
http://www.research.att.com/~njas/sequences/A007338
A007338 Multiplicative encoding of the Eulerian number triangle.
and my own, from two days ago:
http://www.research.att.com/~njas/sequences/A124061
A124061 Multiplicative encoding of Catalan's triangle: Product p(i+1)^T(n,i).
Sure, the numbers get big rather quickly, but that doesn't bother Turing or Conway.
I spoke with Conway a few times at Caltech, and admire him greatly. I made him the prime suspect (pun intended) in my novel manuscript "Axiomatic Magic", where a magical murder on an alternate-universe Caltech campus is solved by amateur sleuth Richard Feynman. But that's another story...
Don't miss the Transit of Mercury on Wednesday, a rather rare (in human terms) astronomical event.
Pseudonym:
I spent a little while looking at your GCD program, and I think I've found the only optimization that can really take place. Since the subroutines are only called from one place, you can insert the re-incrementing of 2 or 3 directly into them. You'll want to put it on the end fraction, the one that exits the subroutine, so that it only executes once. Then, you cut the program down by two fractions, and kill the need for two registers. As the restore registers for 2 and 3 which we are eliminating are quite low (7 and 11), this has a very nice cascade effect on total values as well.
It should look like this, optimized:
7/13 39/35 2/7 11/19 38/55 3/11 5/6 7/2 11/3
Or, decomposed:
7/13 (3*13)/(5*7) 2/7 11/19 (2*19)/(5*11) 3/11 5/(2*3) 7/2 11/3
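To sanity-check it with Mark's interpreter above (assuming the optimised listing behaves as claimed; gcd-opt is just an illustrative name):

(define gcd-opt '((7 13) (39 35) (2 7) (11 19) (38 55) (3 11) (5 6) (7 2) (11 3)))
;; (run-fractran gcd-opt (* (expt 2 6) (expt 3 4)))  ; should end at 5^gcd(6,4) = 25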
I was thinking of implementing it in hardware or on an embedded processor, where multiplication is more expensive than addition.
I still don't see where it needs prime numbers once the arrays are initialized.
Multiplying 6 / 35 by 85 / 2 is the same as a vector addition of [1; 1; -1; -1; 0; 0; 0] to [-1; 0; 1; 0; 0; 0; 1] which gives [0; 1; 0; -1; 0; 0; 1].
To explain the resulting fraction to a human requires knowledge of a prime number table, but a Fractran interpreter doesn't need to care in the middle of a computation. That all comes for free with the fundamental theorem of arithmetic.
By analogy, explaining binary numbers to a human requires a conversion step, but the processor doesn't need to do that except on input and output.
Sesqui:
You're right. In fact, there's a derivative of Fractran called Bag which uses identifiers in the fractions; each identifier represents some *unspecified* prime number. So Xanthir's program 7/13 (3*13)/(5*7) 2/7 11/19 (2*19)/(5*11) 3/11 5/(2*3) 7/2 11/3
could be written (I think) as:
a/b bc/df f/e g/h fh/dg i/g d/fi e/f g/i
The same basic trick that allows Bag to use identifiers for primes could be used to represent it using the vectors to represent the coefficients of the primes.
Ah, I see, you're talking about doing a fractran interpreter in some other language. I've been discussing doing a fractran interpreter *in* fractran, which *does* require a prime generator to access arrays.
Oh, wait... Actually, it only requires such a thing if you want (theoretically) unbounded integers in the array. If you're fine with bounding them, you can store them in powers of two. To see what I mean, say you decided that the array values must be 4-bit. Then, you can simply concatenate the whole array, and use it as a giant number. So, say you wanted to store 1, 3, 5. That would be 0001, 0011, 0101. Concatenating in reverse order (so that the lowest subscript is the least significant place), you'd get 010100110001, or 1329. You can store this in a register, and then extract each digit individually. It actually functions more as a stack, and you'd need an additional stack holding the digits as you popped them off, so you could put them back on when you extracted the number you wanted. Not too efficient in general, but it works pretty well if you're mostly eating the array as you do in Fractran, with only the occasional restoration.
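In ordinary arithmetic, the 4-bit packing described above looks something like this (a sketch; the helper names are just illustrative):

(define (pack-4bit lst)               ; lowest index ends up in the least significant nibble
  (if (null? lst)
      0
      (+ (car lst) (* 16 (pack-4bit (cdr lst))))))

(define (unpack-4bit n)
  (if (= n 0)
      '()
      (cons (remainder n 16) (unpack-4bit (quotient n 16)))))

;; (pack-4bit '(1 3 5)) => 1329, i.e. binary 0101 0011 0001
;; (unpack-4bit 1329)   => (1 3 5)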
Incidentally, I'm not convinced that you need a prime number generator in Fractran to represent lists and arrays.
Fractran itself only uses multiplication and division. The meaning of a program is dependent on prime numbers, but you don't need prime numbers to interpret it. (It really helps when you're writing programs though!)
A binary stack (i.e. a stack with only 0's and 1's on it), for example, can be represented with a counter (plus a register or two as scratch space). To push a symbol, multiply the value in the stack by two and then add either 0 or 1. To pop a symbol, divide by two. To peer at the top symbol, take the remainder when divided by two. It's trivial to generalise to the case of a stack with a finite alphabet.
Once you have a binary stack, you can store arbitrary integers using any prefix-free encoding like unary or one of the Elias codes.
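In ordinary arithmetic the trick looks like this (in Fractran the counter would live in a prime's exponent, and the doublings and halvings would be fraction steps):

(define (push-bit stack bit) (+ (* 2 stack) bit))
(define (pop-bit stack) (quotient stack 2))
(define (peek-bit stack) (remainder stack 2))

;; Starting from an empty stack (0) and pushing 1, 0, 1:
;; (push-bit (push-bit (push-bit 0 1) 0) 1) => 5, i.e. binary 101 with the last push on top.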
I'd been thinking of a binary stack already, though I'm sure I described it in a roundabout way. ^_^ I don't know *why* I didn't think of prefix-free encoding, though. Thanks, Pseudonym. Unary's probably the easiest possible - arbitrary numbers of ones separated by a zero. I'm not familiar enough with the other encodings to know off the top of my head if one can easily do arithmetic with them. With unary, though, you just eat the stack until you find a zero, adding one to an accumulator with each divide-by-2 operation.
Thinking on it more, I realized that embedding unary into a binary number is just about the worst thing possible. If you're trying to embed a number N, the binary number required to embed it is around 2^N. On the other hand, if you use higher-level embeddings, like embedding a binary number in trinary, the number is only around 3^log2(N), which is an improvement in computational magnitude. Increasing the base of the embedded number results in more efficient embeddings, as well.
So, after thinking about it a bit, it's probably best just to embed a decimal number in base 11. That makes it easy to write, and though the operations involved may be a bit more difficult than using binary, the efficiency of the embedding is greatly improved, resulting in lower numbers overall.
Using A for the digit after 9, and storing it in register 3, the listing for the optimized GCD program would be:
3^(3A11A2A7A6A5A11A3A55A38A19A11A7A2A35A39A13A7)
To extract the values, just divide by 11 repeatedly, pushing remainders onto another stack to compile the value (for actual implementation, you'd use two alternating stacks for the value, pushing the divided value onto the opposite stack for efficiency, as division is inherently destructive in Fractran). When you hit an A, you know you can stop. Fractran's access patterns make it relatively cheap to implement, as well - push the digit you've pulled off onto another base-11 stack, and when you finally fire a rule, just reverse the process.
This number is truly stupendously large - stupidly so, and that's for a relatively small program. No one ever said Fractran was efficient. ^_^ This is what is required, though, for a universal machine. You *must* put everything onto a single register, unless you're willing to bound the possible size of the program. If you are, then you can simply load each number onto specified registers, and test them explicitly in the program. While this would be *ungodly* more efficient, it's not *universal*. However, a bounded machine could, I believe, be implemented in such a way as to be very easily extended to whatever level one needs.
I thought we were talking about creating a Turing machine as a Fractran program. If that's our goal, the exponents can get arbitrarily large - what with an infinite tape and all.
Nah, I'm aiming for a Fractran interpreter in Fractran. A Universal Fractran Machine, as it were.
Yeah, the exponents can get arbitrarily large, but it will still take quite some time to actually run the machine with numbers that staggeringly huge. ^_^ As a theoretical machine I guess it doesn't matter, but it's practically impossible to run in real life - the time to simply *access* the program increases nearly exponentially with the length of the program. ((I don't know the term for the computational class of O(11^log10(n)), but it's smaller than exponential, though not by too much.))
Mark, do you refer to this kind of program?
Number n is an array of 6 elements, prime factoring the initial state: 2^1*3^0*5^1*7^0*11^0*13^0=2*5=10
Then 2 arrays containing the exponents of the factored numerators and denominators of the shorter prime generator:
7/3, 99/98, 13/49, 39/35, 36/91, 10/143, 49/13, 7/11, 1/2, 91/1
then, starting from the initial state of 10, whenever the result is a power of 10, its exponent is a prime.
Sorry, the program didn't get formatted right. Here is the correct version:
#include <stdio.h>

int main()
{
    int i, j, div, m;
    long n[6] = {1, 0, 1, 0, 0, 0};
    int num[10][6] = {{0,0,0,1,0,0},{0,2,0,0,1,0},{0,0,0,0,0,1},{0,1,0,0,0,1},{2,2,0,0,0,0},
                      {1,0,1,0,0,0},{0,0,0,2,0,0},{0,0,0,1,0,0},{0,0,0,0,0,0},{0,0,0,1,0,1}};
    int den[10][6] = {{0,1,0,0,0,0},{1,0,0,2,0,0},{0,0,0,2,0,0},{0,0,1,1,0,0},{0,0,0,1,0,1},
                      {0,0,0,0,1,1},{0,0,0,0,0,1},{0,0,0,0,1,0},{1,0,0,0,0,0},{0,0,0,0,0,0}};
    for(i=0; i<10; i++){
        for(j=0, div=1; (j<6) && div; j++) if(n[j] < den[i][j]) div=0; //divisibility test.
        if(div) {
            for(j=0; j<6; j++) n[j] += num[i][j] - den[i][j]; //Actual fractional multiplication.
            i = -1;
            if((n[0]==n[2]) && !n[1] && !n[3] && !n[4] && !n[5]) printf("%ld,", n[0]); //if n=10^p, print p
        }
    }
}
Mark, you should fix the PHP code of this page to display printable '<' and '>' when they don't make an allowed HTML tag, as currently by default if they appear in a comment it gets distorted. Also the preview destroys the translation of those characters, so one must do it again (and guess what happens if you fail to replace one of them...).
Besides, it might be a good idea to include a "code" tag among the allowed html tags for comments, then the code would display properly tabbed and spaced.
DrK3055A:
I have no control over the code that manages the scienceblogs site; it's maintained by the wonderful folks at Seed magazine. The code is the moveable type platform, and the Seed technical wizard, Tim Murtaugh, is pretty overwhelmed just dealing with keeping the site up and stable. I'll mention it to him, but I doubt that he's got the bandwidth to deal with it.
Actually, I found a configuration option in MT that allows me to set the permissible tags for comments. <code> and <pre> tags should now work in comments.
It looks a bit better. Still there's an incomplete line (the first #include <stdio.h>) but the program core is compilable. Never mind though.
I forgot to mention above that with the use of arrays we are working with exponents, hence multiplications and divisions are translated to additions and subtractions. In fact, there aren't any mult or div operations inside the program. And there wouldn't be any in an interpreter of this kind, other than those used for input and fraction factorization.
I don't know why I didn't think of this before, but you can represent an irreducible fraction by a product of positive and negative exponents. If we take this into consideration, I can simplify my original program so it uses just one array per command to represent the irrational numbers.
For those that can't see this at first sight: the condition that an integer multiplied by an irrational number yields another integer is the same whether we represent such a number by an irreducible or a reducible fraction. If we assume the denominator criterion for selecting the next command to execute, then there is a slight difference between reducible and irreducible fractions. Nevertheless, the algorithm states that the next command to execute is the one that yields an integer from the multiply operation, and this number is the same whether or not the fraction is irreducible. Then, to remove the ambiguity in the operations, one can assume that the fraction is unique, thus irreducible in the trivial case. By subtracting the exponents of the numerator and denominator factors, one automatically transforms such a fraction into its reduced form:
21/9 = 3*7/3*3 = 3^(1)*3^(-2)*7^(1) = 3^(-1)*7^(1) -> {0,-1,0,1,0,0}
translated to the "variables state" space.
I'm confused as to what you're saying here. An irrational, by definition, can't be written as a finite fraction, reduced or not.
As well, I'm not sure what your point is with regards to reducible/irreducible fractions. Yes, you can write a variant of fractran that allows fractions which have not been fully reduced. You'll save on a couple of states, that's all, as you no longer need 'restore' states to handle nondestructive testing of a variable.
You lose on notational simplicity, though, as you now need to explicitly differentiate between your numerator and denominator. If you store the fraction as a list of exponents, it's automatically in simplest form.
Oops! I see, I made a typo (thank you for your correction). Obviously I wanted to refer to a rational number; as you correctly stated, an integer product can't result from multiplying an integer by an irrational operand.
What I was referring to regards this paragraph (from part I of the Fractran article series):
There is a difference in the execution criterion if we consider reducible vs. irreducible fractions. Let's explain this with an example:
15 will be executed by 8/5, because 15/5=3 (5 divides 15) and 3*8=24, then results in 24.
15 won't be executed by 16/10 because 15/10=1.5 (10 doesn't divide 15 in an integer fashion), and although the result would come to be 24, that fraction would be skipped because of the "denominator" criterion.
What I mean is that the following statement...
"An integer number, multiplied by a fraction, yields another integer if the denominator of that fraction divides the number."
...is true, and only true, when it refers to irreducible fractions.
On the other hand, with the "denominator criterion" one might have in mind to create a program with reducible fractions (maybe unintentionally, i.e. including 16/10 expecting to catch the execution when the number is a multiple of 10, and not just when it is a multiple of 5), and then the interpreter, once it has reduced the fractions to lists of exponents, would fail to execute the code as expected.
You'll save those couple of states anyway, because you don't need to verify whether the number can be divided, before choosing to execute or not, by actually performing the division (so you don't need to destroy the number while testing). If you look at the code I posted above, there is just one condition that lets the operation execute; it is satisfied (assuming the number is in the same form as the fractions) when none of the fraction's negative exponents (the ones that go to the denominator) is greater in absolute value than the corresponding exponent of the number. As the number is an integer, every one of its exponents must be non-negative, hence it is enough to compare each of them with the negated exponents of the fraction (the numerator's exponents become negative after negation, and then there is nothing to worry about for the numerator, because a non-negative number is always at least as big as a negative one).
Sorry, I made a typo again; the following paragraph is the correct one:
So far as I understand you, I agree with you. But I'm not quite getting the point. If you program in a Fractran that allows reducible fractions, it's a bit more economical. And of course, if you *think* you're programming in a Fractran with reducible fractions when really it's plain ol' vanilla Fractran, then you'll run into problems, same as if you try to run C++ code through a C compiler.
Does this discussion affect the design of Fractran interpreters? If so, exactly how?
Xanthir, you have given me a great idea. Fractran with reducible fractions + some other nice features like I/O, functions, multithreading, and objects = Fractran++.
This is how I defined the Fractran++ specifications:
http://www.esolangs.org/wiki/Fractran_plus_plus
Hey, I had an idea the other day; just throwing it out here because I'm not creative enough to do anything with it. Could a Fractran featuring complex number multiplication be of any intellectual interest? I'm far from qualified to analyze the properties of such a language, but to those who are, I ask: are there any thoughts on such?
Alex
I doubt it would do much. Essentially, it would just add two new divisors to the language - -1 and i.
Hmm, though. I dunno, you may be able to perform some clever tricks with that, since multiplying by i also affects the presence of -1 in the current fraction, but it wouldn't be much. No reason to outlaw it, though.
So what you're talking about here is defining Fractran on Gaussian integers.
Gaussian integers, unlike the normal sort of integers, don't have unique prime decompositions in quite the same simple sense: factorisation is only unique up to the units 1, -1, i and -i. That makes programs a bit trickier to reason about. Normal Fractran "instructions" have a natural meaning in the prime decomposition view: decrement some counters and increment some others. The language that I hereby call "Gaussian Fractran" has the property that the same instruction might mean more than one different thing depending on the machine state.
Ooh, you're right, Pseudonym. By allowing negative numbers, one may decompose a number into an infinite number of values.
We may avoid this by specifying that the decomposition view of Complex Fractran may only include at most a single -1 and/or i. That brings us back to a unique factorization. This is not strictly necessary, of course, as the decomposition view is only a method of making the program more transparent, but it does help.
We then realize, though, that the uses of these new values are very limited.
-1, unfortunately, is useless. It is impossible to test for, for example, as all it does is make the fraction it appears in negative, and this does not affect whether or not the fraction will produce an integer when multiplied by the data fraction. If it cannot be tested for, then there is no reason to use it. It's an invisible value.
Luckily, there's a difference between i and 1/i, so it's usable. However, it works quite oddly. We can only increment/decrement the i counter once per operation (as opposed to the arbitrary in/decrementing of the prime counters). As well, incrementing it merely has the result of toggling it (and switching whether the data fraction is positive or negative, but that has no effect, as I showed). So basically, multiplying by i toggles the value of the i counter, while multiplying by 1/i tests/decrements the value (without toggling), that is, it bottoms the counter, providing a binary conditional branch. Weird, but possibly useful in a limited manner.
Wait, crap, I just realized I'm reasoning about Real/Imaginary Fractran, not Complex Fractran. You can probably ignore what I said. That's what I get for not reading up on Gaussian integers before I started thinking too hard.
Eisenstein integer Fractran. Quaternion Fractran. Octonion Fractran. The mind boggles...
And, of course, there are polynomial Fractrans, too. In fact, there's a Fractran for every ring, no?