# Fast Arithmetic and Fractals

As a commenter pointed out, there are some really surprising places where fractal patterns can appear. For example, there was a recent post on the Wolfram Mathematica blog by the engineer who writes the unlimited-precision integer arithmetic code.

Unlimited-precision integers are numbers represented in a variable-length form, so that you can represent very large numbers as a sequence of smaller ones. For an overly simplified example, suppose you wanted to represent the number 340123461297834610029346129834761298734612483. One way of doing that would be as a list of four-digit groups: [0003, 4012, 3461, 2978, 3461, 0029, 3461, 2983, 4761, 2987, 3461, 2483].
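To make that representation concrete, here's a minimal sketch; the four-digit group size and the helper names (`to_limbs`, `from_limbs`) are choices for illustration, not how any real bignum library stores things:

```python
def to_limbs(s: str, width: int = 4) -> list[int]:
    """Split a decimal string into big-endian groups of `width` digits."""
    pad = (-len(s)) % width          # left-pad so the length divides evenly
    s = "0" * pad + s
    return [int(s[i:i + width]) for i in range(0, len(s), width)]

def from_limbs(limbs: list[int], width: int = 4) -> int:
    """Recombine limbs: each limb is one base-10**width 'digit'."""
    n = 0
    for limb in limbs:
        n = n * 10**width + limb
    return n

n = "340123461297834610029346129834761298734612483"
limbs = to_limbs(n)
# limbs == [3, 4012, 3461, 2978, 3461, 29, 3461, 2983, 4761, 2987, 3461, 2483]
assert from_limbs(limbs) == int(n)
```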

One of the challenges with very large numbers like that is doing arithmetic on them in a reasonable amount of time. You can do what we'd do on paper - basic long multiplication, one digit at a time - but that's painfully slow for really big numbers.

Naturally, people have searched for better algorithms. One great example is Karatsuba multiplication. The Karatsuba algorithm is based on the idea that you can replace one of the sub-multiplies with shifts, additions, and subtractions, which are much less expensive than multiplications.

So, let's look at a simple number, like 5743. We can rewrite that as 57×10² + 43. In fact, we can write any four-digit base-ten number x in a similar form, as x1×10² + x2.

So, suppose we want to multiply two 4-digit numbers in this expanded pair form: (x1×10² + x2)×(y1×10² + y2). We can generalize that slightly by replacing the 10² with a symbol B - we're basically doing base-100 multiplication in this example, so B = 100.

The normal way of multiplying is basically to compute the four products x1×y1, x2×y1, x1×y2, and x2×y2, then shift the first product left four decimal places, the two middle products left two places each, and add them all up. That's four multiplications. If we repeat this splitting process on each of the two-digit pieces, we end up doing 16 one-digit multiplications to multiply two 4-digit numbers.
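Numerically, the four-multiply scheme works out like this (a toy check, with arbitrary example numbers):

```python
# Schoolbook multiplication on the split form, with B = 100.
# The numbers are arbitrary examples for illustration.
B = 100
x, y = 5743, 1289
x1, x2 = divmod(x, B)   # 57, 43
y1, y2 = divmod(y, B)   # 12, 89

# Four sub-multiplies, then shifts (multiplying by B**2 or B) and adds.
product = (x1 * y1) * B**2 + (x2 * y1 + x1 * y2) * B + (x2 * y2)
assert product == x * y
```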

Karatsuba is based on doing a bit of algebra on the paired form. Do an algebraic expansion of the symbolic version of the paired form:

(x1×B + x2)×(y1×B + y2) = x1y1B² + (x2y1 + x1y2)B + x2y2.

We can rewrite that in terms of three new variables:

• L = x1y1
• M = x2y2
• N = (x1y2 + x2y1)

So X×Y = L×B² + N×B + M.

Now, here's the clever part. We're going to find an alternate way of producing N, using
one multiply instead of two. We'll start by taking (x1+x2)×(y1+y2):

1. (x1+x2)(y1+y2) = (expand)
2. x1y1 + x1y2 + x2y1 + x2y2 = (replace terms with equivalents)
3. L + N + M
4. So: N = (x1+x2)(y1+y2) - L - M
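We can check that identity numerically; the digit values here are arbitrary examples:

```python
# Verifying N = (x1+x2)(y1+y2) - L - M, and the full recombination.
B = 100
x1, x2 = 57, 43
y1, y2 = 12, 89

L = x1 * y1
M = x2 * y2
N = (x1 + x2) * (y1 + y2) - L - M   # one multiply instead of two

assert N == x1 * y2 + x2 * y1                               # the middle term
assert (x1 * B + x2) * (y1 * B + y2) == L * B**2 + N * B + M
```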

So once we compute L and M, we can compute N using a single multiplication, two additions, and two subtractions. When we recombine all of these, we'll end up doing an extra shift. But since shifts, additions, and subtractions are cheaper than multiplication, the cost of the multiplies dominates - and we can do this multiplication using 3 sub-multiplies instead of 4.

If we're multiplying really big numbers - say, a pair with 32 digits each - we can do this recursively: three 16-digit sub-multiplies; three 8-digit sub-sub-multiplies for each of those, for a total of nine; and so on, until we get down to 243 single-digit multiplications. Compare that to conventional multiplication, which needs to multiply each digit in the first number by each digit in the other number, for a total of 1024 multiplications.
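Putting it all together, here's a sketch of the recursion, with a counter to confirm the 243-versus-1024 comparison. This is an illustrative toy, not production bignum code:

```python
count = 0

def karatsuba(x: int, y: int, digits: int) -> int:
    """Multiply x and y recursively; `digits` is the nominal size, a power of 2.
    (The x1+x2 sums can carry one extra digit; the base case tolerates that.)"""
    global count
    if digits == 1:
        count += 1              # one "single-digit" hardware multiply
        return x * y
    half = digits // 2
    shift = 10 ** half
    x1, x2 = divmod(x, shift)   # split into high and low halves
    y1, y2 = divmod(y, shift)
    L = karatsuba(x1, y1, half)
    M = karatsuba(x2, y2, half)
    N = karatsuba(x1 + x2, y1 + y2, half) - L - M
    return L * shift**2 + N * shift + M

a = 31415926535897932384626433832795    # two 32-digit numbers
b = 27182818284590452353602874713526
assert karatsuba(a, b, 32) == a * b
print(count)   # 243 single-digit multiplies, versus 32 × 32 = 1024
```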

So, we've got a nice fast algorithm for multiplying large numbers. Now, what does this have
to do with fractals? Let's look at a bitmap (via Wolfram) that shows the basic pattern of
what digits get multiplied by what digits:

Voila! The rows represent bits from one number; the columns represent bits from the other. The multiplication pattern of Karatsuba arithmetic is a fractal. In fact, it's very close to the classic T-square fractal: it's the inverse of the T-square fractal after a finite number of iterations, with the bottom-right corner snipped out. Fractals are ubiquitous in describing mathematical patterns. There are a huge number of recursive algorithms (which are, by definition, self-similar) that correspond to a variety of simple fractals.



That's one of the most fascinating things I've ever seen! Utterly amazing.

Have there been people studying known fractals to extract new algorithms?

Very interesting. I knew about the Karatsuba algorithm, but didn't know about the fractal structure. Although, with 20/20 hindsight, the recursive divide-and-conquer structure of the algorithm giving a fractal isn't that surprising.

What about Montgomery exponentiation? Is it similar to Karatsuba's?

One of the more clever approaches I saw to multiplying large numbers was to use the DFT. The basic approach turns a convolution into simple multiplication. How does the DFT approach compare with Karatsuba?

By Jeff Alexander (not verified) on 29 Sep 2007 #permalink
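For the curious, the DFT trick mentioned above can be sketched like this - a toy in Python/NumPy that is only safe for modest sizes, since floating-point rounding eventually breaks it (real implementations use higher precision or number-theoretic transforms):

```python
import numpy as np

def fft_multiply(a: int, b: int) -> int:
    """Multiply via FFT-based convolution of decimal digit sequences."""
    da = [int(c) for c in str(a)][::-1]   # least-significant digit first
    db = [int(c) for c in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):          # room for the full convolution
        n *= 2
    fa = np.fft.fft(da, n)                # pointwise product of spectra
    fb = np.fft.fft(db, n)                #   == convolution of digit lists
    conv = np.rint(np.fft.ifft(fa * fb).real).astype(int)
    # Carry propagation turns the convolution back into decimal digits.
    result, carry = 0, 0
    for i, c in enumerate(conv):
        carry += int(c)
        result += (carry % 10) * 10**i
        carry //= 10
    return result + carry * 10**n

x, y = 123456789, 987654321
assert fft_multiply(x, y) == x * y
```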

DFT-ish approaches to multiplication take time something like n log n log log n. Karatsuba is n^a for a = log 3 / log 2 ≈ 1.585, which is (for large n) much better than the naive approach's n² but quite a lot worse than DFT. But the constants of proportionality matter too, and (1) n has to be quite big before you want to switch from Karatsuba to fast Fourier, and (2) there are other intermediate approaches, with both exponents and constants intermediate, that you might consider. GMP uses an n^(log 5 / log 3) method in between Karatsuba and fast Fourier.
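For reference, the exponents mentioned here are easy to compute; the intermediate n^(log 5 / log 3) method is commonly known as Toom-3:

```python
import math

karatsuba_exp = math.log(3) / math.log(2)   # 3 multiplies on half-size pieces
toom3_exp = math.log(5) / math.log(3)       # 5 multiplies on third-size pieces
print(round(karatsuba_exp, 3), round(toom3_exp, 3))   # 1.585 1.465
```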

For whoever is interested, a quick implementation in a concurrent and recursive fashion in Java using Javolution is here.

By harald schilly (not verified) on 01 Oct 2007 #permalink

For example, there was a recent post on the Wolfram mathematica blog by the engineer who writes the unlimited precision integer arithmetic code.

I thought Mathematica had been using GNU MP for a couple of versions now.

Yup. At least, Wolfram's web pages (1) say that as of version 5.0 they're using GMP and (2) include a pointer to where you can download the GMP sources.

This isn't the only interesting thing in his post, at least if you're (like me) into software development anecdotes. War stories like these are in a sense like an oral history of our industry.

Does fractal math have any application to the forecasting of stock prices?

I very much doubt it.

I'm exceedingly skeptical that there's any particularly good model for forecasting stock prices. Stock prices are set by a combination of lots and lots of information, plus lots of irrational behavior based on bits of incomplete information, rumours, made-up bullshit, etc.

Hi Mark,
I am a new reader to your blog, so I am catching up; I apologize for a late comment on this post.
Does fractal math have any application to the forecasting of stock prices?

I am actually reading a book by Mandelbrot. He is pretty confident that fractals can be used to model volatility in stock prices. In fact, in stocks we are not interested in predicting the stock price, but rather its volatility. This, again, is modeled and not necessarily predicted. Against this model, various portfolios are stress-tested, and then the required exposure, hedging, or insurance is decided upon. In fact, reading about LTCM in When Genius Failed is a good way to get introduced to the concept of modeling.

Also I wanted to query about another point:
Stock prices are set by a combination of lots and lots of information, plus lots of irrational behavior based on bits of incomplete information, rumors, made-up bullshit, etc.

The same goes for coastline or mountain-range formation. We do not have the full information, but according to fractal theory it has a pattern. So I feel that fractals or chaos theory should be able to predict something like a stock price exactly. It would be difficult to do so, as we do not know the initial conditions.
Again, we do not model a single stock price; we model an index or a market, where many of the irrational factors should cancel out and general statistics should kick in.
I would be keen to hear your opinion on the same.

Regards,
Martin

By Martin James (not verified) on 05 Jan 2008 #permalink

Martin:

I think Mandelbrot often overstates the applicability of fractals. I don't really blame him; he's a brilliant guy, who
managed to put things together in an amazing way.

But there's a difference between a mathematical system that describes something and a mathematical system that defines something; there's also a difference between knowing that a system can be defined in a certain way and actually finding that definition; and there's a difference between being able to describe something and being able to predict it.

It seems likely that things like the stock market can be described by fractal phenomena. But I very much doubt that we can find a fractal function or system that will allow us to predict the stock market.

My reasoning on this goes back to two things: information theory, and recursive function theory. The basic lesson of these two is how hard it is to find meaningful predictions about chaotic systems.

There's a subfield of recursive function theory, studied by a guy named John Case, who was on my PhD committee. It's called inductive learning machine theory. The basic idea of it is to find a mathematical model of a learning phenomenon.

The basic model of it is that you're given a series of pairs, one at a time: (x0, f(x0)), (x1, f(x1)), ...

You get to design a computing machine which, after each pair, outputs a program which guesses the function f.

The fundamental question of inductive learning machine theory is, can you build a machine that correctly guesses f? The absolute answer is that, in general, you can't. That much is obvious, by the halting theorem. But there are classes of functions for which you can, given enough time, guess the function.

But they are very small classes of functions.

And if you add the requirement of being able to say when you have finally guessed it - that is, instead of just outputting a guess after each input, the machine itself can reach a point where it stops and says "I don't need any more input: I know what the function is" - the class of functions for which you can do it basically vanishes. Unless you're told ahead of time something about the function - like "it's a polynomial of degree two" - you can't do it at all, for any function.

Things like the stock market are incredibly complex. They're far from simple functions; they're incredibly complicated. I think the odds of us looking at the history of the stock market, and using that to correctly guess a function that in some way allows us to predict the volatility of the market, are pretty much nil. At best, I think we might be able to figure out a class of functions that match past behavior - but that class will have an infinite number of members, and selecting the correct one is exactly the kind of thing that learning machine theory says is probably impossible.

Volatility is assumed to be a parameter of a Gaussian, when volatility is cranked into the Black-Scholes equation. This is the fundamental equation used for essentially all Options in markets today.

Although that won an Economics "Nobel" prize, it is not a good fit to actual markets.

And when it fails, hedge funds collapse.

Nor is there an accepted theory of the origin of volatility in markets.

Nor of the observed heteroskedasticity.

The best fit seems to be by the guys at the Santa Fe Institute, with an agent-based model giving scale-free (fractal) clustered volatility.

If they're right, they'll get a Nobel.

Mark admits to being uninterested in economics. To me, this non-Gaussian issue is one of the places where it gets interesting.

Hi Mark / Jonathan,

Thanks for the response. What you are saying makes absolute sense. Past price modeling is a description and using that to predict future price movement is different.

Actually, I feel that modern finance theory is not a pure science. Many see natural phenomena and force-fit them into finance theory. For example, the law of diffusion of heat was used to derive the fundamental equation on which modern finance theory is built (the random walk, hence the fit to the Gaussian curve). In fact, history has it that Scholes used to scan physics books to see which equations could be used for finance.

Information theory is a subject I was introduced to in a very strange manner: I read about it in finance. In fact, Shannon was a speculator, and he used information theory to make substantial returns.
http://en.wikipedia.org/wiki/Kelly_criterion
(I am sorry I do not know how to use HTML tags, I will learn that soon :))
Here we assume that investing is gambling :0 . http://en.wikipedia.org/wiki/Gambling_and_information_theory
Again, a mutual fund named Princeton Newport, run by Ed Thorp, a colleague of Shannon, used this and made returns just short of Buffett's, with much less volatility. It is a pity that most modern portfolio managers have not read about this, as Thorp used Michael Milken as his broker and his fund had to be closed on account of share manipulation.

It would be nice if you can explain in your words how Information Theory and gambling meet. This need not be finance, but a subject in probability and statistics.

Regards,
Martin

By Martin James (not verified) on 05 Jan 2008 #permalink
