Final Notes on a Toy Model of the Arrow of Time

We're in the home stretch of this term, and it has become clear that I won't actually be using the toy model of the arrow of time I've talked about in the past in my timekeeping class this term. These things happen. Having spent a not-insignificant amount of time playing with the thing, though, I might as well get a final blog post out of it, with something that sort-of worked and something that shows why I'm not a computational physicist:

First, the thing that sort-of worked: in thinking about trying to use the code I wrote, I was struggling to come up with a way to quantify the apparent irreversibility of the evolution of my toy system for larger numbers of "particles" in a relatively non-technical way that might be comprehensible to non-physicists. After a bit of thinking, I realized that there's an easy-ish way to do this, because the system has an easily calculated maximum entropy, which it ought to converge to. And empirically, when I start it going, the entropy shoots up toward the maximum, and then noodles around up there, with occasional downward fluctuations as more "atoms" randomly end up on one side or the other.
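
For anyone who wants to play along at home, here's a rough sketch of the sort of thing I mean, in plain Python (this isn't the actual VPython code, and it uses my best guess at the entropy definition-- ln of the number of ways to split the atoms between the left and right halves of the lattice-- so treat the details as illustrative):

```python
import math
import random

def entropy(n_left, n_atoms):
    # Entropy (in units of k_B) of a given left/right split: ln of the number
    # of ways to put n_left of the n_atoms atoms in the left half. This is
    # largest when the atoms are split evenly between the two halves.
    return math.log(math.comb(n_atoms, n_left))

def run(n_sites=20, n_steps=100_000, seed=None):
    rng = random.Random(seed)
    # Low-entropy starting state: all the atoms packed into the left half.
    lattice = [1] * (n_sites // 2) + [0] * (n_sites - n_sites // 2)
    n_atoms = sum(lattice)
    history = []
    for _ in range(n_steps):
        # Pick a random atom and try to hop it one site left or right;
        # the move only happens if the target site is on the lattice and empty.
        occupied = [i for i, s in enumerate(lattice) if s]
        i = rng.choice(occupied)
        j = i + rng.choice((-1, 1))
        if 0 <= j < n_sites and not lattice[j]:
            lattice[i], lattice[j] = 0, 1
        n_left = sum(lattice[: n_sites // 2])
        history.append(entropy(n_left, n_atoms))
    return history
```

With that definition, the maximum possible entropy is just entropy(n_atoms // 2, n_atoms), which is the value I compare against below.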

To quantify the number and amount of those downward fluctuations, I realized I could just use the number of points dropping below various fractions of the maximum entropy, once the system got close to the maximum value. Which led to this graph:

[Figure: arrow_time_all.PNG-- counts of time steps below 90%, 75%, and 50% of the maximum entropy, as a function of the number of lattice sites]

To get these, I ran for 100,000 time steps, throwing out the first 1000 (as the entropy climbed upward from the initial low-entropy state with all the "atoms" in the left half of the array), and set up some simple counters that recorded the number of time steps where the entropy was below 90%, 75%, and 50% of the maximum possible for that number of lattice sites and atoms.
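
In code, those counters are about as simple as they sound; something along these lines (again, a sketch rather than the actual code, with `history` being the list of entropies from a run and `s_max` the maximum possible entropy):

```python
def count_fluctuations(history, s_max, skip=1000, thresholds=(0.90, 0.75, 0.50)):
    # Count time steps where the entropy sits below various fractions of the
    # maximum, after throwing out the first `skip` steps of the initial climb.
    counts = {t: 0 for t in thresholds}
    for s in history[skip:]:
        for t in thresholds:
            if s < t * s_max:
                counts[t] += 1
    return counts
```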

As you can see, the resulting behavior fits pretty well with what you would expect, qualitatively. For small numbers of sites, there are lots of big downward steps (in part because the discrete nature of the problem means that there aren't many options-- for ten sites, there's no option below 90% of the maximum that isn't also below 75% of the maximum), but this drops off very quickly. By the time there are 30 sites in the lattice, there are no longer any 50% fluctuations seen, and fluctuations dropping the total by 25% disappear by 50-60 sites.

To be a little more quantitative, I did a curve fit with the 90% data:

[Figure: arrow_time.PNG-- semi-log plot of the number of sub-90% points as a function of the number of lattice sites, with an exponential fit]

That's a semi-log plot, so that nice straight line is an exponential decay, dropping by a factor of e for every 15 sites. Which, again, is more or less the sort of thing I would expect, though I'm not sure there's any particular significance to 15.
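
If you want to do the same sort of fit to your own counts, something like this would do it (using scipy's curve_fit, which may or may not be what I actually used; the `sites` and `counts` arrays are whatever you've tabulated from your runs):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_decay(sites, counts):
    # Fit counts vs. number of sites to a * exp(-sites / n0): a straight line
    # on a semi-log plot is exactly this form, and n0 is the number of sites
    # over which the count drops by a factor of e (about 15 in my data).
    def decay(n, a, n0):
        return a * np.exp(-n / n0)
    sites = np.asarray(sites, dtype=float)
    counts = np.asarray(counts, dtype=float)
    params, _ = curve_fit(decay, sites, counts, p0=(counts[0], 15.0))
    return params  # (a, n0)
```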

This is, unfortunately, where the "I'd never make it as a computational physicist" thing comes in. Because if you look at that second graph, you might notice a bit of a problem: it almost looks like it's flattening out at high site numbers. That is, there are a lot of high-ish values out toward the right side of that graph. This gets more pronounced if you go out even farther-- I started running some points for 120 sites, and got values that were almost the same as those for 100 sites.

Which baffled me for a while, until I realized that I'm an idiot. The low-value count doesn't mean anything interesting unless you're starting from nearly the maximum entropy. That is, you need to exclude the initial rise toward the maximum; otherwise you record a bunch of low values that are on their way up, rather than the downward fluctuations I'm interested in.

Of course, the time required to reach the maximum increases as you increase the number of sites (since atoms can only shift one site per time step, increasing the distance they have to move increases the time needed to get half of them over to the other side from where they started). But lazy me had set a fixed cut-off of 1000 time steps for the initial rise. Which was more than good enough for the 20-ish sites I started with, but not always good enough for the 100-site lattice. So, I was picking up garbage points.
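
A less lazy approach (my suggestion here, not something the original code did) would be to let the code find the end of the initial rise itself, by skipping everything before the entropy first climbs to within a few percent of the maximum:

```python
def first_equilibrated_step(history, s_max, fraction=0.95):
    # Index of the first time step where the entropy reaches `fraction` of the
    # maximum; counting downward fluctuations should only start from here.
    for i, s in enumerate(history):
        if s >= fraction * s_max:
            return i
    return len(history)  # this run never got close to the maximum
```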

When I fixed this problem by throwing out the first 10,000 time steps rather than the first 1,000, I got much lower values-- lower by a factor of nearly 100 for the 120-site lattice. This suggests that I really ought to re-do all of the simulations with a higher cut-off. But 100,000 time steps takes 15-20 minutes to run on the computers that will run it (my new laptop doesn't really like VPython, and crashes on anything that long), so, yeah, not going to do that just for the sake of a blog post.

So, anyway, there you have it: the conclusion of my toy model of the arrow of time. I'm kind of disappointed that time ran out on the class (heh) and I didn't get to try it. If I ever teach statistical mechanics (which is vanishingly unlikely in the next several years), I might dust this off and try it there, or I might explicitly suggest it as a student project the next time I run this course (which is more likely). But, for now, it's been a decent source of physics-related blog material. And typing it up has made me much happier than a lot of other things I could be doing today...
