The toy model of statistical entropy that I talked about the other day is the sort of thing that, were I a good computational physicist, I would’ve banged out very quickly. I’m not a good computational physicist, but by cargo-culting my way through some of the VPython examples, I managed to get something that mostly works:

The graph at the bottom of that window shows the entropy versus “time” for a lattice of 20 sites with a 25% hopping probability (either left or right). The window with the colored balls at the upper left is a graphical representation of the lattice: red dots are “occupied” sites, white dots “unoccupied.” The VPython code to do this is here: entropytestN2.py, from which you can see that I’m not much of a computational physicist, let alone a programmer. (There are some really kludgey initialization steps that exist because the code kept throwing strange errors when I did simpler things that should’ve worked, and I didn’t have the patience to track down the underlying problem.)
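For anyone who wants the gist without wading through my VPython kludges, the core of the model can be sketched in a few lines of plain Python. The post doesn’t spell out exactly how the entropy is computed, so this is one reasonable choice for a toy model: count the number of ways to place the observed number of particles in each half of the lattice and take the log. The hard-core exclusion rule (a particle only hops onto an empty site) and the all-left starting configuration are likewise my assumptions, not necessarily what the original code does.

```python
import math
import random

N_SITES = 20   # lattice sites, matching the post
P_HOP = 0.25   # probability of hopping left, and of hopping right
N_STEPS = 2000

def entropy(occ):
    """Coarse-grained entropy: log of the number of ways to place the
    observed number of particles in each half of the lattice.
    (An assumption -- the post doesn't say how entropy is computed.)"""
    half = len(occ) // 2
    n_left = sum(occ[:half])
    n_right = sum(occ[half:])
    omega = math.comb(half, n_left) * math.comb(half, n_right)
    return math.log(omega)

def step(occ):
    """One 'time' step: visit sites in random order; each particle tries
    to hop left or right with probability P_HOP each, and moves only if
    the target site is empty. (A particle that moves can occasionally be
    visited twice in one sweep -- good enough for a toy model.)"""
    order = list(range(len(occ)))
    random.shuffle(order)
    for i in order:
        if not occ[i]:
            continue
        r = random.random()
        if r < P_HOP:
            j = i - 1
        elif r < 2 * P_HOP:
            j = i + 1
        else:
            continue  # particle stays put
        if 0 <= j < len(occ) and not occ[j]:
            occ[i], occ[j] = 0, 1

# Start with all particles packed into the left half: exactly one
# microstate, so the entropy starts at zero and (usually) climbs.
occ = [1] * (N_SITES // 2) + [0] * (N_SITES // 2)
history = [entropy(occ)]
for _ in range(N_STEPS):
    step(occ)
    history.append(entropy(occ))
```

With this entropy definition the maximum possible value for 10 particles on 20 sites is ln[C(10,5)²] ≈ 11.06, and with only 20 sites the trace fluctuates visibly around it, which matches the qualitative behavior described below.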

It is, as I expected, kind of fun to play around with. I think there might be some bias built in that ends up pushing things to the right, but I’m not sure whether that’s real or just my inability to recognize randomness. As expected, for small numbers of sites, you see frequent fluctuations down to zero, and as you increase the number of sites, significant decreases in entropy become much rarer.

I haven’t attempted to do anything quantitative with this, because I don’t know how to write data out from VPython into a form that any of my other analysis programs might use. If I think of a good way to do it, I might try to quantify this a little more, but for now, I’m happy just to have a working toy.
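For what it’s worth, a VPython script is still an ordinary Python program, so the standard library’s file I/O should work inside it. A minimal sketch, assuming the entropy values are collected in a list (the list here is hypothetical placeholder data), would be to dump step/entropy pairs to a CSV file that any analysis program can read:

```python
import csv

# Hypothetical placeholder for the entropy-versus-time values the
# simulation would accumulate; in the real script you would append to
# this list inside the main loop.
history = [0.0, 1.2, 2.3]

# Standard-library CSV output -- nothing VPython-specific required.
with open("entropy_vs_time.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["step", "entropy"])
    for t, s in enumerate(history):
        writer.writerow([t, s])
```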
