I'm fairly certain somebody has already done this, because it's such an obvious idea. It's a little beyond my cargo-cult VPython skills right at the moment, though (I can probably learn to do it, but not right now), and none of the applets I Googled up seemed to be doing this, so I'm posting this sketchy description because I've spent some time thinking about it, and might as well get a blog post out of the deal.
So, as we said back in the holiday season, one of the most fundamental concepts in the modern understanding of thermodynamics and statistical physics is the notion of entropy. You can also argue that entropy is in some sense responsible for our perception of time-- that is, the reason we see time marching forward into the future, not backwards into the past, is that entropy increases as we move forward, and it's the increase in entropy that determines the arrow of time.
We can define entropy using Boltzmann's formula:

S = k log(W)
which says that the entropy of a given arrangement of microscopic objects (atoms or molecules in a gas, say) is related to the number of possible ways that you can arrange those objects to produce states that are macroscopically indistinguishable. The more states that look the same on a coarse scale, the higher the entropy. This makes the arrow of time a sort of statistical property: entropy tends to increase because it's easy for a collection of stuff to randomly move toward a high-entropy state (which you can do lots of ways) but unlikely that a random motion will take you to a low-entropy state (which can only be made a few ways).
Boltzmann's idea is simple and powerful, but it can be a little hard to do anything more than qualitative hand-waving with it at the intro level. It's kind of hard to explain how you "count" microstates of things that are (classically) continuous variables, like the velocities of atoms in a gas, without getting infinite results.
So, here's my rough idea, that I might still try to code into a model for my timekeeping class this term: Rather than thinking about continuous variables, let's think about a lattice of points that may or may not contain an atom. It's easiest to picture in 1-d, where a low-entropy starting state might look something like this:
1 1 1 1 1 0 0 0 0 0
This represents all of the "atoms" being on one side of the lattice representing space. Then you just allow each "atom" some probability of moving either left or right to an unoccupied space. So, for example, a few time steps later, the state might look like this:
1 1 1 1 0 1 0 0 0 0
This is a state where the one atom on the boundary has shifted right one space.
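For concreteness, the update rule I have in mind would look something like this in plain Python (an untested sketch; the function name and the plain list of 0s and 1s are just placeholder choices):

import random

def hop_step(lattice):
    # One time step: pick a random atom and try to move it one space
    # left or right, but only if the target site is on the lattice and empty.
    occupied = [i for i, site in enumerate(lattice) if site == 1]
    i = random.choice(occupied)
    j = i + random.choice([-1, 1])
    if 0 <= j < len(lattice) and lattice[j] == 0:
        lattice[i], lattice[j] = 0, 1
    return lattice

# low-entropy starting state: all "atoms" on the left half
state = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
for t in range(3):
    print(state)
    hop_step(state)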
What does this have to do with entropy? Well, to use Boltzmann's formula, we need to define a set of "macrostates" of the system that can be made up from the "microstates" in multiple ways. For this, we can imagine a "density" distribution for our line of numbers, which we'll take as the number of atoms in each half-lattice. The total entropy will be the sum of the entropies for each of the halves.
So, for the initial state above, you have five atoms in the five sites of the left half-lattice, which can only be done one way. You also have five vacancies in the right half-lattice, which can also only be done one way. Each of these halves has an entropy of zero (up to some possible additive constant, depending on how you do the counting).
The second state has four atoms on the left, and one on the right. Each of these "macrostates" can be put together in one of five ways, so the "entropy" for each half is a constant times log(5). This is an increase in the entropy of the system. Some time later, you'll have two atoms on the right and three on the left, each of which can be done 20 different ways, so the entropy increases to log(20) for each half. At which point you've hit the maximum entropy.
So, we have a system where a purely random hopping from one spot to another leads to a clear increase in the entropy of the system, without having to put any explicit rules in place to generate that. The nice thing about this is that it's purely combinatorial-- even intro students can tally up the possibilities (for small numbers of sites) and see that the entropy as defined by Boltzmann does, indeed, increase.
It should be relatively easy to code this up on a computer, too, at least for somebody with some familiarity with the right tools. (I've never done anything with arrays in VPython, though, which makes this difficult to do right.) This would also allow you to run it for longer times and larger numbers of states. It's also easy to extend this to a two-dimensional array, using, say, the number of atoms in each quadrant of a square grid as the macrostates.
The other nice thing about this is that it should make it possible to demonstrate that entropy does occasionally decrease-- it's perfectly possible for a random fluctuation to take you to a state with lower entropy than the previous state. It's just highly unlikely to do so, because there are more ways to move to higher entropy than to move to lower entropy. And, again, it's relatively easy to see this because you can readily count the states involved.
So, there's my toy model idea, which I'm sure is not remotely original. I'll probably try to cobble together some version of this for use in the later part of my timekeeping class this term (after we get into more modern ideas about relativity and so forth). Though if anybody has such a program lying around, I wouldn't object to being sent a working example in one of the tools I've got easy access to (VPython, Mathematica, or potentially MATLAB, though I don't have that installed at the moment).
One issue with an argument such as this is that you've hand-picked the starting state to be low entropy. Let me change the example slightly for the sake of argument.
A typical example is that all of the gas molecules start in a corner of a room. This would be a low entropy state. If we use standard physics, we will see that the gas will expand into the room, increasing the entropy, similar to your example of 1s "spreading out".
The problem is that if you reversed the direction of time, your argument says that we should see continually decreasing entropy, but we would actually see the exact same increase in entropy as we did "forwards" in time. It's a little harder to see with your example, but if you ran it "backwards", you would similarly see an increase in entropy.
So it's tough to say that increase in entropy alone guides the arrow of time.
I believe this is the "simple symmetric" case of the exclusion process, see e.g. cond-mat/0310453, though it's been a while since I worried about such things seriously.
Some time later, you'll have two atoms on the right and three on the left, each of which can be done 20 different ways, so the entropy increases to log(20) for each half.
Actually, there are 10 different ways, not 20, because the holes are indistinguishable. For the general case of k atoms and n slots, the number of ways to do it is n!/[k!(n-k)!]. Which still increases until k is as near as possible to n/2, but not quite as quickly as stated above.
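If you'd rather not grind through the factorials by hand, the counting and the entropy are a couple of lines of Python (a sketch; the function name is mine, and I'm setting Boltzmann's constant to 1):

from math import comb, log

def half_entropy(sites, atoms):
    # Entropy of one half-lattice: log of the number of ways to place
    # 'atoms' indistinguishable atoms in 'sites' sites.
    return log(comb(sites, atoms))

# 10-site lattice, 5 atoms total, split (left, right) between the halves
for left in range(6):
    right = 5 - left
    print(left, right, half_entropy(5, left) + half_entropy(5, right))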
The problem is that if you reversed the direction of time, your argument says that we should see continually decreasing entropy, but we would actually see the exact same increase in entropy as we did "forwards" in time. It's a little harder to see with your example, but if you ran it "backwards", you would similarly see an increase in entropy.
So it's tough to say that increase in entropy alone guides the arrow of time.
Absolutely. This toy model is random rather than deterministic, and as such doesn't have reversible dynamics. I think I'm willing to sacrifice strict reversibility for the pedagogical value of being able to directly count the states.
And as Eric points out, I screwed up the counting, which is what I get for doing this on a scrap of paper during breakfast.
The Java simulations used in Gould & Tobochnik (http://stp.clarku.edu/simulations/) do this to some extent. I don't think they do a 1-D chain, but they do N particles in a 3-partition box. You can watch a non-equilibrium initial configuration equilibrate and see the fluctuations about that average.
In 2+ dimensions, there are fully reversible, deterministic cellular automata and lattice gas rules you can use to illustrate things like this --- and hit the "R" key, have all the velocities reverse, and recover your initial condition. The rules I am thinking of for reversible 2D diffusion involve a trick ("Margolus neighborhoods") which wouldn't work in 1D, but I think there might be some discussion of what does work in Rothman and Zaleski's book on lattice-gas hydrodynamics.
Hi Chad,
Did it occur to you that your model looks very much like the Ising model of magnetism? All you have to do is change the 0s into -1s. This immediately gives you a vast amount of (statistical physics) literature on this model alone. This includes Monte Carlo simulations and very probably examples of the software you are looking for...
By the way:
The total energy of the Ising model in a magnetic field H is defined as the sum over i of J*S(i)*S(i+1) + H*S(i), where S(x) is the value at position x in the grid. For the one-dimensional case an exact expression for the partition function is known. This partition function can be used to calculate the entropy of the model as a function of temperature and magnetic field.
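For a short chain you can also get the entropy by brute force, without the exact expression; a sketch in Python (my own function name, using the energy convention above and setting Boltzmann's constant to 1):

from itertools import product
from math import exp, log

def ising_entropy(n, J, H, T):
    # Entropy of an n-spin open chain of +/-1 spins with
    # E = sum over i of J*S(i)*S(i+1) + H*S(i),
    # from S = (U - F)/T, with F = -T*ln(Z) and U the thermal average energy.
    beta = 1.0 / T
    Z = 0.0
    U = 0.0
    for spins in product([-1, 1], repeat=n):
        E = sum(J * spins[i] * spins[i + 1] for i in range(n - 1)) + H * sum(spins)
        w = exp(-beta * E)
        Z += w
        U += E * w
    U /= Z
    F = -T * log(Z)
    return (U - F) / T

print(ising_entropy(8, J=1.0, H=0.5, T=2.0))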
Neat idea. I disagree with Andy's statement (first comment), though, that if you ran your model backward the entropy would still increase. If you truly time-reversed the "microscopic physics" in the model, then your apparently random hops would be exactly time reversed, and you would indeed go to a low entropy state. If you're going to run the clock backwards, you really do have to reverse the microscopic motions. To put it another way, open a valve on a gas cylinder and let the gas molecules expand into an otherwise empty room. If you run that backwards, *exactly*, classically you really would have the gas molecules reverse their motions and their collisions and all end up back in the cylinder. Quantum mechanically the situation can be trickier and gets at the heart of the measurement problem.
As Andy surmises, though, the real trick is that you have stacked the deck by starting from a low entropy state. When I teach undergrad stat mech, I always point this out. The mystery is not so much why there appears to be irreversibility on the macroscale. Rather, the question is, why did the universe apparently start in a very low entropy state?
Hi Doug,
I meant to say that if you started the situation in the low entropy state and ran backwards. For example start where all the gas molecules were in one corner. Backwards in time, the entropy will increase.
In your example, going backwards in time, once the molecules return to this low entropy state in the cylinder, they will then return to a high entropy state by a gas expansion. For all of these toy problems, the graph of entropy versus time should be symmetric about the special low entropy state.
Aaaaaahhh. Got it. Now I agree. Thanks for the clarification.
Any definition of the arrow of time based on entropy is self referential because entropy's definition itself is based on time. As Feynman would say, "you have cheated very badly". :)
Chad's model behaves a bit like the 1D Ising model, except at fixed total magnetization and zero interaction strength and external field, which makes the maximum entropy solution different from just kB*ln(2) per spin.
Regarding the initial low entropy state of the Universe, Penrose has some interesting ideas (although they may not originally be his) about a sudden switching on of gravitational interactions in a uniform (inflated) distribution of matter, whose subsequent collapse then drives entropy production in the rest of the Universe. So, you could say that gravity is responsible for the arrow of time.
Did it occur to you that your model looks very much like the Ising model of magnetism? All you have to do is change the 0s into -1s.
That's not accidental-- a colleague does research on Ising-like models, and there are student posters about it in the hall outside my office.
I meant to say that if you started the situation in the low entropy state and ran backwards. For example start where all the gas molecules were in one corner. Backwards in time, the entropy will increase.
Yes, this is true. The low-entropy initial condition is put in by hand, to make a point, and things would look very different if I started with a high-entropy state.
But then, we know empirically that the universe started in a low-entropy state, so as an experimentalist, I'm perfectly happy to put that in as an initial condition, and let theorists and philosophers worry about why it got that way. For the limited point I'm trying to make, an ad hoc starting point is just fine.
Because I had time to kill on the plane, I knocked together a crude version of this with ten sites, and I was a little surprised at how often it fluctuated all the way back to the zero-entropy initial state. This is probably one of those "humans are really bad at recognizing true randomness" things, but it would be interesting to play around with this and see how many sites you need before it really looks irreversible. Sadly, while I had time enough to do a brute-force implementation for ten sites, I didn't have access to documentation on how to do arrays in VPython (no in-flight wi-fi, grumble mutter grump), so expanding it to an arbitrary number of sites wasn't happening...
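For what it's worth, a version with an arbitrary number of sites is only a few lines in plain Python-- something like this sketch (ordinary lists rather than VPython arrays; not the brute-force thing I wrote on the plane, just an illustration of the idea):

import random
from math import comb, log

def run(nsites=20, nsteps=200):
    # Start with all atoms on the left half, then let them hop at random,
    # printing the number of atoms on the left and the total entropy
    # (log of the multiplicity of each half) after each step.
    half = nsites // 2
    lattice = [1] * half + [0] * (nsites - half)
    for t in range(nsteps):
        i = random.choice([k for k, s in enumerate(lattice) if s == 1])
        j = i + random.choice([-1, 1])
        if 0 <= j < nsites and lattice[j] == 0:
            lattice[i], lattice[j] = 0, 1
        left = sum(lattice[:half])
        S = log(comb(half, left)) + log(comb(nsites - half, sum(lattice[half:])))
        print(t, left, round(S, 2))

run(10, 50)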
This is very close to (I want to say "identical to," but I'm not completely sure) something called the "Ehrenfest urn" (http://www.arcaneknowledge.org/science/ehrenfest.html). It's what I used to make the graphs of entropy in my book, although I described it as a box of gas with a wall in the middle and a tiny hole in the wall, which enabled particles to switch sides with some small rate. I did it in Mathematica, and I'd be happy to share the code.
It's true that if you start with any given condition and evolve both forward and backward in time, entropy will increase in both directions. Just like in the real world! The difference is that, by making the jumps truly random, you eliminate the possibility of subtle correlations that allow entropy to decrease in the past in a truly reversible system.
I knocked together a crude version of this with ten sites, and I was a little surprised at how often it fluctuated all the way back to the zero-entropy initial state.
With ten sites and five atoms, there are 252 possible states. Two of those states, or about 0.8% of the total, have zero entropy: all five atoms on the left, and all five atoms on the right. So it's not too surprising that you would occasionally see a zero entropy state come up.
Let's try some smaller numbers to see how rapidly this changes. Two sites means you always have zero entropy. Four sites gives six possible states, or a 1/3 chance of zero entropy. Six sites gives 20 possible states, or a 1/10 chance of zero entropy. Eight sites gives 70 states, for just short of 3% with zero entropy. So it looks like every four sites you add decreases the probability of finding yourself in a zero entropy state by about an order of magnitude. A run with 26 sites would knock your chances of a zero entropy state down to one in a million or less, which should be good enough for your demonstration.
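If you want to check those numbers, or push to bigger lattices, the counting is a one-liner in Python (the function name is just for illustration):

from math import comb

def zero_entropy_fraction(nsites):
    # Fraction of the possible arrangements of nsites//2 atoms in nsites
    # sites that have zero entropy: all atoms on the left, or all on the right.
    return 2 / comb(nsites, nsites // 2)

for n in (4, 6, 8, 10, 26):
    print(n, zero_entropy_fraction(n))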
I think this is a symmetric (i.e. L/R probabilities equal) specialization of the asymmetric exclusion process (with no introduction or loss of particles)?
Another thing that you can look at is something called the fluctuation theorem, which sits behind the second law. If your system is away from equilibrium, not too close to maximum entropy, the theorem tells you about the ratio of the probabilities of small jumps of positive or negative entropy. I had an undergrad working on a project coding this up to verify the theorem. Worked nicely and not too difficult.
I'm not sure what you mean by "random" from this description, but one obvious restriction seems to be that no cell can have more than one occupant (i.e., you can have a cell labelled 0 or 1, but you can't have a cell labelled 2). I pick at this because the big issue that comes up time and again, pedagogically speaking, is just what is to be considered a microstate and how that affects your counting.
I'm thinking that you might want to tag your cells somehow so that what you're actually showing - explicitly - is a random transposition of the contents of each cell. So from your initial position, you might have whatever is in cell three jump to whatever is in cell one, which forces whatever is in cell one to be placed in cell three.
This ties in with my point about microstates because you could indicate your tags by, say, five different colors for the five different ones. This would allow you to a) visually indicate the transpositions, and b) make the point that when you include color in the description the low entropy states are distinguishable, but that when you disregard it, they are not (and incidentally introduce your students to the argument that no information is lost, only converted from macroscopic to microscopic information)
This occurred to me right after I hit "enter" - if you want to get a deterministic rewind, one trick might be to give each cell a random initial "velocity" left or right and thereafter keep this value constant for each tracked cell - this is an ideal gas, right :-) Doing it this way, your line of ten cells would probably be better thought of as an Asteroids type of ring.
If you did it this way, you can not only get a deterministic rewind from a random initial state, you can also introduce the idea of recurrence. Further, these conditions allow students to easily calculate how long it would take for the system to return to its initial low-entropy state - this is actually a permutation group of order 10, with every permutation being either a cycle or a product of disjoint cycles.
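A rough sketch of what I mean, in Python (the list of (position, velocity) pairs and the function names are just my own illustration):

def ring_step(atoms, nsites):
    # Move each tracked atom one site in the direction of its fixed velocity,
    # wrapping around the ring.
    return [((pos + vel) % nsites, vel) for pos, vel in atoms]

def reverse(atoms):
    return [(pos, -vel) for pos, vel in atoms]

nsites = 10
atoms = [(0, 1), (1, -1), (2, 1), (3, 1), (4, -1)]   # initial positions, fixed velocities
forward = atoms
for t in range(5):
    forward = ring_step(forward, nsites)
backward = reverse(forward)
for t in range(5):
    backward = ring_step(backward, nsites)
print(reverse(backward) == atoms)    # True: the rewind recovers the initial state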
You say: "entropy is in some sense responsible for our perception of time-- that is, the reason we see time marching forward into the future [...] is that entropy increases as we move forward".
That seems a stretch. An individual human does not have the ability to measure entropy in more than a small part of their thermodynamic environment, and even then only poorly (we are, after all, bad judges of randomness). Entropy is a property of the entire system, not of just a small part of it. If my perception of the passage of time were based on entropy detection, then I would frequently perceive time as moving backwards; and yet I never do.
You can argue that I perceive time based on an internal clock which I have evolved because it gives me a survival advantage, in part because it allows me to predict the likely effects of entropy (as well as, say, the amount of time it will take a moving predator to reach me). That seems like a very loose argument, and anyway one that doesn't really capture what we're trying to get at. You cannot, however, argue that I perceive time directly because of a change in entropy.