Via social media, John Novak cashes in a Nobel Betting Pool win from a while back, asking:

Please explain to me the relationship between energy, entropy, and free energy. Like you would explain it to a two year old child.

Why? There is a statistical algorithm called Expectation Maximization which is often explained in terms of those three quantities. But apparently entropy is hard to understand, because all the detail in the analogy is heaped on the entropy part (with some sidetracks into Kullback-Leibler divergences). Since I have a background in communication theory, entropy is a perfectly natural thing to me, even divorced from physical quantities. Energy, and especially free energy, aren’t nearly as natural to me in a statistical context.

Sadly, I am one of the worst people you could possibly ask about this. I’m primarily an atomic physicist, meaning most of the time I’m worried about single atoms or very small groups of atoms. I rarely if ever deal with atoms in the kind of quantity where thermodynamic quantities are really necessary. Most of the time, when I think about changing energy states, I’m thinking about internal energy states of an individual atom in response to something like a narrow-band laser field applied to excite the atom. That’s not really a thermodynamic process, even when the laser is on for long enough to establish some sort of equilibrium distribution.

As a result, despite having taken thermo/statmech multiple times, I’m pretty hazy about a lot of this stuff. It always seemed kind of weird and artificial to me, and never all that relevant to anything I was doing. Taking it at 8:30am didn’t help anything, either.

What you really want here is a chemist or a mechanical engineer, because they’re used to dealing with this stuff on a regular basis. I know some of my wise and worldly readers come from those fields, and maybe one of them can shed some additional light. I can tell you what little I do know, though explaining it as if to a two year old will end up sounding a bit like SteelyKid telling The Pip about movie plots (“…and then Darkrai showed up but also there were these purple things and if they touched anything it disappeared, but the kids had to defeat Darkrai except Darkrai had to battle the other Pokemon and they went [*miscellaneous battle noises*] and…”), because my understanding of the subject is around the five-year-old level.

Anyway, the primary concern of thermodynamics is the “internal energy” of a system, by which we mean basically any energy that’s too complicated to keep track of in detail. This is stuff like the kinetic, rotational, and vibrational energy of macroscopic numbers of particles– you’re not going to individually track the instantaneous velocity of every single atom in a box full of gas, so all of that stuff gets lumped together into the “internal energy.” The “external energy” would presumably be something like the center-of-mass kinetic energy of the box full of gas if you shot it out of a cannon, but that’s classical mechanics, not thermodynamics.

Internal energy in thermodynamics gets the symbol *U* because God only knows. (Confusingly, *U* is generally used for potential energy in quantum mechanics, so I spent a lot of time operating under significant misconceptions on this front.) Of course, you never really measure the total internal energy of a real system, only *changes* in the internal energy, which can be broken down into two pieces: energy flow due to heat entering or leaving the system, and energy flow due to work being done on or by the system. Because the mathematical apparatus for dealing with this stuff is primarily built around systems in equilibrium, we generally consider infinitesimal changes in everything, and handle time evolution by saying that each infinitesimal step results in a new equilibrium. As long as nothing happens too quickly, that works reasonably well.

The change in internal energy is the difference between heat in and work out (the conventional definitions of positive heat flow and positive work meaning heat coming in and work going out), given by:

*dU = dQ – dW*

where *Q* is heat (God knows why), and *W* is work (finally, a quantity whose symbol makes sense in English). These can be further broken down in terms of other quantities: for a sample of gas to do work (or have work done upon it) you generally need to change the volume, in which case the work can be written in terms of the pressure driving the volume change and the change in volume (work is force times distance, so pressure (force per area) times volume (length cubed) has the right units). Heat flow is associated with temperature, and at a given temperature, for heat to flow there must be something else changing, which was dubbed “entropy.” Given this, we can re-write the change in internal energy as:

*dU = T dS – P dV*

where *S* is entropy, and the other symbols ought to be self-explanatory. Everything in thermodynamics starts with this relationship.
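If you want to see the heat-minus-work bookkeeping in action, here’s a quick numerical sanity check. The numbers and heat capacities below are standard ideal-gas results (monatomic gas, *C<sub>v</sub>* = 3*R*/2, *C<sub>p</sub>* = 5*R*/2), not anything derived above; this is just my own illustrative check, for one mole of gas heated at constant pressure:

```python
# Sanity check of dU = Q - W for one mole of monatomic ideal gas
# heated by 10 K at constant pressure. Cv and Cp are textbook values.

R = 8.314     # gas constant, J/(mol K)
n = 1.0       # moles
Cv = 1.5 * R  # molar heat capacity at constant volume
Cp = 2.5 * R  # molar heat capacity at constant pressure

dT = 10.0  # temperature increase, in K

Q = n * Cp * dT   # heat flowing in at constant pressure
W = n * R * dT    # work done by the expanding gas: P dV = nR dT at constant P
dU = n * Cv * dT  # change in internal energy (depends only on dT for an ideal gas)

print(Q - W, dU)  # these two agree
```

The fact that *C<sub>p</sub>* – *C<sub>v</sub>* = *R* is exactly the statement that the heat in minus the work out equals the change in internal energy.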

The original question had to do with the relationship between these quantities, which can (sort of) be seen from this fundamental equation, by holding various bits constant. The cleanest relationship between internal energy and entropy comes through temperature. If you hold the volume constant (doing no work), then the change in internal energy is just the temperature times the change in entropy. If you’re a physicist, you can re-arrange this into a derivative:

*1/T = ∂S/∂U*

(at which point the mathematicians in the audience all break out in hives), telling you that the temperature is related to the change in entropy with changes in internal energy, which is one way to understand the negative temperature experiment from earlier this year. If you set up a system where the entropy decreases as you increase the energy, that corresponds to a negative temperature. This happens for a system where you have a maximum possible energy, because at the microscopic level, entropy is a measure of the number of possible states with a given total energy, and if there’s a cap on the energy of a single particle, that means there’s only one way to get that maximum energy, and thus it’s a state of very low entropy.
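You can see this state-counting picture in a toy model of my own devising (not from the negative-temperature experiment itself): take *N* two-level “spins,” each with energy 0 or 1, so the total energy is capped at *N*. The entropy is the log of the number of ways to distribute a given energy, and past half filling the count shrinks as the energy grows, flipping the sign of 1/*T*:

```python
# Entropy counting for N two-level spins, each with energy 0 or 1.
# S(E) = ln(number of microstates with total energy E), in units where k_B = 1.
# Past half filling, S decreases with E, so 1/T = dS/dE goes negative.

from math import comb, log

N = 100  # number of spins

def entropy(E):
    # number of ways to choose which E of the N spins are excited
    return log(comb(N, E))

def inv_temperature(E):
    # finite-difference version of 1/T = dS/dE
    return entropy(E + 1) - entropy(E)

print(inv_temperature(10))  # low energy: positive, so positive temperature
print(inv_temperature(90))  # near the energy cap: negative temperature
```

At the very top, *E* = *N*, there’s only one microstate (every spin excited), which is the low-entropy maximum-energy state described above.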

Unfortunately, that’s about all you can cleanly say about a relationship between energy and entropy. They’re not really directly related in an obvious way– you can have high-entropy-low-energy states and low-entropy-high-energy states, and everything in between. The only thing you can say for certain is that the total entropy of a closed system will never decrease, no matter what happens to the energy.

“Free energy” is a different quantity, which gets its name because it’s a sort of measure of the useful work that can be done by a system. This is a good illustration of the basic move in thermodynamics classes that I always found sort of bewildering. Basically, you can define a new quantity that’s the difference between internal energy and the product of entropy and temperature (*U – TS*). Physicists tend to give this the symbol “F,” and using the above definition of change in internal energy and a little algebraic sleight of hand (and the product rule from calculus), you get:

*dF = –S dT – P dV*

This new quantity is the “Helmholtz Free Energy.” What’s the point of this? Well, if you consider an isothermal process, for which *dT* is by definition zero, then the change in this free energy is exactly equal to the negative of the work done by the system. So the free energy is a measure of the energy that is available to be extracted as work, which makes it a useful quantity for thinking about pushing things around with hot gases.
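Here’s a concrete check of that claim, again using an ideal gas (my example, with standard ideal-gas formulas for the work and entropy change): let one mole expand isothermally from *V₁* to *V₂*, and compare the change in *F* to the work done.

```python
# Checking dF = -W for an isothermal process: one mole of ideal gas
# expanding from V1 to V2 at fixed temperature T.

from math import log

R, n, T = 8.314, 1.0, 300.0
V1, V2 = 1.0, 2.0  # volumes in arbitrary units (only the ratio matters)

W = n * R * T * log(V2 / V1)  # work done by the gas in isothermal expansion
dS = n * R * log(V2 / V1)     # entropy change of the gas
dU = 0.0                      # ideal-gas internal energy depends only on T
dF = dU - T * dS              # change in F = U - TS at constant T

print(dF, -W)  # equal: the drop in free energy is exactly the work extracted
```

The free energy goes down by exactly the amount of work the gas did on its surroundings, which is the sense in which that energy was “free” to be used.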

The relevance to the Expectation Maximization business would probably have to do with the fact that for a system in equilibrium, *F* must be a minimum. Physically, this makes sense: if there’s extra free energy available above the minimum value, then you can extract useful work from the system, and it’s not really an equilibrium state in that case. Systems with excess free energy will tend to spontaneously give that energy up and change their volume in the process. Most discussions of this helpfully note that the only time chemists use the Helmholtz free energy (they prefer the Gibbs free energy, which includes the pressure and volume term and only changes when components of the system react to change their chemical identity) is when they’re talking about explosives.
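Since the original question was about Expectation Maximization, here is a minimal sketch of that connection, with all the modeling choices mine rather than anything from the physics above: in the variational view of EM, each E-step picks the distribution over hidden variables that minimizes a “free energy” (an average energy term minus an entropy term), and each M-step lowers it further, so the quantity decreases monotonically as the system “relaxes” toward a (local) equilibrium. For a two-component 1D Gaussian mixture:

```python
# EM on a toy 1D two-component Gaussian mixture, tracking the variational
# free energy F = E_q[-log p(x, z)] - H(q). After each E-step this equals
# the negative log-likelihood, and EM never increases it.

import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

# deliberately poor initial guesses
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

def log_gauss(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

free_energies = []
for _ in range(20):
    # E-step: responsibilities q(z); this optimal choice makes F = -log likelihood
    log_p = np.log(weights) + log_gauss(x[:, None], mu, sigma)
    log_norm = np.logaddexp(log_p[:, 0], log_p[:, 1])
    q = np.exp(log_p - log_norm[:, None])
    free_energies.append(-log_norm.sum())
    # M-step: re-estimate parameters, lowering the "energy" part of F
    Nk = q.sum(axis=0)
    mu = (q * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((q * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    weights = Nk / len(x)

# F never increases from one iteration to the next
print(all(a >= b - 1e-6 for a, b in zip(free_energies, free_energies[1:])))
```

The analogy is loose but real: the responsibilities play the role of the entropy-carrying distribution, and equilibrium corresponds to a minimum of the free energy, just as for the physical *F* above.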

These are all very general and abstract relationships between things. The part that I always found kind of bewildering about this stuff is that completely separate from the above math, you tend to also be able to define an “equation of state” that relates various variables together– the “ideal gas law” *PV = nRT* is about the simplest example of an equation of state. These tended to basically drop out of the sky, usually turning up for the very first time in an exam problem, and I would get hung up on just where the hell they were coming from or what I was supposed to do with them. Which is a big part of why I’ve mostly avoided thinking about this since about 1995.

So that’s all I’ve got. Sorry I couldn’t be more helpful, but maybe somebody else reading this will have some light to shed on this stuff as it relates to the maximization of expectations.