From the department of self-promotion, let me call attention to the current volume of *The Electronic Journal of Combinatorics.* If you click on the link and scroll down to P164, you will find a barn-burning, rhetorical masterpiece of a paper entitled, “Isoperimetric Numbers of Regular Graphs of High Degree With Applications to Arithmetic Riemann Surfaces.” And who wrote this paper, you ask, the suspense practically killing you? That would be my longtime friend and collaborator Dominic Lanphier, of Western Kentucky University, along with yours truly. Yay! Always nice to have one more line item for the CV.

It occurs to me that while I occasionally do math posts around here, I’ve never actually said anything about my research. So let me say a few words regarding what this paper is about. Here’s the abstract:

We derive upper and lower bounds on the isoperimetric numbers and bisection widths of a large class of regular graphs of high degree. Our methods are combinatorial and do not require a knowledge of the eigenvalue spectrum. We apply these bounds to random regular graphs of high degree and the Platonic graphs over the rings Z_{n}. In the latter case we show that these graphs are generally non-Ramanujan for composite n and we also give sharp asymptotic bounds for the isoperimetric numbers. We conclude by giving bounds on the Cheeger constants of arithmetic Riemann surfaces. For a large class of these surfaces these bounds are an improvement over the known asymptotic bounds.

Well, that’s a lot to parse. But let’s see if I can give you the gist of what’s going on, one sentence at a time. We start with:

We derive upper and lower bounds on the isoperimetric numbers and bisection widths of a large class of regular graphs of high degree.

In this context, a graph refers to a diagram in which certain pairs of large dots (called vertices) are connected by line segments called edges. A simple example is a square. The four corners of the square are the vertices, and the sides of the square are the edges.

In general, a “graph” is simply a pictorial representation of data. You’re probably familiar with the idea of the graph of a function, which is a visual presentation of some abstract mathematical relation. We sometimes refer to computer graphics, by which we mean a visual presentation of image data stored in a computer.

In chemistry, it has long been commonplace to draw diagrams depicting the structure of molecules in which the individual atoms are represented by their standard abbreviations, with the lines between them denoting different types of bonds. These diagrams used to be called “chemicographs.” Writing in the 1870s, mathematician James Joseph Sylvester shortened this to “graph” in the context of the more general dot-line diagrams, and the name stuck.

Why study graphs? Simply put, there are a lot of physical situations that are conveniently represented by graphs. The dots might represent cities and the lines might represent airplane flights between them. Or the dots might represent land masses and the lines represent bridges connecting them. Or the dots could represent countries and the lines represent countries sharing boundaries. Basically, graphs just kept coming up in one application after another, and that’s the sort of thing that makes mathematicians suspect they are worth studying for their own sake.

Moving on, a graph is “regular” if every vertex has the same number of edges coming out of it. Each corner of the square has two edges coming out of it, so it is regular. If we now draw in one of the diagonals, then two corners will have three edges while the other two corners have just two edges, so this graph is not regular. But if we now also draw in the second diagonal, then every corner has three edges and the graph is regular once again.
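If you like seeing this sort of thing in code, here is a quick Python sketch of the regularity check (my own illustration, not anything from the paper), run on exactly the square examples above:

```python
# A graph as a list of edges; each edge is a pair of vertex labels.

def degrees(edges):
    """Count how many edges touch each vertex."""
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

def is_regular(edges):
    """A graph is regular if every vertex has the same degree."""
    return len(set(degrees(edges).values())) == 1

square = [(0, 1), (1, 2), (2, 3), (3, 0)]       # regular: every corner has degree 2
one_diagonal = square + [(0, 2)]                # not regular: degrees 3, 2, 3, 2
both_diagonals = one_diagonal + [(1, 3)]        # regular again: every corner has degree 3
```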

In referring to a “large class” of regular graphs, we mean that our big fancy theorem doesn’t quite apply to all regular graphs (D’oh!), but it does at least apply to an awful lot of them (Whoo-hoo!).

The term “high degree” is slightly vague at this point, though we make it very precise in the body of the paper. The degree of a vertex is simply the number of edges coming out of it. Since our graphs are regular every vertex has the same degree, and we can refer simply to the degree of the graph. In referring to high degree we mean, roughly, that the graph has a lot of edges relative to the number of vertices. If you imagine 1000 dots in a circle, each connected only to its two neighbors, then you have a regular graph of degree two on one thousand vertices. Pathetic. But now imagine that each dot is connected to every other dot (except for itself). That’s a graph of degree 999. Now we’re talking! Basically, for our trick to work we need lots of edges in the graph.

Now, on to the isoperimetric number. One question you might ask about a graph is how easy it is to fracture. If the dots represent cities and the lines represent transmission channels, you don’t want a situation where if one line goes down large numbers of people are cut off from one another.

Picture a barbell. Two enormous weights on either end connected by a thin bar. If you make a small cut through the center of the bar then suddenly big chunks of weight go flying off into space. That’s a small isoperimetric number. Easy to fracture. A big isoperimetric number means that you are going to have to do an awful lot of cutting to splinter off a significant number of vertices. The bisection width, meanwhile, is simply another way of measuring the resiliency of a graph. (There are other measures, with names like “toughness” and “integrity”, but that’s a different post.)
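For the curious, the barbell intuition can be made precise in a few lines of Python. This is my own brute-force sketch, not the paper's method: the isoperimetric number is the smallest ratio of "edges you must cut" to "vertices you split off," taken over every way of splitting off at most half the vertices. The brute force is exponential in the number of vertices, which is exactly why you want bounds instead for big graphs.

```python
# Brute-force isoperimetric number: try every subset S of at most half the
# vertices, count the edges crossing from S to the rest, and take the
# smallest (crossing edges) / |S| ratio.
from itertools import combinations

def isoperimetric_number(n, edges):
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in combinations(range(n), size):
            S = set(subset)
            boundary = sum(1 for a, b in edges if (a in S) != (b in S))
            best = min(best, boundary / len(S))
    return best

# A tiny "barbell": two triangles joined by a single bar.
triangle1 = [(0, 1), (1, 2), (2, 0)]
triangle2 = [(3, 4), (4, 5), (5, 3)]
barbell = triangle1 + triangle2 + [(2, 3)]
```

Cutting the one bar splits off a whole triangle, so the isoperimetric number of this barbell is 1/3: one cut edge per three vertices removed. Small number, easy to fracture, just as advertised.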

Sadly, if your graph is at all big it is generally not possible to determine the isoperimetric number precisely. So you make do with saying things like, “It is no bigger than *this*, and no smaller than *that*.” In referring to upper and lower bounds we are saying that we have the exact isoperimetric number sandwiched in between two other numbers. Happily, our upper and lower bounds are quite close together. So even though we can’t measure the isoperimetric numbers exactly, we can, at least, get pretty darn close.

The second sentence is:

Our methods are combinatorial and do not require a knowledge of the eigenvalue spectrum.

If you took a linear algebra class at some point in your life then you might recall the term eigenvalues. Matrices have them, and they play a central role in understanding linear transformations, among other things.

As it happens, any graph can be represented by a matrix. The trick is simple. Number the vertices. Then make a table in which row *a* and column *b* has a 1 in it if vertex *a* is connected to vertex *b* and has a 0 in it otherwise. We now have a table of zeros and ones that records precisely what is connected to what. All of the important information from the graph is recorded in that matrix. That matrix has eigenvalues. In this way we can employ linear algebra as a tool for studying graphs. This line of thought leads to a branch of mathematics known as “algebraic graph theory,” and it just so happens to be my specialty.
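Here is the trick carried out in Python with numpy, on the square from earlier (again, just my illustration of the general idea). The square is the cycle on four vertices, and its eigenvalues come out to 2, 0, 0, −2:

```python
# Build the adjacency matrix of a graph and read off its eigenvalues.
import numpy as np

def adjacency_matrix(n, edges):
    """Row a, column b gets a 1 when vertices a and b are connected."""
    A = np.zeros((n, n))
    for a, b in edges:
        A[a, b] = A[b, a] = 1
    return A

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
A = adjacency_matrix(4, square)
eigenvalues = np.linalg.eigvalsh(A)  # symmetric matrix, so real eigenvalues
```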

Of course, the precise structure of the matrix depends on how you assigned the numbers. It turns out, though, that renumbering the vertices just permutes the rows and the columns of the matrix simultaneously. And one thing you learn in linear algebra is that this simultaneous permutation is a similarity transformation, and similar matrices have exactly the same eigenvalues. So we can talk about “the eigenvalues” of a graph, even though there are many possible matrices.
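You can check this numerically in a couple of lines (a sketch of my own, not from the paper): renumbering replaces the adjacency matrix A with P A Pᵀ for a permutation matrix P, and the eigenvalues come out the same.

```python
# Relabeling the vertices is the similarity transformation A -> P A P^T,
# so the eigenvalues are unchanged.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)  # the square, vertices 0-1-2-3-0

order = [2, 0, 3, 1]                 # some other numbering of the vertices
P = np.eye(4)[order]                 # the corresponding permutation matrix
B = P @ A @ P.T                      # adjacency matrix under the new numbering

same_spectrum = np.allclose(np.sort(np.linalg.eigvalsh(A)),
                            np.sort(np.linalg.eigvalsh(B)))
```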

It turns out that the eigenvalues encode information about how easy it is to fracture the graph. So if you care about the isoperimetric number, and you know about the eigenvalues, then you’re good to go! And for certain kinds of graphs you actually can say terribly clever things about the eigenvalues. For people who know some abstract algebra, this is especially true of Cayley graphs, which are graphs that arise from the study of groups. (In this case we can employ a branch of mathematics called representation theory to work out the eigenvalues of the graph, but that is *definitely* a different post.)

Sadly, it is generally quite difficult to determine the eigenvalues of a graph. So in this sentence of our abstract we are emphasizing that we are able to say something interesting about the isoperimetric numbers without getting our hands dirty with eigenvalues. Not a bad trick.

To say our methods are “combinatorial” is to say that ultimately they are based on counting arguments. Subtle and difficult counting arguments (I say with all due modesty) but counting nonetheless. Counting’s not so bad, right? (In fact, as bad and as dense as some of the calculations in the paper look, a lot of it, seriously, is just high school algebra. We even use the quadratic formula at one point!)

Finally, in referring to the “eigenvalue spectrum,” as opposed to simply the set of eigenvalues, we mean that we care about something more than just the eigenvalues themselves. We also want to know about how these numbers are distributed, which ones occur multiple times, the span between the largest and the smallest and so on. So, the word “spectrum” indicates that this is a set with some structure.

Next sentence:

We apply these bounds to random regular graphs of high degree and the Platonic graphs over the rings Z_{n}.

This is the sentence where we show how our abstract theorem applies in certain concrete cases. Suffice it to say that “random regular graphs” and “Platonic graphs” refer to families of graphs that, for various reasons, have attracted considerable interest among mathematicians. Since the abstract is, in part, an advertisement for the paper, we’re basically saying that you should pay attention to our nifty new theorem because it tells us stuff about these graphs that have been deemed interesting by mathematicians.

I won’t define the technical jargon, beyond noting that Z_{n} is a gadget from number theory. So we now have combinatorics, linear algebra and number theory all represented in our paper.

In the latter case we show that these graphs are generally non-Ramanujan for composite n and we also give sharp asymptotic bounds for the isoperimetric numbers.

A Ramanujan graph is one whose eigenvalue spectrum is as good as it can theoretically be: its nontrivial eigenvalues are as small as a theorem of Alon and Boppana allows. We have already noted that the eigenvalues encode information about how hard it is to fracture the graph. Ramanujan graphs are basically graphs that are very hard to fracture, even though they have few edges relative to the number of vertices. If those edges represent physical connections that are expensive to build, you can understand why you might be interested in graphs that are hard to fracture despite having few connections.
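Concretely, a connected d-regular graph is Ramanujan when every eigenvalue other than ±d has absolute value at most 2√(d−1). Here is a little Python check of that condition, tried out on the Petersen graph, a famous 3-regular example with eigenvalues 3, 1 (five times), and −2 (four times). The example is mine for illustration; the Petersen graph is not one of the graphs in the paper.

```python
# Test the Ramanujan condition: all eigenvalues other than +/- d must
# have absolute value at most 2 * sqrt(d - 1).
import numpy as np

def is_ramanujan(n, edges):
    A = np.zeros((n, n))
    for a, b in edges:
        A[a, b] = A[b, a] = 1
    d = int(A.sum(axis=0)[0])                     # degree (assumes regularity)
    eig = np.linalg.eigvalsh(A)
    nontrivial = [x for x in eig if not np.isclose(abs(x), d)]
    return max(abs(x) for x in nontrivial) <= 2 * np.sqrt(d - 1) + 1e-9

petersen = ([(i, (i + 1) % 5) for i in range(5)] +           # outer 5-cycle
            [(5 + i, 5 + (i + 2) % 5) for i in range(5)] +   # inner pentagram
            [(i, i + 5) for i in range(5)])                   # spokes
```

For the Petersen graph the largest nontrivial eigenvalue in absolute value is 2, comfortably below 2√2 ≈ 2.83, so the test comes back true.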

Alas, it turns out that when we apply our nifty new theorem to our number theory themed graphs, they turn out not to be Ramanujan. Too bad, so sad. On the other hand, since some previous authors had wondered about precisely this question, we have a reason for noting that we have partly resolved it.

A “composite” number is one that is not prime. It turns out that the sorts of questions we are considering are simpler in the case of prime numbers, and they had largely been resolved in previous papers.

By “asymptotic bounds” we mean, roughly, this: We are actually studying infinite families of graphs in which the number of vertices gets bigger and bigger. Our bounds get increasingly accurate as the number of vertices go up. To say that our bounds are “sharp” means essentially that no better such bound is possible. So don’t even try looking for one!

We conclude by giving bounds on the Cheeger constants of arithmetic Riemann surfaces.

For our purposes, a Riemann surface is really just any closed surface. A sphere and a torus (doughnut) are two simple examples. We have now introduced some geometry into our paper. Our surfaces are “arithmetic” because they arise from certain considerations in number theory.

The Cheeger constant is a gadget introduced by Jeff Cheeger in 1970. It turns out that surfaces, no less than matrices, can be said to have eigenvalues, but they are hard to study directly. The Cheeger constant is based on the geometry of the surface. It is related to the eigenvalues, and can often be approximated with reasonable accuracy. It remains a useful tool for geometers studying the eigenvalues of surfaces. Want to know something about the reclusive eigenvalues? Just study the less publicity-shy Cheeger constant!

Later, mathematician Peter Buser came up with the idea of discretizing this process. Basically, he noted that in many circumstances our surfaces are triangulated, meaning simply that they arise from gluing together triangles along their boundary edges. In this case we can attach a graph to the surface. Each triangle is a vertex in the graph, and two vertices are connected if they represent triangles sharing an edge. This graph captures much of the information about the surface. Which is cool, since graphs are easier to study than surfaces. What I called the isoperimetric number is actually a discrete version of the Cheeger constant.
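The triangles-to-graph step is concrete enough to sketch in code (my own toy version of the idea, not Buser's machinery). Each triangle is a triple of vertex labels; two triangles get connected when they share an edge. Run on the four triangles bounding a tetrahedron, every pair shares an edge, so you get the complete graph on four dots:

```python
# From a triangulation to its graph: one graph vertex per triangle,
# with an edge whenever two triangles share a side.
from itertools import combinations

def triangulation_graph(triangles):
    owners = {}  # side (a sorted vertex pair) -> indices of triangles containing it
    for i, tri in enumerate(triangles):
        for side in combinations(sorted(tri), 2):
            owners.setdefault(side, []).append(i)
    return [(a, b) for ts in owners.values() for a, b in combinations(ts, 2)]

# The boundary of a tetrahedron: four triangles, each pair sharing a side.
tetrahedron = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
```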

One reason our families of graphs are interesting is that they are related to arithmetic surfaces. Since our nifty new theorem provides information about these graphs, they automatically provide information about these surfaces as well. Which brings us to the final sentence:

For a large class of these surfaces these bounds are an improvement over the known asymptotic bounds.

That one seems clear enough. We’re basically saying that while the previous known bounds were just adorable, they have now been exposed as amateurish hackwork in light of our nifty new theorem. Append the word “mofo” to the end of that sentence to get the full effect. (I kid, of course.)

So, on the off chance that anyone has read this far, let’s bring it all home by providing a somewhat translated version of the abstract:

We prove a strong theorem about how easy it is to fracture a large class of graphs. We use methods that are simpler than the more familiar techniques. Among our large class of graphs are two specific families that have been well-studied in the literature, and we can provide some new information about them. In particular, we can show that one of our families, sadly, does not provide good examples of resilient graphs. Since our graphs arise naturally in the study of certain surfaces, we can say something new about them as well.

So there you go! Of course, it was a long, winding road from the initial idea (for which Dominic deserves the lion’s share of the credit) to the finished paper. Time well spent, every second.