Computing Strongly Connected Components

As promised, today I'm going to talk about how to compute the strongly connected components
of a directed graph. I'm going to go through one method, called Kosaraju's algorithm, which is
the easiest to understand. It's possible to do better than Kosaraju's by a factor of 2, using
an algorithm called Tarjan's algorithm, but Tarjan's is really just a variation on the theme
of Kosaraju's.

Kosaraju's algorithm is amazingly simple. It uses a very clever trick based on the fact that
if you reverse all of the edges in a graph, the resulting graph has the same strongly connected components as the original. So using that, we can get the SCCs by doing a forward traversal to find an ordering of vertices, then doing a traversal of the reverse of the graph in the order generated by the first traversal.

That may sound a bit mysterious, but it's really very simple. Take the graph G, and
do a recursive depth-first traversal of the graph, maintaining an initially empty stack of vertices. As the recursive call started at a vertex V finishes, push V onto the stack. At the end of the traversal, you'll
have all of the vertices of G in the stack. The reverse traversal will then process vertices starting
from the top of that stack.

So you reverse all of the edges in the graph, creating the reverse graph, G'. Start with the
vertex on top of the stack, and do a traversal from that vertex. All of the nodes reachable from
that vertex form one strongly connected component. Remove everything in that SCC from the stack, and
then repeat the process with the new top of the stack. When the stack is empty, you'll have accumulated all of the SCCs.
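In case it's easier to see as code, here's a minimal sketch of the whole procedure in Python. The graph representation (a dict mapping each vertex to a list of its successors) and all of the names are my own choices for illustration, not anything canonical. (It's recursive, so a very deep graph would want an iterative version, but recursion keeps the structure clear.)

    from collections import defaultdict

    def kosaraju_scc(graph):
        """Return the SCCs of a directed graph given as {vertex: [successors]}."""
        # Pass 1: depth-first traversal of the forward graph; push each
        # vertex onto the stack as its recursive call finishes.
        visited = set()
        stack = []

        def visit(v):
            visited.add(v)
            for w in graph.get(v, []):
                if w not in visited:
                    visit(w)
            stack.append(v)  # v is finished, so it goes on the stack

        for v in graph:
            if v not in visited:
                visit(v)

        # Build the reverse graph G' by flipping every edge.
        reverse = defaultdict(list)
        for v, successors in graph.items():
            for w in successors:
                reverse[w].append(v)

        # Pass 2: pop vertices off the stack; each traversal of G' from a
        # not-yet-assigned vertex collects exactly one SCC.
        assigned = set()
        sccs = []

        def collect(v, component):
            assigned.add(v)
            component.append(v)
            for w in reverse[v]:
                if w not in assigned:
                    collect(w, component)

        while stack:
            v = stack.pop()
            if v not in assigned:
                component = []
                collect(v, component)
                sccs.append(component)
        return sccs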

[Figure: the example graph]

As usual, things are a whole lot clearer with an example, so let's work through the algorithm using the graph above.

First, we do a depth-first traversal of the graph, putting vertices on a stack as we finish the
recursive step started with that node. We'll start the traversal with "A": A, F, E, C, G, H. H is finished,
so it goes on the stack. Then G goes on the stack; then C, then E, then F, and we're back to A. So
at this point, the stack is [H, G, C, E, F]. Then we keep going from A: A, B, D. D is done, so it goes
on the stack; then B, then A. So the final stack is [H, G, C, E, F, D, B, A].

[Figure: the example graph with all edges reversed]

Then, we reverse the graph, and start traversing from whatever is on top of the stack. So first, we start from A in the reversed graph. That gives us A, D, B. So {A, D, B} form one strongly connected
component. Now, the top of the stack that hasn't been traversed yet is F. So we do F, E. {F, E} is
another SCC. The remaining stack is now [H, G, C]; traversing from its top, C, gives C, H, G. So
we end up with {A, B, D}, {E, F}, and {C, G, H} as our SCCs.
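If I've transcribed the example correctly, the edges implied by the traversal order above are the ones below, and feeding them to the sketch from earlier reproduces the three components. (The edge set is reconstructed from the walkthrough rather than read off the diagram, so treat it as an illustration.)

    # Edges reconstructed from the traversal order described above; the
    # adjacency-list ordering (F before B, and so on) matches the order
    # in which the walkthrough visits neighbors.
    example = {
        "A": ["F", "B"],
        "B": ["D"],
        "C": ["G"],
        "D": ["A"],
        "E": ["F", "C"],
        "F": ["E"],
        "G": ["H"],
        "H": ["C"],
    }

    print(kosaraju_scc(example))
    # [['A', 'D', 'B'], ['F', 'E'], ['C', 'H', 'G']]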

What's the time complexity? We need to do two traversals, and one graph reversal. If we're using an adjacency list representation, we can create the reverse graph while we do the first traversal, so it's really just two depth-first traversals. That's twice the cost of a single traversal; in an adjacency list representation, a traversal is O(V+E), where V is the number of vertices and E is the number of edges, so Kosaraju's SCC algorithm is also O(V+E). (If you use an adjacency matrix, then it's O(V²).)

Can we do better than that? In terms of the order of complexity, no: you can't do better than the cost of a single traversal of the graph. But you can eliminate the second traversal. Instead of doing it, you can do the forward traversal using the stack, and then pull things off the stack, checking whether each is the root of a strongly connected component, and if so, removing all members of that component. Tarjan's algorithm works this way, and ends up doing one traversal, but considering each edge in the graph twice. So it's slightly, but not dramatically, faster. It's generally preferred just because it avoids the computation of the reverse graph.
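Here's what that variant looks like as code, in the same style as the earlier sketch. The "is this the root of an SCC?" test is the standard lowlink bookkeeping from Tarjan's algorithm; as before, the specific names are my own:

    def tarjan_scc(graph):
        """Tarjan's one-pass SCC algorithm; graph is {vertex: [successors]}."""
        index = {}        # order in which each vertex was first visited
        lowlink = {}      # smallest index reachable from the vertex's subtree
        on_stack = set()
        stack = []
        sccs = []
        counter = [0]

        def strongconnect(v):
            index[v] = lowlink[v] = counter[0]
            counter[0] += 1
            stack.append(v)
            on_stack.add(v)
            for w in graph.get(v, []):
                if w not in index:
                    strongconnect(w)
                    lowlink[v] = min(lowlink[v], lowlink[w])
                elif w in on_stack:
                    lowlink[v] = min(lowlink[v], index[w])
            if lowlink[v] == index[v]:
                # v is the root of an SCC: everything above it on the
                # stack belongs to v's component.
                component = []
                while True:
                    w = stack.pop()
                    on_stack.remove(w)
                    component.append(w)
                    if w == v:
                        break
                sccs.append(component)

        for v in graph:
            if v not in index:
                strongconnect(v)
        return sccs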

In terms of what we were talking about in the last post, this is good news. The computation of the SCCs is quite fast - quite a lot better than NP-complete. So we can, feasibly, use the division into SCCs as the basis of parallelization of graph algorithms.

As usual, all diagrams were drawn using OmniGraffle.

Comments


Some newer readers might need clarification on the distinction in digraphs between weakly connected, strongly connected, and weakly connected but not strongly connected.

For large random graphs, there's the relationship to gels and the phase change in coagulation-diffusion systems (where a large connected subgraph sharply emerges, whether the components are molecules or stars in galaxies). This is a universal phenomenon.

You have not really explained what happens in the case that not all the nodes are reachable from A in the first traversal. Still, I found this explanation clearer than others I have read.

By Tom Cooper (not verified) on 31 Oct 2007 #permalink

Great! When you have time, how about a post on identifying cliques, and one on computing all shortest paths?

By other bill (not verified) on 31 Oct 2007 #permalink

You have not really explained what happens in the case that not all the nodes are reachable from A in the first traversal. Still, I found this explanation clearer than others I have read.

A node that is not reachable from A cannot be in the same SCC as A or any node reachable from A. So, if after finishing the subgraph reachable from A, you have nodes left over, you pick one of them and start over. Eventually you run out of nodes.
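In terms of the Python sketch up in the post, that restart is just the outer loop of the first pass:

    # Any vertex not reached by an earlier traversal seeds a fresh one,
    # so every vertex eventually ends up on the stack.
    for v in graph:
        if v not in visited:
            visit(v)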

Doug: Dynamic programming isn't an algorithm, but rather a class of programming techniques. Mark had a post on it a while back, I think.

Dynamic Programming methods probably really wouldn't do anything here - as Mark said, you can't do better than a single graph traversal. There's always that chance that a quantum algorithm can cut it down, but they're sufficiently non-intuitive for me that I wouldn't be able to say anything.

By Xanthir, FCD (not verified) on 01 Nov 2007 #permalink

Hey Mark C.C., not to get you off topic, but could you maybe investigate what the UK student figured out in winning Wolfram's $20k contest for his universal Turing machine proof? I'm really interested in what the guy figured out. :)

There's always that chance that a quantum algorithm can cut it down, but they're sufficiently non-intuitive for me that I wouldn't be able to say anything.

Same here, but Scott Aaronson had an explanation of why QA's can sometimes cleverly use phase interference to find probable solutions fast. As for example Shor's algorithm seems based on a quantum Fourier transform, I guess the first question is how to recognize eigenfrequencies of a "resonating" SCC, whatever that means.

Physically it seems to me naively there should be something there. Picking up a set of elastic bands connecting point masses and shaking it, it should be possible to identify the SCC's movements as they are more tightly bound. But if it can be parallelized and identified, I dunno. That model builds on a many-body problem...

By Torbjörn Lars… (not verified) on 01 Nov 2007 #permalink

Hi Xanthir,

Thanks for clarifying the relation of dynamic programming to these algorithms.

Richard Bellman, who is credited with this development, was a RAND applied mathematician who is, via the Hamilton-Jacobi-Bellman equation, also prominent in mathematical game theory, optimal control theory, and quantum dynamics.
http://en.wikipedia.org/wiki/Hamilton-Jacobi-Bellman_equation

Example:
arXiv:quant-ph/0402143v1 19 Feb 2004
Shlomo E Sklarz, David J Tannor, Navin Khaneja
Optimal Control of Quantum Dissipative Dynamics: Analytic solution for cooling the three level LAMBDA system

Biography - '621 papers, 41 books, 21 book translations authored (or co-authored)'
http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Bellman.html

Torbjörn, thanks so much for that link! That was extremely clarifying. It was *so* good, in fact, that I want to repost a comment that really cemented how Shor's algorithm works (and thus points at the major benefit of quantum algorithms):
Originally from Robin Blume-Kohout

Scott asked, "...But then, given a superposition over all the elements of a periodic sequence, how do we extract the period of the sequence?"

Really, there are two questions here. First, suppose I've got a periodic data sequence. How do I extract the period? Second (this is the interesting bit) if the sequence is a quantum state, how do I use quantum mechanics to do it really fast?

The answer to question #1 is "Use something called the Fourier transform," and the answer to question #2 is "Use the quantum Fourier transform". Of course, this is nothing but jargon, so lemme explain what it means.

Like me, you probably spent some quality time on the swing set as a kid. Maybe (like me) you tried to swing so hard that you went all the way around (no, I didn't succeed). You probably discovered that, to really get going, you have to push in synch with your swing. Every swing has a natural period -- say, 5 seconds. If it's got a 5-second period, and you kick your legs really fast (say, every 1 second) or really slow (say, every 17 seconds), then you never really get going. On the other hand, if you kick every 5 seconds exactly, then you can get up some serious oomph -- even if the kicks are weak.

That swing is like a canary in a coal mine. It detects periodic kicks. Just by seeing how high you go, a bystander can tell whether your kicks are coming at 5-second periods. There's nothing sacred about 5 seconds -- we can design swings with 3-second periods, 13-second periods, or whatever.

So, if I use my data sequence as a sequence of kicks, then one swing with a period of T tells me, "Does the data repeat with period T?" What if I don't know what the period is, and I want to find it? Well, imagine a whole array of swings, each tuned to a different period (T=1,2,3...N). If I kick all of them (at once) with my data sequence, then the one that swings the highest at the end is the correct period.

This, mon frere, is what the Fourier transform does. It simulates all those swings. You put in an N-element sequence, and it spits out a different N-element sequence -- containing the heights to which N different swings get pumped up. The F.T. is a linear transform -- feed it the sum of two sequences, you get the sum of their F.T.s -- which means it can be represented as an N x N matrix. (I should point out that a quantum operation on log(N) qubits is also represented as an N x N matrix... which is sort of suggestive.)

Does the Fourier transform solve our factoring problem? Nope. See, N is really big -- as big as the number we want to factor. So just writing down that sequence would take a loooonnng time. Furthermore, the Fourier transform takes O(N log(N)) steps, which is even slower than just sequentially dividing by every prime number less than N.

However, the F.T. provides a bunch of extraneous information. We get the heights of all N swings, whereas we only want to know which one goes the highest. This is where quantum mechanics gets useful! If we had a quantum state with N amplitudes -- each proportional to the height of a swing -- then it would be Very Useful. Why? Because a measurement would single out one swing, and a swing's probability-of-being-observed is proportional to [the square of] its amplitude. So if one swing is much much higher than the others -- i.e., the initial sequence was periodic only at that swing's period -- then we'll see it in short order.

The Q.F.T makes this state. It's an algorithm on log(N) qubits that turns the initial state (the periodic sequence) into the Very Useful state (whose amplitude has a spike at the period). It only takes O( log(N) ) operations, which is possible because we only need O( log(N) ) qubits to store the initial and final states. This is how Shor's algorithm finds the period of a sequence.
----------
I don't think I made any serious mistakes in there. I think it's fascinating to see how the QFT works, because it sort of does something (a Fourier transform) much faster than classical algorithms -- but at the cost of giving you much less information. The classical F.T. tells you the amplitude of all the swings... whereas the Q.F.T basically only tells you if one (or a few) of the swings is swinging higher than all the rest combined. So you gotta have a problem with lots of structure to attack it this way.

That's absolutely fascinating, and so enlightening in its simplicity that I'm astounded. I now have a decent grasp on quantum algorithmic limits.
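To make the classical half of that concrete, here's a minimal numpy sketch of the swing-array idea (the signal and its period are invented for illustration): feed in a periodic signal, and the tallest Fourier "swing" recovers the period.

    import numpy as np

    # A signal of length N that oscillates with a 5-sample period --
    # like a swing being kicked every 5 seconds.
    N = 1000
    period = 5
    signal = np.cos(2 * np.pi * np.arange(N) / period)

    # The FFT gives one "swing height" (amplitude) per candidate
    # frequency; bin k corresponds to k full oscillations over the
    # N samples, i.e. a period of N / k.
    amplitudes = np.abs(np.fft.fft(signal))

    # Skip bin 0 (just the average) and look only at the first half;
    # for a real-valued signal the upper half mirrors the lower.
    peak = np.argmax(amplitudes[1 : N // 2]) + 1

    print(N / peak)  # 5.0 -- the recovered period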

The other comments on there are quite good as well. I recommend reading them!

By Xanthir, FCD (not verified) on 01 Nov 2007 #permalink

Mark, it seems that the blockquote isn't working all of a sudden. My quote *should* start just below the italic text, and end above "That's absolutely fascinating...".

By Xanthir, FCD (not verified) on 01 Nov 2007 #permalink

Torbjörn, thanks so much for that link!

Xanthir, my pleasure. I had forgotten about the quantum localization stuff. But it's true, the quantum realization of classical Fourier "uncertainty" is actually our friend, as QM minimizes our uncertainty of the system.

Btw, I think localization ties into "my" graph problem dual. In principle we don't have to solve the full mass movements, only perturbations, to localize such stuff as SCC's. But that and a $ will get me a coffee.

By Torbjörn Lars… (not verified) on 01 Nov 2007 #permalink

Not that it matters, but the question about dynamic programming didn't come from the same Doug as in the set theory conversation.

Does anything top the Fourier transform in terms of applicability and importance?

By Fuzzy Doug (not verified) on 02 Nov 2007 #permalink

Xanthir: So wait, that's interesting. Do I understand this correctly?

- The classical DFT works on N samples with complex values X0, X1, ..., Xn, and produces N bin values representing, as complex numbers, the frequencies/phases of the sinusoids those samples decompose into

- The quantum DFT works on one value which is a superposition of N basis states, with complex amplitudes X0|0> + X1|1> + ... + Xn|n>; and produces a new value which is a superposition of N basis states, with each state corresponding to a bin in the classical DFT and having a complex amplitude corresponding to the value that bin would have had in the DFT.

Fuzzy Doug: Good to know! I had assumed it was you.

Coin: Not completely familiar with all the details, but that seems to be accurate. The quantum DFT does exactly the same thing as the classical DFT, only faster, but you can only get a much smaller amount of information out of it.
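That correspondence is easy to sanity-check numerically: the matrix the QFT applies to the vector of amplitudes is just the N x N DFT matrix, rescaled to be unitary. A small numpy sketch, with an invented period-2 state:

    import numpy as np

    N = 8  # amplitudes of a 3-qubit state

    # The QFT acts on the N amplitudes as the unitary matrix
    # F[j, k] = exp(2*pi*i*j*k / N) / sqrt(N) -- the DFT matrix.
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    F = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

    # A normalized state whose amplitudes form a period-2 sequence:
    # equal amplitude on |0>, |2>, |4>, |6>.
    x = np.zeros(N, dtype=complex)
    x[::2] = 1 / 2

    y = F @ x
    print(np.round(np.abs(y) ** 2, 3))
    # [0.5 0. 0. 0. 0.5 0. 0. 0.] -- all of the probability lands on
    # bins 0 and N/2, the bins picked out by the period-2 structure.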

By Xanthir, FCD (not verified) on 03 Nov 2007 #permalink

Maybe just a formality, but we call the "reversed" graph the "transpose" graph. Good article!

Thank you for this very simple, very easy to follow example of Kosaraju's algorithm for calculating SCC's. I was struggling with some of the more mathy descriptions, and your description plus example helped make it all clear.

By Max Kaplan (not verified) on 12 Oct 2008 #permalink