From a crazy model to a concrete question: is there a nice mathematical structure hidden here?
Once upon a time I wrote a crazy paper (arXiv:quant-ph/0309189) on quantum computation in the presence of closed time-like curves. In this model, one identifies two types of quantum systems: those that are “chronology respecting” and those that traverse “closed time-like curves.” These two types of quantum systems can interact amongst themselves and also with each other. A typical setup is as in the figure at the right, where n chronology respecting qubits interact with l closed time-like curve qubits. One then makes two assumptions: (1) that the CTC qubits are “initially” unentangled from the non-CTC qubits and (2) that the evolution is consistent (no grandfather paradox). These two imply the condition $\rho_{CTC} = {\rm Tr}_{CR}\left[ U (\rho_{CR} \otimes \rho_{CTC}) U^\dagger \right]$, where $\rho_{CTC}$ is the density matrix for the CTC qubits and $\rho_{CR}$ is the density matrix input of the chronology respecting qubits. This is Deutsch’s model of quantum computation in the presence of closed time-like curves. Deutsch showed that the above equation always has at least one solution: in other words, you can formulate a theory in which the grandfather paradox never appears. But if you think a little bit about it, the above machinery, while consistent, is seen to be non-linear. Since the CTC input to the unitary depends on the output of the unitary, the non-CTC qubits will see an evolution which is nonlinear. Since nonlinearity in quantum computing gives us a lot of power (Abrams and Lloyd: arXiv:quant-ph/9801041), you might guess that this model packs quite the computational punch.
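To make the consistency condition concrete, here is a minimal Python sketch that finds a consistent $\rho_{CTC}$ by just iterating the map until it stops changing. Everything here is my own illustrative choice, not from the paper: I use one CR qubit, one CTC qubit, plain nested lists for matrices, and the SWAP unitary, for which the iteration converges in a single step (for a general unitary convergence can be slower or may require averaging).

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def kron(A, B):
    m = len(B)
    d = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(d)]
            for i in range(d)]

def trace_out_first(rho, d_cr, d_ctc):
    """Partial trace over the first (chronology-respecting) subsystem."""
    out = [[0j] * d_ctc for _ in range(d_ctc)]
    for a in range(d_cr):
        for i in range(d_ctc):
            for j in range(d_ctc):
                out[i][j] += rho[a * d_ctc + i][a * d_ctc + j]
    return out

# SWAP exchanges the CR and CTC qubits (basis order |00>, |01>, |10>, |11>).
SWAP = [[1, 0, 0, 0],
        [0, 0, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1]]

rho_cr = [[0.75, 0], [0, 0.25]]   # input CR qubit
rho_ctc = [[0.5, 0], [0, 0.5]]    # initial guess for the CTC qubit

# Iterate rho_ctc -> Tr_CR[ U (rho_cr ⊗ rho_ctc) U† ] to a fixed point.
for _ in range(10):
    joint = kron(rho_cr, rho_ctc)
    evolved = matmul(matmul(SWAP, joint), dagger(SWAP))
    rho_ctc = trace_out_first(evolved, 2, 2)
```

For SWAP, the consistency condition forces the CTC qubit to match the CR input, which is exactly what the iteration finds.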
And indeed, recently Aaronson and Watrous have shown that if you give a classical or quantum computer access to closed time-like curves, you end up with a model which has the computational power of PSPACE (arXiv:0808.2669). Wait: classical model? Doesn’t the classical model lead to paradoxes? Well, if you insist on deterministic solutions then yes: if I take a single CTC bit and have a bit flip operation act on it, then there is no consistent solution. But if I loosen this restriction and allow probabilistic solutions then there is a consistent solution: if the bit is fifty percent 0 and fifty percent 1, then there is no paradox. In general, then, one can imagine a classical model in which the U in the figure above is replaced by a stochastic map and one insists on consistency of a probability distribution.
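The bit-flip example fits in a few lines of Python (an illustrative sketch; the convention that columns of the stochastic matrix sum to one, so it acts on probability column vectors, is my assumption):

```python
# The classical grandfather paradox as a fixed-point problem.
# Columns of S sum to 1, so S is a (column-)stochastic map on distributions.
S = [[0, 1],   # bit-flip: sends 0 -> 1 and 1 -> 0 with certainty
     [1, 0]]

def apply_map(S, p):
    """Apply the stochastic matrix S to the probability vector p."""
    return [sum(S[i][j] * p[j] for j in range(len(p))) for i in range(len(S))]

# A deterministic bit is never consistent: 0 maps to 1 and back.
# apply_map(S, [1.0, 0.0]) -> [0.0, 1.0]
# The uniform distribution, however, is a consistent fixed point:
# apply_map(S, [0.5, 0.5]) -> [0.5, 0.5]
```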
Now there are many ways to sell Aaronson and Watrous’ result. They like to sell it as “in the presence of time travel quantum computation and classical computation are equivalent.” But one way I like to think about this result is that it gives us the first model of classical information theory which can possibly, with an appropriate restriction, yield quantum theory. Whah? Well this random thought emerges from a sort of random thought process: the class of problems efficiently solvable on a quantum computer is called BQP. The relation of this complexity class to other complexity classes isn’t well established (for example, it isn’t known if BQP lies in the polynomial hierarchy). But one of the oldest known results is that BQP is in PSPACE. The best improvement we have on this is probably that BQP is in PP (or a slightly more complicated class), but let’s just fixate on the BQP in PSPACE result. Because of the result of Aaronson and Watrous, this means that there is an efficient “simulation” of quantum computers using classical computers with closed time-like curves. In other words, if we allow for classical models with closed time-like curves, we arrive at a model where simulating quantum theory wouldn’t lead to any direct conflicts with complexity theory.
Now of course, it is important to note that classical computation with closed time-like curves is more powerful than quantum computers (unless BQP=PSPACE), so this really isn’t telling us too much. Except I would argue that it motivates us to examine classical theories in which strange causality-mucking phenomena are used to “derive” quantum theory.
In thinking about these subjects over the last few years, I’ve toyed with a ton of different models which give rise to some interesting problems. Among these are a set of questions about the structure of fixed points under composition. Let $A$ be a $d \times d$ stochastic matrix. “Stochastic” means that $A_{ij} \geq 0$ and that $\sum_i A_{ij} = 1$, i.e. that this is the transition matrix for some probabilistic transform on a $d$ dimensional space. Let $B$ be another stochastic matrix. Now $A$ has at least one fixed point, i.e. a probability distribution $p$ such that $Ap = p$ (this follows from the Perron-Frobenius theorem). The set of fixed points of $A$ will be a convex set. Similarly $B$ has at least one fixed point and will have a convex set of fixed points. Call the first set of fixed points $F_A$ and the second set of fixed points $F_B$.
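As a concrete illustration of the convexity of the fixed-point set (a made-up example, again using the column-stochastic convention), here is a small matrix whose fixed points form a one-parameter convex family:

```python
# A column-stochastic matrix with a non-trivial convex set of fixed points:
# state 0 is absorbing, while states 1 and 2 mix into each other.
A = [[1.0, 0.0, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.5, 0.5]]   # each column sums to 1

def apply_map(A, p):
    return [sum(A[i][j] * p[j] for j in range(len(p))) for i in range(len(A))]

def is_fixed(A, p, tol=1e-12):
    return all(abs(x - y) < tol for x, y in zip(apply_map(A, p), p))

p1 = [1.0, 0.0, 0.0]                               # one extreme fixed point
p2 = [0.0, 0.5, 0.5]                               # another extreme fixed point
mix = [0.5 * a + 0.5 * b for a, b in zip(p1, p2)]  # a convex combination

# p1, p2, and every convex combination of them are fixed points of A.
```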
Now the general question that I keep running up against when thinking about these crazy classical models is: is there a nice mathematical structure for discussing how $F_A$ and $F_B$ are related to the fixed points of the composed stochastic matrix $BA$? This feels like something that must be well understood, but my cursory scanning of the literature hasn’t turned up anything too interesting. Does anyone know of a set of tools or results which can be used to talk about this structure of fixed points under composition? Special cases where the matrices are not merely stochastic, but are doubly stochastic or permutation matrices, are also probably of interest in dreaming up these strange models.
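A tiny toy example (my own, not from any reference) of why the composition question is subtle: the fixed points of $BA$ can be strictly larger than anything you would guess from $F_A$ and $F_B$ alone. Take $A = B$ to be the 2-state swap permutation, whose only fixed point is the uniform distribution; then $BA$ is the identity and fixes every distribution.

```python
A = [[0, 1],   # swap permutation: its unique fixed point is [0.5, 0.5]
     [1, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def apply_map(M, p):
    return [sum(M[i][j] * p[j] for j in range(len(p))) for i in range(len(M))]

BA = matmul(A, A)        # composing the swap with itself gives the identity
p = [0.9, 0.1]           # not a fixed point of A ...
# apply_map(A, p)  -> [0.1, 0.9]
# apply_map(BA, p) -> [0.9, 0.1]   ... but fixed by the composition BA
```

So $F_{BA}$ is the whole probability simplex here, while $F_A = F_B$ is a single point: whatever structure theory exists has to account for this kind of blow-up.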