An interesting idea from Mark Changizi of RPI: can one design pictures which, when interpreted by your visual system, perform a computation? Press release here (note to the RPI public relations department: you should probably make it so that the web address of your press releases can be copied from the browser address bar; somewhere a web designer should be shot) and the paper, published in Perception, here.
The basic idea is to use the orientation information we glean from looking at objects to perform computations. Thus, for example, Changizi suggests that we can represent zeros and ones via the two different orientations seen in this picture:
Okay, so far so good. I definitely see a zero and a one. Now the idea is that by putting elements like this together, one can have the part of your visual system which computes these orientations perform a computation. Cool idea, no? But, try as I might, I just can't see how the gadgets described in the article work. For instance, here is the proposed NOT gate, which should flip the orientation of the input blocks:
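Setting aside the visual gadget itself (which, again, I can't parse), the abstract logic the gate is supposed to implement is simple enough. Here's a toy sketch in Python of that abstraction — the names and the 0/1 encoding are my own stand-ins, not anything from the paper:

```python
# Toy model of the abstract idea (not Changizi's visual construction):
# a bit is a perceived orientation, and a NOT gate is any element that
# flips the orientation it receives.

LEFT, RIGHT = 0, 1  # two tilt orientations, standing in for binary 0 and 1

def not_gate(orientation: int) -> int:
    """Flip the perceived orientation: left-tilted becomes right-tilted."""
    return 1 - orientation

def wire(orientation: int) -> int:
    """A 'wire' just propagates the orientation unchanged."""
    return orientation

# Chaining two NOT gates recovers the original orientation,
# which is the sanity check any working gadget would have to pass.
assert not_gate(not_gate(LEFT)) == LEFT
assert not_gate(wire(RIGHT)) == LEFT
```

The hard part, of course, is not this Boolean bookkeeping but getting your early visual system to do the flipping for free — which is exactly the step I can't follow in the proposed gadget.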
This also makes me wonder whether there are any similar concepts for the other senses: perhaps in sound? (Which leads naturally to: you may think you are listening to the latest song from Band of Horses, but really you're calculating the thirtieth digit of pi.)