I'm teaching my developmental biology course this afternoon, and I have a slightly peculiar approach to teaching the subject. One of the difficulties with introducing undergraduates to an immense and complicated topic like development is that there is a continual war between making sure they're introduced to the all-important details, and stepping back to give them the big picture of the process. I handle this explicitly by dividing my week: Mondays are lecture days where I stand up and talk about Molecule X interacting with Molecule Y in Tissue Z, and we go over textbook stuff. I'm probably going too fast, but I want students to come out of the class having at least heard of Sonic Hedgehog and β-catenin and fasciclins and induction and cis regulatory elements and so forth.
On Wednesdays, I try to get simultaneously more conceptual and more pragmatic about how science is done. I've got the students reading two non-textbookish books, Brown's In the Beginning Was the Worm and Zimmer's At the Water's Edge, because they both talk about how scientists actually work, and because they do such a good job of explaining how development produces an organism or influences the evolution of a lineage. The idea is that they'll do a better job of prompting thinking (dangerous stuff, that, but I suppose it's what a university is supposed to do) than the drier and more abstract material they're subjected to in Wolpert's Principles of Development.
Today we're going to be discussing the concept of analogies for how development works. It's prompted by the third chapter in Brown's book, titled "The Programme", where they learn all about Sidney Brenner's long infatuation with the idea that the activity of the genome can be compared to the workings of a computer program. We'll also be talking about Richard Dawkins' analogy of the genome as a recipe, from How do you wear your genes? and Richard Harter's rather better and more amusing analogy for the genome as a village of idiots. I'm also going to show them this diagram from Lewontin's The Triple Helix, to remind them that there's more to development than just gene sequences.
But I'm not going to write about any of that right now. Analogies are troublesome in developmental biology because the fundamental processes are so different from what we experience in day-to-day life that the analogies are always flawed and misleading. My main message is going to be that while they may help us grasp what's going on, we have to also recognize where the analogies fail (like, everywhere!) and be mentally prepared to leap elsewhere.
That said, what I'm writing about here is my favorite analogy for development. It's flawed, as they all are, but it just happens to fit my personal interests and history, and after all, that's what analogies are—attempts to map the strange onto the familiar. And, unfortunately, my experience is a wee bit esoteric, so it's not really something I can talk about in this class unless I want to lecture for an hour or so to give all the background, violating the spirit of my Wednesday high-concept days. I can do it here, though!
I'm a long-time microscopy and image processing geek, and you know what that means: Fourier transforms (and if you don't know what it means, I'm telling you now: Fourier transforms). I'm going to be kind and spare you all mathematics of any kind and do a simplified, operational summary of what they're all about, but if bizarre transformations of images aren't your thing, you can bail out now.
A Fourier transform is an operation based on Fourier's theorem, which states that any periodic function can be represented by a series of sine and cosine functions, which differ only in frequency, amplitude, and phase. That is, you can build any complex waveform from a series of sine and cosine waves stacked together in such a way as to cancel out and sum with one another. The variations in intensity across a complex image can be treated as such a function, which can be decomposed into a series of simpler waves—a set ranging from low frequency waves that change slowly across the width of the image, to high frequency waves that oscillate many times across it. We can think of an image as a set of spatial frequencies. If there is a slight gradient of intensity, where the left edge is a little bit darker than the right edge, that may be represented by a sine wave with a very long wavelength. If the image contains very sharp edges, where we have rapid transitions from dark to light in the space of a few pixels, that has to be represented by sine waves with a very short wavelength, or we say that the image contains high spatial frequencies.
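If you'd rather see that as code than as prose, here's a minimal sketch in Python/NumPy (the two frequencies and amplitudes are arbitrary choices of mine): build a waveform from a slow wave and a fast wave, then recover both with a discrete Fourier transform.

```python
import numpy as np

n = 1024                                  # samples across our "image width"
x = np.linspace(0.0, 1.0, n, endpoint=False)

# A slow wave (3 cycles: the gentle gradient) plus a fast wave
# (80 cycles: the sharp detail riding on top).
signal = 1.0 * np.sin(2 * np.pi * 3 * x) + 0.3 * np.sin(2 * np.pi * 80 * x)

spectrum = np.fft.rfft(signal)            # complex amplitude per frequency
freqs = np.fft.rfftfreq(n, d=1.0 / n)     # in cycles across the interval
power = np.abs(spectrum) ** 2

# The two strongest components sit exactly at 3 and 80 cycles.
print(freqs[np.argsort(power)[-2:]])      # -> [80.  3.]
```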
One fun thing to do (for extraordinarily geeky values of "fun") is to decompose an image into all of the spatial frequencies present in it and map those frequencies onto another image, called the power spectrum. All of the low spatial frequencies are represented as pixels near the center of the image, while the high spatial frequencies are pixels farther and farther away from the center. And the fun doesn't stop there! You can then take the power spectrum, apply an inverse Fourier transform to it, which basically takes all the waves defined in it and sums them up, and reconstruct the original image.
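In code, that mapping is just a two-dimensional transform plus a shift that puts the zero-frequency term in the middle. A minimal NumPy sketch, using a synthetic striped image of my own devising rather than an actual micrograph:

```python
import numpy as np

h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]

# Test image: a gentle left-to-right gradient plus diagonal stripes.
image = xx / w + 0.5 * np.sin(2 * np.pi * (xx + yy) / 16.0)

f = np.fft.fftshift(np.fft.fft2(image))   # zero frequency at the center
power = np.log1p(np.abs(f) ** 2)          # log scale so faint terms show

# The gradient lives near the center; the stripes show up as a pair of
# bright spots along the diagonal, mirrored about the center.
cy, cx = h // 2, w // 2
print(power[cy + 16, cx + 16], power[cy - 16, cx - 16])  # the two "stars"
```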
I know, this all sounds very abstract and pointless. Fourier transforms are used in image processing, though, and one can do some very nifty things with them; it also turns out that one useful way to think of a microscope objective lens is as an object that carries out a Fourier transform on an image, producing a power spectrum at its back focal plane, which the second lens then transforms back into the original at the image plane.
Still lost? Check out this extremely spiffy online tutorial in Fourier imaging, then. It'll show you visually what I'm talking about.
You can select from a series of images, and here I've picked the human epithelial cell, seen on the left and labeled "specimen image":
It's not very pretty. It has a bunch of diagonal stripes imposed on it, which is something you might get if there were annoying rhythmic power line noise interfering with the video signal on your TV. Those stripes, though, represent a specific, strongly represented spatial frequency imposed on the image, so they'll be easy to pick up in the power spectrum.
The second image is the power spectrum, the result of a Fourier transform applied to the first image. Each dot represents a wavelength present in the Fourier series for that image, with low frequencies, or slow changes in intensity, mapped to the center, and sharp-edged stuff way out on the edges. Those diagonal lines in the original are regularities that will be represented by a prominent frequency at some distance from the center; you can probably pick them out, the two bright stars in the top left and bottom right quadrants. All the speckles all over the place represent different spatial frequencies that are required to reconstitute the image.
Important consideration: the power spectrum is showing you the spatial frequency domain, not the image. A speckle in the top right corner, for instance, does not represent a single spot on the top right side of our epithelial cell; it represents a wavelength that has to be applied to the entire image. Similarly, that bright star in the lower right is saying that there is a strongly represented sine wave with a particular orientation that has to be represented over everything, just as we see in the original.
The third image is the result of applying the inverse transform to the power spectrum, restoring the original image.
How is this useful? In image processing, we sometimes want to filter the power spectrum, to do things like remove annoying repetitive elements, like that diagonal hash splattered all over our epithelial cell. The Fourier tutorial lets you do that, as shown below.
You can use the mouse to draw ovals over the power spectrum, and the software will then filter out all of the spatial frequencies that have been highlighted in red. I told you that those two stars were the spatial frequency representation of the diagonal lines all over the image, so here I've gone and blotted them out of existence. Then we reconstruct the image by applying the inverse transform to the filtered power spectrum, and voilà…we get our epithelial cell back, with the superimposed noise mostly gone. It's like magic!
Try it yourself. You can wipe out speckles all over the place and see what effect they have. If you blot out the edges of the power spectrum, what you'll be doing is deleting the higher spatial frequencies, which represent the sharp edges in the image, so you'll be effectively blurring the reconstructed image (which, to all you microscopists, is what stopping down the aperture at the back focal plane does, chopping out high spatial frequencies and blurring your image). Notice that blotting out some particular set of speckles may not have much of a detectable effect at all, while others may cause dramatic changes. Notice also that a filter in one place on the power spectrum won't typically have an effect on one discrete place on the reconstructed image, but will affect it virtually everywhere.
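Both kinds of filtering are a few lines in NumPy. Here's a sketch of the notch and the low-pass applied to the synthetic striped image from the earlier snippet (the disk positions match that image's stripes; the radii are arbitrary):

```python
import numpy as np

h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
image = xx / w + 0.5 * np.sin(2 * np.pi * (xx + yy) / 16.0)  # striped test image

def fourier_filter(img, keep):
    """Transform, zero the bins where keep is False, transform back."""
    f = np.fft.fftshift(np.fft.fft2(img))
    f[~keep] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

cy, cx = h // 2, w // 2

# Notch filter: blot out small disks over the stripes' two "stars".
notch = np.ones((h, w), dtype=bool)
for dy, dx in [(16, 16), (-16, -16)]:         # the stripe frequency peaks
    notch &= (yy - (cy + dy)) ** 2 + (xx - (cx + dx)) ** 2 > 4 ** 2
destriped = fourier_filter(image, notch)      # stripes mostly gone

# Low-pass: keep only bins near the center. The reconstruction blurs,
# just like stopping down the aperture.
lowpass = (yy - cy) ** 2 + (xx - cx) ** 2 < 20 ** 2
blurred = fourier_filter(image, lowpass)
```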
I don't know about you, but I find that playing with power spectra is good for hours of fun. I really wish I'd had this page available years ago, when I was teaching a course in image processing!
Before everyone vanishes to play with Fourier transforms, though, let me get back to my original point—which was to make an analogy with how the genome works.
Think of the genome as analogous to the power spectrum; we'll call it the genomic spectrum. The organism is like the reconstructed image; we'll call that the phenotypic image. A mutation is like the filters applied to the power spectrum. Most discrete mutations will have small effects, and they will be expressed in every cell, while some mutations will affect prominent aspects of the phenotype and will be readily visible. Genes don't directly map to parts of the morphology, but to some abstract component that will contribute to many parts of the form to varying degrees. There is no gene for the tip of your nose, just as there is no speckle in the power spectrum responsible for one of the folds in the membrane of the epithelial cell image.
And what is development? It isn't represented in any of the pictures. Development is the Fourier transform itself, or the lens of the microscope; it's the complex operation that turns an abstraction into a manifest form. What results is dependent on the pattern in the genome, but it's also dependent on the process that extracts it.
And now that I've gotten that cosmic philosophizing off of my chest, I can go teach my class without being tempted into confusing the students by explaining one novel idea with which they are unfamiliar by using an analogy to another novel idea with which I am certain they are completely unfamiliar.
That was beautiful. I feel so terribly ignorant: why did no-one tell me about power spectrum images and image filters before? I'm most disappointed.
Are you suggesting that your students are completely unfamiliar with Fourier transforms? Geez, what are they teaching kids in junior high school these days?
I'm teaching on Saturday and looking for a good metaphor/analogy for development. I am afraid I will spend too much time explaining Fourier transforms to a classroom full of glazed eyes. But the village idiot analogy may work for my purposes....
Thanks. I've done hordes of Fourier Transforms (largely for some fluid dynamics modules I was stupid enough to take) - but this is more practical than anything I've done with them.
"Are you suggesting that your students are completely unfamiliar with Fourier transforms? Geez, what are they teaching kids in junior high school these days?"
New Math?
PZ, that was wonderful. Here's a question - does it confer any evolutionary benefit to have this method of encoding genetic information where a mutation to one gene could in principle affect everything in the organism, as opposed to having specific bits of DNA do specific things?
I mean, in Brenner's program analogy, no idiot programmer would ever code that way.
The village idiot analogy is very similar to the "airplane factory" analogy I use, but the article introduces some new ideas, like "tags" on library books, that can be usefully incorporated.
I can't tell whether this article is a repost or a remake, but the first time I saw it is when I became a Pharyngula regular. It's a great analogy, but as you say requires more math background than most biologists have. Or indeed most physicists, if I may say so. After n lame "teachings" of Fourier Transforms in physics class, I still didn't really grok the whole thing. It took an EE class (taught by the amazing Hal Abelson) to get me to the point that I feel comfortable using FTs as an analogy.
Very cool analogy.
I personally think that Fourier Transforms (or at least the wave mechanics pictures underlying them) should be required for graduating from high school. But what do I know?
This is spooky synchronicity: I just last night wrote a rant about the way people think of genes as computer programs. I've now added a link to this post in that post. Go blogosphere!
JD
hey, it's a repost! i knew i'd read this somewhere before
I sat through some lectures on Fourier transforms in a college math class and never understood just what the heck was going on. Your description, however, of image processing via Fourier transforms just cleared up a multi-decade mystery!
Cool stuff. If I don't ask too many questions, can I still sit in the back of the class?
"I'm teaching my developmental biology course this afternoon..."
I'm confused, what happened to spring break?
D: I would expect that part of the reason is that it's "easier" this way.
Another part of the reason is that when "phenotype space" is densely connected via "easy mutations" in "genotype space", it's a lot easier to find an adaptive mutation, and a little harder to get stuck in local maxima.
I am not a very science-oriented person, but I enjoy reading about...practical science? I'm not sure how to define the genre, but I recently read a book that had a really good explanation of Sonic Hedgehog and all that. It was called Mutants: On Genetic Variety and the Human Body by Armand Leroi, and it discussed various genetic mutations throughout history. It was well-written and easy to understand for someone like me. I don't know if the students who take your classes would be interested in a basic book like this, but I thought it was grand.
Hah, a little ramble, in an area I like.
A good example of something everybody uses that is based on this principle is the JPEG compression in almost every digital camera. It uses what is called a Discrete Cosine Transform, which is basically a variation on a Fourier transform. JPEG compression takes the picture, breaks it up into little blocks, and does a DCT of the intensity values of the pixels in each block ("putting it in the frequency domain" is some of the jargon); then it just zeroes out small values, i.e. throws them away, to "compress" the file into a smaller size. It happens that the small values of a cosine transform picture are those that people do not perceive well, so the picture can still look good when a lot of that data is tossed. (JPEG does another compression after this, but anyway...)

You can see this if you have some program like Photoshop with a dial that you can turn to very high compression for JPEG output; if you turn it far enough you will start to see little rectangles appearing over everything. These are the little blocks I talked about above. If you threw away all but the first number from each block, you would get back a picture when retransforming that would just have the average intensity for each of those "macroblocks".
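Here's a minimal sketch of that block-DCT step in Python with SciPy (real JPEG quantizes with a perceptual table and then entropy-codes, which I'm skipping; the block contents and the keep-10 cutoff are arbitrary choices):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:8, 0:8]
block = 16.0 * xx + 8.0 * yy + rng.normal(0, 2, (8, 8))  # one smooth-ish 8x8 block

coeffs = dctn(block, norm="ortho")        # into the frequency domain

# "Compress": zero all but the 10 largest of the 64 coefficients.
cutoff = np.sort(np.abs(coeffs).ravel())[-10]
coeffs[np.abs(coeffs) < cutoff] = 0.0

approx = idctn(coeffs, norm="ortho")      # back to pixel values
# Smooth blocks pack their energy into a few low-frequency terms, so
# the error stays small even with 54 of 64 values thrown away.
print(np.max(np.abs(approx - block)))
```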
This same idea is used in everything from radar to CAT scanners, geology, you name it. One of the unsung pillars of modern technology.
Weathergirl: I'm confused, what happened to spring break?
This post is a rerun, direct from the old Pharyngula site.
Neat. Given that I'm a software developer raised on several years of information theory (blame me for any errors here, though, not Shannon or Nyquist!), it's not too much of a stretch to consider the genome as a packet of information. It's a fine analogy, even within the limits of analogy you mention (and I agree with).
I know, I know. The Information Theory of genetics has been done to death, but it does sort of fit. And not just because it's the only tool I have (a hammer) that helps every problem look like a nail!
Applying FFT to such "data" is not something I'd have ever considered, but it makes perfect sense given all the other ways I *would* use FFT to manipulate spectral data. Food for thought.
This is a little like the consciousness-as-hologram analogy I've seen in popular science mags (SciAm for one, IIRC).
PZ: Thanks for the analogy. FTs were my best friend during my PhD diss work...now, I am wondering if you can have a blast going still further with the analogy, to the Davidson paper we were discussing, where the "kernel" of the organism's developmental program (in this case, the kernel of the image) is equivalent to the coarse-grained overall structure in the transform image, while the fine detail is organism-specific...
Well, I guess analogies only go so far, but it COULD be drawn...
Deanne
"Are you suggesting that your students are completely unfamiliar with Fourier transforms? Geez, what are they teaching kids in junior high school these days?"
I'm about to graduate college with a major in math, and I barely know what Fourier transforms are.
Wow! Fourier transforms and their related cousins are nifty and fun things that are usable in everything from signal processing to theoretical physics - but I hadn't thought they would pop up in developmental biology!
Let's turn up the geekology, Fourier transforms are fun and beautiful both in theory (Parseval's theorem; mmm...) and practise (image processing; mmm...).
First, the Fourier theorem says that periodic functions can be represented by Fourier series. When you look at a one-shot, non-repeating function you need an 'infinitely dense' series - or an integral, in layman's terms.
That's the Fourier transform - it contains all frequencies, as it must. In practise you make cutoffs at high enough frequencies. And you introduce negative and complex frequencies, which sounds terrible, but really works well - they are transformations of real frequencies to make the theory easier, not harder.
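For what it's worth, here's a two-line Python illustration of those negative frequencies - they're just the second half of NumPy's FFT bin layout:

```python
import numpy as np

# Frequencies for each of the 8 bins of an 8-point FFT, in cycles per
# sample: the positive frequencies first, then the negative ones.
print(np.fft.fftfreq(8))
# [ 0.     0.125  0.25   0.375 -0.5   -0.375 -0.25  -0.125]
```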
Second, we aren't restricted to harmonic functions. They are 'merely' the nice solutions to Laplace's equation - twice continuously differentiable functions. Examples are functions describing potentials from charges or masses.
The Fourier transform has some very small restrictions, like a bounded absolute value integral and a finite number of discontinuities. What it means is that you can usually forget what function you are transforming, most everything you will meet in the real world works. Of course, the smoother the function, the more compact its transform.
Hey you, in the back of the class, wake up! These things are really fun, look at the links in the post and enjoy.
Thanks for the link to the FT demo. Three cheers for Fouriers!
Now I'm off to search for a similar application for sound files.
"Are you suggesting that your students are completely unfamiliar with Fourier transforms? Geez, what are they teaching kids in junior high school these days?"
Alon, I think he was being sarcastic.
PZ, nice one. I didn't see how you were going to connect it back to genes but once you did it was right on.
I encountered Fourier transforms in grad school as a way to deal with 60 cycle noise (60 cycles per second is standard for electrical current in the U.S.), not on images but on electrophysiological recordings from neurons. You run the Fourier transform, notice a sharp peak at 60 Hz in the power spectrum, clip it out, and do the reverse transform (all done with software that is set up to help you do this; in my case, IgorPro). This may distort your original waveform a bit, though, so it is better to just make sure everything is grounded properly and you don't get the noise in the first place.
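In case anyone wants to play along without IgorPro, the whole procedure is a few lines of NumPy (the sampling rate, notch width, and fake "recording" here are made-up numbers):

```python
import numpy as np

fs = 1000.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
recording = np.exp(-((t - 1.0) ** 2) / 1e-3)   # a lone "spike" at t = 1 s
recording += 0.5 * np.sin(2 * np.pi * 60 * t)  # 60-cycle mains hum

spectrum = np.fft.rfft(recording)
freqs = np.fft.rfftfreq(recording.size, d=1.0 / fs)

spectrum[np.abs(freqs - 60.0) < 1.0] = 0       # clip out the 60 Hz peak
cleaned = np.fft.irfft(spectrum, n=recording.size)

# The hum is gone, but the spike is slightly distorted too, since it
# also had some energy near 60 Hz -- hence "ground things properly".
```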
Yes, this is a repost -- anytime you see that strange big icon in the top right corner of the article, it's something that's been transferred from the old site to this one. You can also click on that icon to see the original article and its comments.
I'm doing some traveling today, with only intermittent net access, so it's mainly going to be reruns for a little bit.
"D: I would expect that part of the reason is that it's "easier" this way.
Another part of the reason is that when "phenotype space" is densely connected via "easy mutations" in "genotype space", it's a lot easier to find an adaptive mutation, and a little harder to get stuck in local maxima."
Probably. And I can't see how the 'genotype space' (the genome level) really knows about which organism it has created on the 'phenotype space' level. It just does its thing, merrily singing all the while, while the organism lives and dies.
Of course, if you invoke ID and forget about evolution, I guess something like D's suggestion should be expected, since it's easier for an ID designer. So if ID finally owns up to a real theory, it would immediately be falsified on these grounds. Umm, I guess that explains why they refuse to do science...
"really knows about which organism"
I meant to say: really knows how the organism looks and functions. But in a sense I guess it does, since the structural developmental genes PZ has described map roughly to the organism. So maybe D's suggestion isn't totally off. Nor Michael's. But I was.
I mentioned this in an email to PZ, but you can think of the Fourier transform as a super-index that gives you not only the words, but the location of the words, too.
The inverse transform is taking the index and re-assembling the original book.
If you make changes to the index, you can change say, "creation" into "intelligent design."
We have similar tools for encryption and compression: not only can you pack information smaller than it was originally and represent it visually, you can pack information in a way that 'heals' itself on decompression.
Much of the technology is public domain and you can see it used every day on snail mail envelopes, weird square grids of small dots that can be scanned even if ripped or with a staple through them. We're developing it for use in clinical trials to 'pack' form data so it can be faxed or mailed securely and accurately, and it's easy to audit later.
If only we could do a Fourier Transform of the ID crowd and then block out those "annoying, repetitive elements". I guess there wouldn't be much of an image left afterwards.
I found this analogy helpful, but...
Fourier transforms are linear. Linearity lets you do all kinds of neat tricks, like working out what will happen to the Fourier transform when you add two functions together.
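For instance (a two-line NumPy check; the signals and scaling factors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
f, g = rng.normal(size=256), rng.normal(size=256)

# Transforming a weighted sum == summing the weighted transforms.
lhs = np.fft.fft(2.0 * f + 3.0 * g)
rhs = 2.0 * np.fft.fft(f) + 3.0 * np.fft.fft(g)
print(np.allclose(lhs, rhs))   # True
```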
Unfortunately, as I understand it, sequences of genes do not (in general) have this very useful property, and can interact in strange ways when combined. (e.g. if one sequence contains a gene to produce a protein, and the other contains a gene that is regulated by the same protein). So the molecular biologists have a tougher problem ahead of them than the analogy might suggest...
[Disclaimer: This isn't my area of research, so I may be wrong]
1. I don't think I buy the "it's easier" rationale.
Programmers didn't always write modular, object-oriented code either, nor did it really happen that some wonderfully bright spark figured out that goto statements are bad, slaying that demon forevermore. In a very real sense, object-oriented programming "evolved" in several stages from the mess that is spaghetti code, when it started getting too long to maintain. I do not doubt that several firms with particularly awful coding practices went out of business.
PZ's description intuitively appears to be "bad at evolving" if that makes any sense. At the very least, this design seems to guarantee that most mutations will *necessarily* be awful even when they DO achieve good things - the mutation that gives you camouflage also kills you because it gives you cystic fibrosis ;)
Naively, a design where everything affects everything else sounds more like a Rube Goldberg machine than anything else, and I would expect evolution to be smart enough to figure out code modularity, which means there are in fact good reasons to NOT be modular. I think.
2. Yes, if Dembski did any science at all he WOULD be worried by just how bad a C student his designer is, both phenotypically and genotypically.
As a physicist with an educated amateur's interest in biology and evolution, I have to say that this is a gorgeous metaphor. Thanks for sharing it with us. (Between this and the Niobrara chalk post, you are totally *on* today.)
A related thought occurs if you've ever fitted curves to functions with varying parameters. In this case, take the 'evolutionary fitness curve' for a bat-like organism to be the best bat it can be (assuming it doesn't stray too far from bat-ness and become something else altogether...), where the parameters you vary are the genes, via mutations.
It's practically impossible to get a good fit to a function with (say) a generic degree five polynomial ax^5 + bx^4 ... + f by varying 'a' through 'f', simply because the basis in powers of x isn't orthonormal; by changing any parameter you prance around wildly over the fitness landscape instead of settling into nice minima.
Instead you always use orthogonalized polynomials - and spend much time in linear algebra learning how - so that you CAN do something useful. With fourier series for example your sinusoidal functions at different frequencies are orthogonal.
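You can see the trouble directly by comparing the condition numbers of the two design matrices (a NumPy sketch; the degree and sample points are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import chebyshev

x = np.linspace(-1, 1, 400)           # sample points over the fit interval

V_mono = np.vander(x, 41)             # columns are x^40 ... x^0
V_cheb = chebyshev.chebvander(x, 40)  # columns are T_0(x) ... T_40(x)

# The monomial basis is catastrophically ill-conditioned at this degree;
# the orthogonal Chebyshev basis stays tame.
print(np.linalg.cond(V_mono), np.linalg.cond(V_cheb))
```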
SusanC, you're right on -- I was going to say the same thing and searched for "linear" to see if anyone else said it..=)
It doesn't break this analogy though; the point is that individual elements in "gene space" have global effects in "phenotype space"; it isn't one gene for one bit of organism.
Strangely enough, in my lab I'm imaging cells with structured illumination, where we illuminate the specimen with a striped pattern of light, and use Fourier reconstruction to get high-resolution information beyond the Rayleigh limit. Our raw data looks a lot like the above "specimen image"!
Some friends of mine and I used to collect references to fourier analysis since it came up so often. I hold the record for attending the most bizarre course it came up in, though I must say until now I've not seen it in any bioscience context. (It came up in the Plato course I did as an undergraduate, so beat that. ;))
"With fourier series for example your sinusoidal functions at different frequencies are orthogonal."
Yes, but it doesn't help much, as the first picture in the post shows (since phenotypes are affected by environment), or as SusanC and Pete explain. Genetic algorithms are good for finding solutions in such cases. Surprise, surprise.
To get back to the geekology, or nerd of knowing, Fourier transforms are a major inroad to duality.
In this case, you can see a function as its value over time, or as its dual, its frequency content. I guess it's a little like looking at RM+NS as "survival of the fittest" or its dual "culling of the unfittest".
What the point is? You can do things more easily in one of the dual representations. This is what PZ describes above about easy removal of power line noise in the power spectra, while it is a big task in the original image.
I really like this topic, so I have to make another commentary, just for the fun of it.
First, I wanted to remark on duality that it's a 'scientific proof' that it's useful to look at one topic from several angles. Well, a useful hint towards that view, anyway.
Second, it's fun to look at the practical side some more. Markk explains that Fourier Transforms (FT) or their cousins are highly usable algorithms. In practise one does a cutoff for higher frequencies, and also a digitization (discretization), as one must when computing in software (usually). The fast (smart) implementations are called Fast Fourier Transforms (FFTs). It's a large industry to make the fastest FFT, or the best type of transform that fits the specific problem.
Since this is a US originating blog, it could be fun to contemplate that once the fastest FFT was called the Fastest Fourier Transform in the West (FFTW). I guess they tried to scare the competition away...
just john: "Thanks for the link to the FT demo. Three cheers for Fouriers! Now I'm off to search for a similar application for sound files."
Here's a link to some freeware designed for speech analysis, but it'll take spectral slices using either Fast Fourier Transforms or Linear Predictive Coding (inverse filtering) and create spectrograms for sound files in a wide variety of formats (plus it's very user friendly): http://www.speech.kth.se/wavesurfer/
I suppose I'm not expecting an answer, since my post is so late and all, but can anyone explain why the power spectrum in those tutorials (and image processing in general?) is 180° rotationally symmetric? When you do (digital) spectral analyses of speech, you're typically dealing with amplitude as a function of time, so you take a 'window' (n samples) of the sound file, apply a function to reduce edge effects, then apply an FFT. From this you get a vector of complex numbers symmetrical about zero and the Nyquist frequency.
So, how does 'windowing' enter in to the formula when you process a 2D matrix of 'amplitudes' as in the FT tutorial linked to above? And why exactly does this give the center=low frequency, periphery=high frequency array it does?
I hope I answer this correctly, I know a little bit about this:
They are symmetrical because the input data is *real* but the FFT is complex. To get a *real* value back out, the transform has to be conjugate-symmetric - an even real part and an odd imaginary part - which is why the power spectrum has that 180° rotational symmetry.
Windowing is used since the FFT transform assumes you have a *periodic* waveform, and that you have sampled a finite number of cycles. Since the input signal is not really periodic, you window it to prevent false signals from showing up in the spectrum caused by the fact that the FFT tries to link the beginning and end of your input signal. Since they do not usually match up you get a big high frequency spike that is not really there.
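Here's that leakage in miniature (a NumPy sketch; the 10.37 cycles is deliberately a non-integer so the beginning and end don't match up):

```python
import numpy as np

n = 256
t = np.arange(n)
sig = np.sin(2 * np.pi * 10.37 * t / n)   # ends mid-wave: not periodic in n

raw = np.abs(np.fft.rfft(sig))
windowed = np.abs(np.fft.rfft(sig * np.hanning(n)))

# Far from the true peak near bin 10, the unwindowed spectrum is full
# of spurious energy from the mismatched ends; the Hann window kills it.
print(raw[100:].max(), windowed[100:].max())
```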
KeithB, thanks for the response, but I see now that my question wasn't clear. I'm curious about how the 2D matrix of pixel values is windowed (as opposed to how a 1D waveform is) and how that makes the pseudo-radial power spectrum (also in a 2D matrix of pixels) in the tutorial.
For a speech signal, e.g., you take consecutive windows, transform them, then display the sequence of spectral slices and you get a frequency by time display. I just haven't thought much about spatial frequency and how you would go about taking into account the 'extra' dimension, and I don't understand what PZ means when he talks about the frequency components being 'parts of' the whole image - don't they have to be 'parts of' each window?
"the FFT transform assumes you have a *periodic* waveform"
Yes, that would be because it is discretized, isn't it? So it's more like a Fourier series. (It was awhile...)
"For a speech signal, e.g., you take consecutive windows"
But for images and movies you do each image at a time. And FTs are separable:
"For images, the classical method consists in computing 1-D FT's successively, since the Fourier transform is separable. The Fourier transform is computed horizontally and then vertically on the intermediate result"
http://supportweb.cs.bham.ac.uk/documentation/visilog5/html/refguide/ch…
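Which is easy to check (a NumPy sketch on random data):

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))

two_d = np.fft.fft2(img)                   # the 2-D transform in one go
rows_then_cols = np.fft.fft(np.fft.fft(img, axis=1), axis=0)

print(np.allclose(two_d, rows_then_cols))  # True: the transform separates
```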
I wrote a program to generate fractal clouds for an image processing program. I built up a "fake" power spectrum with some random numbers in a particular relationship, and performed the inverse FFT to get the clouds.
The algorithm did all the work for me, in 2D.
8^)
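The whole trick fits in a few lines of NumPy these days (a sketch; the 1/f^2 amplitude falloff is an arbitrary choice that happens to look nicely cloudy):

```python
import numpy as np

n = 256
fy = np.fft.fftfreq(n)[:, None]           # vertical spatial frequencies
fx = np.fft.fftfreq(n)[None, :]           # horizontal spatial frequencies
radius = np.hypot(fx, fy)
radius[0, 0] = 1.0                        # dodge the divide-by-zero at DC

rng = np.random.default_rng(3)
phases = np.exp(2j * np.pi * rng.uniform(size=(n, n)))
spectrum = phases / radius ** 2           # low frequencies dominate

clouds = np.real(np.fft.ifft2(spectrum))  # the inverse FFT does the work
```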
FFTs are also an analogy for the way memory works. It is built of many different components - each with their own pathways, eg visual, olfactory and emotional content. A memory doesn't exist as a unique entity but as a combination of these various components, each of which can have some common preset values. When one component is lacking, strange effects result.
For example, with the emotional pathway damaged, people think that family members are alien clones etc. This is because they aren't getting the expected emotional tones included in their overall recalled image. So this gets interpreted as the person not possibly being the real person.
Also there's deja-vu, where a component is out of synchronisation on record and playback, making the whole seem peculiarly familiar, like a confused memory.
False memory (eg through hypnotism) fits the pattern too. It's about modifying or retrofitting components so that when the whole is called up (FFTed back), it looks as though those things were part of it all along. With a pre-direction to view something as sinister, an old memory containing that same element can suddenly seem as if it were always sinister.
My claim to geekiness:
"Occasional NOP (EA) instructions were inserted to delimit blocks and loops. Nested blocks or loops may require two or three NOPs in a row, but rarely will an assembly language program contain a four EA series." -- A 2K Assembler for the 6502, RF Denison 1979