Long, long ago, during my first summer as a grad student (technically, I wasn't even a student yet), in one of my first meetings with my graduate adviser, he suggested that I think about the problem of representing negation. The problem of representing negation? That seemed like an odd suggestion. I mean, I was looking for potential research projects, and negation, being so common in everyday speaking and thinking, seemed like an issue that would have been researched to death, leaving little for me to do with it. But as the grad student saying goes, ours is not to question why, at least not until we've officially registered for classes, so I asked for some references and started reading up on negation. It turns out I was partially right -- a ton of work has been done on negation. I was wrong, however, to think that there was nothing left for me to do. The more I read, the clearer it became that we really had no idea how negations are represented.
As you might expect given the ubiquity of negation in everyday speech, research on the topic was in full force at the very beginning of the cognitive revolution. Early researchers found that when asked to verify the truth of a sentence relative to a picture or their own background knowledge, sentences containing a negation took longer to verify than sentences that didn't(1). This is true regardless of whether the negation is explicit (e.g., "Bob was not at the party") or implicit (e.g., "Bob was absent from the party")(2), and it's true when you contrast the sentence with negation (e.g., "Three is not an even number") with a positive sentence that it implies (e.g., "Three is an odd number")(3). They also found that negated information is more difficult to remember than affirmed information(4). It seems that no matter how you look at it, negations are more difficult to process than affirmations. But why?
The problem appears to lie with how we have to represent negation, that is, how we represent an entity or condition as being absent. If you represent the situation without the negated entity or condition, e.g., if you represent the party without Bob, then you're not representing "Bob wasn't at the party," but just the party. Or if you are representing Bob as absent from the party, you're also representing everything else that wasn't at the party as absent. You, for example, weren't at the party (I know, 'cause I was there, and I didn't see you). So when you represent one negation this way, you represent all possible negations. That's not very effective. If, on the other hand, you represent Bob at the party, then, well, Bob's at the party, and he's therefore not not at the party, and we've made no progress. So for negation, it seems that you need two representations: the affirmative one (Bob at the party) and the negative (the party without Bob, which was much better that way, trust me). As Bertrand Russell put it, "When I say truly 'this is not blue', there is, on the subjective side, consideration of 'this is blue', followed by a rejection" (5). But how would that work?
Or does it work that way? Others have suggested that you don't need to represent an affirmative to represent a negation. For example, we could just represent Bob's absence from the party by taking the affirmative version ("Bob was at the party") and then appending some marker that indicates that the affirmative version is false. This would mean that we represent negation generally, and never have to represent specific negations explicitly. In other words, we just need one general negative representation, the representation of "false," and anytime we append that to an affirmative representation, we know that whatever it involves has been negated. How do we represent "false"? I have no idea, but if negation is so hard, only one is better than many, right?
I wish I could tell you that my reading all those years ago led to a breakthrough on negation of my own, which I could then blog about, but alas, the negation literature got me interested in figuring out how we represent counterfactuals (something else we don't know), and I've been obsessed with that ever since. But I check in on the negation literature now and then, and a recent paper by Hasson and Glucksberg(6), in a special issue on negation no less, takes us a pretty large step forward in our attempts to understand the problem. At the very least, it provides pretty strong evidence that allows us to distinguish between the two-representation (the negation and the corresponding affirmation) and one-representation (just the affirmation with the "false" marker) views, and it gives us some sense of the time course of representing negation.
The experiment Hasson and Glucksberg describe is pretty ingenious. They use a lexical decision task, which just involves presenting participants with a string of letters and asking them to indicate whether the string is a word. This is a common task that's often used to measure priming. If you present people with a word or phrase or picture or whatever it is you're using as a prime, and then give them a letter string that represents a word related to your prime, they'll be able to verify that it's a word faster. For their experiment, Hasson and Glucksberg used corresponding affirmative and negative statements. More specifically, they used corresponding metaphorical affirmative and negative statements. Why metaphorical? Well, because the meaning of a metaphorical statement isn't necessarily directly related to the meanings of the words in the statement. This allows you to be sure that the priming is a result of the statement itself, and not just particular words within it.
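The priming logic here can be sketched with a toy simulation. Everything below (the 600 ms baseline, the 20 ms priming effect, the noise level) is invented for illustration; it just shows the shape of the effect the lexical decision task is designed to detect.

```python
import random

def simulated_decision_rt(prime_related, base_rt=600.0, priming=20.0, noise=15.0):
    """Toy lexical-decision trial: return a simulated reaction time (ms)
    for verifying that the target string is a word. Targets related to
    the prime are answered ~`priming` ms faster. All numbers invented."""
    rt = random.gauss(base_rt, noise)
    if prime_related:
        rt -= priming
    return rt

random.seed(0)
related = [simulated_decision_rt(True) for _ in range(2000)]
unrelated = [simulated_decision_rt(False) for _ in range(2000)]
effect = sum(unrelated) / len(unrelated) - sum(related) / len(related)
print(f"priming effect: {effect:.1f} ms")  # close to the built-in 20 ms
```

Comparing mean reaction times for related versus unrelated primes recovers the built-in effect, which is exactly the comparison the real experiment makes with its baseline condition.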
So, for example, they gave participants the metaphor "This kindergarten is a zoo," or its corresponding negation, "This kindergarten isn't a zoo," on a computer screen. Participants indicated that they'd read the sentence by pressing the space bar, and after a short delay, they were presented with the target word and asked to indicate whether it was, in fact, a word. The targets were words that were consistent with either the affirmative or negative version of the metaphorical statement, or were irrelevant to it. Thus, after reading "The kindergarten is/isn't a zoo," they might have to determine whether "calm" or "noisy" is a word (they'd see only one of those after reading the statement). "Calm" would be consistent with the negation ("The kindergarten isn't a zoo"), while "noisy" would be consistent with the affirmation ("The kindergarten is a zoo").
The key manipulation, for this experiment, was the length of the interval between the metaphorical statement and the presentation of the target word. They used three different delays: 150 ms, 500 ms, and 1000 ms. Why the different delays? Well, one way to tease out two different representations, as in the two-representation (affirmation and negation) version of negation, is to look at whether they become active at different times. Based on previous research, Hasson and Glucksberg hypothesized that when people read the negative metaphorical statements, they would initially represent the "counterfactual" scenario (that is, the affirmative version of the statement), and only later represent the "factual" scenario (the negated version).
So they predicted that if a participant read a negation ("The kindergarten isn't a zoo"), then after only a 150 ms delay he or she would only have represented the counterfactual (i.e., affirmative) version of the scenario, and thus would be faster at verifying target words implied by that version (e.g., "noisy") relative to the baseline for that word, but not words implied by the factual (i.e., negative) version (e.g., "calm"). As time passed, however, the participant would represent the factual version of the scenario, which would serve as a prime for words associated with that version ("calm"). Because the factual version would eventually replace the counterfactual version (once you have the negation nicely represented, you don't need the affirmative version anymore), after longer delays, the negative metaphorical statements would no longer prime the words associated with the affirmation ("noisy").
To make this more explicit, let's look at the results for the affirmation condition. When participants read metaphorical statements like "The kindergarten is a zoo," they were faster at verifying affirmative target words (e.g., "noisy") relative to the baseline for those words (the same words used with unrelated metaphorical statements) regardless of the delay. Thus, the affirmative metaphorical statements primed affirmative target words. In this condition, however, participants were actually slower to verify the negative words relative to the baseline for those words, regardless of the delay. Thus, representing the affirmative version of the statement actually hurt the processing of the negative target words.
The picture was different for the negation condition, though. When participants read statements like "The kindergarten isn't a zoo," the results looked like this (from their Figure 2, p. 1022):
The y-axis in this graph represents "facilitation," which is the difference between the baseline condition and the negation condition. A positive score on this axis means that verification times for the target words were faster by that amount at the delay represented on the x-axis. So the figure shows that at delays of 150 and 500 ms, the affirmative target words were facilitated (by about 20 ms, relative to their baseline), while verification of the negative target words was actually slower, relative to their baseline. This is the same pattern they found in the affirmation condition at all three delays. This implies that at least up to 500 ms, the representation of "The kindergarten isn't a zoo" is the same as the representation of "The kindergarten is a zoo." In other words, only the affirmative version was represented up to that point. At 1000 ms, however, the verification times for the affirmative words dropped to the baseline (0 on the y-axis), and verification times for the negative targets were facilitated (i.e., faster than the baseline). At 1000 ms, then, the participants were representing the negative version and not the counterfactual affirmative version.
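To make the facilitation measure concrete, here's a minimal sketch. The reaction times are made up to mirror the pattern just described (they are not the paper's data); the point is just the arithmetic: facilitation is baseline RT minus primed RT, so positive values mean priming and negative values mean interference.

```python
# Illustrative only: RTs (in ms) invented to mirror the described pattern,
# not Hasson & Glucksberg's actual data.

def facilitation(baseline_rt, primed_rt):
    """Facilitation = baseline RT - primed RT.
    Positive -> the statement sped up word verification (priming).
    Negative -> the statement slowed it down (interference)."""
    return baseline_rt - primed_rt

# Hypothetical mean RTs after reading "The kindergarten isn't a zoo"
negation_condition = {
    # delay_ms: {target word: RT}
    150:  {"noisy": 580, "calm": 615},
    500:  {"noisy": 580, "calm": 615},
    1000: {"noisy": 600, "calm": 580},
}
baseline = {"noisy": 600, "calm": 600}  # RTs after unrelated statements

for delay, rts in negation_condition.items():
    for word, rt in rts.items():
        print(f"{delay} ms, {word!r}: {facilitation(baseline[word], rt):+d} ms")
```

With these invented numbers, the affirmative target ("noisy") shows +20 ms facilitation at the short delays while the negative target ("calm") shows interference, and the pattern flips by 1000 ms, which is the crossover the figure depicts.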
In the course of analyzing the data, Hasson and Glucksberg apparently realized that some of the metaphors they used might be taken as being meant ironically in their affirmative versions. So they had a separate set of participants rate how ironic the different metaphors were, and then using only the low-irony metaphors, they again looked at the facilitation pattern for negative metaphorical statements. As in the previous analysis, only the affirmative target words were facilitated at 150 and 500 ms, while the verification times for the negative targets were actually slower than the baseline. At 1000 ms, however, the facilitation for the negative targets was 40 ms (twice what it is in the above graph).
What do these results mean? Two things: First, they suggest that people really are representing negations using both the affirmative ("Bob was at the party") and negative ("Bob wasn't at the party"), contrary to the single-version theory (i.e., "Bob was at the party" = False). Second, at some point between 500 and 1000 ms, the affirmative version passes the baton to the negative version, and ceases to be active itself. At that point, then, the negation is represented only with the negative version of the scenario.
Though they don't discuss it, Hasson and Glucksberg's results actually suggest a reason for the difficulty in processing negation observed in the sentence verification tasks mentioned at the beginning of this post. Way back in the mid-60s, Wason (yes, that Wason) noted that when people hear negations in everyday speech, they usually hear them in a context that includes discussion of the corresponding affirmative scenario(7). He thus argued that the reason people have trouble processing negation in sentence verification tasks is that the negations are presented to them without that context. Hasson and Glucksberg's results support this argument. If, in order to represent a negation, we first have to represent its corresponding affirmation, and only after doing so (much later, in processing terms) can we represent its negation, then if the context in which the negation is presented doesn't supply the affirmation, we have to work from the negation (which can suggest several mutually exclusive affirmative versions, sans context) to sort out the appropriate affirmation to represent. Naturally, this would make things much more difficult.
1E.g., Wason, P. C. (1961). Response to affirmative and negative binary statements. British Journal of Psychology, 52, 133-142; Clark, H. H., & Chase, W. G. (1972). On the process of comparing sentences against pictures. Cognitive Psychology, 3, 472-517.
2Just, M. A., & Carpenter, P. A. (1971). Comprehension of negation with quantification. Journal of Verbal Learning and Verbal Behavior, 10, 244-253. Cited in Kaup, B., Zwaan, R. A., & Lüdtke, J. (In Press). The experiential view of language comprehension: How is negated text information represented? To appear in F. Schmalhofer & C.A. Perfetti (Eds.), Higher Level Language Processes In the Brain: Inference and Comprehension Processes. Mahwah, NJ: Erlbaum.
4Cornish, E.R., & Wason, P.C. (1970). The recall of affirmative and negative sentences in an incidental learning task. The Quarterly Journal of Experimental Psychology, 22, 109-114.
5As quoted in Hasson, U., & Glucksberg, S. (2006). Does negation entail affirmation? The case of negated metaphors. Journal of Pragmatics, 38, 1015-1032.
6Hasson, U., & Glucksberg, S. (2006). Does negation entail affirmation? The case of negated metaphors. Journal of Pragmatics, 38, 1015-1032.
7Wason, P.C., (1965), The contexts of plausible denial. Journal of Verbal Learning and Verbal Behavior, 4, 7-11.
That reminds me of the "focus and topic" literature -- things like, "JOHN ate the beans" (with a funny intonational contour on "John"). One idea is that sentences like this evoke a list of alternative scenarios (e.g., "Sally ate the beans," "Barry ate the beans," ...), and single out just one from this list.
In real life, we hear these in context -- maybe the person is unsure about who exactly ate the beans, and only two or three people are under consideration. If presented in a lab context, it might be harder to represent the "JOHN ate the beans" case, which you could test in a similar way to that described.
Maybe something like "I'LL do it" vs. a typical "I'll do it." Then do a decision task for words that the 1st would prime (frustration, laziness, etc.), and that the 2nd would prime (cheerfulness, agreeableness, etc.). To be clear, the 1st is something you'd say when a group of people need to get some drudgery done, they all stare at each other, and one finally gives in and says, "All right fine, I guess I'LL do it."
Chris, great coverage, but I have one question...
How could SOAs be used here? Do people even reach (within 150 ms) the critical word "zoo" before the LDT target appears?
Michael, I think I said it somewhere in that mess of a post, but I should have made it clearer: the delay started when they pressed the space bar indicating that they'd read the metaphor. They could take as long as they wanted to read the metaphor, including the critical word. So the delay is between their reading the metaphor and the onset of the LDT, which they then have to complete as fast as they can. So any priming that's going to be done will be done with whatever representations they're working with at the start of the LDT.
Ahh, my bad for missing it - I see it now. But still, it doesn't seem to me to be an SOA, so that's why I was confused. Nevertheless, very nice way of getting at the issue on the authors' part.
Wouldn't the variable amount of time allowed for reading the sentence cause a serious confound? What if one of the subjects spent more time reading the negated sentences than the affirmative sentences, to accommodate the required additional processing? While this seems like it could only decrease the strength of their results by distributing effects associated with a single processing duration over many SOA bins, it's still a bit troubling. Why not use a presentation technique with better temporal control, like a recording of spoken statements and an SOA tied to the end of the recording?
Interesting post, Chris. I have a question: Has any work been done on how people represent the negation of necessarily false statements? For example, it may happen that we wish to prove something by a reductio ad absurdum, which is to assume that some statement p is true, show that it entails a contradiction or other impossibility, and conclude on that basis that p is (necessarily) false. But it would seem that we can't form a full-fledged representation of a contradictory state of affairs; if we could, on what basis would we come to reject it as impossible? So assuming that attempts to represent contradictory states of affairs break down at some point, it seems the breakdown would cause us to mark that state of affairs as impossible and hence to represent its negation as being the case. But if that is so it seems to show that we can, at least in some cases, represent that the negation of a state of affairs is the case without being able to (fully) represent the state of affairs itself.
Jason, that's a good question. There is a lot of research on how we process true and false assertions, though I'm not sure there is any research specifically on necessarily false statements. I'll have to look it up.
Hi Jason, if I understand correctly, your question also applies to non-negated statements and can be asked as, 'how do people represent a statement whose falsity is necessary'. You seem to extend this to negation (e.g., a statement such as "a person cannot be in two places at the same time"), but, if I understand correctly, you are asking how the affirmation "a person can be in two places at the same time" can be represented to begin with. Am I understanding this correctly? If so, I have some thoughts/refs on this issue.
I'll just say for now that not being able to represent the affirmation may not necessarily be taken as meta-cognitive information attesting to the truth of the statement's negation (some, like Recanati, have argued that people can believe statements that they cannot fully represent, and I think there's quite a bit in that approach). So, a cognitive model saying that people believe "not(x)" when they cannot represent x may not be a good model of comprehension / belief.
Hi Oori, yeah, I think you understood me correctly. I have a question, though: In your last line you say
"...a cognitive model saying that people believe "not(x)" when they cannot represent x may not be a good model of comprehension / belief."
Are you saying that if people cannot represent x, then they can't believe its negation? Or are you saying perhaps that if people cannot represent x, x cannot properly be said to have a negation? Or something else...?
I meant to say that one could think that if people cannot represent the affirmation of a statement [e.g., "an object can be at two places at the same time"], they would consider it false. This account sounds plausible when thinking about the social cognition literature (e.g., Norbert Schwarz's work) because the experience of not understanding a statement could be taken as meta-cognitive information suggesting the statement is false. So on this approach, if people cannot represent x, they would be inclined to believe its negation. However, when thinking about how people learn new information, it seems people do and can believe statements they don't understand (by taking an expert's word for it). And it has been argued that people can believe propositions they don't completely understand. What does this mean for statements that are necessarily false? I guess if you know that a statement is false, say, in virtue of its logical form (x and not-x), you will know its negation is true, but at no time will you construct a model corresponding to the affirmation of the statement. Perhaps in these cases people do tag the embedded proposition with a 'tag' such as FALSE(p).
I wouldn't say I'm not ignorant of how negation is represented (ha!), but here's an interesting (and ancient) piece of data. The N400 EEG component (an ERP/brain wave) is recorded when semantic violations are encountered, as in "I like my coffee with cream and dog" or "Fish are terrestrial." You would not see much of an N400 to "I like my coffee with cream and sugar" or "Fish are aquatic." In an old experiment by Fischler et al. from 1983, participants read sentences one word at a time and performed a sentence verification task. The N400 to the sentence-final word was measured between 250-450 ms post-stimulus. I'll let the authors summarize their results in the abstract below. I believe their findings are relevant here, no?
Fischler I, Bloom PA, Childers DG, Roucos SE, Perry NW Jr. (1983). Brain potentials related to stages of sentence verification. Psychophysiology 20(4):400-9.
Subjects were shown the terms of simple sentences in sequence (e.g., "A sparrow / is not / a vehicle") and manually indicated whether the sentence was true or false. When the sentence form was affirmative (i.e., "X is a Y"), false sentences produced scalp potentials that were significantly more negative than those for true sentences, in the region of about 250 to 450 msec following presentation of the sentence object. In contrast, when the sentence form was negative (i.e., "X is not a Y"), it was the true statements that were associated with the ERP negativity. Since both the false-affirmative and the true-negative sentences consist of "mismatched" subject and object terms (e.g., sparrow / vehicle), it was concluded that the negativity in the potentials reflected a semantic mismatch between terms at a preliminary stage of sentence comprehension, rather than the falseness of the sentence taken as a whole. Similarities between the present effects of semantic mismatches and the N400 associated with incongruous sentences (Kutas and Hillyard, 1980) are discussed. The pattern of response latencies and of ERPs taken together supported a model of sentence comprehension in which negatives are dealt with only after the proposition to be negated is understood.
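The mismatch account in that abstract can be restated as a small sketch (the sentences and boolean coding below are my paraphrase of the design, not taken from the paper): the predicted negativity tracks whether the subject and object terms mismatch, not whether the whole sentence is true.

```python
# Pattern from Fischler et al. (1983), as summarized in the abstract:
# an N400-like negativity follows mismatched subject/object terms,
# regardless of the truth of the whole (possibly negated) sentence.
cases = [
    # (sentence,                      form,          true?, terms match?)
    ("A sparrow is a bird",           "affirmative", True,  True),
    ("A sparrow is a vehicle",        "affirmative", False, False),
    ("A sparrow is not a bird",       "negative",    False, True),
    ("A sparrow is not a vehicle",    "negative",    True,  False),
]

for sentence, form, is_true, terms_match in cases:
    n400_expected = not terms_match   # mismatch, not falsity, drives the N400
    print(f"{sentence!r} ({form}, true={is_true}): N400 = {n400_expected}")
```

Note how the negativity lands on the false affirmative and the *true* negative, the two mismatch cases, which is why the authors conclude the negation is handled only after the to-be-negated proposition is understood.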
Quite relevant! I've never read that paper. I'll have to check it out now.
Interesting ERP study. Couldn't it be the case that the entire effect is driven by lexical priming? If the N400 is loading on the ease of processing the sentence-final word (with the negation considered only later), then "sparrow is / isn't a bird" [True-Affirmative, False-Negative] would generate a weaker N400 due to lexical activations. Why is there a need to assume that a sentential / propositional representation was constructed at all to account for the N400 effect? The last line in the abstract could say: "supported a model of comprehension in which negatives are dealt with only after semantic-based activation has been completed". Note that McKoon and Ratcliff have done much work on exactly this sort of lexical-based priming in early stages of comprehension, before the relational aspects (predicate directionality) kick in.