Implicit Understanding and Inference in Language

The following is a guest post by Joshua Hartshorne at the Cognition and Language Lab.

The first scientific paper I wrote states, in the second paragraph, that "language depends on two mental capacities with distinct neurocognitive underpinnings": vocabulary and grammar. To understand "cats are mammals," all you need to know are the definitions of CATS, ARE and MAMMALS, plus the grammar involved. This was how I was trained to think about language. I knew that there were probably some other aspects of language (like phonology), but they seemed peripheral.

That worked well enough until I came across sentences like this:

Bill is tall.

There are many people named "Bill," and tall is relative both to Bill (Bill may be a tall Pygmy) and to the speaker (who may be a Pygmy). Understanding this sentence seems to require more than just dictionary definitions. It gets worse:

Abby: Would you like some coffee?
Betty: Coffee would keep me awake.

Betty is either refusing coffee or accepting coffee, depending on whether she wants to stay awake. She also assumes Abby can figure this out for herself. Now, consider:

Woman: I'm leaving you.
Man: Who is he?

This dialog is easy to understand, but a dictionary and a grammar book would get you absolutely nowhere. This is not a minor, peripheral problem either. In case you think I'm cherry-picking examples, here are the first two sentences from today's review of Rebuilt:

A popular question when trying to start a pseudo-intellectual conversation is whether you would rather lose your sense of hearing or sense of sight. Almost invariably, the answer would be to retain one's sight; it seems people are more worried about being able to drive alone than hearing a fire alarm.

Notice that the author assumes we know that "the answer" referred to in the second sentence is the answer to the question in the first sentence. Also, what does retaining sight have to do with being afraid of driving alone vs. hearing a fire alarm? We have no difficulty making these connections, but it didn't have to be that way. Try reading these two sentences (or anything else) as if you were a robot, and you will find many more non sequiturs. Even careful, patient writers leave much unsaid, yet we usually read between the lines with ease, not even noticing.

I wish this post were going to be an explanation of how we accomplish this task. I have no idea. There is some decent theoretical work on this subject, but relatively little empirical work. I have a couple of studies running now, including this 5-minute experiment you can do online (shameless self-promotion, I know).

Interestingly, Steven Pinker, whose work inspired that first study of mine (read about it here), has also begun working on the inferences that underlie language, and we are currently discussing a collaboration.

If you want to stay informed and even join in the conversation as I try to piece together this puzzle, you can read more about inference and language at my Cognition and Language blog on my laboratory website.

*Credit where credit is due: The Bill, Abby and Betty examples are adapted from Sperber & Wilson's groundbreaking Relevance. The woman & man dialog comes from Steven Pinker's The Language Instinct.


"language depends on two mental capacities with distinct neurocognitive underpinnings": vocabulary and grammar

I'm trying not to sound too disparaging (just disparaging enough :-)), but stuff like this is why I find cognitive science so frustrating. A semiotician or a linguist would never make a naive statement like that, let alone succeed in getting it published.

I understand that cognitive science brings different tools and methodologies to the table, and I think it has great potential, but it is certainly a science in its infancy. The issue, as I see it, is that if cognitive scientists ignore the work of philosophers and social scientists, they will continue to misidentify the questions that would lead to fruitful results.

(And don't even get me started on music cognition. I've yet to read of a music cognition study that didn't betray a complete misreading of the problem.)

So, HP, why don't you tell us what's so wrong with the statement? Why is it so naive? How is the question misidentified?

By Steve Higgins (not verified) on 31 Jan 2008

Not to sound too disparaging myself, but in my experience, the way sociology and philosophy get applied in this context makes it **way** too easy, imho, to become a Deepak Chopra, instead of coming up with halfway sane theories that don't rely on stuff that is either a) a complete misstatement/misunderstanding of some other field, or b) totally absurd. And people like Chopra, who rely almost entirely on sociology and philosophy, manage both in spades.

That purely mechanics-based thinkers often miss stuff obvious to those two classes of people is a given. The problem is, it's way too easy, given the often circular methodology that must be used to derive and test theories (ethics often not allowing *real* experiments, so forensic-style examination of *existing* behavior, with false projection of whatever is considered acceptable as normative, being the most common solution), to fall prey to *thinking* you know what the heck is going on, when you may simply be looking at the consequence of forcing a malleable brain into a set of suboptimal conditions, in which GIGO (garbage in = garbage out) becomes the result. Those who study the mechanics want to know how and why you get the results "and" how the brain gets *programmed* to produce those results when stuck with those "observable" existing conditions, as well as how much of it is hard-coded enough that it can't be disrupted too easily. Sociology tries to go backwards: start with output, theorize about input, and... well, that doesn't work so well without some sort of foundational model to explain what the heck the data means *out of context* of the society that generates it.

Put simply, the only way you get universal answers is by stepping as far *outside* of the environment as possible, so the number of variables collapses as close to "1" as possible. Philosophy can't do that, since it's *based* on manipulation of variables, while sociology is forced to work within the variables and, at best, merely tries to limit their impact, with varied and sometimes inadequate success.

Or, so my experience would seem to imply.

So, HP, why don't you tell us what's so wrong with the statement? Why is it so naive? How is the question misidentified?

Steve, start here. Then keep going.

Bear in mind that Peirce was building on the work of others, and that others since him in many different disciplines have built on his work. But to my mind, he is to language what Darwin is to evolution.

The problem of language is not a simple problem. I am a mere dilettante. But it seems crazy -- bordering on intellectual dishonesty -- to ignore what's come before, simply because you have newer and more rigorous tools.

If you were building a house, would you ignore the history of architecture simply because you have a power saw?

HP: It's interesting that you brought up Peirce. I've only heard of him in passing, but I understand he was the founder of semiotics. Here's Dan Sperber and Deirdre Wilson in a very influential book (Relevance) from 1986: "The recent history of semiotics has been one of simultaneous institutional success and intellectual bankruptcy."

The challenge, I think, for any scientist trying to incorporate linguistics or philosophy into their work is that so much of it is self-contradictory. Chomsky, for instance, is currently saying there's no such thing as semantics (if I understand him correctly). Tell that to Jackendoff, a semanticist. (Just to mention two of the most influential living linguists.)

You are absolutely right that a lot of time is wasted by being ignorant of what other fields are doing. Of course, linguistics and philosophy are no less guilty of this than others.