On Definitions: They Matter

Most of my favorite long-standing discussions with friends and family tend to revolve around definitions. My good friend Paul and I have had hours upon hours of discussion about the nature of the universe - he calls his perception of the order of the universe "god," and I call myself an atheist, though in practical terms I don't think our beliefs are really that far apart. He says his definition allows him to engage with religious people; I say it just causes confusion.

In my first year of graduate school, I used to argue endlessly with my classmates that viruses are alive. I did this not because I think viruses are alive (though I do), but because I find it fascinating the way we humans give an arbitrary definition to a word, "life," and then dogmatically exclude anything that doesn't fit that definition. The real answer isn't that simple: viruses aren't the same as cellular organisms, but they're also not the same as unambiguously non-living things. Richard Dawkins (from A Devil's Chaplain):

This way of thinking characterises what I want to call the discontinuous mind. We would all agree that a six-foot woman is tall, and a five-foot woman is not. Words like 'tall' and 'short' tempt us to force the world into qualitative classes, but this doesn't mean that the world really is discontinuously distributed. Were you to tell me that a woman is five feet nine inches tall, and ask me to decide whether she should therefore be called tall or not, I'd shrug and say 'She's five foot nine, doesn't that tell you what you need to know?' But the discontinuous mind, to caricature it a little, would go to court (probably at great expense) to decide whether the woman was tall or short.

The use of language forces us to place boundaries on things. In order for words to have meaning, they must necessarily encompass some concepts and not others. Most of the time, it doesn't really matter - arguments about whether a couch is a chair or a machine gun is a robot can be entertaining, but they're not necessarily important or enlightening.

But this is a post about immunology. Enter NK cells:

NK stands for "Natural Killer." They kill stuff, naturally. They look kinda like cytotoxic T-cells, and they kill cells in the same way (by punching holes in their membranes and injecting proteins that force them to commit suicide), but they don't need to be primed by an antigen-presenting cell. Once they develop, they wander around inspecting cells to make sure they're behaving, and if they look screwy, the NK cell bumps them off.

Immunologists like to classify the cells of the immune system as either "innate" or "adaptive," and up until the last year or two, NK cells were unambiguously considered innate. The innate system is usually defined as those cells that respond immediately to infection, recognize non-specific signatures of pathogens, and respond the same way to each subsequent infection. The cells of the adaptive immune system (basically T cells and B cells), on the other hand, have a delayed response (they have to be activated by the innate immune system and then divide), recognize very specific determinants of very specific pathogens, and generate a memory response that is stronger the next time that specific pathogen rears its head.

But in last month's Nature Immunology, there's a paper showing pretty conclusively that NK cells have features that look an awful lot like memory. But is it really memory? If so, does that mean that NK cells are really adaptive? Does it mean other innate cells could have memory too? If you're an immunologist trapped in the discontinuous mind, these questions are really hard to answer. But I think if you realize that "innate immunity" and "adaptive immunity," as well as "immune memory," have definitions assigned by humans, it's a lot easier to swallow.

I don't claim to be immune from this. In my first year of grad school, we discussed the precursor to this paper, and the argument (in which I played no small role) got pretty heated. My position (for that previous paper) was that they hadn't conclusively demonstrated memory, and that even if they had, it wouldn't mean NK cells are adaptive. Looking back, I think my first point was correct, but I also think I got more worked up because I was stuck on a solid boundary between innate and adaptive immunity, and this paper was challenging it.

Definitions matter, but not for their own sake. They allow us to categorize, improving our understanding and our ability to communicate, but we would be remiss to give those definitions power over our ideas. The world is what it is, and if we close our minds around a word and refuse to let our definitions wander, we may miss out on the truth.

I'll leave you with some great lines from the News and Views on this article:

There are many examples in immunology of the assignment of names before function is thoroughly understood. As a result, the words or their definitions can get in the way of the field's ability to incorporate radical new information[...]

The characterization of antigen-specific memory NK cells is important, but the experimental results have difficulty both fitting into old definitions and conclusively showing new major adaptive functions for the cells. The observations support extension of the borders of the definitions of innate and adaptive immunity, but the evidence is not yet sufficient to justify discarding the existing terminology. The main defined functions for NK cells in the context of the fullness of complete immune responses are still innate, with innate immunoregulatory effects on adaptive responses[...] Clearly, much remains to be learned, but for now it is important for immunologists to remember that "There are more things in heaven and earth...than are dreamt of in your philosophy."

Perhaps you should revise your self-description of being "...genetically incapable of growing facial hair" to reflect your new paradigm? That's much too absolute!!

[comment deleted, this is getting old - KB]

There is a similar issue in many other fields:
1) evolution: at what point does an ancestral population, sufficiently isolated from modern populations by the fortune of being long dead, constitute a distinct species from its direct descendants?
2) genetics: what constitutes a gene?
3) also genetics: how does one assign the label of "functional" or "nonfunctional" to sequences which have noticeable effects due to length rather than sequence?
4) general philosophy: at what point would we say a mass of organized biochemicals is alive? Apply those same criteria to computer software... now to computer viruses... now to a computer virus targeted at the RepRap prototyping machine. (Just for fun.)

Phil Plait wrote a great article in this month's Discover Magazine that also touched on this subject; it's definitely worth reading. He talked about how scientists really work in concepts rather than definitions: everyone has a concept of a planet, but we run into problems when we try to define a planet. I think the same applies to life, innate immunity, adaptive immunity, and everything Jared listed above.

I disagree with your penultimate line. Granted, I come from a background that included a lot of mathematics, logic, and axiomatic systems.

> The world is what it is, and if we close our minds around a word and refuse to let our definitions wander, we may miss out on the truth.

I think this is not a failure to let definitions wander, but a failure to use definitions correctly - in fact, a failure caused by letting definitions wander. It probably comes down to several things: many people have internal definitions that don't agree with the agreed-upon ones; people often try to blur lines rather than accept that dichotomies are a consequence of using definitions; and subsequent discoveries make people regret the way they formulated a definition. Any missing out on the truth that occurs is due to inconsistencies in our heads.

Sure, by one definition of "life", viruses don't "live" - but that's only problematic if you have a concept of "live" that exceeds what "life" implies - which doesn't happen if you use definitions rigidly, in axiomatic fashion.

If someone says, "what about viruses?" and they don't fit the definition, you can simply add a new word to describe life+viruses. "Oh, you are referring to vlife! That's completely different. Sure, viruses vlive, as do we. They just don't live, as you can see by the definition."

Now, perhaps original definitions can be poorly made, and maybe there are times when one should amend them, but experiencing discomfort because you feel a virus really is "alive" when the definition says it isn't is silly. Words only have meaning so long as they have definitions, and it is because we think of "not alive" things as being inert that we suffer this issue. So throw out your preconceptions. Being not alive doesn't mean "dead" or "inert", it simply means failing to have the properties required to be "alive". You can still be "vlive", or "proto-living," or whatever word you define that has the appropriate definition. The hangup is in our heads, not in the ability of definitions to serve us well.
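A minimal sketch of that idea in code, if it helps (the criteria are invented for illustration, not any real biological test): the strict word and the broader word simply get separate definitions, so neither needs to be stretched.

def is_alive(entity) -> bool:
    # "Life" in the strict, cellular sense (made-up criteria).
    return entity.get("cellular", False) and entity.get("metabolizes", False)

def is_vlive(entity) -> bool:
    # "Vlife": everything that is alive, plus viruses.
    return is_alive(entity) or entity.get("replicates_with_host", False)

virus = {"cellular": False, "metabolizes": False, "replicates_with_host": True}
print(is_alive(virus))  # False - fails the strict definition
print(is_vlive(virus))  # True - fits the broader, separately named one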

By Epinephrine (not verified) on 23 Nov 2010 #permalink

I actually think that 'living' is a poor example, because it has an "intuitive" meaning. That is, it maps poorly onto highly precise jargon thinking because it already exists in the sloppy thinking space of most words.

"Gene" is more analogous to the NK cell thing. It *started* with a concept there must be a thing driving these inheritance events. When we found these amazing bits of nucleic acid, suddenly made the thing made sense... yet what we learned doesn't map *perfectly* to the conceptual setup we'd get from Mendel. Genes in the Mendelian sense can exist outside of coding sequences, outside of primary sequence (e.g. in epigenetic methylation marks), and perhaps even outside of nucleic acid alltogether, although that's pretty speculative.

If your assessment is correct, Kevin, we identified the *function* of all-purpose cells responsible for killing before we identified the *substance* of NK cells.
I had thought that it was more like so many other cell types in immunology, where we identified a general group with a function (substance and function being defined together), and then promptly went about dividing it down into more and more subgroups with specialized functions. So the later functions may map very poorly onto the conceptual framework created with the first grouping (Tregs, I'm looking at you...).

I think what we're dealing with is very similar to the gene thing - the first thing was a concept: that you have different defenses before any exposure to a pathogen than you have after you've been previously exposed. THEN we set about finding a material reality that could support that concept (e.g. NK cells vs. cytotoxic T cells). THEN, when the material reality didn't map completely neatly onto our concepts, sometimes we had trouble with that. To me, it's unremarkable - 'of COURSE NK cells are different after you've been exposed to some pathogens' - probably so are macrophages and all other cells. The problem was thinking of the two systems - innate and adaptive - as modular or discontinuous.

@ Steve - that's a whole 'nother post. Maybe someday.

@ Sam C - Probably.

@ Jared - Yup.

@ JM - I will definitely check that out on the plane ride home.

@ Epinephrine - I don't disagree, but the trouble is that actual living systems rarely adhere to the logical, axiomatic frameworks that we'd like to impose on them. Becca's last line is particularly astute, "The problem was thinking of [insert any natural system] as modular or discontinuous."

Don't get me wrong, axiomatic frameworks can be incredibly helpful (and I would even argue necessary) to understanding such complex systems, but we need to understand that they are merely approximations. When you set up such a framework with an incomplete understanding of the system, and then attempt to adhere to that framework even as new things are discovered that challenge your paradigm, the definitions actually get in the way of understanding. THAT's what I was trying to get at with that last statement.

We do need to be precise with definitions. At the beginning, I mentioned that I think my friend Paul's definition of god actually dilutes understanding (incidentally, he also has a much more expansive definition of "life," arguing that stars are alive). Perhaps it's more that I think we need to be more willing to modify our precise definitions as new information becomes available. Do you think you could get behind that statement?

@ Becca - Yes.

To your second point: the tendency of immunologists to define cell types, and then subtypes, and then sub-subtypes (almost always based on surface markers), is a legacy of the way the field grew in the beginning and is one of my biggest scientific pet peeves. Again, it's mostly because that dogmatic adherence to the definition of "X" can make people so resistant to novel information.

Great -- finally a blog about my scientific pet peeve!
(However, I come to this from a background in physics.)

Of course: We do need to be precise with definitions.
Therefore, strictly, definitions should be built up from a foundation of unambiguous, universally comprehensible terms such as the basic distinction of "same" vs. "not same" (where "same" is not the same as "equal (by some particular measure)", mind you);
or "memory (of observations made)" vs. "present observation(s)" vs. "further expectations".

Kevin wrote (November 23, 2010 1:39 PM):
> I think we need to be more willing to modify our precise definitions as new information becomes available. Do you think you could get behind that statement?

Not strictly. The hurdle is how the supposed "new information" ought to be obtained at all, as a matter of principle.

This is perhaps a more immediate problem in physics than in immunology; after all, for your example you already had a definition and precedent for "information corresponding to an immune system with memory" in connection with the model of the adaptive immune system. In physics, the question "What do you mean by ... (i.e., How should we measure whether ...)?" comes up routinely. Such definitions need to be set and can be discussed before they are used (possibly for the first time) to extract information (measurement values) from given observational data.

Therefore I'd find the following statement agreeable:
We should consider all precise definitions by which given data may be evaluated; and we should be scrupulous in distinguishing them (as well as their respective result values) by naming them distinctly.

By Frank Wappler (not verified) on 29 Nov 2010 #permalink

"The problem was thinking of [insert any natural system] as modular or discontinuous."

Well, definitions need to be precise to be useful, or at least to facilitate communication about complex topics. I agree that for the most part, things are pretty much continuous in the world - this doesn't make definitions that rely on setting boundaries useless.

> When you set up such a framework with an incomplete understanding of the system, and then attempt to adhere to that framework even as new things are discovered that challenge your paradigm, the definitions actually get in the way of understanding. THAT's what I was trying to get at with that last statement.

I think I appreciate what you are getting at - and I think that many (if not all!) people suffer from this effect. We fail to recognise that we are constrained at times by our definitions - but there are two solutions to this. One involves a constant reshuffling of definitions, to attempt to align the word with the definition we think we want for it. The other involves accepting that the word we wanted to use doesn't exactly fit, and coming up with a new word to suit the phenomenon in question. It's more natural for us to shift the arbitrary lines of our definitions to try to have them fit our internal definitions (without even necessarily knowing what our internal definitions are), but this results in tremendous confusion over what a word actually means, and a huge hangup when the word doesn't match our definitions.

> Perhaps it's more that I think we need to be more willing to modify our precise definitions as new information becomes available. Do you think you could get behind that statement?

Maybe, but only because of the trouble with having too many words in a language for ease of use. The only reason I think that approach has validity is that it is in some ways more practical. Better (in my mind) is to have distinct words for the various definitions.

By Epinephrine (not verified) on 01 Dec 2010 #permalink

@ Epinephrine - Oh good, I think we're approaching consensus; I don't disagree with anything you said (or at least, anything I think you said :-).

I guess the real trouble that I'm trying to get at (and I don't think I ever articulated it this clearly, even in my own mind) is that we need to be able to recognize when our definitions no longer accurately reflect the world that we're trying to describe. I agree completely that sometimes the best solution is a new word, but that (as you said) sometimes it's impractical, especially if the change in understanding is small and incremental. In those cases, we need to modify the definition, but do it in a clear and transparent way such that everyone paying attention can get on board with the new definition.

Kevin wrote (December 1, 2010 10:07 AM):
> we need to be able to recognize when our definitions no longer accurately reflect the world that we're trying to describe.
The more familiar and precise terminology for expressing that some particular definition "accurately reflects the world that we're trying to describe" and some other definition doesn't is to say that
the world that we're trying to describe (or some particular subset of it) is an eigenstate to that one definition (but not the other), acquiring some particular real or Boolean value when being evaluated according to that one definition (but not the other).

Therefore, as (the observational data of) the world that we're trying to describe grows and becomes more intricate (such as with data concerning "training of NK cells"), additional definitions can become useful while legacy definitions may fail to evaluate much of the more recent data.

Definitions that would even be named by a single word (much less thereby expressed comprehensibly) are meanwhile obviously an exception. All the more important is documentation, so that everyone trying to pay attention can actually follow.

By Frank Wappler (not verified) on 01 Dec 2010 #permalink

You were on a good track, then a couple of axiomatists show up and you agree with them.

Math guy: look up the history of the "definition" of "function." And are you certain the concepts ("definitions") of number and real number are finished? Surreal numbers anyone?

Physics guy: what is space? energy? mass? Space isn't what it used to be, but what is it? What will it be next year after running the supercollider for a while?

Face it, you work with concepts, as the Bad Astronomer said. And yes, of course I know that within an axiomatic system, in order to prove anything or even make good sense you have to stick rigorously to the definitions (which are really just abbreviations). Oops! It all goes back to undefined terms. Scandal! Or concepts?

By Pete Dunkelberg (not verified) on 01 Dec 2010 #permalink

> The more familiar and precise terminology for expressing that some particular definition "accurately reflects the world that we're trying to describe" and some other definition doesn't is to say that
> the world that we're trying to describe (or some particular subset of it) is an eigenstate to that one definition (but not the other), acquiring some particular real or Boolean value when being evaluated according to that one definition (but not the other).

Yeah I hear that at parties all the time.

By Pete Dunkelberg (not verified) on 01 Dec 2010 #permalink

Pete Dunkelberg wrote (December 2, 2010 12:58 AM):
> Yeah I hear that at parties all the time.
You spend parties with guys discussing science blogging and their research??

Here we were on a pretty straight track, now concept chump shows up smelling undefined terms.

But the expressed complaint notwithstanding, your post itself betrays a sense of familiarity with the axiomatically presumed terms (or concepts), namely

> [Either ...]! Or [...]?
i.e. the concept of making distinctions, and

> You were [...], then [...] and [then] you [...].
i.e. the concept of successive ordering.

Moral (which should be agreeable from the outset, and be reached independently):
terms or concepts that are necessary for asking questions or for raising complaints can safely be presumed.

Who would be party to presuming more than that?

By Frank Wappler (not verified) on 01 Dec 2010 #permalink

@ Frank

> The more familiar and precise terminology for expressing that some particular definition "accurately reflects the world that we're trying to describe" and some other definition doesn't is to say that
> the world that we're trying to describe (or some particular subset of it) is an eigenstate to that one definition (but not the other), acquiring some particular real or Boolean value when being evaluated according to that one definition (but not the other).

Do you really think that this is more familiar terminology? I have no idea what this means. Please explain.

@ Pete - I'm "agreeing" as a way of getting them to lower their guard, and then forcing them to come around to my side with my sweet demeanor. And now you've ruined it! They're on to me <.< /sarcasm

@Pete

Assuming I'm "Math guy," I'm not claiming that all concept are finished, nor that mathematicians haven't redefined things over time.

When it comes to numbers, yes, the definition of a number has certainly changed. Sure, sometimes we change definitions. It's practical, and I already admitted that - but we'd have been no worse off if we had simply developed new words for each type of number as they came up, and it could solve problems. When told to select a number between 1 and 10, one couldn't confuse people by saying "pi." If "number" meant "integer," and we had a different word for the reals, it wouldn't even come up as a point of confusion. But instead we have many things, all called "numbers," and a resulting confusion when different people discuss "numbers."

By Epinephrine (not verified) on 02 Dec 2010 #permalink

@Kevin --
Here are some brief examples as an introduction (and as a quick reference for the rest of us):

Suppose the particular part of the world that we're trying to describe is given as some speck of matter in a Petri dish.

It might be a textbook-perfect representative of an NK cell, fitting all criteria of the corresponding definition to the letter, as may be found out upon closer inspection. Then: "Yes (Boolean value), that's an NK cell."
and this particular part of the world is (called, starting in college course QM 101) an eigenstate to this definition, with eigenvalue "Yes".

Or it might "in no way, shape, or form" meet any of those criteria in the least (with the obvious exception of appearing as a speck of matter in a Petri dish, too).
Then: "No (Boolean value), that's not an NK cell."
and this particular part of the world is an eigenstate to the definition of NK cell as well, however with eigenvalue "No".

Or (and that seems what you're getting at) it might meet only some but not all criteria of the definition under consideration. Then the measurement evaluates neither an unequivocal "Yes", nor a flat "No".
Therefore this particular part of the world is not an eigenstate to the definition under consideration.

Consequently (at least) it appears sensible (and more "reflective of reality" and "creating value") to consider other definitions, especially those that would yield a straight-out real or Boolean evaluation.

(An analogous example with real-number eigenvalues, involving the body height of women and/or the diameters of regulation basketballs, is left as an exercise ...)
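In code, loosely (the criteria names below are invented for illustration, not real NK-cell markers), the three cases might be sketched like this:

# A "definition" is a set of criteria; an entity is described by the set of
# criteria it actually satisfies (both sets use invented, illustrative names).
NK_CELL_DEFINITION = {"kills_without_priming", "lymphoid_lineage"}

def evaluate(satisfied_criteria, definition):
    matched = definition & satisfied_criteria
    if matched == definition:
        return True    # "Yes": an eigenstate of the definition, eigenvalue Yes
    if not matched:
        return False   # "No": also an eigenstate, eigenvalue No
    return None        # partial match: not an eigenstate of this definition

print(evaluate({"kills_without_priming", "lymphoid_lineage"}, NK_CELL_DEFINITION))  # True
print(evaluate({"makes_antibodies"}, NK_CELL_DEFINITION))                           # False
print(evaluate({"kills_without_priming"}, NK_CELL_DEFINITION))                      # None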

The point of becoming familiar with this ("physics", or rather: scientific) terminology (in case you're not yet) is
that it is (thought to be) sufficiently precise to discuss when and why which definitions should be abandoned and which others be used.

By Frank Wappler (not verified) on 03 Dec 2010 #permalink

Is a photon a wave or a particle?
Yes.
And both.

Seems to me some important discoveries in science have come from realizing that definitions are not always clear and not always exclusive.

By Helen Krummenacker (not verified) on 08 Dec 2010 #permalink

Helen Krummenacker wrote (December 8, 2010 5:07 PM):

> Is a photon a wave or a particle? Yes. And both.

No, neither.

While a photon can certainly be exchanged by particle-like charge-carrying systems as small as atoms or their still smaller or elementary parts (such as excited atomic nuclei, or excited nucleons), a photon does not "itself" exchange signals with its surroundings. Therefore particular geometric relations of a photon could not be measured, and could consequently not be presumed.

On the other hand, while it can be useful to describe even an individual trial of photon propagation as wave-like (e.g. in order to determine for a given experimental region its most probable distribution of transparent "slits" or intransparent "walls", etc.), the photon nevertheless is exchanged between the source and one particular receptor (that is in general only a small part of a whole "screen" configuration of receptors, for which a wave-interference pattern could result when considering several trials).

Since therefore neither the definition of particle nor that of wave could confer any real value to such experiments, photons count instead as "quanta of the electro-magnetic field".

> Seems to me some important discoveries in science have come from realizing that definitions are not always clear and not always exclusive.

Seems to me that definitions ought to be precise in any case, in order to decide how far they would be (mutually) exclusive or compatible, and to select those definitions that draw the most value from given observational data.

By Frank Wappler (not verified) on 12 Dec 2010 #permalink