Another great basics topic, which came up in the comments on last Friday's "logic" post, is the
difference between syntax and semantics. This is an important distinction, made in logic, math, and
computer science.
The short version of it is: syntax is what a language looks like; semantics is what
a language means. It's basically the distinction between numerals (syntax) and
numbers (semantics).
In terms of logic, the syntax is a description of what a valid statement looks like: what the pieces of a statement are, and all of the different ways that the pieces can be put together to
form valid statements. The way that they're put together also implies how you can take them apart - that is, if you know that a predicate is a predicate name, followed by parens containing the arguments to the predicate separated by commas, then given a valid predicate, you can say what the predicate name is, and what the arguments to the predicate in that statement are.
The semantics are the meanings of the statements - and the rules that tell you how to
take a syntactically valid statement, and figure out what it means. So, for example, it includes rules that describe how to find out what object/entity is referred to by a particular primitive name, and what kind of property is meant by a particular predicate.
So, for example, I can show you a simple statement: P("m","f"). Just by looking at it, you know that it's a simple predicate statement over two primitives. You can tell that the
name of the predicate is "P", and that the two arguments to the predicate are "m" and "f". But what does it mean? That, you can't find out until I tell you what the semantics of the statement are.
Now, suppose I tell you that "P" is a predicate with two parameters, which says that the second
parameter is a parent of the first; that "m" is me; and that "f" is my father. Then you can see that the meaning of the statement is "My father is one of my parents".
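An assignment of meanings like that is usually called an interpretation, and it can be modeled very directly: one mapping from primitive names to objects, and one from predicate names to the relations they denote. This is a toy sketch - the dictionaries and the `evaluate` function are my own illustrative names, not part of any standard machinery.

```python
# A toy interpretation. Primitive names denote objects...
objects = {"m": "me", "f": "my father"}

# ...and each predicate name denotes a relation: the set of argument
# tuples for which the predicate holds. P(x, y): y is a parent of x.
relations = {"P": {("me", "my father")}}

def evaluate(predicate, args):
    """Decide the truth of a predicate statement under the interpretation."""
    denoted = tuple(objects[a] for a in args)
    return denoted in relations[predicate]
```

Under this interpretation, `evaluate("P", ["m", "f"])` is true, while `evaluate("P", ["f", "m"])` - "I am one of my father's parents" - is false, even though both are syntactically valid.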
One of the interesting things about logical inference rules is that they are semantics-independent: given a set of statements in FOPL, I don't need to know what the predicates mean, or what objects are represented by the primitives. I can still perform inferences,
generating new true statements without knowing what they mean. But after I've got the
result of the inferences, if you tell me what the predicates mean and what the primitives represent, the statements that I inferred will be true - even though I didn't know what I was reasoning about when I did the inference.
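Here's a small sketch of what that semantics-independence looks like in practice: a single forward-chaining step on the rule P(x, y) & P(y, z) → G(x, z), applied to uninterpreted symbols. The predicate names, the rule, and the constants are all hypothetical placeholders of my own choosing.

```python
def infer(facts):
    """Derive new facts syntactically, knowing nothing about what
    the symbols P and G mean. Facts are (predicate, arg1, arg2) triples."""
    derived = set(facts)
    for (p1, x, y) in facts:
        for (p2, y2, z) in facts:
            # Rule: P(x, y) and P(y, z) together license G(x, z).
            if p1 == "P" and p2 == "P" and y == y2:
                derived.add(("G", x, z))
    return derived

facts = {("P", "m", "f"), ("P", "f", "g")}
```

Running `infer(facts)` produces the new statement G("m", "g") by pure symbol manipulation. Only afterwards, if you tell me that P means "has parent" and G means "has grandparent", does the derived statement acquire a meaning - and it comes out true.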
Wait.. so the kids in my classes are FOPLs?
This is exactly why I hate it when I hear people say, "oh, it's just semantics".
Indeed, everything derived by inference rules from the axioms will be valid in the semantic domain, but there may be semantically valid statements which are not derivable syntactically. This is a question of completeness, which leads to Gödel's incompleteness theorem.
Once upon a time, programming languages were defined by writing a reference implementation, often in the language itself. One example was Pascal, whose semantics were defined using a reference implementation written in Pascal itself.
If you think this sounds like circular reasoning, you're right. A "conforming" Pascal interpreter could, arguably, read in a program, then print the number "42" and exit straight away. After all, your interpreter and the reference implementation would behave exactly the same, assuming that you used your interpreter to run the reference implementation!
Sort of a digression, but anyway:
"Oh, that's just semantics" has a point. One of its typical uses is distinguishing disputes over the meanings of expressions from disputes over the state of the world. Now, I love linguistic semantics and thus care about the disputes of the first type, but it's important to keep straight what's at issue. When one doesn't, pointless arguments can ensue.
Here's an example. In AI and philosophy of mind people often debate whether machines (or computer programs/processes) can be conscious. I regard this as a substantive debate about facts in the non-linguistic world. (Can a machine have the sort of first person experiences I do?) Attempts by philosophers to "dissolve" the problem linguistically seem quite wrong-headed to me.
On the other hand, the "Artificial Life" community often makes claims about whether a machine can be "alive". There's often a lot of commotion about such claims, but I'd argue it's "just semantics". What do we mean by "life" and "alive" and can we stretch the terms to cover cellular automata? However we decide such a question, the (non-linguistic) facts of the world come out the same.
Indeed, everything derived by inference rules from the axioms will be valid in the semantic domain, but there may be semantically valid statements which are not derivable syntactically.
Turquoise shoe fins actualize greenly?
The basic problem with syntax and semantics (and pragmatics, to be more complete) seems to lie in the question of what we take to be reality.
In mathematics there is Formalism, which takes the view that as long as the system is consistent, it has some reality, although perhaps not usefulness/applicability. Platonism holds that ideas are the reality, and that formalism is only a way to make a syntax, of which the semantics are the ideas. Materialism is something else again.
There is a well-known phrase, "the unreasonable effectiveness of mathematics" (see "The Unreasonable Effectiveness of Mathematics in the Natural Sciences," by Eugene Wigner, in Communications on Pure and Applied Mathematics, vol. 13, no. 1, February 1960), which is about why mathematics, as a formalism, is so successful at describing the natural world.
The same with AI and ALife: eventually, the models must be valid models of natural life. You can't say "Oh, that's just semantics" when the meaning is what we believe is the reality, even if it is vague and we cannot express it linguistically. "The Tao that can be expressed is not the real Tao".
Logical models are our approximations of the meaning.
The shallow meaning of "Oh, that's just semantics" is that we have not agreed upon the common definition of terms.
Indeed, everything derived by inference rules from the axioms will be valid in the semantic domain, but there may be semantically valid statements which are not derivable syntactically.
Turquoise shoe fins actualize greenly?
=====================================
My assumption is that all axioms in the syntax are already proven valid, and so are the inference rules. Hence any theorem derived using the axioms and rules of inference is also valid in the semantics.
That does not mean that every well-formed formula in the syntax is true. One must be careful here: there are rules for forming well-formed formulas, and rules for generating theorems from axioms.
I thought Wigner argued for a form of Platonism, which seems popular among mathematicians? (And among physicists, in the Tegmark "physical reality is a mathematical structure" sense.) At least, that is how I remember reading him.
Further, I don't recognize the characterization of formalism as "the view that as long as the system is consistent, it has some reality, although perhaps not usefulness/applicability".
Neither does Wikipedia, it seems: "Mathematical truths are not about numbers and sets and triangles and the like - in fact, they aren't 'about' anything at all". And another formulation gives: "Thus, formalism need not mean that mathematics is nothing more than a meaningless symbolic game." ( http://en.wikipedia.org/wiki/Philosophy_of_mathematics )
My take here is rather like your last point, "Logical models are our approximations of the meaning." The formalism is selected to describe the natural world well, by influx from physics and computer science, for example. We try to ever perfect our models, and that changes the formalism - both the syntax used and the semantics.
And of course, the axioms can lead to inconsistencies which is another consideration here.
{ Further, I don't recognize the characterization of formalism as "the view that as long as the system is consistent, it has some reality, although perhaps not usefulness/applicability"}
One of the strongest impetuses for formalism was non-Euclidean geometry, which, when discovered, gave no hint at all of applicability in the physical world. In hindsight, abstract formalisms can turn out to be very useful and applicable in the physical world. Witness the heavy use of math in modern physics.
The Wikipedia article referred to, mentioned: "Formalism holds that mathematical statements may be thought of as statements about the consequences of certain string manipulation rules", but also says that the formalism game is not arbitrary, but motivated by various things.
Hence we should distinguish between math as a product, which is a formalism, and the process of doing math, which could be guided by intuition, aesthetics, philosophy, or just meaningless symbolism.
Somehow geniuses managed to dig into areas which are very abstract, but were later found to be of practical value. This was one of the points made by Wigner about the mysterious unreasonableness: "The first point is that mathematical concepts turn up in entirely unexpected connections".
I don't know whether this should be interpreted as arguing for Platonism.
More Wigner quotes:
"It is difficult to avoid the impression that a miracle confronts us here, quite comparable in its striking nature to the miracle that the human mind can string a thousand arguments together without getting itself into contradictions, or to the two miracles of the existence of laws of nature and of the human mind's capacity to divine them."
and
"The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning."
Perhaps in the future, cognitive neuroscience can help understand how our mind works, and make the miracle less mysterious.
Well, I certainly agree that the product can be used. That doesn't mean that it has a reality outside its usefulness.
I took a new look at Wigner, and I cannot find any Platonism there. Perhaps I confused the text with this usage elsewhere. To me, his argument isn't convincing. I would certainly not view physics as unreasonable if its "concepts turn up in entirely unexpected connections". (I am also peeved by Wigner assuming, from nowhere, that a conflict would be exposed "if present laws of heredity and of physics are confronted".)
The discussion has gone a little off-topic from the original syntax and semantics, so I have posted my comment on my own blog. Hope that's OK.