Mathematical Constructions and the Abstraction Barrier

There was an interesting discussion about mathematical constructions in the comment thread on my post about the professor who doesn't like infinity, and I thought it was worth turning it into a post of its own.

In the history of this blog, I've often talked about the idea of "building mathematics". I've shown several constructions - most often using something based on Peano arithmetic - but I've never really gone into great detail about what it means, and how it works.

I've also often said that the underlying theory of most modern math is built using set theory. But what does that really mean? That's the important question, and the subject of this post.

Math is based on formal reasoning. A formal reasoning system consists of two parts: a logic - that is, a mechanical, symbolic inference system - and a model - a set of objects that the inference system can reason about. The objects are described by a basic set of axioms that form the basis of
what we know about them. Everything else - and I do mean everything else - is done by defining things in terms of those basic objects, and inferring things from the definitions, and from the meanings that the definitions get from the axioms of the basic objects that underlie them.

In modern math, we typically work by using first order predicate logic as our inference system, with sets (defined by the Zermelo-Fraenkel axioms) as the basic objects.

Often, when you ask someone who dislikes the axiomatic approach to math how to do math without axioms, they'll say "with definitions". That answer is a copout. Definitions can be one of two things: they can be axioms under a different name, or they can be true definitions, which rely on the underlying formal
reasoning system - including its axioms.

The difference, in my jargon here, between a "true definition" and an "axiom" is that a definition defines objects in terms of something else; an axiom is foundational. While that distinction might seem
to be splitting hairs, it's actually quite important. An axiom system has the responsibility of establishing the validity of everything built upon it. If an axiom scheme contains any kind of
inconsistency or error, then everything that you do in the formal system - every theorem, every proof - is invalid. True definitions, on the other hand, rely on an axiom scheme for their validity. A definition doesn't have the strong requirement of validity that the axiom scheme does: you define things in terms of the consistent underlying basis, and so long as you build your construction in terms of the things provided by the axioms and the logic, your definitions will necessarily be consistent.

One of the subtle points of this - and one of the things that is commonly misunderstood - is how you
build things in an axiomatic formal system like FOPL + ZF set theory. When you're building something in a
formal system, you need to make a distinction between the thing that you're building, and the things that
you're building it with. You define the set of objects, properties, and operations that you're interested
in studying in terms of the underlying system. You can show the validity of your construct by
proving the correctness of its essential properties in terms of the underlying system. But you always keep a clear line between the construction and the underlying elements that it's built from.

So - let's look at an example. We generally build the natural numbers from Z-F sets. We say
that 0=∅, 1={0}={∅}, 2={0,1}={{∅},∅}, and so on. Using the usual Peano
arithmetic construction, we can say that every natural number has a unique successor. We show that
in set theory by showing how, given a number in set-theoretic representation, we can construct its
successor: given a number n, s(n) = n ∪ {n}. By doing that, we've defined the successor operation in terms of set theory. And we can prove the property that each number has a unique successor by using the ZF axioms to show that for any set N which is a set-theoretic representation of a natural number, there is exactly one set N' such that N' = s(N).
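
To make that concrete, here's a minimal sketch of the von Neumann construction in Haskell. It's purely illustrative - the representation and names (Set, zero, successor, fromInt, toInt) are my own choices, not part of any formal development - but it shows 0 = ∅ and s(n) = n ∪ {n} directly:

    -- A sketch of the von Neumann naturals: a hereditarily finite set
    -- is represented as the list of its elements.
    newtype Set = Set [Set]

    -- 0 is the empty set.
    zero :: Set
    zero = Set []

    -- s(n) = n ∪ {n}: the successor contains everything in n, plus n itself.
    successor :: Set -> Set
    successor n@(Set elems) = Set (n : elems)

    -- Build the representation of the natural number k by applying successor k times.
    fromInt :: Int -> Set
    fromInt k = iterate successor zero !! k

    -- Read a von Neumann natural back off as the count of its elements.
    toInt :: Set -> Int
    toInt (Set elems) = length elems

With this representation, toInt (fromInt 3) is 3, and fromInt 3 is literally the set {0, 1, 2} built out of nested empty sets.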

It's important to note that it's not the case that we can arbitrarily use set theoretic operations on
our set theoretic numbers, and have the result be meaningful. We can define a set difference operator -
but set difference of numbers M and N won't necessarily be a valid number. When we're working in terms of
the construction of natural numbers using sets, we can only use the operations that we defined in the construction. The set theory is providing us with a tool for proving the validity of our construction.

My favorite metaphor for this is (obviously, given my interests) computer languages. When you write a program in a computer language, you're writing in terms of abstractions that are very different
from the natural abstractions of the real machine on which the program is executed. For example, a few months back, I wrote a bunch of articles about programming in Haskell. In Haskell, you work in terms of functions, without any idea of mutable state. The computer hardware has no concept of a function. But you can write a compiler that implements Haskell functions in terms of the primitive, mutable-state-based
machine language. All of Haskell becomes executable by that translation. But if you were to take a Haskell program, look at the primitive representation of a function, and perform some change to it that makes perfect sense in terms of the machine language, you could end up with something that breaks Haskell - that makes the program crash.

That's exactly the same idea as building things in math. You define them (implement them) in terms of
a low level formal system (machine language), and then build theorems (programs) in terms of the high-level constructions that you've built. But if you break the abstraction, you can get into trouble.
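
In Haskell, you can even make that abstraction barrier mechanical, by hiding the representation behind a module's export list. This is a hypothetical sketch (the module and names are mine): client code gets the operations, but not the constructor, so it can't reach through to the underlying representation.

    -- Only the abstract type and its operations are exported; the MkNat
    -- constructor is not, so nothing outside this module can see (or abuse)
    -- the underlying Int representation.
    module Nat (Nat, zero, successor, add) where

    newtype Nat = MkNat Int   -- the hidden "implementation" of a natural

    zero :: Nat
    zero = MkNat 0

    successor :: Nat -> Nat
    successor (MkNat n) = MkNat (n + 1)

    add :: Nat -> Nat -> Nat
    add (MkNat a) (MkNat b) = MkNat (a + b)

Outside the module, a Nat can only be touched with zero, successor, and add - there's no way to apply an arbitrary Int operation to it, which is exactly the discipline the mathematical construction asks of you.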

The problem with construction, of course, is that you run the risk of your construction having properties that the abstract thing that you're trying to model doesn't.
Using your favourite analogy, this is a real problem for programming languages, too. Languages used to be defined by implementation, that is, whatever the reference implementation does is what any conforming implementation should do too.
We now know that's a bad idea. And while the ZF construction of natural numbers is very pretty, I've never been entirely convinced that it's a good idea either. Yeah, it satisfies the Peano axioms. But you wouldn't prove a theorem about natural numbers using the construction, because you always run the risk that your theorem is a consequence of the construction, rather than of the Peano axioms.

By Pseudonym (not verified) on 04 Nov 2007 #permalink

Excellent post. What I find interesting is the question of how mathematics relates to the "real world," the reason that we can build bridges and skyscrapers using something that is ultimately, logically reducible to a particular collection of ideas about sets and a way of combining those ideas into bigger ideas.

Related to this issue, and the topic of your post is the question of the historical development of the particular system of mathematical ideas we've inherited today. Surely the first mathematicians did not start at what we today recognize as the root or source of all mathematics, so in a certain sense the history of mathematics is upside down relative to its logical structure.

wait for it......

wait...

wait...

wait...

LONGEST POST EVAR!!!!!

By douchebag (not verified) on 04 Nov 2007 #permalink

in a certain sense the history of mathematics is upside down relative to its logical structure

I think that is less perplexing if we look at fields such as physics, where the search for more fundamental theories (by necessity) is explicit.

Also the case for constructions mentioned in the first comment has a strengthened connotation when looking at other fields, where constructions are supported by the application context of the model. For example, path integrals, which have a physical interpretation, work fine in practice, yet there is a theoretical problem in defining a measure for them.

By Anonymous (not verified) on 05 Nov 2007 #permalink

Pseudonym,

If the ZF construction satisfies the PA axioms, then any results proven must be consistent with the axioms. You won't run into any properties of the ZF sets which are not properties of the natural numbers. Although, with just the idea that ZF satisfies PA, it does not rule out the idea that there are properties of the natural numbers which are not properties of the sets (I actually believe this is not a problem, but I could not tell you exactly why, at the moment).

I would, indeed, like to know your alternative to the FOL+ZF approach.

The problem with a set theoretic basis to math is that it requires a logic in order to implement it. The problem with a logical basis to set theory is that it requires a theory of sets to implement it. The accepted basis for math is completely circular.

Add to this the fact that any system which satisfies the PA axioms has sentences which are true, but cannot be proven (Godel, of course), and math starts to look like a bunch of BS (along with ZF (based on PA axioms) and FOL (based on ZF)). Then science starts to look like a bunch of BS. And reality starts to look like a bunch of BS.

Your reality has no basis in reality. Run along now, children.

*Set theory is required for FOL because there has to be a set of objects to quantify over (For all x... There exists a y...). Set theory needs FOL for its axiomatic construction, as was talked about in the article above.

By liar liar pant… (not verified) on 06 Nov 2007 #permalink

I must admit some surprise at, over the last few weeks, seeing exactly how many people find set theory so controversial :/

Or is it just the same guy with different usernames?

Mark, I'm curious as to your thoughts re: category theory and its relationship to FOL+ZF.

The Incompleteness Theorems really do not suggest as much of a BS factor for mathematics as Liar seems to indicate. Proof of course has a very specific notion in the context of mathematical logic.

By Shawn O'Hare (not verified) on 06 Nov 2007 #permalink

[You won't run into any properties of the ZF sets which are not properties of the natural numbers.]

What a lie. Mark actually already indicated something about this. The ZF sets for natural numbers each contain each previous number as an element and as a subset of the set. A natural number x... as in the basic counting numbers... doesn't have all smaller natural numbers as elements of x. In other words, if we start counting 1, 2, 3, 4... we conceive of a sequence. But, we don't conceive of 2 as a subset and as an element of 3.

[The problem with a set theoretic basis to math is that it requires a logic in order to implement it. The problem with a logical basis to set theory is that it requires a theory of sets to implement it. The accepted basis for math is completely circular.]

Ummm... maybe... but this claim needs more backing to it. Of course with this sort of reasoning one can use a different theory of logic or a different theory of sets to implement a different logic and get a different math. So, I actually end up liking your idea, but that doesn't mean I think such has a rational basis.

[Add to this the fact that any system which satisfies the PA axioms has sentences which are true, but cannot be proven (Godel, of course)...]

Umm... well now you have to get into philosophical ideas about the nature of truth. Technically speaking, as I understand it, Godel's theorems end up saying that any system which satisfies either the PA axioms or the ZFC axioms has propositions/sentences which come out formally undecidable. Does that mean we have true, unprovable statements in formal mathematical systems? Not necessarily.

If we take the concept 'true' and 'provable' as identical, then we have mathematical statements which come out neither true nor false, according to Godel's reasoning. Or to state it differently, Godel showed that any mathematical system sufficient to derive Peano arithmetic (or anything stronger) necessarily has statements where the classical truth set {T, F} doesn't apply, and we need at least the truth set {T, U, F}. At least, that's how I understand such.

If we have other concepts of 'true', then there may exist true, yet unprovable statements in maths... but I don't know what concept of 'truth' gets used here (I assume the idea of correspondence with reality doesn't work)... Can you elucidate what concept of 'truth' you mean to use?

[Then science starts to look like a bunch of BS.]

I don't see this. Science rarely makes use of two-valued logic, and even when it does rely on it heavily, ways around two-valued logic usually exist or can get worked out. Case-in-point: Newton's mathematical model for gravity. If we applied two-valued logic to it, it would come out false. But, scientifically speaking, it still comes out as useful for scientifically modeling and explaining enough scientific phenomena that basically every introductory physics textbook goes over it.

Doug:

That is *not* what I said. I think you're having some serious trouble understanding the idea of a construction, and the distinction between the constructed abstractions, and the underlying mechanism used to implement those abstractions.

If you build a model of the natural numbers in set theory, what you're doing is building an abstraction describing numbers using set-theoretic definitions. If you work entirely in that abstraction, then any results that you derive from it using the underlying set theoretic axioms will be valid in the defined system.

The problem is when you *break* the abstraction. "Subset" isn't a concept defined for a number in the constructed abstraction of natural numbers - so if you break the abstraction, and insist on using the primitive set theoretic operations on the abstract construct of numbers as if they weren't numbers, then you can get results that don't make sense in the context of numbers.

I can construct the set of integers using natural numbers. It's easy: if natural number N is greater than 0, and N is even, then it represents the integer NZ=-N/2; if N is odd, then NZ=1+(N-1)/2.

Then, I define the integer operators, in terms of my construction. There's +Z, -Z, etc., each of which I define in terms of the basic + and - operators on natural numbers. Now I've got an implementation of integers. If I only use +Z and -Z, then any result I can derive using my abstraction will be valid - even though under the covers, I'm doing it in terms of the primitive natural number operators. When I see "-1", it's actually represented as "2". When I see the integer 27, it's actually represented by the natural number 53.

If I work in terms of the abstraction, I can't see that representation. I see the number 27. If I add 27Z+Z-3Z, then I'll see the result 24Z - even though under the covers, I've actually done something using the natural numbers 53 and 6. My result isn't 59 - it's 24Z, which is represented by 47.
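
Here's what that even/odd construction looks like as a Haskell sketch. The names (Z, encodeZ, decodeZ, addZ) are hypothetical, and for brevity addZ cheats by decoding and re-encoding - a construction done strictly "by the book" would express it purely in terms of natural-number operations - but the representation is exactly the one described above:

    -- The encoding: a positive integer k is represented by the odd natural
    -- 2k-1, a negative integer k by the even natural -2k, and 0 by 0.
    newtype Z = Z Word deriving (Eq, Show)

    encodeZ :: Integer -> Z
    encodeZ k
      | k > 0     = Z (fromInteger (2 * k - 1))
      | k < 0     = Z (fromInteger (negate (2 * k)))
      | otherwise = Z 0

    -- Decode a representative back into an ordinary integer (for display only).
    decodeZ :: Z -> Integer
    decodeZ (Z n)
      | n == 0    = 0
      | even n    = negate (toInteger n `div` 2)   -- even n stands for -n/2
      | otherwise = (toInteger n + 1) `div` 2      -- odd n stands for 1+(n-1)/2

    -- Addition inside the construction: the only operation clients should use.
    addZ :: Z -> Z -> Z
    addZ a b = encodeZ (decodeZ a + decodeZ b)

Evaluating addZ (encodeZ 27) (encodeZ (-3)) yields Z 47 - that is, the constructed integer 24Z - even though the representatives being manipulated underneath are the naturals 53 and 6, exactly as described above.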

If we take the concept 'true' and 'provable' as identical,

Doug you apparently have no idea what you are talking about. Both the work of Gödel and Tarski very clearly show that "truth" and "provability" within any formal language or system are separate, non-equivalent concepts. What Gödel's first theorem actually states is that in any formal system rich enough to contain arithmetic there are true arithmetical statements that cannot be proved in that system.

Science rarely makes use of two-valued logic...

Case-in-point: Newton's mathematical model for gravity. If we applied two-valued logic to it, it would come out false.

Sorry but there is no other honest way to express this: both of these statements are pure unadulterated crap. They are so far off the mark that I have no idea what I could write to correct them except to say that the formal argumentation in all classical science, including Newton's theory of gravity, is based on two-valued logic and nothing else!

[A natural number x... as in the basic counting numbers... doesn't have all smaller natural numbers as elements of x. In other words, if we start counting 1, 2, 3, 4... we conceive of a sequence. But, we don't conceive of 2 as a subset and as an element of 3.]

Okay, okay, yeah. But the thing is, the containment relation in set theory is isomorphic to the greater than relationship among natural numbers. You're right that there are some properties which aren't technically shared, but they are translated through isomorphism. So there aren't any properties of the ZF sets which do not have equivalent properties of the natural numbers.

[Of course with this sort of reasoning one can use a different theory of logic or a different theory of sets to implement a different logic and get a different math.]

Hmmm, I'm not quite sure what you mean. For any "different math" which is based on set theory, that set theory will need to be built with a logic which can quantify over sets (otherwise it cannot talk about the objects of the language, which simply makes no sense). If it can quantify over sets, there must be a preexisting set theory there. Now, I suppose this could be a different set theory, but that one needs a logical base, which needs a set theoretical base, etc., etc... So it makes the most sense to stick with the one set theory (usually ZF or ZFC (with the axiom of choice) or some variation) and the one logic (usually first-order predicate logic) to lift each other up. But this is circular, and problematic. It cannot be avoided.

[Technically speaking, as I understand it, Godel's theorems end up saying that any system which satisfies either the PA axioms or the ZFC axioms has propositions/sentences which come out formally undecidable. Does that mean we have true, unprovable statements in formal mathematical systems? Not necessarily.]

If we accept a semantics with more than two truth values, you are correct. But this gets us into some sticky situations, although it is a legitimate "way out".

[If we have other concepts of 'true', then there may exist true, yet unprovable statements in maths... but I don't know what concept of 'truth' gets used here (I assume the idea of correspondence with reality doesn't work)... Can you elucidate what concept of 'truth' you mean to use?]

The important attribute of the theory of truth which I was advocating is merely a two-valued {T, F} semantics. I don't know where you get the idea that this is not standard. Much, much, much of generally accepted mathematics makes use of proof by contradiction. This is impossible using three truth values, as you advocate. You are really going against the grain there, not that people (intuitionists, constructivists) don't do that.

If you assume something to be true, then prove that it leads to a contradiction, you are now forced to put it in your "U" territory, until it can be proven false. Last I checked, it cannot be untrue that it is raining, and "unfalse" that it is raining at the same time. So I'm unsure how you say the two-valued semantics is not supported by correspondence with reality. I assume you are referring to the "correspondence theory of truth", but my understanding is that that theory actually goes against 3 or more valued logics.

[Science rarely makes use of two-valued logic, and even when it does rely on it heavily, ways around two-valued logic usually exist or can get worked out.]

It is really only a very recent thing, to my knowledge, that people have played with 3+ valued semantics/logics. (From wikipedia on "multi-valued logic": The later logicians until the coming of the 20th century followed Aristotelian logic, which includes or implies the law of the excluded middle.) The law of the excluded middle is the rule that (p or not-p) is true of any p in the system, i.e. every statement is either true or false; there is no "middle", or your "U" value.

Now this is a very recent addition to logic. There are probably almost zero scientists who have gotten around to using it, and that certainly does not include Newton! As for the scientists who do use it, they would be radical thinkers, much like the intuitionist and constructivist mathematicians and logicians are. I am aware that many computer scientists find use in playing with fuzzy logic (where there are infinitely many truth values, not just two or three), but this is not relevant.

So, basically, it is pretty well accepted that Godel's sentence, which basically boils down to "This sentence cannot be proven," is either true (and it can't be proven), or false (in which case it can be proven, and the PA axioms are inconsistent). Which leads most who have any faith in arithmetic to conclude there are sentences which are true but cannot be proven (in fact, infinitely many).

And the whole system is built on the circular logic-built-on-set-theory-built-on-logic mess. And science is built on mathematics. And our knowledge of reality is built on science (and our senses/intuitions...do you trust those?).

So, basically, everything is quite muddled. I kind of like it that way, so I hope you can come to believe that, too.

[The Incompleteness Theorems really do not suggest as much a BS factor for mathematics as Liar seems to indicate. Proof of course has a very specific notion in the context of mathematical logic.]

A lot of people say this, and it is tempting to believe. How far the extent of Godel's proof's implications go is an interesting topic, and I would like O'Hare to expand upon his point. The best case I see is drawing a parallel with quantum mechanics and saying, yeah, all this weird stuff could happen on the quantum level, but it really doesn't happen in the world we see, so it is irrelevant to us in "our world". But the fact is that math is what we use to describe the world, and that math has extremely weird properties (much like quantum mechanics!).

To me, it doesn't make sense to ignore the quirks of our most thorough knowledge of the most basic features of our reality, or our conception of it. These things, to me, are like studying God. (But we have evidence that these exist...)

Mark,

[That is *not* what I said.]

I didn't say you said such. I said "Mark actually already indicated something about this." In other words, if you interpret your statement "We say that 0=∅, 1={0}={∅}, 2={0,1}={{∅},∅}, and so on." in a certain way, then your statement indicates or implies such.

[I can construct the set of integers using natural numbers. It's easy: if natural number N is greater than 0, and N is even, then it represents the integer NZ=-N/2; if N is odd, then NZ=1+(N-1)/2.]

But does your construction have different properties than that of the normal conception of natural numbers? Look, we'd have to specify what we mean by 'property' here. If 'property' means a characteristic attributed to a mathematical concept, then such a construction of integers has different properties than that of natural numbers.

[Now I've got an implementation of integers.]

To some degree, I'd say yes. But, your construction ends up having different conceptual characteristics to it than the normal concept of integers. So, in terms of our mental behavior, normal integers work out differently to some degree than such a construction.

[When I see "-1", it's actually represented as "2".]

Alright, well that's actually a good example. With the regular conception of integers, we can apply a unary operator '-' on a subset of the integers to generate the rest of the integers. As far as I can see, with the even/odd construction there doesn't end up existing a similar unary operator. So, the constructions end up behaving differently in some respect.

Thony,

[Both the work of Gödel and Tarski very clearly show that "truth" and "provability" within any formal language or system are separate, non-equivalent concepts. What Gödel's first theorem actually states is that in any formal system rich enough to contain arithmetic there are true arithmetical statements that cannot be proved in that system.]

You've already started interpreting at some level. Maybe not in the level or way in which I think you have, but at some level.
It actually states

"Proposition VI: To every w-consistent recursive class c of formulae there correspond recursive class-signs r, such that neither v Gen r nor Neg (v Gen r) belongs to Flg(c) (where v is the free variable of r)."

http://www.vusst.hr/~logika/undecidable/godel.htm

"Informally, Gödel's incompleteness theorem states that all consistent axiomatic formulations of number theory include undecidable propositions (Hofstadter 1989). This is sometimes called Gödel's first incompleteness theorem, and answers in the negative Hilbert's problem asking whether mathematics is "complete" (in the sense that every statement in the language of number theory can be either proved or disproved). Formally, Gödel's theorem states, "To every omega-consistent recursive class kappa of formulas, there correspond recursive class-signs r such that neither (v Gen r) nor Neg(v Gen r) belongs to Flg(kappa), where v is the free variable of r" (Gödel 1931)."

http://mathworld.wolfram.com/GoedelsIncompletenessTheorem.html

I don't see anything about the nature of truth here, but rather about class signs. In other words, one can formulate class signs for such theories which come out neither provable nor disprovable. It doesn't say anything about true class signs, so far as I see. I don't see Godel talking about truth in his preface. He says "It may therefore be surmised that these axioms and rules of inference are also sufficient to decide all mathematical questions which can in any way at all be expressed formally in the systems concerned. It is shown below that this is not the case, and that in both the systems mentioned there are in fact relatively simple problems in the theory of ordinary whole numbers which cannot be decided from the axioms."
http://www.vusst.hr/~logika/undecidable/godel.htm
Again, I see him talking about decidability, not truth.

Lastly, notice that I said what Godel's theorem might say IF we took true and provable as equivalent concepts. Now, you say that Godel's theorems basically say that they don't qualify as equivalent concepts in formal systems. In other words, you seem to assert that we can't *conceive* them in terms of each other in a formal system. So, as far as I can see, you turn Godel into a psychologist of possible thought instead of a mathematician or a meta-mathematician.

[They are so far off the mark that I have no idea what I could write to correct them except to say that the formal argumentation in all classical science, including Newton's theory of gravity, is based on two-valued logic and nothing else!]

First off REAL argumentation in sciences takes place in human language and human thinking. Simply put, neither human language nor human thinking operates exclusively according to two-valued logic, and only uses two-valued logic in a trivial sense so far as we can see.

Second, assuming Newton's theory of gravity to operate in a two-valued logical system, it either comes out true or false. Well, it's gotten falsified already, and consequently it can't come out true. So, logically, it comes out false. Of course it still gets taught in basic science and physics textbooks, so if you really promoted two-valued logic in science, you would logically want people to rewrite basically all basic physics textbooks. You would also logically want people to completely get rid of the model where electrons orbit atoms. Also, get rid of statements which come out usually, or partially, true in science or similar matters, such as "leaves are green in the summer," or "electrons behave like waves instead of particles" (and the converse). After all, in two-valued logic there exists only 'true' and 'false', and statements like 'electrons behave like waves' come out having counter-examples if assumed either 'true' or 'false'. But, 'electrons behave like waves' comes out as permissible scientifically speaking, so at least in QM two-valued logic doesn't get used exclusively. Not to mention quantum logic, or the modeling of quantum mechanics using multi-valued logic by people such as Reichenbach in books already 50-plus years old.

Additionally, there already exists books like "Fuzzy Logic in Chemistry" and "Fuzzy Logic in Geology." I didn't talk about "classical" science, nor deigned to define such in an almost circular manner. I talked about science.

Lastly, it doesn't really come hard to criticize ideas "way off the mark" (so far as I can tell). We can all criticize religious ideas we find "way off the mark".

[But the thing is, the containment relation in set theory is isomorphic to the greater than relationship among natural numbers.]

In some respect, perhaps. But, in general that doesn't hold for the containment relationship. After all,
{2, 4} is contained in {2, 4, 6}, but {2, 4, 6} is not greater than {2, 4}.

[So there aren't any properties of the ZF sets which do not have equivalent properties of the natural numbers.]

Define 'property'.

[For any "different math" which is based on set theory, that set theory will need to be built with a logic which can quantify over sets (otherwise it cannot talk about the objects of the language, which simply makes no sense).]

One could use a fuzzy logic with fuzzy quantifiers to construct a fuzzy set theory.

[If we accept a semantics with more than two truth values, you are correct. But this gets us into some sticky situations, although it is a legitimate "way out".]

What 'sticky situations' do you foresee? Oh... and it looks like you've disagreed with Thony who says "Doug you apparently have no idea what you are talking about." Nice truth-speaking there, liar.

[The important attribute of the theory of truth which I was advocating is merely a two-valued {T, F} semantics.]

Oh... well if you define truth in terms of such a semantics, sure.

[I don't know where you get the idea that this is not standard.]

It comes out standard enough in practice, but not explicitly so. I'd prefer that most people who use this sort of assumption about truth state such upfront. It works out like the parallel postulate of Euclidean geometry in that if you reject it you can get different logics. So, ignoring such as an assumption comes out significant.

[Much, much, much of generally accepted mathematics makes use of proof by contradiction. This is impossible using three truth values, as you advocate.]

No, it's not, and I'll show this via example. It comes out as impossible for all three-valued logics taken together, but for specific three-valued logics it can hold. For instance, let's say that our truth value set comes as {0, 1/2, 1}. Contradiction states a ^ c(a) = 0 (where c(a) indicates the complement or negation of a). Use the max(0, a+b-1) operator for '^', and 1-a for c(a). Well, b = c(a), so we have max(0, a+c(a)-1) = max(0, a+1-a-1) = max(0, 0) = 0. If we use min(1, a+b) for 'v', then a v c(a) = 1, as min(1, a+c(a)) = min(1, a+1-a) = min(1, 1) = 1. So, for a three-valued logic (or really any-valued logic) with those operators for intersection and union, proof by contradiction still holds. There exist other multi-valued logics where POC and POEM hold.
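
For what it's worth, the identities claimed there are easy to check mechanically. A small Haskell sketch (the names are mine), evaluating those operators over the three truth values {0, 1/2, 1}:

    -- The comment's operators: conjunction a ^ b = max(0, a+b-1),
    -- disjunction a v b = min(1, a+b), negation c(a) = 1-a.
    truthValues :: [Rational]
    truthValues = [0, 1/2, 1]

    conj, disj :: Rational -> Rational -> Rational
    conj a b = max 0 (a + b - 1)
    disj a b = min 1 (a + b)

    neg :: Rational -> Rational
    neg a = 1 - a

    -- Both identities hold at every truth value: a ^ c(a) = 0 and a v c(a) = 1.
    contradictionHolds, excludedMiddleHolds :: Bool
    contradictionHolds  = all (\a -> conj a (neg a) == 0) truthValues
    excludedMiddleHolds = all (\a -> disj a (neg a) == 1) truthValues

These are the Łukasiewicz-style "strong" conjunction and disjunction, and both checks do come out True at 0, 1/2, and 1.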

[Last I checked, it cannot be untrue that it is raining, and "unfalse" that it is raining at the same time.]

Suppose we have a few raindrops here and there. Then it rains a little over some space, while it doesn't rain at all over some other space. Last time I checked I noticed that a rain of a hurricane worked very differently than a light shower of rain which worked differently than a sunny day with no clouds in the sky.

[I assume you are referring to the "correspondence theory of truth", but my understanding is that that theory actually goes against 3 or more valued logics.]

I don't think it specifies such actually.

[Now this is a very recent addition to logic. There are probably almost zero scientists who have gotten around to using it, and that certainly does not include Newton!]

Formally, as in a mathematical development of such, it comes out quite recent. But, if we take science as "the science of thinking", then I suspect that people have used non-two valued ways of thinking for quite some time. Although, admittedly such engages in mind-reading to a high degree, so I don't know how to prove it. Then again, since the other position also engages in mind-reading heavily, someone with the other view can't prove it. So, I state my opinion.

[So, basically, it is pretty well accepted that Godel's sentence, which basically boils down to "This sentence cannot be proven," is either true (and it can't be proven), or false (in which case it can be proven, and the PA axioms are inconsistent).]

You mean that many people *accept* that it says such. But does Godel say that? Did Goedel really mean to say that? After all, we KNOW that Goedel went on and did some work in multi-valued logic. "Kurt Gödel in 1932 showed that intuitionistic logic is not a finitely-many valued logic, and defined a system of Gödel logics intermediate between classical and intuitionistic logic; such logics are known as intermediate logics."

Wikipedia article on multi-valued logic (the page seems not to post when I post citations).

So what would Goedel say his own theorem says in informal language? I'm not deciding here.

[And science is built on mathematics.]

Don't forget about the Einstein quote: "As far as the laws of mathematics refer to reality, they are not certain; as far as they are certain, they do not refer to reality." I've never seen an uncertain statement in two-valued logic, have you?

I think you and I actually have a somewhat similar view, liar.

Doug:

You're demonstrating exactly what I mean by "not understanding the idea of a construction".

In my construction of the integers from the naturals, there is a unary minus - defined in exactly the way that unary minus is normally defined for the integers: -X = 0-X;
but since we're working in the construction, we don't use the primitive natural number "-"; we use the "-" from the construction - that is, "-Z".

That's the point of a construction. A construction is a closed world, built on a well-understood substrate.

Like I keep saying, you can think of it like a computer language. Or for an example that's maybe closer to home: you're reading text that I've written to you. It's English text, made up of letters and spaces and punctuation. You understand it as English. It is English. But it's implemented as UTF-8, an encoding of the letters and symbols into numbers. Really, when you see an "A", it's the number 65. But you never see that when you're reading it: you read it inside the construction of the characters using numbers. And in fact, that number 65 isn't really the number 65. It's really a byte - a sequence of eight 1s and 0s: 01000001. And really, it's not a sequence of 0s and 1s - it's a sequence of electrical charges inside of a circuit. At each level, a construction is built on top of the lower level. And then you work in the construction, which depends on what's beneath it. When I'm reading and writing text on the computer, I *never* stop and think "Gosh, I can divide the letter A by two by doing a right-shift on it".

If I use the natural numbers to construct the integers, what I'm doing is using a well-understood, proven-sound system, and using it as a basis for implementing a new abstraction. When I talk about integers, I'm talking about its closed world, the construction of the integers. In the construction, I'm only dealing with constructed integers and the operations on constructed integers. The construction is like the interpreter - it's there, it's filling a necessary role - but when I'm looking at the construction and its properties, I need to stay inside the construction.
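
Picking up the earlier integer sketch (same hypothetical names), the unary minus Mark describes here can be derived entirely inside the construction, as -X = 0 -Z X, without ever touching the naturals' own subtraction:

    -- Subtraction inside the construction (same decode/re-encode shortcut as addZ).
    subZ :: Z -> Z -> Z
    subZ a b = encodeZ (decodeZ a - decodeZ b)

    zeroZ :: Z
    zeroZ = encodeZ 0

    -- Unary minus, defined from the construction's own binary minus: -X = 0 - X.
    minusZ :: Z -> Z
    minusZ = subZ zeroZ

So minusZ (encodeZ 1) is Z 2 - the constructed integer -1Z, represented under the covers by the natural number 2.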

[In my construction of the integers from the naturals, there is a unary minus - defined in exactly the way that unary minus is normally defined for the integers: -X = 0-X]

Your definition here makes '-' binary, and NOT unary, since you've defined it in terms of two symbols '0' and 'X'. Look, I could start with JUST the ordered set {1, 2, 3...}. Now, take the unary operation '-' on this ordered set, and we'll have reversed order {... -3, -2, -1}. I need NOT have a concept of a '0' in order to form this set. I think I could practically define '-' as the unary operation on all less-to-greater ordered subsets of natural numbers which reverses their order. Maybe that definition doesn't work out as sufficient though, I just thought of it (although I doubt I thought of such first, or that someone else hasn't tried a similar definition before... but I know of no sources offhand). Maybe you have a unary operation and I've missed it, but I certainly don't see it from your definition in terms of a binary operation.

[That's the point of a construction. A construction is a closed world, built on a well-understood substrate.]

Maybe, but historically speaking, and in terms of mathematical education, it simply didn't happen that negative numbers came from the closed world of the natural numbers. In fact, people rather constantly objected to negative numbers, since others formed them in terms of something other than their closed world of natural numbers. In other words, they objected to negative numbers, because they took positive numbers, introduced a '-' operator on them, and then found out that closure no longer held for '-' on the positive numbers. So, this sort of emphasis on closure ends up inviting, or at least has historically invited, rejection of other sorts of mathematical objects.

Second, I suspect I don't agree that a construction works out as a closed world, or at least I doubt the usefulness of closed mathematical worlds. Let's say I talk about a set such as {2, 3} with the operation '*' of concatenation, as in 2*3=23. Now, suppose I ask, is {2, 3} a semigroup? According to technical definitions, I can NOT properly call ({2, 3}, *) a semigroup, since closure doesn't hold. But, this misses the point I, and I suspect others, would want to ask. I want to ask 'does associativity hold for concatenation? Does order affect the results for concatenation?' Technically speaking, I have to extend {2, 3} to an infinite closed set before I can properly evaluate whether order affects the results for concatenation. But, practically speaking I don't have to do this to figure it out. I can merely state that x*y=xy from the definition of '*', and then write (a*b)*c=ab*c=abc and a*(b*c)=a*bc=abc. Consequently, I have sufficient reason to declare that I know that concatenation works associatively (or that order doesn't affect the results of concatenation). No, I don't have a full proof, but I do have the main part of a full proof, and some might even say I have a partial proof. With an overemphasis on closure here I don't have sufficient reason to declare concatenation associative, since I didn't start with a closed set.

[You understand it as English. It is English.]

I don't see how English qualifies as a closed set (over what operation? conversation?). In fact, since English has historically borrowed MANY words from other languages, and continues to do so as it evolves, I would say that the 'operation' of conversation implies English as a non-closed set. I certainly don't see how English qualifies as a 'closed world' since it continues to evolve and act with other languages.

[If I use the natural numbers to construct the integers, what I'm doing is using a well-understood, proven-sound system, and using it as a basis for implementing a new abstraction.]

I don't perceive, nor conceive, how *soundness* has gotten proven for the natural numbers.

[When I talk about integers, I'm talking about its closed world, the construction of the integers. In the construction, I'm only dealing with constructed integers and the operations on constructed integers. The construction is like the interpreter - it's there, it's filling a necessary role - but when I'm looking at the construction and its properties, I need to stay inside the construction.]

This won't tell you how the construction relates to the "outside" mathematical world. For example, remaining in the integer construction won't tell you about the rational construction. Staying strictly inside the integer construction won't tell you that the operation '*' for non-zero integers has an inverse operation '/' among a richer construction. Staying inside such a construction won't tell you that one can form richer constructions, as one has to see what happens when closure doesn't hold to find/invent richer constructions. In other words, staying inside a construction won't lead to new maths.

Doug, your ability to miss the point so consistently, and at such length, is an example to us all.

Mark is showing you how you can start with, in this case, the natural numbers, and construct the integers, defining not only the objects (integers) but also the operators (+z, -z) for the construction. The constructed -z is not the - of the natural numbers. You seem to be misunderstanding what he means by "closed world". "Closed world" means that you can't use the raw - of the naturals on a constructed integer as if it were the constructed -z operator; that would be breaking the construction. Now, we could use our constructed set of integers and use them to construct a new set of objects, the rationals. And in the rationals, we would define a new set of operators, +r and -r. Using the integer -z operator on objects in the rationals would break the construction, just as using the natural - operator on objects in the integers would break the construction.

In short, "closed world" means you can't use the raw operators of the substrate level on objects on the constructed level. It doesn't mean that you can't construct a higher level. it doesn't forbid you constructing the rationals from the integers.

By Stephen Wells (not verified) on 07 Nov 2007 #permalink

If there's no such thing as the empty set, and I can hardly think of a better candidate for non-existence, then it's hard to see how to get the ZF ball rolling. That's not to mention any ontological scruples we might have about the existence of sets. I mean, here is my computer in front of me. Is {my computer} also in front of me? And furthermore is {{my computer}, my computer} here before me? And also {{{my computer}, my computer}, {my computer}, my computer}? And so on?

And one is a philosopher for doubting these things? I would think it would take a philosopher to believe them! Of course, I believed wholeheartedly in sets when I was indoctrinated (to put it provocatively) in mathematics classes and have only developed doubts in studying the philosophical foundations of mathematics. That is not to say that I now reject set theory altogether, but rather that I think that the naive platonistic conception of sets is misleading. Certainly, assuming the existence of the ZF sets as a domain of interpretation over which to model the Peano Axioms now has no more appeal to me than simply assuming the existence of the numbers themselves.

Indeed, it seems in a way more plausible to think that the number zero exists, as a form in Plato's heaven perhaps (though current philosophers are more subtle in their metaphysics of abstract objects), than to think that something as bizarre as the empty set exists. Should we say that it is impossible that nothing exist because the empty set exists and it is a thing? One direction in recent philosophy of math has gone in developing an alternative to the ZF, VN, etc. style set theoretical constructions of models for the Peano Axioms is to return to Frege.

Frege got into trouble, famously, with his Basic Law V, which leads to Russell's Paradox. However, some modern commentators have noted that in Frege's logicist development of arithmetic he only uses Basic Law V to prove Hume's Principle, then derives arithmetic from that. Here is Hume's Principle:
(The number of Fs = The number of Gs) iff (F is equinumerous to G)

Hume's Principle provides an implicit definition of "number". On the left hand side of the biconditional is an equality sign that is flanked by objects (as indicated by the use of the definite article "the"). On the right had side is a relation of equinumerosity (defined in second order logic in terms of 1-1 correspondence) that holds between concepts. The Neo-Fregeans take Hume's Principle to be an analytic truth. Much of the philosophical debate has been over the analytic status of Hume's Principle. From this analytic truth we are able to derive the existence of numbers themselves as objects.
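
For readers who want the symbols, Hume's Principle is usually rendered in second-order logic along these lines (this formalization is my gloss, not Jeremy's wording; ≈ abbreviates equinumerosity, i.e. the existence of a 1-1 correspondence):

    % Hume's Principle: the number of Fs equals the number of Gs
    % iff the Fs and the Gs stand in 1-1 correspondence.
    \#x\,Fx = \#x\,Gx \;\longleftrightarrow\; F \approx G

    % where F \approx G abbreviates the second-order claim
    \exists R\,\bigl[\forall x\,(Fx \rightarrow \exists! y\,(Gy \wedge Rxy))
      \wedge \forall y\,(Gy \rightarrow \exists! x\,(Fx \wedge Rxy))\bigr]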

They are, to be sure, quite abstract objects. Leading Neo-Fregeans Bob Hale and Crispin Wright call them "shadows cast by our language". One might think of abstract objects such as numbers on analogy with The King in chess, not this or that particular token but the type itself. The King is a shadow cast by the rules of chess. Now, maybe you find this implausible or problematic. There's plenty of work to be done and many are skeptical that the Neo-Fregeans can deliver a foundation for mathematics based only on logic and definitions like Hume's Principle. Or maybe you think that definitions can't bring objects, no matter how shadowy, into existence. I sometimes think that. However, none of this seems any less plausible than thinking that the empty set exists. Even if one gives a similarly shadowy account of its existence, why would you interpret "0" to be it? Sure, ZF gives a model for the Peano Axioms, but so does the sequence of natural numbers. So I guess my point after all this rigamaroll is why not interpret the Peano Axioms over the natural numbers themselves?

Ok, I've said a lot and some of it I'm not so sure of, but here's some final thoughts that I'm similarly not so sure of. This is a loose account of some ideas Prof Gregory Landini is developing in the math logic course I'm presently enrolled in. For ease of exposition I'll stop doubting the existence of sets for a moment, though I think that what I have to say next can be stated in 2nd order logic without an ontology of sets as long as one is comfortable with heterogeneous functions. The empty set is a set with nothing in it. It has cardinality zero. Every set with nothing in it is extensionally equivalent to the empty set: e.g., the set of things not equal to themselves, the set of square circles, etc. The set containing the empty set is a set of cardinality one, but it is not extensionally equivalent to every equinumerous set. For example, the set {my computer} also has cardinality one, but is not extensionally equivalent to the set containing the empty set. What I'm driving at is that the ZF constructions give us exemplars of extensions of concepts (e.g., the concept "being the empty set or being the set containing the empty set") that satisfy higher order cardinality concepts (e.g., the second order concept "being a concept that exactly two-many things exemplify"). Why should we think of numbers as these or those particular exemplars of a cardinality concept and not as the cardinality concept (i.e., quantifier concept) itself?

By Jeremy Shipley (not verified) on 08 Nov 2007 #permalink

Doug: "I've never seen an uncertain statement in two-valued logic, have you?"

Here is an uncertain statement in two-valued logic: There is intelligent life in the universe outside of earth. The statement is either true or false. We just don't know which. Here is another, perhaps more controversial, example: Every even integer greater than 2 is the sum of two primes. Here's another: The number of blades of grass on the Pentacrest is odd. There are countless examples of statements that are either true or false, but we don't know which.

You seem to be confusing the epistemological status of knowledge and knowability with the semantic status of truth and falsity.

(This isn't to say I don't think that many-valued logics may have their place; e.g., in dealing with statements involving so-called "fuzzy" predicates like "bald". See the Sorites paradoxes, for example).

WRT uncertain statements in two-valued logic - I think that the most interesting things are not really uncertain, but they are unprovable while clearly being true.

The classic example of this comes from Gödel: "This statement cannot be proven." Properly stating it takes a bit of trickiness - but it ends up working out. It's clear that
you can't produce a proof of the statement. And yet, for it to be false, there would have to be a proof of it - which would make it true. So you can see that it is true, in some deep sense of the word "true"; but it's not provable. So it *does* lie in a kind of uncertain realm: true, but not provable within the constraints of a formal system. This, in turn, shows that you can't equate truth and provability. To be geeky, the set of true statements is a superset of the set of provable statements.

Mark. The sense in which the Godel sentence is true is not just some intuitive sense as suggested by your "you can see that it is true". Hopefully someone will correct me if I am wrong since I'm working through the details of this stuff for the first time, but isn't the Godel sentence true in the well-defined sense of truth in terms of satisfaction developed by Tarski? Here's how I've been understanding it. Tarski gave "true" and "provable" different senses. Godel showed they have differing extensions, if arithmetic number theory is consistent.

Stephen Wells,

You may speak correctly about me missing Mark's point within what I write. However, I rather think that Mark's point doesn't come as sufficient for what I think he wants to argue: the adequacy of (crisp) set theory. Look, even if technically speaking one can derive much-to-all of maths from (crisp) set theory, that doesn't mean it suffices as a basic philosophy of math.

[In short, "closed world" means you can't use the raw operators of the substrate level on objects on the constructed level.]

So, 'closed world' places a restriction on how we use operators. I have a problem with exactly that: as I've indicated, it leads us to think that using operators to study maths gives invalid perspectives, or at least it did with the rejection of negative numbers.

Jeremy,

[Is {my computer} also in front of me? And furthermore is {{my computer}, my computer} here before me? And also {{{my computer}, my computer}, {my computer}, my computer}? And so on?]

*laughs* yeah that does seem ridiculous, but according to set theory those sets "exist" as much as the natural numbers.

[Why should we think of numbers as these or those particular exemplars of a cardinality concept and not as the cardinality concept (i.e., quantifier concept) itself?]

Maybe I should scream at this one. Look, let's suppose we regard numbers as the cardinality concept, or as you put it the quantifier concept. Then, for infinite sets, any 'for all' quantifier ends up getting interpreted as an infinite number. It also works out that such an infinite number comes as equivalent to every other infinite number for the infinite set in question. One can talk about infinite numbers in terms of non-standard analysis, but infinity+1 does not equal infinity+2 there, so your cardinality=quantifier proposal fails here. For the 'there exists' quantifier, we would have to assign a number to 'there exists'. You might naively think assigning '1' will work. However, the 'there exists' quantifier doesn't say 1, it says *at least* 1. So, let's say we have a set like {1, 2, 3, 4, 5}, and I want to know if there exists x>2 for this set. Well, there does exist *at least* one x such that x is greater than 2, so the 'there exists' quantifier works. But, if I interpret the 'there exists' quantifier as '1 x', then the 'there exists' quantifier comes out incorrect since 3 xs exist here.
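
As a quick illustration of that last point (just my own sketch, nothing more), in programming terms the existential quantifier only checks for *at least one* witness; it never names a count:

```python
# 'there exists x > 2' over {1, 2, 3, 4, 5}, versus counting the witnesses.
s = {1, 2, 3, 4, 5}

exists_x_gt_2 = any(x > 2 for x in s)      # the existential claim itself
count_x_gt_2 = sum(1 for x in s if x > 2)  # how many witnesses there actually are

print(exists_x_gt_2)  # True  -- at least one x > 2
print(count_x_gt_2)   # 3     -- reading 'there exists' as 'exactly 1 x' goes wrong here
```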

Lastly, if we consider quantifiers such as 'most', 'almost all', 'very few', etc. we would have to assign a unique natural number or infinity to each of these quantifiers, since the classical cardinality concept permits only those types of cardinalities (as far as I see it). Well, these quantifiers don't have precise, unique natural numbers assignable to them accurately. We need more than just one number, and numbers other than natural numbers to approximate the meaning of such quantifiers numerically, and indicate our 'numerization' of those quantifiers as contextual (which is fine... but not so necessary with crisp cardinality).

Jeremy,

["I've never seen an uncertain statement in two-valued logic, have you?"
Here is an uncertain statement in two-valued logic: There is intelligent life in the universe outside of earth. ]

Hmm... I think you've got me there. Well... maybe not though, because I doubt that the term 'intelligent' actually fits within two-valued logic. Really, I'd state that as a non-two-valued statement, as I think intelligence comes in degrees, and in enough cases talking of someone/something coming as either 'intelligent' or 'not-intelligent' comes out quite silly. So, I could reject your example here.

Still, I can see your possible point, and I think I need to rephrase that. Given perfect or complete information, I've never seen an uncertain statement in two-valued logic. That's the thing: in non-classical logic, one can literally have all the information and statements still come out uncertain.

[Here is another, perhaps more controversial, example: Every even integer greater than 2 is the sum of two primes.]

Given the complete set of all natural numbers (perfect information), a countably infinite set, this comes as certainly true or false. The problem comes as finding a mathematical system which allows us to view such a set completely with respect to primality.

[The number of blades of grass on the Pentacrest is odd.]

How do you interpret 'blades of grass'? Does some grass which stands 5 feet high count as much of a 'blade' as a piece of grass which I've just cut? I don't feel certain that statement comes out in two-valued logic.

[There are countless examples of statements that are either true or false, but we don't know which.]

The Goldbach Conjecture comes out as a good example. But, I can actually modify that statement somewhat to say "even numbers are the sum of two primes." I can claim this as partially true. Well, I know that in millions of cases this statement works out true, since computer programs have verified such. Of course, you could retort that this comes out as an infinitesimally small number of cases, since no matter how many millions of cases we have, there always exists a countable infinity of cases where such a conjecture might not hold, as we haven't checked a countable infinity of cases. So, my claim of such a statement as "even numbers are the sum of two primes" working as 'partially true' doesn't work out as significant whatsoever, even if it doesn't have a degree of truth of 0.

[(This isn't to say I don't think that many-valued logics may have their place; e.g., in dealing with statements involving so-called "fuzzy" predicates like "bald". See the Sorites paradoxes, for example).]

Well, of course they have application here. But, you can also use them to deal with paradoxes like the liar paradox. To use a variant of it, if someone says "this statement is false," then if we take it as true, then the statement comes out false. But, if taken as false, then the statement comes out true. Or symbolically T->F and F->T. Of course, that's a big problem for classical logic (maybe one can use a theory of types to resolve it). But, with fuzzy logic one can solve it much more simply. One just assumes that if P->Q and Q->P we have equivalent truth values. So, we have T=F. In other words we have an equilibrium of truth values. Given the standard 1-a for complementation, we have the liar paradox indicating that such a statement has a truth value of 1/2, as a=1-a when a=1/2 ONLY. Most philosophers seem to think that people like Epimenides meant these statements more as a way to show the inadequacy of two-valued logic, rather than to challenge two-valued logic to develop new solutions to problems... so I would maintain that the 'fuzzy logic' solution goes more along with the original spirit of these paradoxes than something like the theory of types.
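
For concreteness, here's a minimal sketch (my own, just restating the arithmetic above) of why 1/2 works out as the only equilibrium under the 1-a complement:

```python
# The fuzzy reading of the liar: its truth value a must satisfy a = 1 - a.
def complement(a):
    return 1.0 - a

# Scan a fine grid of candidate truth values and keep the fixed points.
candidates = [i / 1000 for i in range(1001)]
equilibria = [a for a in candidates if abs(a - complement(a)) < 1e-9]
print(equilibria)  # [0.5] -- the unique equilibrium value
```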

Mark,

[So it *does* lie in a kind of uncertain realm: true, but not provable within the constraints of a formal system. This, in turn, shows that you can't equate truth and provability.]

It got done for thousands of years, and people still do so. Do you think such a 'proof' will dictate to them what they can and can't do in formal systems, when they already have acted otherwise? Second, I don't see how it shows you can't equate truth and provability. Again, that consists of an *interpretation* (not to say it qualifies as a bad one). If you guys really think that Godel's statement says something about truth and provability, then you would literally have the ability to demonstrate *from the text* that Godel's proposition VI says that. Has anyone done that here? Has anyone come close? Until you've done that, all you've indicated is that many people interpret Godel's proposition VI as saying that. Here's the paper:

http://www.vusst.hr/~logika/undecidable/godel.htm

Mysticism: believing in something you can't prove. So, if the *hypothesis* holds that Godel's proposition VI says that there exist arithmetical truths for formal systems like that of PM and ZFC which come as 'true, yet not provable', then one becomes a mystic by accepting such a system as PM or ZFC or a similar formal system. Yeah... I don't think so.

I'd also argue that non-two-valued statements exist within pure mathematics, such as "the difference between 1 and 2 is about the same as the difference between 3 and 5," or "the difference between 3 and 4 is much, much smaller than the difference between -1 and 10^600."

Jeremy:

Yes, you are correct - I was trying to give an intuitive explanation, rather than get into the details - since we're already having enough trouble with the details of what we're actually talking about.

Doug: Your points are reasonable enough on the two-valued logic matter, especially since I conceded some ground on fuzzy predicates. However, I could certainly give some kind of operational definition of "blade of grass" to clarify any ambiguity you'd like, and the blades of grass example will come out fine. Or if you'd like, try grains of sand on a beach. The point is that what is true and what is known (or certain or proven, etc.) can come apart. . . There is much more to be said here than can be said in blog comments so I'm not going to pursue this further, but you might check out Fitch's paradox of knowability.

Setting that aside because it would take a dissertation to defend either of our views, I wonder what you'd say to the following point. Even many-valued logics are two-valued in the following way: Propositions are either not at all true or at least little bit true. Consider baldness. It's a plenty fuzzy predicate, but certainly some of us are not at all bald. Or to take up the blades of grass. Perhaps you're convinced that there are borderline cases that are not clearly countable as blades, but certainly if I'm sitting on the Pentacrest you won't count me!

Moving on. . . I think that you misunderstood me on the point about cardinality concepts. Maybe I shouldn't have used the term "quantifier concept". However, I wasn't suggesting anything like the view that you criticized. Certainly, I'm well aware of how existential quantification works. However, we can use existential and universal quantification to make a schema that can truthfully be satisfied only by those concepts under which exactly one thing falls:

(there exists x) (Fx and (for all y) (if Fy then x=y))

Comprehend that schema as a second order concept. That's the sort of thing I have in mind as a cardinality concept. I called it a "quantifier concept" because it expresses the property "there are one-many Fs" in terms of a quantificational structure. I certainly didn't have in mind that that was simply (there exists x)(Fx).
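
For concreteness, here's a minimal sketch (my own illustration, restricted to a finite domain) of that schema read as the second order concept "exactly one thing is F":

```python
# (there exists x)(Fx and (for all y)(if Fy then x = y)), checked over a finite domain.
def exactly_one(F, domain):
    return any(F(x) and all((not F(y)) or x == y for y in domain)
               for x in domain)

domain = range(10)
print(exactly_one(lambda n: n == 7, domain))      # True  -- exactly one witness
print(exactly_one(lambda n: n % 2 == 0, domain))  # False -- five witnesses
```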

Anyhow, I'm still a little sketchy on all of this so. . . yeah.

So, if the *hypothesis* holds that Godel's proposition VI says that there exist arithmetical truths for formal systems like that of PM and ZFC which come as 'true, yet not provable', then one becomes a mystic by accepting such a system as PM or ZFC or a similar formal system.

There is a critical-- but common-- misunderstanding here. (One can catch Roger Penrose making the same error in a different context.) Godel does not say this inability to prove is a limitation of PM or ZFC. He says anything capable of expressing the arithmetic found in PM has this limitation. The limitation is present for ZFC and PM and also any potentially more "powerful" systems. This will include, for example, fuzzy set theory, since normal ZFC can be easily constructed within fuzzy ZFC.

Now, it's worth noting that there are systems which can prove some of the statements about ZFC which ZFC cannot prove about itself. But these systems don't escape Godel's cage either; these systems may be able to prove ZFC's unprovable statements about ZFC, but the cost is that the higher-order systems make it possible to make statements about the higher-order systems which those systems cannot themselves prove. You can't escape incompleteness, you can only alter where the incompleteness pops up-- unless you're willing to go the other direction and accept the Pyrrhic victory of adopting a system which is too weak to allow those operations which are necessary for Godel's theorem to hold.

The problem you identify comes not in "accepting such a system as PM or ZFC or a similar formal system", but in attempting to accept a meaningful formal system at all. But of course it is only in a formal system that we can make demands like "statements which are true should be provable" in the first place...

[The point is that what is true and what is known (or certain or proven, etc.) can come apart]

Well put.

[I'm not going to pursue this further, but you might check out Fitch's paradox of knowability.]

Thanks for the tip.

[Setting that aside because it would take a dissertation to defend either of our views, I wonder what you'd say to the following point. Even many-valued logics are two-valued in the following way: Propositions are either not at all true or at least little bit true.]

I consider your statement here mostly true (true of most multi-valued logics), but not all of them. If we have a neutrosophic logic, where truth values can work out infinitesimally larger than 0, yet not 0 (using non-standard analysis), then there exist propositions which work out neither as all true, nor as 'a little bit' true, since for any given 'little bit of truth' the proposition in question can have a truth value smaller than that little bit, but not 0. Maybe you want to say I've equivocated on the use of 'little bit' here. Fine. I can say you equivocated on the use of 'two-valued', since you've used it outside of the context {T, F} or something equivalent to it.

More interestingly, perhaps, I can reject such in the following way. I treat 'truth' as a linguistic variable and assign membership functions to quantifiers of 'truth' like 'absolutely no', 'little bit', 'little', 'medium', 'high', 'very high', and 'all true'. Adjacent categories overlap at some points, while non-adjacent categories overlap at no points. Consequently, 'very high truth' works out as neither 'little bit truth', nor 'absolutely no truth.'

Oh wait... you said 'at least little bit' true. Perhaps if you modified this to say that truth values come either having 0 degree of truth or a truth value equal to or greater than that of a positive infinitesimal, then it might work. I guess you can call that 'two-valued' in a sense, but we often don't call statements with truth value of 10^-200 true, we call them false. So, there may exist something technically 'two-valued' to such a scheme, but practically speaking I don't see it.

[It's a plenty fuzzy predicate, but certainly some of us are not at all bald.]

Sure. Then again, every one of us has space in between hairs, since atoms come out discrete.

[Maybe I shouldn't have used the term "quantifier concept". However, I wasn't suggesting anything like the view that you criticized.]

Good... that's actually a relief.

Coin,

[There is a critical-- but common-- misunderstanding here.]

Yeah... I forgot about that condition there. Still, it remains that Goedel doesn't say anything about truth and provability, so far as I can see from the text. He talks about provability, and basically shows provability can't work out as bivalent for all formal systems (that there exist some propositions for some formal systems where you can't prove the proposition, nor prove the negation).

[The problem you identify comes not in "accepting such a system as PM or ZFC or a similar formal system", but in attempting to accept a meaningful formal system at all.]

So there exists a mismatch between form and meaning. One interpretation of that comes as this: one can't use maths to analyze language, since math comes as formal and language comes as informal. But, applications of fuzzy set theory make *some* progress towards analyzing language through math. Of course, it works out as an approximation at best, but still it happens to some degree, and the problem becomes more tractable when you use higher types of fuzzy sets.

[But of course it is only in a formal system that we can make demands like "statements which are true should be provable" in the first place]

If we allow degrees of provability and degrees of truth, I don't see that.

Grrr... I started thinking too numerically. In a three-valued logic with truth values of {T, U, F}, there exists the ordering F < U < T:

http://www.xanga.com/Spoonwood/583936702/a-possible-positivist-three-valued-logic.html

Yeah... I forgot about that condition there. Still, it remains that Goedel doesn't say anything about truth and provability, so far as I can see from the text. He talks about provability, and basically shows provability can't work out as bivalent for all formal systems (that there exist some propositions for some formal systems where you can't prove the proposition, nor prove the negation).

Afaict, truth still works its way into there. Goedel only talks directly about provability, to be sure, but it is also true that some different formal system may be able to successfully prove/disprove the statement. The statement would thus still have a truth value, as it is (dis)provable in some systems.

However, I am unaware of the implications of allowing truth to be 'shared' across formal systems. This may end up being an incoherent statement.

By Xanthir, FCD (not verified) on 09 Nov 2007 #permalink

Doug I really have no desire to try and teach you meta-logic within the confines of a blog comment column. If you really want to know why Gödel's theory is about the fundamental difference between truth and provability in formal systems then read Raymond M. Smullyan, Gödel's incompleteness theorems, New York, 1992. If you still think we are wrong after that then nobody can help you.

I will get back to the subject of classical science and two valued logic at the weekend when I have more time and less work.

I have conversed at length, face to face as well as in the blogosphere, with well intentioned crackpots who simply cannot stand, for emotional reasons, Gödel's theory.

It arouses visceral hatred in perhaps the same way that Einstein does to anti-Relativists (especially Aetherists) and Darwin (especially Intelligent Design-kooks).

And they are even less likely to chip away at it, let alone demolish it, as it does not depend on the Physical universe (Special or General Relativity) or Life as we Know It (neodarwinian synthesis).

Worse to me are smart but aggressively self-promoting and well-funded cultists such as the uncredentialed Bostrom sidekick Eliezer currently having an ignorantly phrased DNA information theory question well-answered on Scott Aaronson's blog by people who found a good question buried in the falsified ignorance, and gave the cultist a series of interesting and mathematically valid answers. The loon has won over Scott Aaronson, and other bloggers who should know better, in part because of an alleged nonprofit foundation with Big Names on the board, which has had to move from state to state 3 times due to malignant financial irregularities, making one of my favorite blogs someplace that I won't post to for quite a while.

Sorry. I think we'd all rather discuss Gödel, and why people recoil in horror from its conclusions. I'm tempted to start a discussion about Gregory Chaitin, who was so generous to me with his time at the ICCS-2007 conference last week. But not now.

Xanthir,

[Goedel only talks directly about provability, to be sure, but it is also true that some different formal system may be able to successfully prove/disprove the statement.]

It may work out that way, but that still doesn't address proposition VI, Goedel's "first theorem". Nothing in proposition VI addresses the matter of truth and provability, so to say that it talks about truth and provability still comes as an interpretation of Goedel.

[Doug I really have no desire to try and teach you meta-logic within the confines of a blog comment column. If you really want to know why Gödel's theory is about the fundamental difference between truth and provability in formal systems then read Raymond M. Smullyan, Gödel's incompleteness theorems, New York, 1992.]

As a first warning here, not all meta-logic works out as two-valued, so don't try to pass that off. More on point...

Go ahead and stand here and say that "it's too complicated to explain here", "it'll take too long", "I don't want to do it", etc. It remains that I presented as close to the actual text as possible for discussion, and I still want to refer to it. It comes as rather well-known that so many people want to talk about Goedel and his ideas (including people like Newman and Nagel in volumes like _The World of Mathematics_) without actually going back and citing Goedel. And what do you do? Present another *interpretation* of Goedel instead of getting back to the text itself, and then have the *arrogance* to say that I simply can't understand such a text in such a space.

Honestly, a humanities or a philosophy scholar of a historical text acting this sort of way repeatedly with respect to something Ovid or Schopenhauer wrote would most likely get called a hack. Now, maybe all the claims about 'truth and provability' don't work out as "hack" statements, but until there exists a demonstration of this from the text, and what the symbols in the text mean/symbolize, such a claim remains unsubstantiated. An unsubstantiated claim remains speculative until substantiated.

Jonathan Vos Post,

[I think we'd all rather discuss Gödel, and why people recoil in horror from its conclusions.]

I invite a discussion about Goedel's propositions, no matter the emotional repercussions. I just ask that we stick to the actual text as much as possible, instead of citing interpretations of it, including those of "expert" mathematicians... as every one of us here, I think, could speak of interpretations of Goedel's text we consider outlandish.

Everyone,

Lastly, consider the following, which you might find interesting. Consider the proposition A, "this statement is unprovable in X." Well, if I prove such a statement in X, then I've demonstrated proposition A, so it becomes provable in X. But, if it works out as provable in X, then I demonstrate the proposition "this statement is unprovable in X," which, if we *consider even if contrary-to-fact* truth and provability as equivalent, means such a demonstration makes such a proposition true, and thus the statement becomes unprovable. In other words, if we assign a provability value of 'unprovable' to our proposition, we imply a value of 'provable' to the proposition, and conversely. In other words, we have P->U, and U->P, which means we have P<->U. This poses a problem if statements work out as either provable or unprovable. But if provability can have values in the unit interval, then such a statement has a provability value at an equilibrium, for which we could select the standard complement 1-a, and get a provability value of 1/2 for such a proposition. I'd certainly hope someone would anticipate I might suggest this since I already referenced the liar paradox. I know of nothing other than bare assumption which implies that mathematical statements must work out as either provable or unprovable, so I see no problems with considering, *even if you think it contrary-to-fact*, the hypothesis that provability can come in degrees.
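
As a companion to the liar sketch above, here's a minimal check (again just my own illustration) that no assignment from {provable, unprovable} satisfies a sentence equivalent to its own unprovability:

```python
# P <-> U, with U taken as 'not P': neither classical assignment satisfies it.
for provable in (True, False):
    unprovable = not provable
    print(provable, provable == unprovable)  # the comparison prints False both times
# With degrees in [0, 1] and the 1 - a complement, the equilibrium value 1/2 does satisfy it.
```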

Concerning classical science and two-valued logic:

Newtonian mechanics comes as indispensable in classical science. The central text of Newtonian mechanics comes as Newton's own _Principia_. In terms of form as well as content, Newton's _Principia_ comes as heavily reliant on the _Elements_ of Euclid. As 19th century mathematicians demonstrated, the proofs of Euclid often don't fully prove their propositions, but will prove their propositions with slight enough emendations. In other words, enough of Euclid's proofs come as partial, or in other words have a degree of proof in (0, 1). Degrees of proof in (0, 1) fall outside of at least some two-valued framework (a two-valued framework of proveness, at least). Consequently, a rather important part of classical science ends up falling outside of a two-valued framework.

Euclid's and Newton's work consisted of formal argumentation for their day. Since we know today that such argumentation forms came as sufficiently well demonstrated, but didn't come as totally proved, we know that the argumentation form in classical logic, in reality, allowed and used a non-two-valued form of some sort. Whether this includes truth as well as proveness, I have not indicated here. Although, *if* we assume truth and proof as equivalent, then classical science relied on non-two-valued logic in a rather important way.

At this point, I am honestly confused about what we're talking about. ^_^

By Xanthir, FCD (not verified) on 09 Nov 2007 #permalink

With respect to science (the term 'classical' doesn't work as relevant, since I'd argue the example here happens on the borderline between 'classical' and 'modern', so it can fall either way or both), evolutionary theory employs the concept of species, which certainly doesn't come out two-valued. If biologists attempted to use two-valued logic in formal arguments with respect to the behavior of pandas as more like raccoons or bears, there would exist many problems. Of course one can talk about evolutionary history and resolve such a problem in terms of history (but not in terms of behavior), but by doing so one introduces the non-two-valued evolutionary concept of species as a formal concept. As I understand it, in the language of physicists, evolutionary change happens at the classical (non-quantum, non-relativistic) level, and consequently talking about evolutionary theory qualifies as part of classical science, by such a linguistic standard.

Xanthir,

Many topics have gotten broached. Rather clearly, I'll type quite a bit, perhaps to excess at times... so what do you find interesting and what do you want to say?

Doug ranted:

Go ahead and stand here and say that "it's too complicated to explain here", "it'll take too long", "I don't want to do it", etc. It remains that I presented as close to the actual text as possible for discussion, and I still want to refer to it. It comes as rather well-known that so many people want to talk about Goedel and his ideas (including people like Newman and Nagel in volumes like _The World of Mathematics_) without actually going back and citing Goedel. And what do you do? Present another *interpretation* of Goedel instead of getting back to the text itself, and then have the *arrogance* to say that I simply can't understand such a text in such a space.

"From the remark that [R(q);q] says about itself that it is not provable it follows at once that [R(q);q] is true for [R(q);q] is indeed unprovable (being undecidable)."

(bold emphasis mine, italic emphasis in original) Kurt Gödel, On Formally Undecidable Propositions, quoted from Jean van Heijenoort ed., Frege and Gödel: Two Fundamental Texts in Mathematical Logic, Harvard University Press, Cambridge, Massachusetts, 1970 p. 90.

And before you return to claim that the wording of the translation that you have linked to on the internet is different, you should read the following;

"The translation of the paper is by the editor (van Heijenoort), and it is printed here with the kind permission of Professor Gödel and Springer Verlag. Professor Gödel approved the translation, which in many places was accommodated to his wishes." Van Heijenoort 1970 p. 86

Doug continued his rant with the following

Honestly, a humanities or a philosophy scholar of a historical text acting this sort of way repeatedly with respect to something Ovid or Schopenhauer wrote would most likely get called a hack. Now, maybe all the claims about 'truth and provability' don't work out as "hack" statements, but until there exists a demonstration of this from the text, and what the symbols in the text mean/symbolize, such a claim remains unsubstantiated. An unsubstantiated claim remains speculative until substantiated.

Before you scream and rave about other people not respecting the historical texts that they refer to, I think it would be wise for you to read the text first yourself!

I shall return to deal with the half-baked nonsense that you have written concerning Newton and Euclid tomorrow.

Doug: I'm just getting confused by the constant references to places where strict two-valued logic isn't used. What's the purpose of bringing all this up?

I mean, it seems like a bunch of non sequiturs to begin with. Second, of course we use a fuzzier conception of logic in most cases, including that of science. I don't take into account the quantum state of every particle in the air when I try to determine how far a thrown ball will go, even though that information directly determines the answer. I just don't care to get an answer that exact, and don't care to do that much work. I'll work with an approximation, which is by its very nature 'fuzzy', because that's all that's needed.

It seems like you're under the impression that we're all binary logic dogmatists, logical reductions all the way down. Dude, we're people. Our brains are designed to work fuzzily, because it's faster. What is your purpose in pointing out that obvious fact?

By Xanthir, FCD (not verified) on 10 Nov 2007 #permalink

I'm also unsure why there is an insistence that we can use formal logic to describe complicated processes such as science and its society. We don't even have a theory of it yet.

It reminds me of religious dogmatists who believe in Truth, or even truth.

In terms of form as well as content, Newton's _Principia_ comes as heavily reliant on the _Elements_ of Euclid.

While I'm curious about Thony's description of logic, I can comment on this. You can't compare science with math. In science you test your theories directly.

we know that the argumentation form in classical logic, in reality, allowed and used a non-two-valued form of some sort.

Sure, but in the same way that set theory and fuzzy set theory map to each other, classical and predicate logic do too in a way.
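
To illustrate one direction of that mapping - just a sketch at the level of membership functions, not the full axiomatic story - a crisp set reappears as the special case of a fuzzy set whose membership function only takes the values 0 and 1:

```python
def crisp_membership(s):
    """Turn an ordinary (crisp) set into a {0,1}-valued membership function."""
    return lambda x: 1.0 if x in s else 0.0

evens = crisp_membership({0, 2, 4, 6, 8})           # a crisp set, trivially fuzzified
roughly_small = lambda x: max(0.0, 1.0 - x / 10.0)  # a genuinely graded fuzzy set

print(evens(4), evens(5))                  # 1.0 0.0 -- only the two classical values
print(roughly_small(2), roughly_small(5))  # 0.8 0.5 -- intermediate degrees appear
```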

Besides, logic is an idealization to describe scientific knowledge, because you have different degrees of well defined concepts, data coverage ("don't know"), uncertainty, observational and theoretical support, et cetera. There isn't really any "truth" besides the truth values that this idealization engenders; what you have is facts and theories.

evolutionary theory employs the concept of species which certainly doesn't come out two-valued

Not at all. The problem is that the concept of species isn't well defined. (See the earlier point.)

AFAIU you can model the basics of evolution without it by looking at the different populations and their genomes. (Bottom-up by "population genetics" and top-down by "quantitative genetics".) Species is a secondary concept for convenience.

In fact, there are at least 26 different species conceptions that try to fulfill the requirements for the species concept.

The common biological species conception, definable as "interbreeding natural population isolated from other such groups" depends upon endogenous reproductive isolating mechanisms. It works well for sexual populations of Eukarya. But it can't be used for asexual (duh!) populations or other domains (Bacteria, Archaea) that have different forms of sexual mechanisms and much lateral gene transfer between populations. Nor can you use it to describe extinct populations that fossils sample.

By Torbjörn Lars… (not verified) on 10 Nov 2007 #permalink

"that have different forms of sexual mechanisms" - that have several different forms of sexual mechanisms. (AFAIK they can have up to three different forms of sex mechanisms simultaneously. Juts one form of sex is for amateurs. :-P)

By Torbjörn Lars… (not verified) on 10 Nov 2007 #permalink

Thony,

["From the remark that [R(q);q] says about itself that it is not provable it follows at once that [R(q);q] is true for [R(q);q] is indeed unprovable (being undecidable)."]

Thanks! The thing comes as this: whatever German word gets translated into 'true' might instead mean 'correct', as the source I referenced reads:

"From the remark that [R(q); q] asserts its own unprovability, it follows at once that [R(q); q] is correct, since [R(q); q] is certainly unprovable (because undecidable)."

I don't know German. But, if Godel might have said 'correct' instead of 'true', he doesn't necessarily say 'true, yet unprovable'.

["The translation of the paper is by the editor (van Heijenoort), and it is printed here with the kind permission of Professor Gödel and Springer Verlag. Professor Gödel approved the translation, which in many places was accommodated to his wishes."]

O.K., but does this imply that Godel worried about differences between words like 'correct' and 'true', or understood such a distinction? What happens if there exists a distinction between "true" and "correct", as perhaps with the statement "this statement is false"? How does one interpret Godel's ideas in such a context?

Also, notice that he goes on to say:

"So the proposition which is undecidable in the system PM yet turns out to be decided by metamathematical considerations."

In other words, to interpret it, the statement doesn't work as true, yet undecidable, in any sort of absolute sense. It works out as true and decidable through the introduction of another mathematical "level".

Xanthir,

[Doug: I'm just getting confused by the constant references to places where strict two-valued logic isn't used. What's the purpose of bringing all this up?]

I originally said "Science rarely makes used of two-valued logic, and even when it does rely on such heavily usually ways around two-valued logic exist or can get worked out." To which Thony responded "Sorry but there is no other honest way to express this, both of these statements is pure unadulterated crap. They are so way of the mark that I have no idea what I could write to correct them except to say that the formal argumentation in all classical science including Newton's theory of Gravity is based on two valued logic and nothing else!"

[What is your purpose in pointing out that obvious fact?]

It comes in response to how Thony responded to my statement about science. He still says "I shall return to deal with the half-baked nonsense that you have written concerning Newton and Euclid tomorrow." almost, if not exactly, as if Newton and Euclid were people who didn't use approximation in scientific matters.

Torbjörn,

[In science you test your theories directly.]

I certainly don't understand your statement here. I don't see anyone doing direct tests of the theory of the Earth revolving about the Sun. I don't see direct tests of the atomic theory of matter until we got high-powered microscopes, by which time the atomic theory of matter already came as long accepted. I don't see direct tests of facts about fossils for sure.

[Besides, logic is an idealization to describe scientific knowledge, because you have different degrees of well defined concepts, data coverage ("don't know"), uncertainty, observational and theoretical support, et cetera. There isn't really any "truth" besides the truth values that this idealization engender, what you have is facts and theories.]

You can well put it that way perhaps, and I think I understand and agree with what you mean. Although, I personally think of 'truth' as working in such a context also.

[The problem is that the concept of species isn't well defined. (See the earlier point.)]

By non-two-valued, I just mean that we can't tell whether an organism belongs or does not belong to a species... especially since species do change, we can't tell when one organism actually falls outside of its ancestors' species classification and fits in the classification of another, new species.

[AFAIU you can model the basics of evolution without it by looking at the different populations and their genomes. (Bottom-up by "population genetics" and top-down by "quantitative genetics".) Species is a secondary concept for convenience.]

Sure, you can do evolution that way, in terms of microevolution (just so you're clear as to what I'm NOT implying by this, I do think microevolution works as sufficient for macroevolution). But, I don't see how we can talk about species changing into other species without a species concept. We would have population changes at best. This works out differently in terms of grouping, as a population could get grouped by location as opposed to similarity of some sort (thus in a population, the deer and us could fall into the same population).

Also, the species concept doesn't work out as a secondary concept for studying organisms... in other words for doing biology. If we didn't have some species concept it would come as rather difficult, if not impossible to talk to people about "rats" and compare them between an Australian and an American environment.

AFAIK, AFAIU... I don't get what they mean.

Doug: Ah, okay. I stopped following what Thony said a little while ago, so I didn't realize that you were addressing him directly (and it seems that Torbjorn was lost somewhere along the way as well!). Please directly address things like that so the conversation doesn't get so confused. ^_^

I certainly don't understand your statement here. I don't see anyone doing direct tests of the theory of the Earth revolving about the Sun. I don't see direct tests of the atomic theory of matter until we got high-powered microscopes, by which time the atomic theory of matter already came as long accepted. I don't see direct tests of facts about fossils for sure.

Define 'direct tests'. I think it's rather obvious that we directly tested all of those theories. Testing involves more than just looking at something. For one thing, in order to *get* scanning tunnelling microscopes precise enough to see individual atoms, we had to have an extremely good understanding of atomic theory, which requires testing. It's not like we just theorized the microscopes into existence, and only afterward checked to make sure that they were correct.

Sure, you can do evolution that way, in terms of microevolution (just so you're clear as to what I'm NOT implying by this, I do think microevolution works as sufficient for macroevolution). But, I don't see how we can talk about species changing into other species without a species concept. We would have population changes at best. This works out differently in terms of grouping, as a population could get grouped by location as opposed to similarity of some sort (thus in a population, the deer and us could fall into the same population).

Well, surely you recognize that, no matter how we define species, it's still an artificial concept that we use merely for easier categorization. There's nothing at all in the genetic code that specifies what species something is, or when two things are different species. You've said as much, so this shouldn't be anything controversial.

When we *do* talk of species, we're merely talking about population changes. I don't see why 'grouping' them would necessarily result in something different. It just depends on what your grouping criteria are (and, as Torbjorn noted, there are at least 26 different methods of grouping that we call 'species').

AFAIK, AFAIU... I don't get what they mean.

As Far As I Know/Understand

By Xanthir, FCD (not verified) on 11 Nov 2007 #permalink

Doug:

You are flailing about, admitting direct tests and disregarding known tests. Just to humor some of your odder claims:

I don't see anyone doing direct tests of the theory of the Earth revolving about the Sun.

First, as stated it is technically a hypothesis. A theory describing this would be something like the theory for planetary systems evolution.

Second, such theories have been tested plenty. Direct parallax tests on Earth (a specific test, admittedly), observations of other planets and planet systems within and without our solar system confirming speeds, angular momentum, eccentricities, et cetera.

I don't see direct tests of facts about fossils for sure.

Transitional forms are a direct test, and so are phylogenetic trees. What you find is that a rather robust subset of all possible trees will have an appreciable likelihood.

In fact, this should be more known, because this, a part of biology, gives probably the best precision in all of science:

Nevertheless, a precision of just under 1% is still pretty good; it is not enough, at this point, to cause us to cast much doubt upon the validity and usefulness of modern theories of gravity.

However, if tests of the theory of common descent performed that poorly, different phylogenetic trees, as shown in Figure 1, would have to differ by 18 of the 30 branches! In their quest for scientific perfection, some biologists are rightly rankled at the obvious discrepancies between some phylogenetic trees (Gura 2000; Patterson et al. 1993; Maley and Marshall 1998).

However, as illustrated in Figure 1, the standard phylogenetic tree is known to 38 decimal places, which is a much greater precision than that of even the most well-determined physical constants.

A modern famous prediction of phylogenetic trees is when researchers predicted the form, age and habitat (geological formation) in which to find the sarcopterygian Tiktaalik in their later successful dig.

If you don't understand how and why scientific theories are tested, you will have a decidedly odd view of science and empirical knowledge. It isn't dreamed up, but painstakingly measured and tested.

And I would argue that it has little to do with the math that you initially discussed.

But let me also answer some of the remaining:

By non-two valued, I just mean that we can't tell whether an organism belongs or does not belong to a species...

Exactly, it isn't well defined. And even the rather more well defined concept of populations is split up by events that aren't part of evolutionary theory as such, for example geographical (environmental) isolation.

If you have a non-well defined concept, you can hardly expect to model it with a formal model and arrive at a consistent "truth" within it.

We would have population changes at best. This works out differently in terms of grouping,

I must admit that I'm currently unclear how biologists handle this too. I think some group other populations as the studied populations environment. Coevolution with another species would then be a cochanging environment. But I'm not sure.

By Torbjörn Lars… (not verified) on 11 Nov 2007 #permalink

Doug wrote:

O.K., but does this imply that Godel worried about differences between words like 'correct' and 'true', or understood such a distinction? What happens if there exists a distinction between "true" and "correct", as perhaps with the statement "this statement is false"? How does one interpret Godel's ideas in such a context?

Both MarkCC and I explained to you very clearly what exactly the first Gödel incompleteness theorem means. You then in the manner of a petulant child denied this and claimed that Gödel does not say this anywhere in his paper. I then showed you exactly where Gödel does say exactly that which both Mark and I had said. Now you are claiming that Gödel didn't know or maybe understand what he was saying! If that is the level at which you conduct a serious scientific discussion then as far as I am concerned you have totally disqualified yourself as a discussion partner and I for one am not going to waste any more of my time correcting your ridiculous errors.

Xanthir,

[Define 'direct tests'. I think it's rather obvious that we directly tested all of those theories.]

Direct observation or experiment of the theory in question. lol... Earth about the Sun. We haven't directly observed that, or if someone now has done so, it got accepted scientifically long before we had direct observation.

[Well, surely you recognize that, no matter how we define species, it's still an artificial concept that we use merely for easier categorization.]

I don't see how concepts work out as "natural" or "artificial".

Torbjörn Larsson,

[Second, such theories have been tested plenty.]

Sure, but they still work out as indirectly tested. We don't have direct observation of the Earth moving about the Sun.

[Transitional forms are a direct test, and so are phylogenetic trees. ]

Phylogenetic trees don't give us 'in vivo' observation or an experiment which allows us to see such changes happening 'in vivo', as I understand them. I guess I needed to first clarify what I mean by 'direct test'.

[However, as illustrated in Figure 1, the standard phylogenetic tree is known to 38 decimal places, which is a much greater precision than that of even the most well-determined physical constants.]

Don't get me wrong, I don't doubt this sort of testing as extremely well-done and highly accurate. However, the comparison here might not make too much sense. First off, I don't see how he gets "decimal places" from figure 1, nor units of any sort, but I certainly could have missed something. Second, comparison in precision between different units doesn't make much sense to me in general. Such comparison can make sense if we can translate the units into each other, but I don't see how we can do that here. We don't know how to translate every physical unit into every other physical unit. Comparing the precision of a measurement in joules to that of a measurement in feet doesn't make sense. And, of course, we can't use JUST numbers to compare units. We could have a basic measurement such as 5 feet, and 1 yard, each good only to the nearest unit. Well, the margin of error for feet indicates we have a number in [4, 6] feet, while the yard measurement indicates we have [0, 2] yards, which translates into [0, 6] feet... so our measurement in yards actually ends up less precise than that in feet. I certainly don't see how comparing the precision of the phylogenetic tree to that of the universal gravitational constant makes much sense.

[A modern famous prediction of phylogenetic trees is when researchers predicted the form, age and habitat (geological formation) to find the sarcopterygian Tiktaalik with in their later successful dig.]

Well, that's just awesome.

Thony,

[I then showed you exactly where Gödel does say exactly that which both Mark and I had said.]

You haven't shown that, because you haven't shown an equivalence between correct/true, as well as that Goedel understood such a possible distinction. Also, the statement there consists of Godel interpreting his own formal statement, proposition VI. You haven't shown that the formal statement proposition VI itself says that.

[Now you are claiming that Gödel didn't know or maybe understand what he was saying!]

Well, it's happened before. After all, Euclid didn't understand his own demonstrations as not so rigorous. Many people who worked in logic haven't understood that a statement like the principle of contradiction assumes the truth set {T, F} as the set of all possible truth values. They thought it an absolute, undeniable property. You've claimed that I don't know what I'm talking about or what I'm saying. Goedel lived as a human like everyone here, so he could have made a mistake.

I haven't claimed that he didn't know or understand what he was saying... I've presented that as a possibility. Lastly, from his own analogy and commentaries on the problems we think Goedel tried to deal with, these sorts of problems come as very, very close to resolving the liar antinomy, if not structurally the same as solving such. We know that the liar antinomy comes as rather simply solved in even a three-valued logic, and many people would claim it more simply solved in such a way. Goedel didn't do such in his paper with respect to truth values (but does make it so that statements have at least three values of proveness), but he did go on to do work in multi-valued logic later. Would Goedel have preferred an approach to resolve liar-like antinomies in a more multi-valued manner? I don't know, and the paper simply doesn't say. But, if Goedel would have preferred such, had he known of it, then Goedel may not have understood what he said, insofar as *how well* it solves the problems he sought to solve. By all means criticize such... but I do acknowledge such as speculative.

We don't have direct observation of the Earth moving about the Sun.

I have described direct observations regarding the theories described, i.e. observations directly corresponding to their observables. If that isn't your idea of "direct observation", please explain what you mean.

Phylogenetic trees don't give us 'in vivo' observation or an experiment which allows us to see such changes happening 'in vivo', as I understand them.

I'm not sure what you mean. Organisms fossilize continually.

If you mean that they are a proxy of (mostly) extinct populations, sure. But how would that invalidate what we measure on the observed process?

Such comparison can make sense if we can translate the units into each other

I'm not sure what you mean, since units don't "translate into each other".

Measurements and/or variables can be compared by combining them to be dimension free. But more importantly, physical equations naturally have parameters on the order of ~ 1, and that makes it possible to compare them by normalizing them. (Fine-tuning is a special subject, too long to get into here. There is a possible concept of naturality there too.)

Precision is normalized. And the numbers given here were the precision with which the small subsets of likely trees get picked.

By Torbjörn Lars… (not verified) on 12 Nov 2007 #permalink

[If that isn't your idea of "direct observation", please explain what you mean.]

Seeing it with your own eyes, or through the lens of a microscope/telescope with your own eyes.

[If you mean that they are a proxy of (mostly) extinct populations, sure.]

Sorry, I suppose I should have clarified this better. Inference from fossils to characteristics of extinct populations doesn't come as direct observation, as we didn't observe the organisms themselves.

[But how would that invalidate what we measure on the observed process?]

It doesn't invalidate such. It does however mean that we use human inferential processes, which work more like natural language reasoning, which works with exceptions and 'usual' cases, than classical bivalent mathematical reasoning.

[Measurements and/or variables can be compared by combining them to be dimension free.]

This part of the discussion I've found most interesting so far. Dimension-free variables, what? How would implementation in equations then work, when dimensional homogeneity comes as requisite? What about the feet/yard example above... wouldn't dimension-free variables cause a misinterpretation of precision in such a case? Maybe I don't get this and it's just me, but I know I certainly don't understand what you've said here.

[Precision is normalized.]

I certainly don't get this at all. It sounds like you mean to say there exists a method to convert pounds to coulombs without other variables involved. Sure, if F=MA and we have a pound force measurement and a mass measurement, we can convert our units into acceleration. But, you almost seem to say given ONLY a pound force measurement we can convert that into a charge measurement. Does such normalization happen by taking ratios of, say, a pound force to total pound force in comparison to charge to total charge? I certainly don't see how comparison of such ratios makes sense without reference to the same object, so I don't know how in general one compares such quantities. To the total charge/mass/force/etc. of the universe?

Comparing charge to mass seems like comparing apples to tofu. I honestly don't get what you mean to say, or if I did, I don't get how one can compare charge to velocity.

Doug, as a scientist, I find your definition of "direct observation" absurdly restrictive. I think "seeing" was the term you were looking for. What about hearing, by the way? Can we observe the existence of a sound? In any case, might I point out that every human being who has ever looked up at the sky has directly observed the earth moving around the sun. Most of them didn't realise that at the time, but that is a different question :)

Has anyone ever "directly observed" an electron, for example, in your sense? No. Do we have any doubts as to the existence of electrons? No. Do we have a large body of data which only makes sense in the light of the existence of electrons? Yes. We can spray them. We can use them in electron microscopes. Scientific knowledge doesn't depend on your definition of "direct observation". By the later seventeenth century we knew, with as much certainty as you know your address, that the earth goes round the sun; because a very large body of observation made sense in no other way.

Re. dimension-free combinations of variables - we use them all the time! Think of the Reynolds number, for example, a dimensionless quantity made from a combination of a length, a speed and a viscosity; it tells you if your fluid flow is viscosity-dominated or not. Or think of calculating a quantity in exponential decay. The exponent needs to be a dimensionless quantity, as it doesn't make sense to raise e to the power of a dimensional quantity! So the exponent will often be something like (a distance)/(a scale factor, with dimensions of distance).
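
For concreteness, a minimal sketch of that first example, with made-up numbers: the units cancel and only a pure number remains.

```python
# Reynolds number Re = v * L / nu: the units (m/s * m) / (m^2/s) cancel out,
# leaving a dimensionless number. The values below are purely illustrative.
velocity = 2.0                 # m/s
length = 0.05                  # m, a characteristic length (say, a pipe diameter)
kinematic_viscosity = 1.0e-6   # m^2/s, roughly water at room temperature

reynolds = velocity * length / kinematic_viscosity
print(reynolds)  # ~ 1e5: much greater than 1, so this flow is not viscosity-dominated
```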

Re. truth and provability. You've been shown that Goedel's proof gives you a statement which is evidently true but not formally provable in the system that generated it. Now you're complaining that the others have not "shown an equivalence between provable/true." What position are you arguing for anyway?

By Stephen Wells (not verified) on 13 Nov 2007 #permalink

Nod to Torbjorn. Doug, you're not talking about direct observation, you're talking about seeing. The two cannot be conflated, because we have at least four other natural senses that we can *also* observe things with, in addition to the multitude of augmented senses that technology brings us.

It's precisely that restrictive sense of observation that often gives creationists such fits - they define science as the act of looking at things, more or less. When we can't *watch* something plainly occur in front of us, they figure we can't say anything about it.

They are, of course, wrong.

By Xanthir, FCD (not verified) on 13 Nov 2007 #permalink

Doug wrote:

Lastly, from his own analogy and commentaries on the problems we think Goedel tried to deal with, these sorts of problems come as very, very close to resolving the liar antinomy, if not structurally the same as solving such. We know that the liar antinomy comes as rather simply solved in even a three-valued logic, and many people would claim it more simply solved in such a way. Goedel didn't do such in his paper with respect to truth values (but does make it so that statements have at least three values of proveness), but he did go on to do work in multi-valued logic later.

I have already stated above that I am no longer interested in discussing anything with you Doug, but just for the record Gödel's paper on Undecidable Propositions has absolutely nothing to do with solving or not solving the liar paradox, and the fact that you obviously think it does shows that you haven't got a clue what you are talking about!

Because you keep demanding that other people prove things, I have a small task for you concerning your very erroneous claims about Euclidean Geometry. Please show, based on Euclid's original proofs, one single proof of his that is in any way invalid or that in any way deviates from two valued logic, as you claim to be the case. Unless you can do so I would politely suggest that you should withdraw your ridiculous statements concerning both Euclid and Newton.

Stephen Wells,

[Doug, as a scientist, I find your definition of "direct observation" absurdly restrictive.]

Well, if taken as the only way to do science, I would agree that such a definition comes out as absurdly restrictive. I didn't mean to imply that though.

[What about hearing, by the way?- can we observe the existence of a sound?]

Yes, it involves direct perception. Although, I suppose you might argue that there's a problem with that in that every perception involves the brain and more than just the senses, so the qualifier 'direct' becomes a bit ridiculous at this point. Still, many scientific inferences don't work out as direct as observations of sounds from a radio.

[In any case, might I point out that every human being who has ever looked up at the sky has directly observed the earth moving around the sun. Most of them didn't realise that at the time, but that is a different question]

You can say that, but the brain doesn't perceive that. Everyone perceives the Sun moving in the sky with the Earth not moving. Inferring that the Earth stays still and the Sun moves (relative to each other) produces a sloppy model.

[Scientific knowledge doesn't depend on your definition of "direct observation".]

laughs... by NO means did I assert such. I don't see how I implied such either. Look, Torbjörn said "In science you test your theories directly". Well, we don't test scientific theories *as directly* as we test, say, a recipe for how well it cooks, or test to see if a radio works, or if a broken washing machine will start by trying to turn it on. I use the term 'directly' to indicate such a high degree of directness. I suspect many other people, meaning most non-scientists, would use the term in such a way. Maybe I'm wrong though. However, thank you all for clarifying that many scientists and mathematicians don't use the term 'directly' in such a way.

[By the later seventeenth century we knew, with as much certainty as you know your address, that the earth goes round the sun; because a very large body of observation made sense in no other way.]

Here, I don't agree. I know my address definitionally, as it defines the location of where I live (though it doesn't tell me the position of my house). The Earth going about the Sun doesn't work definitionally. I don't think we have the same degree of certainty with respect to the Earth going about the Sun as we do for the washing machine working when someone successfully turns it on. That's the use of the 'direct observation' notion. Again, the Earth going about the Sun doesn't have the same degree of certainty as the light working after I've turned the light switch on. Maybe they have the same degree of certainty in your mind, but I disagree, and I think we have different degrees of evidence here. The 'direct test' of turning on a light switch, in comparison to an 'indirect test' like tests involving stellar parallax, gives us a higher degree of evidence. Please note this isn't about one holding and the other not... it's about the strength of evidence involved.

[You've been shown that Goedel's proof gives you a statement which is evidently true but not formally provable in the system that generated it.]

No, I haven't. He didn't show this in the context of proposition VI, as I already stated. Godel states that in his second informal explanation of his own proposition.

[Now you're complaining that the others have not "shown an equivalence between provable/true."]

I didn't say provable/true, I said "You haven't shown that, because you haven't shown an equivalence between correct/true, as well as that Goedel understood such a possible distinction."

Xanthir,

I did say "Seeing it with your own eyes, or through the lens of a microscope/telescope with your own eyes." So, *literally* speaking I did only talk about seeing. However, I could have written "seeing with your own eyes, hearing with your own ears, tasting with your own mouth, etc." and nothing in meaning would get changed. I thought the 'seeing with your own eyes' would have come as sufficient to indicate any sense perception. I allowed for augmented senses.

[It's precisely that restrictive sense of observation that often gives creationists such fits - they define science as the act of looking at things, more or less.]

True enough, and such a definition of science does NOT make sense. As many have pointed out before, we simply wouldn't need science if we could just directly observe the event in question. However, the degree of evidence (and for me the degree of certainty) from a *correct* direct observation comes out stronger than that of an observation which needs more inference. That doesn't mean any such inferences work out wrong whatsoever, as plenty of students can think of examples where they had less certainty in answering a question, yet still got the right one. But, it still does mean that such scientific hypotheses have a lower degree of evidence than statements which we've tested in a more direct manner, such as the Earth going about the Sun in comparison to the computer in front of you working. Or do you think we actually know the Earth revolves about the Sun with more certainty/has a higher degree of evidence than the computer in front of you working right now? I'd certainly like to know your reasoning if you do think such, as I haven't imagined it.

Doug:

The problem with your definition of "direct observation" is amply discussed by Stephen and Xanthir. Scientific observations are made by experiments and instruments specifically made for the purpose, as our senses aren't capable. I will add that we make observations to ultimately test predictions and learn new things, not to observe each detail of each process while it happens, for as long as it happens.

[Inference from fossils to characteristics of extinct populations doesn't come as direct observation, as we didn't observe the organisms themselves.]

That isn't the purpose, see above.

I will amend what I said earlier on this point, btw. Thinking further I see that I made the same mistake as I argued against, to not think in terms of the theory. Phylogenetic trees observe the nested hierarchies (here of traits) that we expect from common descent, we aren't directly checking out descriptions from population genetics and quantitative genetics.

[It does however mean that we use human inferential process...]

I will claim that inference is one (great) way to form hypotheses, but it isn't how we test them. We can test by making predictions and checking them, preferably by comparing with a null hypothesis.

[Dimension free variables, what?]

Sorry, I didn't make time for a closer description.

Here is a primer on dimensional analysis, that may be a way to start. It mentions dimensionless constants and their natural magnitude of unity and gives links to further material. I would stay away from the discussion on natural units though, since it is a contentious issue with little value unless you work with them.

It leads up to the real meat of dimensionful vs dimensionless physical constants, and why theories' parsimony is judged by the set of the latter.

So we are discussing comparing variables (or parameters or constants, depending on what you want to vary) on naturality and theories on parsimony, by the same expedient of making them dimension free in some sense.

[I certainly don't see how comparison of such ratios make sense without reference to the same object, so I don't know how in general one compares such quantities.]

Acute. Yes, I was rather carelessly thinking of ratios, which give us a relative value of the uncertainty; by comparing them against each other we can discuss relative precision between different observations. And further, by comparing them against some conventional values we can use them for the tests I mentioned, or to ascertain whether a new effect is a likely explanation for an observation.

Physics often uses 3-sigma limits on uncertainty to judge whether a tested effect is strong enough to validate a theory, and 5 or more sigma to say whether there is a new and unexpected observation. (Typically, controlled lab experiments may use a lower limit, while I believe astronomers want considerably more to be satisfied.)
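To give a rough feel for what those limits mean, here's a small Python sketch, assuming the usual Gaussian error model (the observed/predicted numbers at the end are invented):

```python
import math

def two_sided_p(n_sigma):
    """Probability of a deviation of at least n_sigma (in either direction)
    arising by chance under a normal distribution."""
    return math.erfc(n_sigma / math.sqrt(2))

for n in (1, 2, 3, 5):
    print(f"{n} sigma: p ~ {two_sided_p(n):.1e}")

# Comparing an observed value against a prediction with known uncertainty:
observed, predicted, sigma = 105.2, 100.0, 1.5
n_sigma = abs(observed - predicted) / sigma
print(f"deviation = {n_sigma:.1f} sigma, p ~ {two_sided_p(n_sigma):.1e}")
```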

I don't have a good reference on this as my own books are somewhat dated and, worse, in Swedish. And I didn't find a good enough web description right now. But typically you would want to find a book on measurement techniques.

Btw, this is one topic that can be much easier to work through than to discuss theoretically.

Hmm. I doubt I have anything more to say at this point, except to return to the point that the methods around facts and theories in science aren't practically captured by either plain language alone or formal (math) theories. And if you want to think about the latter, it is easier to concentrate on just that. I'm sure it is hard enough. :-)

By Torbjörn Lars… (not verified) on 13 Nov 2007 #permalink

Doug:

Okay, I was wrong. Here is something new to comment on:

[I don't think we have the same degree of certainty with respect to the Earth going about the Sun as we do for the washing machine working when someone successfully turns it on.]

When we discuss the reliability of observations we need to relate them to an underlying theory, as facts and theories lend each other trust. And the reliability of a theory isn't captured by whether it is violated (for example, the classical formulation of 2LOT can be violated) or by whether we observe the process it describes the whole time.

Planets circle their sun until they are for some reason or other ejected. The Earth has gone around the Sun for at least 4.6 Gy, and it is very likely to continue doing so, even if long-term predictions (IIRC, more than a couple of My out) are impractical. And we would certainly notice an ejection, so in your simile we are seeing the washer being on and haven't seen anyone reach for the off switch yet.

On a more general note, IIRC Joe Polchinski described the difference between mathematical and physical theories roughly as 'a math proof is as strong as its weakest link, while a physical theory is as strong as its strongest support', to describe why theoretical physicists don't necessarily need fully formal theories to work with. Physical facts and theories have another kind of coherence, where theories support each other, and where physicists often find that if a description works out to be merely coherent and also testable, it is often the correct one.

By Torbjörn Lars… (not verified) on 13 Nov 2007 #permalink

[I will claim that inferences is one (great) way to form hypotheses, but it isn't how we test them. We can test by making predictions and check them, preferably by comparing with a null hypothesis.]

As I understand it, 'predictions' refers to logical consequences of a hypothesis. Logical consequences involve inference. A 'prediction' says that if a hypothesis A holds in reality/came as a sufficiently suitable model/had utility, then phenomenon B will also happen. For example, if Newton's theory of gravity held, then the perturbations of Uranus's orbit 'predicted' or 'implied' the existence of Neptune. I perceive an inference from Newton's theory happening in such a case. If you mean to claim predictions as not qualifying as inferences, I certainly don't get it.

Maybe I've misunderstood you here. Maybe you mean to say that science doesn't work as *purely* inferential. Well, sure, but I didn't claim that. I just meant to say that inference gets involved in the process of science, and that it relies on such inferencing in tests (though not exclusively so).

[Sorry, I didn't make time for a closer description.]

An incomplete post is better than none, so no need to apologize. Thanks for the links. The Buckingham pi theorem looks interesting here.

[Planets circle their sun until they are for some reason or other ejected.]

Well, I don't know practically, but what if their velocity comes as too small to maintain their orbit? Wouldn't the planets then eventually get "sucked in" by the gravity of the star, like how a meteor gets "sucked in" by the Earth's gravity?

Doug Wrote:

As I understand it, 'predictions' refers to logical consequences of a hypothesis. Logical consequences involve inference. A 'prediction' says that if a hypothesis A holds in reality/came as a sufficiently suitable model/had utility, then phenomenon B will also happen.

Doug doesn't know the difference between inductive inference and deductive implication.

I think we _do_ have the same degree of certainty regarding the earth orbiting the sun as we do re. facts of everyday life. I rely every day on facts (like "stepping in front of that speeding car would kill me") which rely on chains of inference, not direct observation (I infer, from the fatal results of other people stepping in front of rapidly moving vehicles, that I shouldn't do it). If I'm allowed to use inference and induction for this kind of everyday knowledge, I don't see why I can't use it for scientific knowledge.

By Stephen Wells (not verified) on 14 Nov 2007 #permalink

The Common Sense of Science
by Jacob Bronowski
First Published by Heinemann, 1951.

IN ORDER TO ACT in a scientific manner, in order to act in a human manner at all, two things are necessary: fact and thought. Science does not consist only of finding the facts; nor is it enough only to think, however rationally. The processes of science are characteristic of human action in that they move by the union of empirical fact and rational thought, in a way which cannot be disentangled. There is in science, as in all our lives, a continuous to and fro of actual discovery, then of thought about the implications of what we have discovered, and so back to the facts for testing and discovery - a step by step of experiment and theory, left, right, left, right, for ever.

Thony,

[Doug doesn't know the difference between inductive inference and deductive implication.]

I don't consider the argumentation form inductive. If we superimpose classical logic on the argument here, it has the form
A->B
B
Therefore, A.

Now, in classical logic that consists of affirming the consequent, so it comes as clear that science doesn't use classical logic. We have *abductive* reasoning here. A better way of putting this would use a Polya-like style
A->B
B
Therefore, A more credible. Or
A->B
B
Therefore, A better supported by evidence.

Either way we certainly don't use classical logic here.
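One way, though certainly not the only way, to make 'therefore, A more credible' precise is a Bayesian update. Here's a toy Python sketch with invented numbers; it's just one formalization of plausible reasoning, not a claim that this is what anyone in the thread must mean:

```python
def posterior(prior_A, p_B_given_A, p_B_given_notA):
    """P(A | B) by Bayes' rule."""
    p_B = p_B_given_A * prior_A + p_B_given_notA * (1.0 - prior_A)
    return p_B_given_A * prior_A / p_B

# If A guarantees B (P(B|A) = 1) and B would otherwise be unlikely,
# then observing B raises the credence in A without proving it.
prior = 0.2
after = posterior(prior, p_B_given_A=1.0, p_B_given_notA=0.1)
print(f"P(A) before observing B: {prior:.2f}")   # 0.20
print(f"P(A) after observing B:  {after:.2f}")   # about 0.71, still short of 1
```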

Stephen Wells,

[I rely every day on facts (like "stepping in front of that speeding car would kill me") which rely on chains of inference, not direct observation (I infer, from the fatal results of other people stepping in front of rapidly moving vehicles, that I shouldn't do it).]

You've seen, or at least someone else has seen, another person step in front of a car. So far as I know, we haven't observed another planet revolving about the Sun (and even if we have, we accepted the heliocentric theory long before we observed such). Consequently, I don't see such as analogous. Although, perhaps you consider the Medicean moons moving about Jupiter as sufficiently similar, so I can see how you might think such.

It still remains that in common life we have some events which we DO directly observe, such as turning on the computer which confirms the hypothesis "the computer works."

Doug:

[Logical consequences involve inference.]

As Thony C notes, inductive inference vs deductive implication.

But in any case it doesn't matter, as we don't test how the prediction was arrived at, but how the theory works. The prediction is what the theory describes, however it is arrived at.

[A->B
B
Therefore, A.]

That isn't the logic used. Falsification can be described as deduction with modus tollens, not inference with modus ponens:

The falsification of statements occurs through modus tollens, via some observation. Suppose some universal statement U forbids some observation O:

U → ¬O

Observation O, however, is made:

O

So by modus tollens,

¬ U

And then by noting that it is the differing predictions between theories that are tested, not each statement in isolation as in naive falsification.
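A tiny brute-force check in Python of that schema, just to show it really is ordinary two-valued modus tollens (the variable names follow the quote above):

```python
from itertools import product

def implies(p, q):
    # material implication in two-valued logic
    return (not p) or q

# Keep only the truth assignments consistent with "U forbids O" plus "O observed".
consistent = [(U, O) for U, O in product([True, False], repeat=2)
              if implies(U, not O) and O]
print(consistent)   # [(False, True)] -- the only option left has U false
```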

[Well, I don't know practically, but what if their velocity comes as too small to maintain their orbit?]

Then they would have lost their orbit a long time ago. Practically, planets are objects that cleared their orbits of other debris a long time ago, or they would spin down as you say.

Any change now would be noticeable, because the only thing that would make a sufficient impact now would be another large body passing nearby to accelerate Earth.

By Torbjörn Lars… (not verified) on 14 Nov 2007 #permalink

[That isn't the logic used. Falsification can be described as deduction with modus tollens, not inference with modus ponens:]

Falsification ends up getting rid of hypotheses. Sure, that does get used, but NOT to confirm a hypothesis. If we talk about Newton's hypothesis and the perturbation of the orbit of Uranus, one doesn't use falsification to infer the existence of another planet. When they found Neptune, they didn't falsify the hypothesis, they confirmed it. I can write that historical example out like this:
Let G stand for Newton's theory of gravity as useful/approximately true
Let P stand for perturbation of Uranus.
Let N stand for existence of another planet.
Then formally, we have
G->P
P->N
P
N
Therefore, G.

Of course, it COULD have happened that we would have ~N, so P->N would come out as logically invalid from classical logic. So, the hypothesis works out as falsifiable. But, when the hypothesis doesn't get falsified, and we actually have the consequents holding, we confirm/verify the theory by using abductive inference. So, really we have classical logic as a way of rejecting a hypothesis or letting us know that such a theory can get rejected by tests, but we have abductive logic to confirm/verify/buttress the hypothesis.

[Then they would have lost their orbit a long time ago.]

Even if the velocity is just 10^-10,000 m/s below stable orbit speed? Hmmm... well, perhaps this would hold for our planets, which have existed for millions of years. But I thought more generally in terms of any system and its planets.

Doug, how can you with a straight face tell us that we've never seen another planet orbiting the Sun? ALL the other planets in the solar system are orbiting the Sun!

Cue Wittgenstein: What would it look like, if it looked as if the earth were rotating?

By Stephen Wells (not verified) on 14 Nov 2007 #permalink

[Sure, that does get used, but NOT to confirm a hypothesis.]

Here we part ways.

I was discussing testing theories, something which is known to be important for scientists. We can in principle easily eliminate erroneous theories, and provisionally accept the rest. But we can never have a guarantee that a theory is entirely correct in all its details.

What you are saying, is that falsification doesn't work, so we can disregard it. (And that scientists are aiming for verifiable truth, not testable theories.) I don't agree, especially as inferences can't eliminate erroneous theories as well as direct tests.

I think your example of Neptune is excellent, since it was a test. They found Neptune where it was predicted to be; otherwise the hypothesis would sooner or later have been abandoned for other explanations. It wasn't a blind search for mere existence, as your logic implies:

In 1821, Alexis Bouvard published astronomical tables of the orbit of Uranus. Subsequent observations revealed substantial deviations from the tables, leading Bouvard to hypothesize some perturbing body. In 1843, John Couch Adams calculated the orbit of an eighth planet that would account for Uranus' motion. He sent his calculations to Sir George Airy, the Astronomer Royal, who asked Adams for a clarification. Adams began to draft a reply but never sent it.

In 1846, Urbain Le Verrier, independently of Adams, produced his own calculations but also experienced difficulties in encouraging any enthusiasm in his compatriots. However, in the same year, John Herschel started to champion the mathematical approach and persuaded James Challis to search for the planet.

After much procrastination, Challis began his reluctant search in July 1846. However, in the meantime, Le Verrier had convinced Johann Gottfried Galle to search for the planet. Though still a student at the Berlin Observatory, Heinrich d'Arrest suggested that a recently drawn chart of the sky, in the region of Le Verrier's predicted location, could be compared with the current sky to seek the displacement characteristic of a planet, as opposed to a fixed star. Neptune was discovered that very night, September 23, 1846, within 1° of where Le Verrier had predicted it to be, and about 10° from Adams' prediction. Challis later realized that he had observed the planet twice in August, failing to identify it owing to his casual approach to the work.

By Torbjörn Lars… (not verified) on 15 Nov 2007 #permalink

Doug:

Your logical analysis of the scientific methodology is not correct.
Given G :=Newton's theory of Gravity
Then TOU:= Theoretical orbit of Uranus deduced from G
And AOU:= Actual orbit of Uranus obtained through empirical observation.
TOU is not equal to AOU
Hypothesis: There exists another unknown gravitational body "X" acting on Uranus.
TOX:= Theoretical orbit of X deduced from G and the difference between TOU and AOU
X discovered through empirical observation of TOX.

This process uses only the hypothetico-deductive method; at no point is inference used.
The discovery of X is called a confirming instance of G, but we can draw no valid logical conclusion about the truth of G from the discovery of X, as the flow of truth in the deduction from G + (TOU - AOU) to TOX goes only from the premise to the conclusion and not in reverse. That is, if G + (TOU - AOU) is true then TOX is true, but not vice versa. You invoke abduction, but it is not a valid form of reasoning precisely because it does not transport truth values.
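A quick brute-force sketch in Python of that flow of truth, writing G loosely for the whole premise G + (TOU - AOU):

```python
from itertools import product

def implies(p, q):
    # material implication in two-valued logic
    return (not p) or q

# Given G -> TOX and the observation TOX, G remains undetermined
# (affirming the consequent); given G -> TOX and not TOX, G must be false.
with_tox    = {G for G, TOX in product([True, False], repeat=2)
               if implies(G, TOX) and TOX}
without_tox = {G for G, TOX in product([True, False], repeat=2)
               if implies(G, TOX) and not TOX}
print(with_tox)     # {True, False}: the confirming instance doesn't fix G
print(without_tox)  # {False}: a failed prediction would have refuted G
```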

Stephen Wells,

[Doug, how can you with a straight face tell us that we've never seen another planet orbiting the Sun?]

We haven't seen the planets moving with the Sun standing still for the entire orbit of the planet. From Earth we don't see the motion of the planet about the Sun, we don't see its orbital path.

Torbjorn Larsson,

[We can in principle easily eliminate erroneous theories, and provisionally accept them.]

I see 'in principle' as a key phrase here. Philosophers of science sometimes reject falsification partially because of practical difficulties.

[What you are saying, is that falsification doesn't work, so we can disregard it.]

No. Falsification works for getting rid of hypotheses and giving us a standard by which to judge the testability of a hypothesis. It just doesn't work out as sufficient.

[(And that scientists are aiming for verifiable truth, not testable theories.)]

Perhaps, but one could say that scientists aim for hypotheses which have *more* evidentiary support.

[It wasn't a blind search for mere existence, as your logic imply]

What???? I don't see how I implied that at all. The search basically came about because of Newton's theory. Some philosophers of science, like Imre Lakatos, have said this indicates more about science than anything else. Science works out, not because of its testability so much, but because it predicts/implies new or 'surprising' facts.

Thony C,

[The discovery of X is called a confirming instance of G but we can draw no valid logical conclusion about the truth of G from the discovery of X as the truth flow in the deduction when G +(TOU - AOU) then TOX goes only from the premise to the conclusion and not in reverse.]

One can't do this validly in classical two-valued logic, as I've pointed out already (if we talked only about disconfirmation of the null hypothesis, then we could still have classical logic at work... but this fails to give us a positive theory). But, scientists and other people DID make an inference about G from such a successful test, so they couldn't have used classical logic. People STILL DO make inferences about the usefulness/truthfulness of G as a reasonable model from such a successful test.

One could also phrase it by saying that the observation of Neptune gives *greater* evidentiary support to G. Or one could paraphrase Polya and say that the existence of Neptune N makes G *more credible*. If Polya's ideas about mathematical thinking lead in a good direction, then mathematical thinking ALSO uses such forms of plausible reasoning (though not necessarily mathematical verification). Either way, if N says anything about G beyond that of disconfirmation of the null hypothesis, then even classical science works outside the framework of classical logic. One can draw valid inferences about G in frameworks other than classical logic.

[You invoke abduction but it is not a valid form of reasoning exactly because it does not transport truth values.]

There exists no *necessary* 'transportation of truth values' beyond that of a two-valued context (there do exist some multi/infinite-valued logics where this will hold, but not all of them). The ONLY requirement comes as that 0->1, 1->1, 0->0 qualify as valid, and 1->0 qualifies as invalid. This means that logics which have the above holding, but also have .9999->.0001 coming out as valid, or partially valid, qualify as legitimate multi/infinite-valued logics.
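To make the talk of degrees concrete, here's a small Python sketch of two standard many-valued implication operators (Łukasiewicz and Gödel). Both agree with two-valued logic at the endpoints 0 and 1; which intermediate values a given logic then treats as 'valid enough' is a further design choice of that logic:

```python
def lukasiewicz_imp(a, b):
    # Lukasiewicz implication: min(1, 1 - a + b)
    return min(1.0, 1.0 - a + b)

def goedel_imp(a, b):
    # Goedel implication: 1 if a <= b, else b
    return 1.0 if a <= b else b

for a, b in [(0, 0), (0, 1), (1, 1), (1, 0), (0.9999, 0.0001)]:
    print(f"{a} -> {b}: Lukasiewicz {lukasiewicz_imp(a, b):.4f}, "
          f"Goedel {goedel_imp(a, b):.4f}")
```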

Again, one can theoretically avoid placing science in a non-two-valued context by saying that tests merely disconfirm the null hypothesis. However, scientists rarely, if ever, have used this form of speaking. They have talked, and still do talk, about hypotheses as suitable, useful, true (enough), etc.

Doug, I hate to break this to you, but we do indeed see the other planets orbiting the sun. Transits of Venus, anybody? You seem to be redefining "observe" ever further away from any sane usage of the term.

By Stephen Wells (not verified) on 15 Nov 2007 #permalink

Thony C.,

Getting back to your characterization of the Neptune discovery, I'll first formalize it in letters. Again,
Given G :=Newton's theory of Gravity
Then TOU:= Theoretical orbit of Uranus deduced from G
And AOU:= Actual orbit of Uranus obtained through empirical observation.
TOU is not equal to AOU
Hypothesis: There exists another unknown gravitational body "X" acting on Uranus.
TOX:= Theoretical orbit of X deduced from G and the difference between TOU and AOU
X discovered through empirical observation of TOX.
So, we have
G->TOU
AOU
TOU/=AOU
X
G+(|TOU-AOU|)->X
X
Scientists, then used X to claim G as correct or better supported, or
X implies G, or
X confirms (implies to a degree greater than 0) G
That's the key point here. Your construction of intermediate steps in the analysis comes as immaterial to the matter in question. The formalization I gave still has X->G in it. Sure, in classical logic this would work out invalid. But the point of my argument comes in that science does NOT use classical logic. To say we have a confirming instance uses non-classical logical reasoning.

Stephen Wells,

[Doug, I hate to break this to you, but we do indeed see the other planets orbiting the sun. Transits of Venus, anybody?]

Good example with the transit of Venus. But, this doesn't mean we've observed it for all planets, and we certainly, so far as I know, haven't *'directly' observed* (in the aforementioned sense) it for the Earth, so the point still stands. Our (admittedly wrong) eyes see the Sun move and the Earth stand still.

On a digression, a few years ago when the most recent transit of Venus happened, I played a march with my concert band written by John Philip Sousa, which he wrote around the time of the previous transit of Venus. A fairly good march as I recall.

You don't get to say that your point still stands, when you've conceded the point! In any meaningful sense of "observe", we have observed the orbit of planets (including this one) about the sun. If your definition would require us to go and hover outside the solar system looking at the ecliptic plane and track the earth round for one full orbit, before we could say we'd observed it, then your definition is the problem. I mean, what if you blinked?

It isn't even true that "our eyes see the sun move and the earth stand still." Our eyes _see_ patterns of light and colour. The _interpretation_ of those patterns as, e.g. the sun moving, is something that happens in your brain. Privileging an ignorance-based interpretation over a knowledge-based one is not going to advance knowledge.

By Stephen Wells (not verified) on 15 Nov 2007 #permalink

[That's the key point here. Your construction of intermediate steps in the analysis comes as immaterial to the matter in question. The formalization I gave still has X->G in it. Sure, in classical logic this would work out invalid. But the point of my argument comes in that science does NOT use classical logic. To say we have a confirming instance uses non-classical logical reasoning.]

Wrong! I have not constructed intermediate steps but reconstructed the actual process used by scientists, which your scheme had failed to do. We call the hypothetical prediction of X and its empirical confirmation a 'test' or 'confirming instance' of G not because of any implication between X and G, but because we are able to deduce, using classical two-valued logic, a prediction from G that has been proved true. We know that we can never in any way prove the truth, or even give an estimate of the truth, of G. We can only (and your claim earlier that this is no longer accepted is bullshit) falsify G. This, however, does not take place purely because G cannot logically provide a hypothesis for an observed phenomenon, for example the orbit of Mercury, but only when there is an alternative theory of gravity, for example General Relativity, from which we can deduce hypotheses to explain all of those observed phenomena that G can explain, and also those for which we have failed to deduce proved hypotheses from G, such as the orbit of Mercury or the gravitational curvature of light. At no point do we appeal to any form of logic other than classical two-valued logic.

Re: #56
"... Planets circle their sun until they are for some reason or other ejected...."

Did our Solar System once have another planet?
Thursday, 8 November 2007
by Ker Than
Cosmos Online

Late and heavy: Did a fifth rocky planet, long subsumed by the Sun, create the bombardment that pockmarked the surface of the Moon?

NEW YORK: The fiery demise of a fifth rocky planet in our Solar System might have led to a flurry of asteroid impacts that pockmarked the Moon and Earth billions of years ago.

The Late Heavy Bombardment (LHB) is a relatively brief period, about 3.9 billion years ago, when wayward space projectiles heavily pelted the Moon and inner planets. Craters from that chaotic time are still visible on the Moon, but have been erased from Earth, where the crust is continually recycled.

All shook up

Try as they might, astronomers have not yet been able to pin down a cause for the bombardment. Some experts have postulated that a shuffling up of the arrangement of the planets in their youth may have been responsible. One popular theory is that the outward migration of a young Neptune perturbed rocky bodies in the distant Kuiper Belt, causing some to veer into the inner Solar System.

But John Chambers, an astrophysicist at the Carnegie Institution in Washington DC, now says the size distribution of craters on the Moon better match asteroids from the Asteroid Belt, located beyond the orbit of Mars. And he thinks the misbehaviour of a long-lost, fifth rocky planet called 'Planet V' was the trigger that upset the gravitational balance of the belt and ejected some of its inhabitants. Our Solar System currently contains four rocky planets: Mercury, Venus, Earth and Mars.

Using a new computer model detailed in a recent issue of the journal Icarus, Chambers provides the most compelling evidence yet that a hypothetical Planet V could have existed for hundreds of millions of years before minute gravitational tugs from Mars and Jupiter destabilised its orbit, causing it to fall into the Sun.

"Before it was lost, its orbit would have moved across the Asteroid Belt for quite a long period of time, scattering asteroids as a result," Chambers said. "My model would predict that it's only asteroids, and not comets, that caused the impacts, and that the asteroids would tend to come from the inner asteroid belt."

Unknown planet

Planet V's orbit was between that of Mars and the Asteroid Belt, Chambers predicts, and it may have been smaller than Mars but larger than our Moon. "If it was bigger than Mars, then Mars should have been the one that was lost," he said. "If it's smaller than the Moon, that's not really big enough to disturb the Asteroid Belt much."

David Kring, a planetary scientist at the U.S. Lunar and Planetary Institute in Houston, Texas, agrees that the Planet V hypothesis is interesting because it suggests an inner Solar System origin for the impacting debris. But he added that Chamber's "model relies on the invention of an unknown planet... so it is a hypothesis that will need to be rigorously tested with the geologic record."

One such test could be done if scientists had more lunar rock samples, he said. It was using these samples, returned by NASA's Apollo missions, that scientists were able to date the Late Heavy Bombardment in the first place.

Stephen Wells,

[You don't get to say that your point still stands, when you've conceded the point! In any meaningful sense of "observe", we have observed the orbit of planets (including this one) about the sun.]

The hypothesis involved (heliocentric) doesn't concern SOME planets. It concerns ALL planets. We've observed SOME, not ALL, *directly* so the point stands. I've defined the term 'direct observation' in terms of words that have meaning, and thus my definition (though not all that applicable) comes out as meaningful.

[If your definition would require us to go and hover outside the solar system looking at the ecliptic plane and track the earth round for one full orbit, before we could say we'd observed it, then your definition is the problem. I mean, what if you blinked?]

Blinking comes as irrelevant since we still would have first-hand experience of the event. The definition DOES work out as practicable in some cases of common life, as in testing whether your computer works or not.

[It isn't even true that "our eyes see the sun move and the earth stand still." Our eyes _see_ patterns of light and colour. The _interpretation_ of those patterns as, e.g. the sun moving, is something that happens in your brain.]

Sure, but with respect to 'direct observation' this comes as irrelevant. There exists a distinction between first-hand observations like that of turning on your computer, and that of many scientific observations.

Thony C,

Scientists DO talk about theories as the leading theories of science. They DO talk about them as if they tentatively held true. They DO talk about them as if they worked out more useful than other hypotheses. Why? Confirming instances make them more useful, allow us to select one hypothesis over another, and they do indicate that one can hold onto such a theory tentatively.

[We can only (and your claim earlier that it is no longer accepted is bullshit) falsify G.]

Newton's equation for gravity as a universal rule for all two-body systems has gotten falsified and no longer comes as accepted.

Confirming instances confirm hypotheses, *to some degree or in some way according to the 'use' view of science*, as at least tentatively usable. This alone lies outside of classical two-valued logic. Don't worry Thony C., having seen what you wrote about my comment on reductio ad absurdum, I honestly don't believe you'll get this.

[If your definition would require us to go and hover outside the solar system looking at the ecliptic plane and track the earth round for one full orbit, before we could say we'd observed it...]

"Full orbit" is not a joke here. As wikipedia begins:
"Yuri Alekseyevich Gagarin (Russian: ЮÌÑий ÐлекÑеÌÐµÐ²Ð¸Ñ ÐагаÌÑин), [9 March 1934 - 27 March 1968), Hero of the Soviet Union, was a Soviet cosmonaut. On 12 April 1961, he became the first person in space and the first to orbit the Earth."

However, the USSR lied about where he was launched and where recovered. He actually did NOT make a full orbit of the earth. He made roughly a 16/17 orbit.

Hence no direct observation would have established that the most famous of all cosmonauts orbited the earth, by the given definition. That would be first done by "Gherman Stepanovich Titov (Russian: Герман Степанович Титов) (September 11, 1935 - September 20, 2000) ... a Soviet cosmonaut and the second person to orbit the Earth [6 August 1961]... The mission lasted for 25.3h and accomplished 17 Earth orbits."

And did anyone observe that "directly?" By the silly definition, no, because there was nobody far enough from Earth to have had Titov in line-of-sight for a full orbit, during which: "he was the first person to suffer from "space sickness" (i.e. motion sickness in space)..."

Bad definition, see?

[Bad definition, see?]

If I had proposed such as a definition for science altogether, I would consider such a bad definition. I would also consider it a bad definition if I required all of knowledge to work that way. And, of course as one can find, people can very poorly use such a definition, as "Creationists" attest. However, I did NOT use the definition in any of these ways. I used the definition to point out that there exists a distinction in how-we-know (and therefore some distinction in how well we know) objects 'directly observed' in comparison to 'indirect observation'. In this sense, the definition works... in other words in this sense, the definition qualifies as good. I didn't propose it beyond this sort of context.

I don't see how cosmonaut/astronaut examples come out as relevant to the discussion about the nature of science. They didn't observe the entire orbit of the Earth, and observing part of the orbit means we extrapolate the rest (which then qualifies as indirect observation). Lastly, none of this matters in the context of the discussion concerning the nature of science. Scientists LONG, LONG, LONG accepted the Earth revolving about the Sun once a year prior to the 1960s, when people first ventured beyond Earth's atmosphere.

Again, I didn't define 'direct observation' as a proposed definition for science or as anything useful in MOST contexts. I defined it to indicate something about how-we-know and how well we know.... that... and basically NO more.

FWIW, catching up on old threads:

@ Doug:

[Philosophers of science sometimes reject falsification partially because of practical difficulties.]

Practical difficulties in theory - in practice there are other problems.

[It just doesn't work out as sufficient.]

I never claimed it did. One must also have at least a constraint to know which theory to test first. Say likelihood or parsimony.

[*more* evidentiary support.]

And what would that evidence be outside of number of passed tests?

[I don't see how I implied that at all.]

By denying testing through prediction in the discussion. But since you now describe this, it seems you accept testing, knowingly or unknowingly.

By Torbjörn Lars… (not verified) on 26 Mar 2008 #permalink

I know I haven't put this comment in the right place, but I can't find the right thread. I remember someone saying you could basically write fuzzy set theory in terms of crisp set theory, or something like that. Here's the problem though:

Let's just change Russell's paradox a bit. Let's say "a set of sets which are not subsets of themselves." In crisp set theory, no such set exists. But, in fuzzy set theory, such a set does exist since subsethood can come as a matter of degree.
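As a sketch of what graded subsethood can look like, here's Kosko's ratio definition in Python. It's just one definition from the fuzzy-set literature, and I don't offer it as settling the Russell-style question above, only as an illustration that containment can come in degrees:

```python
def subsethood(A, B):
    """Degree to which fuzzy set A is contained in fuzzy set B,
    per Kosko: sum(min(A(x), B(x))) / sum(A(x))."""
    overlap = sum(min(A[x], B.get(x, 0.0)) for x in A)
    size_A = sum(A.values())
    return overlap / size_A if size_A else 1.0

A = {"x": 0.9, "y": 0.4, "z": 0.1}   # membership degrees, invented
B = {"x": 1.0, "y": 0.2, "z": 0.0}
print(round(subsethood(A, B), 3))    # strictly between 0 and 1
print(subsethood(A, A))              # 1.0 under this particular definition
```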

By Doug Spoonwood (not verified) on 25 Aug 2008 #permalink

No doubt someone will claim that as semantic play, but it's not since "not" can get defined in a way that works for both theories in the same way as can the notion of "subsethood".

By Doug Spoonwood (not verified) on 25 Aug 2008 #permalink