Ways of Knowing

Spend any time immersing yourself in science/religion disputes and you will quickly encounter the idea that “science is not the only way of knowing.” Nearly always this is intended as a way of carving out intellectual space for religion. For atheists like me this claim raises a red flag. I want to know what ways of knowing religion contributes to the discussion, and what sorts of things we learn from these methods. The problem is that one of the biggest ideas religion offers, especially in the West, is that of revelation. Apparently God sometimes talks directly to certain human beings, and at least some of the time these conversations get recorded in holy books. I talk about red flags because this sort of thing has a very poor track record, but is still taken very seriously in some quarters.

In this context, arguments about “ways of knowing” are really arguments about “ways of defending knowledge claims.” If the best findings of science suggest the Earth is over four billion years old, while the Bible strongly implies that it is less than ten thousand years old, then among educated people it is the Bible that must yield. When dealing with empirical claims about nature, science is a far more reliable way of knowing than is reading the Bible.

What, then, are these other ways of knowing we all must acknowledge on pain of being accused of something unsavory like “scientism”? What sorts of things do we know by taking advantage of them? How am I to defend a claim of the form “I know X” if I do not avail myself of the sorts of evidence that scientists use in their work?

Josh Rosenau offers some candidates in this lengthy post, written in reply to some earlier remarks from Jerry Coyne. According to Josh there is something called “literary truth,” which stands in contrast to the empirical truth so beloved of science. Let's turn the floor over to him:

To call these [certain purported truths gleaned from non-literal readings of the Bible] “empirical” claims then seems to miss the point. They are certainly truth claims, but not claims about what literally happened. I like to compare this to the non-literal truth claims of good novels, or good stories more broadly. I think we can all agree that literature offers a different “way of knowing” than science does.

I'm pretty sure we can not all agree to that, since I can not imagine what it means to say literature offers a way of knowing. Perhaps I should read further. Referring to this post from a blogger named slacktivist, Josh writes:

Vampires don't exist, and slacktivist makes it absolutely clear that he knows this. But telling stories about vampires is a great way to convey certain truths about the world we all live in. These aren't truths that science can independently verify, but they are still true in a meaningful way.

No one should watch Buffy the Vampire Slayer the way they would watch a documentary, but they should certainly watch the show. It's brilliant, and it uses this exact sort of literary truth to tackle tricky subjects like drug addiction, spousal abuse, peer pressure, bullying, and the challenges of adolescence in late 20th century America with a sophistication and humor that would be impossible in any other form.

Slacktivist, himself an evangelical Christian, truly does seem to read the Bible through the same literary lens that he uses for vampire stories. For instance, he has offered a sensitive reading of the story of Noah's flood, again making the point that it is not true in the sense of actually having happened, but it is still true in an important and interesting sense.

I think those two posts (and many others) give a pretty good example of what one might mean by a “way of knowing” other than science. Is it literally true that “selfishness is destructive” (slacktivist's summary of the story of Noah's flood)? What would it even mean for that to be “literally” true?

If we are really going to be this casual with language, then just about anything can now be construed as a way of knowing.

A while back I was participating in a chess tournament. In one of my games I had gotten myself into a bit of a pickle with sloppy opening play. I had defended grimly for a few hours and had worked my way down to an endgame that was objectively lost but posed some challenges for my opponent. Then he blundered and allowed a small tactical combination in which, after sacrificing some material, I was able to win my opponent's queen. One of the crucial moves in my combination was made by a lowly pawn. I went on to win the game.

Just look at all the fundamental human truths revealed in this one game! I learned that you can recover even from serious mistakes if you stick to it and remain patient. I learned that the meekest among us can have a profound impact on life, as exemplified by my humble pawn. I learned that fortunes can change quickly and that one must be ever vigilant for opportunities. I learned that cockiness and overconfidence can get you into trouble (the look on my opponent's face when I sprang the combination on him was truly a sight to behold).

Is playing chess now to be considered a way of knowing? If it is, then the phrase has truly lost all meaning.

Josh has simply confused a “way of learning” with a “way of knowing.” Fiction can be a marvelous device for conveying truths: it can call your attention to truths you had not previously considered, and it can present familiar truths in insightful new ways. There is much to be learned from reading great literature. But it is simply bizarre to say there is something we know from reading great literature that we can not verify in more conventional ways. Would anyone want to be Smith in the following conversation:

SMITH: I know X is true.

JONES: How do you know X is true?

SMITH: Because I watched an episode of Buffy the Vampire Slayer.

I wonder how many Christians would be happy to hear that extracting truths from the Bible is the same kind of thing as extracting truths from Buffy the Vampire Slayer. This hardly seems like the refutation of the claim that religion contributes nothing to the store of human knowledge. If reconciling science and religion means reducing the Bible to the level of Aesop's fables (fictional stories from which we can nonetheless extract important lessons), then I think the anti-reconcilers can comfortably declare victory.

Trivial abstractions like “Selfishness is destructive” (which is certainly untrue as a general proposition and, at any rate, seems perfectly clear when taken literally) are not really what is at issue in science/religion disputes. Instead, how ought we to react to someone who says, “Jesus loves me this I know, because the Bible tells me so?” Science can not comment on whether Jesus actually loves him (though it can certainly make some of the more flamboyant stories about Jesus seem pretty unlikely). Is that person in possession of a way of knowing that the rest of us should take seriously? Can he plausibly claim to know something that nonetheless has no meaning for the rest of us?

This is not just an academic question. People like Alvin Plantinga have argued that what Christians “know as Christians” ought to be considered perfectly reliable knowledge for the purposes of doing science. Is he right?

Moving on:

We judge the truth of a novel differently than the truth of a documentary. Nothing that happens in a novel need ever have happened, but a novel must hang together in a certain way for it to feel true. Fictional characters can feel false or true based only on how they are written. Even though they have no empirical, objective reality, they have a reality that readers can use to measure the characters' validity, the truth of the story, and the truth of the author's underlying intent. To top it off, different readers can react very differently to a story, or to a character in a story, despite working from the same source material.

Josh continues to use language in ways I don't understand. What does it mean to judge the truth of a novel? Surely it makes more sense to talk about the level of realism in a novel. It is an outright category error to talk about the truth of an author's intent. An intention is not a proposition, and therefore is not the sort of thing that can be true or false. And I can't imagine what an invalid character would be.

You can certainly argue that a given work of fiction is weakened to irrelevance by its manifest departures from reality or believability, but what has that to do with literature as a way of knowing?

But the weirdness just keeps on coming:

These fans are devoted to the various incarnations of Star Trek, and were willing to spend $40-75 for just one signature from one of the Star Trek captains. And they don't all want the same signature. Some think Picard is the greatest captain in the Star Trek Universe, some think Kirk is the better captain, and a few prefer Janeway.

These, again, are truth claims, but none of those fans is objectively, empirically wrong, nor are any of them objectively right.

Let's get the important point out of the way first. There is an unambiguously correct ordering of the various captains in the Star Trek universe. Captain Kirk is first. Period. Captain Janeway is a distant second, followed by Sisko. Pussy Picard, meanwhile, ranks somewhere behind Chekov's hapless captain from the start of The Wrath of Khan.

Okay, back to business. I can't believe Josh is serious when he says that assertions about the merits of Starfleet captains are truth claims. Isn't it obvious that they are statements of opinion? We all understand that I was kidding a moment ago when I talked about an unambiguously correct ranking, right? I suspect even the most diehard Trekkie understands that when he says, “Captain Kirk is the best captain” (which all sensible Trekkies do say), he is really just saying, “I like Captain Kirk the best.”

Josh also seems to have some strange ideas about the empirical basis for Judaism and Christianity. He writes:

This is an odd claim. From what I know of religions like Taoism, Buddhism, and Hinduism, empirical claims are not so central (unless you think reincarnation, kharma, etc. count as empirical claims, which I don't). Nor do I think that's a fair assessment of Judaism or Christianity. It's certainly true that the Jewish Bible can be read as making a number of empirical claims, for instance about the timing of human origins, whether bushes can burn without being consumed, that thousands of people wandered the Sinai for decades without leaving any obvious archaeological evidence or human records in nearby civilizations, etc.

But that's not how Jews have understood the Bible for the last couple thousand years. Maimonides, writing well before any of the modern squabbles over evolution, explained:

Ignorant and superficial readers take them [certain obscure passages] in a literal, not in a figurative sense. Even well informed persons are bewildered if they understand these passages in their literal signification, but they are entirely relieved of their perplexity when we explain the figure, or merely suggest that the terms are figurative.

Augustine of Hippo has made comments similarly supportive of non-literal (non-empirical) readings of the Bible. For more on this topic, NCSE has a nice article on the history of non-literal Biblical interpretations.

Josh obviously likes the kind of religion that treats its holy texts as ciphers into which you can read whatever you like. The fact remains, however, that this paragraph is a ludicrous distortion of both Judaism and Christianity.

The whole foundation of Judaism is the idea of a covenant between God and the Jewish people, in which we agree to live according to God's law in return for which we are guaranteed the land of Israel. This is a literal covenant, not a symbolic one. Furthermore, the Torah has traditionally been understood as the history of the Jews, and not as a bunch of fictional stories designed to teach us spiritual lessons. It is not a figurative exodus we commemorate on Passover.

Maimonides endorsed the idea that the findings of reason ought to be taken into consideration when interpreting scripture, but he was not some theological liberal. He was not just tossing out non-literal interpretations of scripture willy-nilly. And since Josh specifically talks about how Jews have interpreted the Bible for thousands of years, we should note that Maimonides was bitterly opposed by many of the rabbis and scholars of his day.

As for Christianity, has Josh not read the Nicene Creed? It is chock full of empirical claims. Jesus was resurrected, not “resurrected.” That they are not the sort of things that science can resolve once and for all does not make them non-empirical. As for Augustine, he, like Maimonides after him, was willing to countenance non-literal interpretations of scripture when a literal interpretation was contradicted by reason. But he was also perfectly convinced, based on scripture, that the Earth was on the order of 6000 years old. He was also not the last word on proper Christian practice. For example, St. Basil's homilies on Genesis (Basil was a contemporary of Augustine) sound a lot like what modern YECs say.

And let us not forget the big empirical claim at the heart of both Judaism and Christianity: that God exists. That's usually taken to be a literal God who literally exists. I'd like to know what way of knowing undergirds such a claim.

Granted, most modern American Jews would reject the claim that there is a covenant between man and God, just as many Christians dissent from points of doctrine. The fact remains that it is absurd to say that Judaism and Christianity do not rest on a foundation of empirical claims (highly dubious ones, in my opinion).

Josh's defense of religion, such as it is, comes at a very high price. We religion-bashing types take religious claims seriously, consider the evidence for them, find them wanting, and argue that they ought to be dismissed on that basis. We do not strip Judaism and Christianity of all their most interesting assertions, and cherry-pick the most liberal folks we can find as representative of the religion generally. (And in the case of Maimonides and Augustine they weren't even all that liberal.)

As a further example of what I mean, consider this:

As a scientific claim, “vampires fear crosses” is as meaningless as “Picard is a better captain than Kirk” or “The Cubs are the greatest team in baseball's history,” and none of those is any more scientifically meaningful than “Jesus is my personal savior.”

I know a lot of Christians who would be insulted by this. “Jesus is my personal savior” is not the same kind of statement as any of those other three. Rankings of sports teams and Starfleet captains are matters of opinion (more precisely, they are based on standards whose worthiness is a matter of opinion). Assertions about vampires only make sense in the context of certain fictional narratives.

Someone who claims Jesus as his personal savior, by contrast, typically envisions that Jesus literally exists and has literally caused some change in his life. It is usually accompanied by the claim that anyone could experience the same change by accepting his own need for a savior. It entails certain empirical claims about the world. The other statements do not.

If we are specifically discussing the world of Buffy the Vampire Slayer, then the statement “Vampires fear crosses” is readily defended by the meticulous collection of evidence. To discuss rankings of Starfleet captains or sports teams we would first have to agree on some rating criteria, but once we agree on a set of criteria we can presumably come to some agreement about the proper ranking with respect to them.
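
To make the point concrete, here is a minimal sketch (the criteria, weights, and scores are all invented purely for illustration; nothing here is anyone's official ranking of anything) of how, once a set of criteria is agreed upon, the ranking itself becomes a mechanical calculation. Whatever disagreement remains lives entirely in the inputs, not in the arithmetic:

```python
# Toy illustration: given agreed criteria and weights, a ranking follows
# mechanically. All criteria, weights, and scores below are made up.

CRITERIA_WEIGHTS = {"diplomacy": 0.3, "tactics": 0.4, "crew_loyalty": 0.3}

# Hypothetical 0-10 scores for each captain on each criterion.
SCORES = {
    "Kirk":    {"diplomacy": 6, "tactics": 9, "crew_loyalty": 9},
    "Picard":  {"diplomacy": 9, "tactics": 7, "crew_loyalty": 8},
    "Sisko":   {"diplomacy": 7, "tactics": 8, "crew_loyalty": 8},
    "Janeway": {"diplomacy": 7, "tactics": 7, "crew_loyalty": 7},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single number using the agreed weights."""
    return sum(CRITERIA_WEIGHTS[criterion] * value for criterion, value in scores.items())

for name in sorted(SCORES, key=lambda n: weighted_score(SCORES[n]), reverse=True):
    print(f"{name}: {weighted_score(SCORES[name]):.2f}")
```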

But since this is a post about ways of knowing, I would like to know how someone claiming Jesus as his personal savior purports to know the empirical claims on which that assertion is based.

Let's wrap this up:

I write this not to defend the latter claim, but to defend the worthiness of non-scientific enterprises. I like novels. I like TV. I like art. I like baseball. I think there is truth to be found in such endeavors, and I think any brush that sweeps away the enterprise of religion as a “way of knowing” must also sweep away art and a host of other human activities. I've tossed out the comparison before, and have yet to get any useful reply to it.

What a bizarre conflation of ideas. I'm pretty sure that even the most aggressive atheists spend most of their time indulging in nonscientific pursuits. Arguing that science is the only reliable way of knowing is hardly the same as saying that science is the only thing that is worthy of our consideration. I also like novels, TV and art, and while I don't care for baseball I am a big fan of Ultimate Fighting. In a trivial sense there is truth to be found in almost any human endeavor, but there is a big difference between finding truth and defending what we believe the truth to be. There are far more ways of learning than there are ways of knowing.

My reply to Josh's comparison is that he does not do justice to religion. The ways of knowing that are unique to religion, namely revelation and the words of holy texts, have today been utterly discredited. If we are talking about religion without those attributes then we are talking about something vastly different from the mainstream of religious belief, at least in America.

In short, if Josh wants us to take his comparison seriously, he needs to answer some simple questions. What do we know from religion that we do not know by other means? What lessons can we learn from the alleged insights of the world's religious traditions that we can not learn more clearly in other ways?

I don't think Josh, or anyone else, can give a compelling answer to those questions.

Let's keep in mind that although fiction is a great way to spread knowledge about a subject through popularization, it's also a fantastic way to spread and popularize misinformation if it contains junk science. I had a friend who watched that Michael Bay movie The Island, and was convinced afterward that your genetic clone would have all your memories, because mental memories are stored in your DNA.

So how do you tell the difference between bad "knowledge" and good knowledge presented in a work of fiction? Obviously not from the fiction itself. You do it with that very empirical science we can't be too enamored of for fear of being overly dismissive of alternative epistemologies.

I hesitate to bring up the sludge called Atlas Shrugged, or even the earlier Fountainhead. These were written expressly by Rand to illustrate the "truths" of objectivism and the worthlessness of the ungrateful "common man."

Fiction and theme are interesting ways of communicating the author's beliefs, but they are hardly to be considered ways of knowing. Joseph Campbell wrote well of the power of myth, but rather than revealing truths, myths revealed fluid solidarity, the sort of "truths" needed to hold societies together.

I might suggest that truth here is coming into the discussion with just plain too much baggage. As you say early on, truth only applies to propositions, and when it does it's strictly a property of the proposition. Saying "the truth of literature" is something akin to saying "the green of hot air." The only way the statement hopes to make sense is by changing the definition of truth so that it roughly gets translated as message or meaning, but what happens then is that we are no longer dealing with the epistemic definition of truth, so it becomes useless when talking about knowledge. It's a straight-up fallacy of equivocation.

There's clearly a strawman being flogged in the idea that science is the only way to gain information (I think 'gaining information' is what is really meant by the terms "other ways of knowing" or other "truths").
You can get information from a variety of sources: literary, religious, social or scientific. The difference is that we have only one way of telling whether the information is false, and that is the scientific method.

Literature is very good at pointing out true things, and false things as well. But the assertion that these are not scientifically verifiable is bizarre.

To me it's all about reliability. Say you are in a spacecraft orbiting Jupiter. Physics and math will get you home. "Alternative ways of knowing" will most likely get you dead.

And what do you have against Picard? He's not Kirk, but I would certainly place him above Janeway (although that might be because I liked TNG so much better than Voyager).

Roadrunner cartoons are a way of knowing funny, but not of learning physics.

Sigmund overstates things a little. There's no single scientific method, though it's true that science makes distinctive use of hypothetico-deductive reasoning, mathematical modeling, instruments that extend the senses, etc. What's more, there are plenty of ways of getting reliable information that are not distinctively scientific. E.g., often we can just ask someone or just look for ourselves. There's nothing distinctively scientific about those approaches, but they're often very reliable.

But his main point is correct. There's a strawman being flogged here. No one is claiming that there is only one way of getting information. But when it's suggested that revelation, drug-induced trances, mystical insight, and so on are "ways of knowing" we should reach for our phased-plasma rifles. These are certainly not reliable ways of obtaining knowledge, i.e. forming (roughly, since there is some genuine debate at the margins about how "knowledge" should be defined) justified, reality-tracking, true beliefs. They are ways of forming beliefs, no doubt, but they are not reliable ways of obtaining knowledge.

Just say, prosaically and accurately, that there are many ways of obtaining information, and admit that some are much more reliable than others; stop using this high-sounding "ways of knowing" formulation, which fudges the unreliability of revelation, etc.

There are many reasonably reliable ways of obtaining information, but revelation is not one of them. Neither is mystical insight. Neither is listening to a priest make supernatural claims. Neither is going into a drug-induced trance. There is only one way of actually knowing - you know something by believing it to be true on grounds that track reality and justify your belief, and in circumstances where it actually is true. There's no other way.

Hey Jason, longtime reader, first time commenter. I must say this may be my favourite of all your posts. However, I've been so shocked that I have to pass comment.

Kirk better than Picard!!? And Janeway 2nd? WTF? Let's establish a formal set of standards against which we can measure the smarmy Kirk and the noble Picard...

I notice you didn't even include that shmuck from Star Trek Enterprise... :).

Of course, Scott Bakula sucked in Quantum Leap too :).

As far as captains go I'd place him somewhere below squeeze cheese.

By DazedNConfuzed (not verified) on 18 Sep 2009 #permalink

Jason -
A related question: Do you think there are moral truths? When Thomas Jefferson talks about "inalienable rights" do you think there actually are any?

I think if you'd subscribe to some form of ethical realism you would necessarily have to believe in "non-scientific" ways of knowing.

And by the way, I vote for Picard, then Kirk, then new-Kirk, then Janeway, and Sisko last.

Publicly admitting that you like CPT Archer is like admitting you're an atheist in a theocratic society. Not that I liked him...

Anyway, great post! I was time-constrained so had to read fast, but I bookmarked it and will definitely be returning/referring to it in the future!

Jason, I disagree with your distinction between "knowing" and "learning". In this context I think they both refer to acquiring beliefs.

It seems to me the important distinction here is between rational ways of knowing (which I'll call "rational inference") and non-rational ways of knowing. Not all rational inferences are typically labelled "science". For example, the inferences made by historians are typically labelled "history", not "science". When I hear barking and infer that my neighbour has acquired a dog, I'm making a rational inference, but we would probably call that practical reasoning, not science. So in this sense there are certainly other ways of knowing besides science. History and practical reasoning (among others) are also ways of knowing. But these are still rational ways, and they employ similar forms of logic to science, albeit possibly less rigorous ones.

Because these different categories of rational inference have so much in common, and are often difficult to distinguish, some people feel it's appropriate to use the word "science" in a broad sense that encompasses all of them. I don't think that's unreasonable. But I myself prefer to avoid it. Rather than saying that science is the only reliable way of knowing, I prefer to say that rational inference is the only reliable way of knowing.

Note: Philosophers have traditionally defined knowledge as "justified true belief". The "justified" criterion is added in order to exclude accidentally true beliefs. If a lottery winner believes (before discovering the result) that he's won the lottery, but has no rational basis for that belief, should we say that he "knows" he's won the lottery? Philosophers have traditionally said no. But the "justified" criterion is now widely known to have serious problems. Philosophers have tried to repair it, but without success, as far as I know. Also, if you include the criterion "justified" (which I take to mean rational), then non-rational ways of knowing are excluded by definition. I think it makes more sense to define knowledge simply as "true belief", and then accept that it's possible to acquire knowledge by sheer luck. That's why I said that rational inference is the only reliable way of knowing. In fact, I don't think rationality is binary. It's a matter of degree. So it might be better to say that the more rational an inference is, the more likely it is to be true.

By Richard Wein (not verified) on 18 Sep 2009 #permalink

1. Your assumption about what the Bible "strongly implies" shows that you are not up on your reading. As always. Begin with John Feinberg's discussion of "literary framework" interpretation in No One Like Him

2. Captain Pike rules. ;-)

3. I'd like to know what way of knowing undergirds such a claim.
William Craig, VanTil, and Plantinga have addressed this matter.

4. The Cubs are the greatest team in baseball's history
Some things are just self-evident. ;-)

Apart from these, I am really, really surprised that you would make such a strong appeal to a positivist criticism of knowledge. Polanyi's Personal Knowledge might be of great assistance to your argumentation method.

@12

Hey now, I'm an Atheist, living in what might as well be a theocratic society, and even I had enough taste in acting to dislike Scott Bakula as CPT Archer.

I must admit some days it seems more like an IDiocratic society... we just haven't gotten around to watering our plants with sports drinks yet :P.

By DazedNConfuzed (not verified) on 18 Sep 2009 #permalink

I hesitate to bring up the sludge called Atlas Shrugged, or even the earlier Fountainhead. These were written expressly by Rand to illustrate the "truths" of objectivism and the worthlessness of the ungrateful "common man."

Or even more to the point, what are the "truths" that we "know" thanks to The Turner Diaries? Those are the "truths" that Timothy McVeigh, a fan of the book, "knew" when he bombed the Murrah Federal Building in Oklahoma City.

By JHGRedekop (not verified) on 18 Sep 2009 #permalink

I've tossed out the comparison before, and have yet to get any useful reply to it.

"People simply stared at me, bug-eyed, as though I were insane."

By Bayesian Bouff… (not verified) on 18 Sep 2009 #permalink

I enjoyed this - the learning/knowing distinction jumped out at me reading Josh's post.

You were getting at this near the end of the post, but I wanted to make clear that there are senses in which "Kirk is a better captain than Picard" can be a truth-claim in the same way that an ordering of two actually existing captains can be really saying something. Obviously it can only be true in the same way that "(Buffy) vampires fear crosses" is true, but it's the sort of claim that one could gather evidence for or against - we have detailed records of some of the most significant actions of both captains. Because of differences in context, it's going to be really hard to actually settle, but it's still as truth-apt as the claim "George Washington was a better general than Dwight Eisenhower".

Russell Blackford (#8) wrote:

There are many reasonably reliable ways of obtaining information, but revelation is not one of them. Neither is mystical insight. Neither is listening to a priest make supernatural claims. Neither is going into a drug-induced trance.

A girl I know once took an excessive dose of dried psilocybin mushrooms and, after the standard the-walls-are-breathing business, heard a voice, or rather something beyond a voice, something of which regular words are only shadows, saying, "By convention there is sweetness, by convention bitterness, by convention colour, in reality only atoms and the void."

Richard - Is that particularly helpful, though? Redefining knowledge as merely true belief "rescues" knowledge, but at the cost of making it extremely uninteresting. It's also simply not how we actually use the word in conversation - the reason that "justified true belief" was so popular is that it's incredibly intuitive. And I think that most would agree that, even though Gettier problems create issues for that definition, 'knowledge' is still clearly something to do with having a belief which is justified by knowledge of other things which stand in a certain sort of causal relationship to the truth of the belief.

Pussy Picard, meanwhile, ranks somewhere behind Chekhov's hapless captain from the start of The Wrath of Khan.

Never reading... this blog... again.

By foolfodder (not verified) on 18 Sep 2009 #permalink

I agree with you foolfodder. Even said in jest, those words cut too deep. The only way it could have been worse is if he had claimed Kes was more entertaining than Wesley Crusher. *shudder*

Who's jesting? I despise Picard, even though Patrick Stewart is certainly the best actor ever to appear in a Star Trek series. For heaven's sake, Picard couldn't even win a fist fight against Malcolm McDowell. He had to recruit a geriatric, out of shape Captain Kirk to do his fighting for him. Enough said.

Comparing Star Trek captains is a bit like comparing athletes from different eras. The original Star Trek seems a bit camp now, but when I was young and it was the only sf show on TV, with the occasional "big idea" script exploring real issues from a different perspective, I formed a bond with it that no later series or captain could displace.

But then I also think Rod Laver was the best tennis player ever, and Bill Russell was the best basketball player.

I read that Rosenau post (before seeing this one), and thought about posting a rebuttal comment, but from the defensive responses that other commenters like Sigmund got, I could see it would do no good.

Making up stories seems to be a basic part of human nature, so I expect there is an evolutionary reason for it, maybe an offshoot of our pattern-recognition ability. I enjoy reading, but I am not sure that story telling has done more good than harm. When people take it seriously as another way of knowing, separate from science but equal or better in certain cases, then I think it does the most harm.

One of the all-time great comments I've read on ScienceBlogs (alas, I don't remember the author), on science vs. religion:

"The two epistemologies:
1. Finding stuff out.
2. Making shit up."

The way I'd put it: There are many ways of forming an opinion, but only one way of knowing.

Or in the immortal words of Dicky D, "Science is the only way to know, y'all."

Who's jesting? I despise Picard, even though Patrick Stewart is certainly the best actor ever to appear in a Star Trek series. For heaven's sake, Picard couldn't even win a fist fight against Malcolm McDowell. He had to recruit a geriatric, out of shape Captain Kirk to do his fighting for him. Enough said.

Are you kidding me?! Who but Picard (and maybe Worf) could withstand a few days in a Cardassian torture chamber?! Yeah, the best Kirk could do was say Khan really long and loud or have a painfully slow brawl with a plastic lizard suit. Sorry, Picard wins.

There are four lights!

Richard -

The distinction I am making between “way of learning” and “way of knowing” is essentially the same as the one between “context of discovery” and “context of justification.” Reading literature is great for exposing you to new ideas or for getting you to think about things in new ways, but I do not think it makes sense to use a great novel as a justification for some truth claim.

I was, indeed, thinking of science very broadly in my post, as something effectively equivalent to your rational inference. If Josh had said something like, “Science isn't the only way of knowing. There is also historical scholarship!” I would not have disputed the point except to argue that such an observation is not really so useful if we are trying to carve out room for religion.

I agree that terms like “knowledge” and “justification” can become very murky when you try to define them too sharply. But they do have everyday meanings and usages that ought to be sufficient for this discussion. Maybe not, though, since at least some of the disagreement between me and Josh really does seem to boil down to disagreements over usages.

Changing the subject, we might point to something like “Word of mouth” as a way of knowing that is genuinely unscientific. It seems perfectly reasonable to say, “I know X is a reliable plumber because a dozen of my colleagues at work told me he is reliable,” even though a scientist might chide me for using a nonrandom sample, or for not looking too closely for possible sources of bias.

Since all of this typically comes in the context of trying to carve out space for religion (as I noted at the start of the post) perhaps the solution is simply to say that, while there may be many reliable ways of knowing, religion does not contribute anything to any of them.

That way we can bash religion without having to quibble about what does, and does not, qualify as science!

You wrote: "Josh's defense of religion, such as it is, comes at a very high price. We religion-bashing types take religious claims seriously, consider the evidence for them, find them wanting, and argue that they ought to be dismissed on that basis. "

It is easy to prove the existence of an Intelligent Creator.
See the blog: bloganders.blogspot.com (left menu). A proof using formal logic and science.

Anders Branderud

Gotchaye said: "Because of differences in context, it's going to be really hard to actually settle, but it's still as truth-apt as the claim "George Washington was a better general than Dwight Eisenhower"."

Maybe true, but on the other hand we do know that Washington outranked Eisenhower, even if it had to be done posthumously.

For heaven's sake, Picard couldn't even win a fist fight against Malcolm McDowell. He had to recruit a geriatric, out of shape Captain Kirk to do his fighting for him. Enough said.

So Space Opera Kirk has his dramatic demise in Generations, and is, therefore, better than the cerebral, realistic Picard who survives?? Only in a twelve-year-old's dreams.

By natural cynic (not verified) on 18 Sep 2009 #permalink

I'm confused -- there were other captains besides Kirk?

"For heaven's sake, Picard couldn't even win a fist fight against Malcolm McDowell. He had to recruit a geriatric, out of shape Captain Kirk to do his fighting for him. Enough said."

That's evolutionary survival strategy for you. Outsource warfare. Make the muscley brutes do your physical fighting for you, so you can pass your genes onto the Next Generation.

Hi Jason:

My guess is that you are either being provocative or have been hoodwinked by philosophy. You seem to have left science and its way of knowing - those of us in the field call it empiricism - out of the analysis.

Let's look at how it plays a role in the truth claims of revelation. BTW, I notice you toss revelation out the door with nary a comment. Presumably, this is based on your own personal way of knowing, as opposed to the public kind.

Anyway, back to truth claims. How do you judge such comments as those made by Mohammad to the effect that the earth revolves around the sun, or that we should look for truth even in China? Empiricism! Like any truth claims, you check it out. When whole cultures check out revelation and decide it is trustable as a source of knowledge and wisdom, they ramp up powerfully.

My advice to mathematicians getting on their high horse about religion is to not forget empiricism!

By Stephen Friberg (not verified) on 18 Sep 2009 #permalink

14 - "3. I'd like to know what way of knowing undergirds such a claim.
William Craig, VanTil, and Plantinga have addressed this matter."

Plantinga? Plantinga! Hee Hee Hee. His arguments are seriously vacuous, with tons of special pleading. I can not figure out why anyone takes him seriously. Craig is a hack also, but then most apologists are. Don't know of Van Til's work, but have heard the name, so I can't comment on him. Too bad I'm at work and can't figure out who tore him apart - Nicholas Everitt maybe? - in one of the books I read. I couldn't believe how lame Plantinga's argument was, until I read his work and saw that it was.

The most that apologists seem to be able to do is hand wave and special plead for their claims. But if they have new evidence, they can bring that, instead of refuted claims.

Kirk had babes, but Picard was a Borg. Hmm. Choices, choices....

That's evolutionary survival strategy for you. Outsource warfare. Make the muscley brutes do your physical fighting for you, so you can pass your genes onto the Next Generation.

Well played, Siamang, well played. +5 bonus for Wittiness

Blake - I think of New Kirk and New Spock like I think of New Coke. Nuff said.

#35 Stephen Friberg: When whole cultures check out revelation and decide it is trustable as a source of knowledge and wisdom, they ramp up powerfully.

Why is it that the worst arguments are accompanied by the most flagrant arrogance? Argumentum ad populum anyone?

By Bayesian Bouff… (not verified) on 18 Sep 2009 #permalink

This analysis is both sanctimonious and confused, for at least 3 reasons.

1. It's obvious that revelation is a valid way of knowing. If I ask my wife if it's raining and she says it is, I've learned through her revelation to me. JR's actual concern is alleged divine inspiration and either (a) our typical inability (unlike the situation with my wife and me described above) to test the alleged revelation or (b) the many examples of alleged revelation that turned out to be false. Any number of people will disagree with that conclusion based upon things they may have observed (cf. the Vatican's testing procedures for alleged miracles); JR rejects them a priori on account of philosophical presupposition.

2. JR repeatedly confuses the discipline (e.g., science) with the tools used within the discipline (e.g., empiricism). Both science and history employ empiricism, but history cannot demand repeatability the way science can. That difference will impact the level of certainty with which we may hold various conclusions within a given discipline, but doesn't give science some special cult-like status. Of course, some claims within some disciplines are not subject to empirical analysis, making the investigative task more difficult still. But that doesn't mean we cannot and should not use empiricism and logic to test such claims.

3. JR claims a bright line distinction between fact and opinion that doesn't exist. We can employ deduction only by using some baseline assumptions, some of which will necessarily remain unevidenced. Using evidence alone, the best we can achieve is falsification and thus an inductive (and therefore limited) conclusion which, since that conclusion can necessarily be altered based upon additional evidence, looks suspiciously like an opinion. In many cases, we can be really sure about inductive conclusions (e.g., the general tenets of evolution), but in many others (e.g., the relative utility of freedom and equality and which should yield and in what ways with respect to particular policy questions) we cannot.

Therefore, the final question proffered is necessarily incoherent. Empiricism and logic have the best track records I'm aware of for examining truth claims, even though the conclusions we derive from them, even when properly tentative, are far from perfect. Thankfully, time typically allows truth to be outed with respect to claims subject to empirical analysis. But such claims are far from the only ones that we need answers to in order to live a full and meaningful life.

If I ask my wife if it's raining and she says it is, I've learned through her revelation to me.

And if she tells you it is, but it isn't, have you still "learned" through revelation? If you hear voices telling you that aliens in spaceships are beaming thoughts into your head, does that count as "revelation"?

In these kind of cases, the only way to determine if you have in fact "learned" anything true is via empirical testing, otherwise we could "learn" all sorts of things that don't line up with reality.

indignant -

A Cardassian torture chamber? Pshaw. Kirk got tortured in every other episode, and when it was over he didn't go blubbering to his ship's counselor about how he was about to break. Come to think of it, Kirk's Enterprise didn't even have a ship's counselor, on account of what a total badass he was!

natural cynic -

Space Opera Kirk died heroically using his badassery to save the world. He would not have needed to die heroically if cerebral, realistic Picard had managed to do his job.

And let's not forget that Space Opera Kirk also had quite the brain. Who devised the Corbomite maneuver? Who figured out how to destroy the Doomsday machine? Who figured out that bizarre alien mantra was actually the Pledge of Allegiance? Do you honestly believe Picard could have MacGyvered his way to victory against the Gorn? It is to laugh.

Granted, there was a Picard maneuver. But it took Data all of thirty seconds to beat it.

In these kind of cases, the only way to determine if you have in fact "learned" anything true is via empirical testing, otherwise we could "learn" all sorts of things that don't line up with reality.

That's a point I made in my post, though the reiteration isn't a bad thing. Every "way of knowing" is subject to error. The scientific method offers the best checking mechanism I'm aware of short of logical disproof, and empiricism and logic offer the best results overall. But your point yields the argument since we've moved past how we "know" to how we can be sure and how sure we can be.

Sure, there are ways of knowing outside of rational inference and conventional science. In the verstehen tradition, knowledge can be obtained through direct experience or through empathy. When it comes to aesthetic experiences for example, a person "knows" that Freebird is one of the greatest guitar riffs in rock and roll, or a Bach cantata is uplifting. How does the person know? Via experience. Similarly, we know when we are in love or upset. We know or understand others, even fictional literary figures, when we can empathize with them. Religious people may say they know God loves them because they have experienced God's love, and "know" they are saved.
As others above have noted, however, the problem with knowledge via direct experience or empathy is reliability and validity. To assume that religious experiences reflect some transcendental world violates the principle of parsimony, not to mention rationality. To religious believers, their experience may be real to them, but the rest of us do not have to accept it for ourselves, any more than everyone has to like Freebird. Neither do we have to accept any of the often goofy ideas that go along with a person's reported religious experience.

By manoffewwords (not verified) on 18 Sep 2009 #permalink

The Corbomite Maneuver? Seriously, that's the best you can come up with, Jason? Good grief, it took Kirk an entire episode to suddenly think up the option to bluff! Picard could bluff his way out of a Ferengi debtor's prison while saving an entire race of helpless sentient primates at the same time. For Spock's sake, by the end of TNG they had taught emotionless Data of all people, er, androids, to bluff.

And you can't honestly keep using the Gorn as an example of Kirk's prowess. He almost got killed by a plastic lizard suit stuck in slow-mo world. He needn't have gone for a makeshift cannon. Picard would have been smart enough to realize he could simply WALK AWAY from the lethargic thing, take a rest every hundred feet and still have outrun the Gorn. Never again should any man use the "I defeated Gorn" as a sign of prowess.

Admit it, you're just in it for Kirk's smile.

As I wrote on Jerry Coyne's blog, the problem is that the “ways of knowing” meme is trotted out by religious apologists every time they are at a loss for arguments in a debate against science.

We don't need to be philosophers to recognize it's a lazy, incorrect usage: “ways of seeing the world” would be a more accurate phrase! And of course, there are ways of seeing that are more reliable than others for getting a true picture. Tinted glass, strobe lights and kaleidoscopes may be pretty and wonder-inducing, but to see where you put your feet when you cross the street, you'd better have clear eyes. And clear glasses, and microscopes and telescopes for the rest of the universe, of course!

Jason Rosenhouse: The distinction I am making between “way of learning” and “way of knowing” is essentially the same as the one between “context of discovery” and “context of justification.”

Hm. I am of the position that both such traditional contexts are subtly mistaken; that more correct contexts of science are Experience, Inspiration, Formalization, and Testing. (Referring to science as anthropological practice, add the context(s) of Design. Philosophically, I would consider Design the demarcation to engineering. Experiment works in Design, Experience, and Testing; traditional Discovery encompasses all but Testing, while traditional Justification encompasses Formalization, Testing, and Design.)

We experience the universe, and are inspired to conjecture patterns. We describe the experience via the conjecture, formalizing a hypothesis. All candidate hypotheses are tested, first for ability to give the correct description of experience (Falsification) and for length of given description (Simplicity). (For those who care, there is math, although one assumption in the paper is stronger than philosophically necessary; one can work ordinally within AH rather than RE.)

While religion is perfectly valid for Inspiration (e.g. Georges Lemaître), and perhaps even sometimes in Formalization, the context of Testing at the core of science as philosophical discipline is limited to mathematics, and religion (until it shows itself of at least equivalent self-consistency to ZF set theory) is ineligible to apply.

And Belief that is not Tested has no reason to be treated as Knowledge.

Jason Rosenhouse: I was, indeed, thinking of science very broadly in my post, as something effectively equivalent to your rational inference.

I find it useful to distinguish Science as a philosophical discipline, versus Science as it is anthropologically practiced.

Jason Rosenhouse: Since all of this typically comes in the context of trying to carve out space for religion (as I noted at the start of the post) perhaps the solution is simply to say that, while there may be many reliable ways of knowing, religion does not contribute anything to any of them.

Religion can contribute as one source feeding into Inspiration, and perhaps Formalization. It is not the only source, and it does not matter in either Experience or Testing.

Sinbad: If I ask my wife if it's raining and she says it is, I've learned through her revelation to me.

You have acquired belief; whether you have acquired knowledge depends on the truth of her statement. E.g., she might be irritated at you from having slept in the wet spot last night, and be lying to you as petty revenge.

Thus, Revelation works in Inspiration, not Testing.

Sinbad: Both science and history employ empiricism, but history cannot demand repeatability the way science can.

History is a science, both philosophically and anthropologically. The limits on experiment and degree of repeatability impact the character anthropologically, but not philosophically.

Sinbad: Using evidence alone, the best we can achieve is falsification

Untrue; the BEST we can achieve is testing for minimum description length. Which, although probabilistic, is rather more than an opinion.

Sinbad: We can employ deduction only by using some baseline assumptions, some of which will necessarily remain unevidenced.

Correct. However, assumptions that Wolfram's Axiom is valid for philosophical inference, that the joint affirmation of the ZF axioms (excluding Choice and Power) is self-consistent, and that experience has some pattern within complexity class AH appear to be sufficient for Science to follow.

I mean, next you'll be saying that Spock's long-lost half-brother once hijacked the Enterprise using cookie-cutter psychobabble and a little mood music, drove it to the centre of the Galaxy in an afternoon, and found that the "God" he sought was a malevolent energy being with lustful intentions toward starships.

That's a point I made in my post, though the reiteration isn't a bad thing. Every "way of knowing" is subject to error. The scientific method offers the best checking mechanism I'm aware of short of logical disproof, and empiricism and logic offer the best results overall.

By your criterion, typing monkeys produce "knowledge" -- sure, the texts they generate have to be "checked"...

If a claim isn't justified, it's not knowledge.

No.

Just no.

Sisko and Picard in a tie for first, then Kirk, then Janeway, then Archer.

But the best ship was Voyager by far.

but, I think we can all agree, that William Adama was better than all the namby pamby trek captains. :P

But to the main point: the distinguishing difference between scientific knowledge and literary 'knowledge' is in its scope. Scientific knowledge is universal in scope. Given two people, Alice and Bob, they will both agree on what a given piece of scientific knowledge means; there won't be argument about what it means to say "F=ma". But literary knowledge is quite different. When I read a novel I bring to it my personal history and tastes and thoughts, and when you read it you bring yours; what we get out of it depends partially on what we bring to it. Alice and Bob may very well not agree on what the underlying message of "The Great Gatsby" is. That is why literature is powerful. Religious texts and teachings may be powerful for the same reason, when viewed in a literary sense, but people need to be aware when they are viewing things in a literary sense. It is not 'knowledge' in the way that science is. It is, at best, inspiration, and inspiration can come in many forms; in fact, it can come from anywhere. But that does not mean that knowledge can also come from anywhere. Knowledge is a much more limited category than inspiration.

Didn't Adama fall under the sway of that religious mumbo-jumbo? I gave up on the show once they put all that magical garbage in. I'll take the human(oid)ist Star Fleet captains any day. Of course, I'll take the scientist Captain Dexter over them all (http://www.tv.com/dexters-laboratory/star-check-unconventional/episode/…) - after all, he had his own Laboratory and did his own work!

"It's obvious that revelation is a valid way of knowing. If I ask my wife if it's raining and she says it is, I've learned through her revelation to me."

No, you haven't. You've learned because you're making several abductive inferences about the reliability of her claim, such as the notion that she would have little to gain through lying about that particular fact.

"Both science and history employ empiricism, but history cannot demand repeatability the way science can."

There are several types of repeatability. There is experimental repeatability, which is the most reliable but actually fairly rare in the natural sciences, and there is multiple corroboration, which history and archeology employ. To use a religious reference, if we want to verify the claim that the Gospel of Thomas dates back to first century Christianity, we'd need to appeal to independently verifiable evidence.

"Using evidence alone, the best we can achieve is falsification and thus an inductive (and therefore limited) conclusion which, since that conclusion can necessarily be altered based upon additional evidence, looks suspiciously like an opinion."

No, the best we can achieve (as another reader pointed out) is locate a computer program that extrapolates a potentially infinite one-way sequence of binary digits encoding observations (see any exposition of Solomonoff induction). We can do a lot more than falsification; Popper was wrong.

And just to be clear to others, when I say "Popper was wrong", I'm specifically talking about his argument that induction was impossible. I do not disagree that falsifiability is an important heuristic for distinguishing science from metaphysics.

It seems obvious to me that science is not the only "way of knowing", though it seems far from obvious that religion counts as a "way".

There is at least one other way which we all should know, and that's philosophy.

Science purposefully limits itself to that which is empirically testable. If you don't understand this, then you don't understand science. That is, after all, how it maintains its track record of reliability: it doesn't ask questions that it knows it can't answer.

Yet there are other matters, such as justice, morality, ethics, that are important, but cannot be empirically tested. We need to know these things, and that's why we have philosophy.

By Pseudonym (not verified) on 18 Sep 2009 #permalink

The thing is, metaphors and analogies can help us in justifying opinions about real things and processes in the sense that they improve intelligibility. And it doesn't seem right to deny that the process of improving the intelligibility of propositions is part of justifying them, above and beyond the context of discovery.

Sure, though, the benefits of analogies are nothing like a sufficient condition for justification (that depends on things like simplicity and reliability) or even a necessary condition (think structural realism).

Sinbad @ # 41: When Jason Rosenhouse criticizes (statements made by) Josh Rosenau, pithy overgeneralizations about what "JR" said contribute little to anyone's way of knowing.

By Pierce R. Butler (not verified) on 18 Sep 2009 #permalink

Badger3k, sure, the series developed this crazy mystical thing, which I don't really mind; it helped tell the story. It's not like they were saying it's true in real life, just in the setting of the story. But Adama never followed the spiritual thing; he constantly took in the facts on the ground and adapted them to his needs. I also don't think that leadership and authority as captain of a spaceship are entirely based on how good you are as a scientist. However, Dexter did have a way cooler lab than anyone in the world, if only he could find a way to keep Dee Dee out of it...

Pseudonym, I'm not sure how much we are able to 'know' through philosophy, other than our own ignorance. The way I see it (and admittedly this is coming from a math background), we have rules of inference and logic that can give us many truths in the form of tautologies, but beyond tautology we cannot confirm any of it without looking at the statements we are linking together. Science, and similar empirical endeavors such as history (as was mentioned before), are based on evidence that gives us confidence in the sentential variables we link together. For example, you don't need to take it on faith that the acceleration due to gravity on the Earth is 9.8 m/s^2; you can test that to be sure. Once we have confirmed the truth of all of our statements, we can link them together logically and be assured of the truth of the resulting statement (not that anyone actually works that way, but bear with me). The problem with philosophy is that we are not given evidence for the suppositions made, so unless a statement is a tautology we cannot be certain of its truth, because we cannot independently test the assertions behind the sentential variables. And if the statement is a tautology, well, it's true, but it imparts no information.

From this I have come up with a slightly expanded definition of what it means to 'know' something: we need to have a logical justification for its truth, and we need to have logical justification or evidence for the truth of its component parts.

Admittedly, under my own definition I can't 'know' whether my statement of what it takes to 'know' something is itself true, but I think it is.

Tyler,

"No, the best we can achieve (as another reader pointed out) is locate a computer program that exptrapolates a potentially infinite one-way sequence of binary digits encoding observations (see any exposition of Solomonoff induction). We can do a lot more than falsification, Popper was wrong."

Can you elaborate on this? In what way does Solomonoff induction solve the (logical) problem of induction? From what I have read, it merely gives you the probability of what the future members of a series will look like given a specified past series. However, probability falls short of certainty, so the problem is still unresolved.

Btw, it is Hume's argument; Popper merely accepted it and tried to devise a solution.

"From what I have read, it merely gives you the probability of what the future members of a series will look like given a specified past series. However, probability falls short of certainty, so the problem is still unresolved."

Depends on which logical problem of induction you're talking about. If you're talking about Hume's argument that there is no reason to believe any contingent proposition about the unobserved, that logical problem has always been a non sequitur. It relies on the notion that the only valid arguments from a proposition P to Q are those that entail Q, which is never justified and rarely made explicit.

Solomonoff induction solves a logical problem with Bayesian probability, namely, where the prior probability is supposed to come from. Solomonoff proved that the class of enumerable probability distributions contains a universal element. This, along with his proof that the expected squared-error of prediction between this universal distribution and any other enumerable distribution converges to zero faster than 1/n, proves that his learning method is optimal.
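
Solomonoff's construction itself is uncomputable, but the flavour of it can be shown with a toy sketch: weight a hypothesis class by 2^(-description length) and update with Bayes' rule as bits of the sequence come in. Everything in the snippet below - the tiny hand-picked hypothesis class and the "complexity" numbers - is my own invention for illustration, not Solomonoff's actual method.

```python
# Toy sketch of complexity-weighted Bayesian sequence prediction over binary
# observations. The hypothesis class and the "complexity" charges are my own
# illustrative stand-ins, NOT Solomonoff's (uncomputable) universal prior.

HYPOTHESES = {
    # name: (P(next bit = 1 | history), rough "program length" in bits)
    "all_zeros":   (lambda hist: 0.0, 2),
    "all_ones":    (lambda hist: 1.0, 2),
    "alternating": (lambda hist: 1.0 if (not hist or hist[-1] == 0) else 0.0, 3),
    "fair_coin":   (lambda hist: 0.5, 5),
}

def predict_next(bits):
    """Return P(next bit = 1) after updating the weighted hypotheses on `bits`."""
    # Prior weight proportional to 2^(-complexity): shorter "programs" start ahead.
    weight = {name: 2.0 ** -k for name, (_, k) in HYPOTHESES.items()}
    history = []
    for b in bits:
        for name, (pred, _) in HYPOTHESES.items():
            p1 = pred(history)
            weight[name] *= p1 if b == 1 else 1.0 - p1  # Bayes update (unnormalized)
        history.append(b)
    total = sum(weight.values())
    if total == 0.0:
        return 0.5  # every hypothesis falsified; fall back to ignorance
    return sum(w / total * HYPOTHESES[name][0](history) for name, w in weight.items())

print(predict_next([0, 1, 0, 1, 0, 1]))  # ~0.002: "alternating" dominates and predicts 0 next
```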

Pseudonym,

Science purposefully limits itself to that which is empirically testable. If you don't understand this, then you don't understand science.

Then I must confront your limited definition of science. There is no empirical testing of historical evidence. Yet the fossil record is studied with the same methods that are used in courtrooms every day. There is no empirical testing of theoretical physics, like string theory, or of theoretical mathematics. Yet all of these are accepted under the umbrella of "science".

Science is broader than those tests which produce results. Logical positivism was supposed to have died long ago. Looks like it's still kicking a bit.

Gotchaye @21 - On reflection I think you're right that there must be some sort of "causal relationship" between the reality and the true belief if we're to call it knowledge. But I don't think the concept of justification accurately captures this relationship, and that's why we end up with the Gettier problems.

Suppose that you have secretly developed a technique for directly manipulating people's beliefs while they sleep, and that without my knowledge you have given me a true belief which I have no other reason to hold. You probably wouldn't consider my belief to be justified. But there is a causal relationship, and you would probably consider my belief to be knowledge.

P.S. I've just read a little about Goldman's causal theory of knowledge, which I hadn't heard of before. Perhaps you had that theory in mind. From the little I read, it seems appealing.

By Richard Wein (not verified) on 19 Sep 2009 #permalink

From Rosenau: "From what I know of religions like Taoism, Buddhism, and Hinduism, empirical claims are not so central (unless you think reincarnation, kharma, etc. count as empirical claims, which I don't)."

I'll admit to not having studied exactly what Hindu people are talking about when they talk about "reincarnation" (if, in fact, they do talk about it), but how could it possibly be anything other than an empirical claim?

In Buddhism, there is actually an empirical procedure for determining whether a child is the reincarnation of the previous Dalai Lama -- the child is presented with some objects, of which only some belonged to the prior Dalai Lama, and if the child chooses those objects, that is seen as evidence that he is the reincarnation.
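
Just to put a rough number on how strong that evidence would be against pure luck: with entirely made-up figures (the snippet assumes 8 objects on display, 3 of them genuinely the prior Dalai Lama's, and a child picking 3 at random), the chance probability is small but hardly miraculous. I have no idea how many objects are actually used.

```python
from math import comb

# Hypothetical numbers, chosen only for illustration.
total_objects = 8  # objects laid out in front of the child
genuine = 3        # objects that actually belonged to the previous Dalai Lama

# Probability that a child picking 3 objects purely at random gets exactly the right ones.
p_lucky = 1 / comb(total_objects, genuine)
print(f"P(all {genuine} correct by chance) = 1/{comb(total_objects, genuine)} = {p_lucky:.4f}")  # 1/56, about 0.018
```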

Re: Bryan at #67

I don't know either, so I did a bit of googling (which may be a dangerous thing):

----------googling excerpts begin----------
http://en.wikipedia.org/wiki/Taoism

Taoists believe that man is a microcosm for the universe.[12] The body ties directly into the Chinese five elements. The five organs correlate with the five elements, the five directions and the seasons.[33] Akin to the Hermetic maxim of "as above, so below", Taoism posits that man may gain knowledge of the universe by understanding himself.[34]
In Taoism, even beyond Chinese folk religion, various rituals, exercises, and substances are said to positively affect one's physical and mental health. They are also intended to align oneself spiritually with cosmic forces, or enable ecstatic spiritual journeys.[35][36] These concepts seem basic to Taoism in its elite forms. Internal alchemy and various spiritual practices are used by some Taoists to improve health and extend life, theoretically even to the point of physical immortality.

http://www.himalayanacademy.com/resources/pamphlets/KarmaReincarnation…
The akashic memory in our higher chakras faithfully records the soul's impressions during its series of earthly lives, and in the astral/mental worlds in-between earth existences.
…

Indeed, when beseeched through deep prayer and worship, the Supreme Being and His great Gods may intercede within our karma, lightening its impact or shifting its location in time to a period when we are better prepared to resolve it. Hindu astrology, or Jyotisha, details a real relation between ourselves and the geography of the solar system and certain star clusters, but it is not a cause-effect relation. Planets and stars don't cause or dictate karma. Their orbital relationships establish proper conditions for karmas to activate and a particular type of personality nature to develop. Jyotisha describes a relation of revealment: it reveals prarabdha karmic patterns for a given birth and how we will generally react to them (kriyamana karma). This is like a pattern of different colored windows allowing sunlight in to reveal and color a house's arrangement of furniture. With astrological knowledge we are aware of our life's karmic pattern and can thereby anticipate it wisely. Reincarnation: A Soul's Path to Godness.

http://www.buddhist-temples.com/buddhism-facts/buddhist-belief.html

As per Buddhism, there is nothing such as a soul or atman. Rather, a human being is believed to be constituted of five elements, namely physical form, feelings, ideations, mental developments and awareness. These components combine to form a human being at the time of birth. However, since Buddhism believes in reincarnation and karma, one finds a little contradiction here.

http://www.accesstoinsight.org/lib/authors/bodhi/bps-essay_46.html

Even modernist interpreters of Buddhism seem to have trouble taking the rebirth teaching seriously. Some dismiss it as just a piece of cultural baggage, "ancient Indian metaphysics," that the Buddha retained in deference to the world view of his age. Others interpret it as a metaphor for the change of mental states, with the realms of rebirth seen as symbols for psychological archetypes. A few critics even question the authenticity of the texts on rebirth, arguing that they must be interpolations.

A quick glance at the Pali suttas would show that none of these claims has much substance. The teaching of rebirth crops up almost everywhere in the Canon, and is so closely bound to a host of other doctrines that to remove it would virtually reduce the Dhamma to tatters. Moreover, when the suttas speak about rebirth into the five realms (the hells, the animal world, the spirit realm, the human world, and the heavens) they never hint that these terms are meant symbolically. To the contrary, they even say that rebirth occurs "with the breakup of the body, after death," which clearly implies they intend the idea of rebirth to be taken quite literally.

--------googling excerpts end------------

My conclusions from that and other stuff along the way:

Taoism has a lot of pure philosophy in it, but also makes some truth claims which can be empirically tested (and which I think are false).

Hinduism has some philosophy too, but makes a claim that can be empirically tested (that memories from past lives can be accessed, with some difficulty), for which I also know of no good evidence.

Buddhism in its purest form is mostly philosophy, with some untestable claims thrown in. That is, it says that our consciousnesses are "reborn" physically from past lives, but as far as I can see does not claim that we retain testable memories from those past lives.

My empirical test for Hinduism would be to have 1000 devout Hindus bury things in random locations under controlled conditions such that no one else could know the locations, and where they would not be disturbed. Then 100 years later, find someone who remembered that past life and could go directly to the right spot and dig up the object, after first having described it.

If only aspects of personality and not memories are preserved under Buddhism, the test would not work for them.

The term "empirical claims" has always seemed confusing to me. Does it mean claims based on empirical evidence, or any claims about the real world? I assume Rosenau meant the former, since his comment makes little sense with the latter interpretation.

On the other hand, the post he was responding to had Scott (allegedly), Coyne and Blackford referring to empirical claims derived from revelation, which seems to make more sense with the latter interpretation.

By Richard Wein (not verified) on 19 Sep 2009 #permalink

P.S. On second thoughts, maybe Rosenau did simply mean claims about the real world, and he thinks that those religions' claims about reincarnation are only metaphorical, and are not literal claims about the real world. That would seem to be consistent with his view of the Bible.

By Richard Wein (not verified) on 19 Sep 2009 #permalink

"Depends on which logical problem of induction you're talking about."

Well, since you mentioned Popper, I took it as a matter of course that you were referring to Hume's argument, since Popper does the same.

Incidentally, I am not sure what you mean by different problems of induction. They may be worded differently, but ultimately refer to the same logical problem.

"If you're talking about Hume's argument that there is no reason to believe any contingent proposition about the unobserved, that logical problem has always been a non-sequitor. It relies on the notion that the only valid arguments from a proposition P to Q are those that entail Q, which is never justified and rarely made explicit."

I think you state it the wrong way round here.

As far as we can determine, deductive inference is valid in the sense that true premises will always lead to true conclusions. (Being a fallibilist, I would not rule out that some major upheaval in the future will lead us to reverse this opinion. But at the moment, I do not see it.)

However, there is no equivalent procedure for inductive inference. What would be required would be an inductive principle P that would enable the reliable inference from true particulars to a true universal. But no such principle has ever been found. And as Popper argued (quite convincingly, IMO), it can not exist, since you would either have to justify P via induction (leading to an infinite regress) or via postulating it to be true a priori (as Kant unsuccessfully tried to do).

Thus, it will not do to simply label Hume's argument a non sequitur in order to defeat it. You will have to show how any form of inductive reasoning could be valid in the sense described above.

"Solomonoff induction solves a logical problem with Bayesian probability, namely, where the prior probability is supposed to come from. [...] This, along with his proof that the expected squared-error of prediction between this universal distribution and any other enumerable distribution converges to zero faster than 1/n, proves that his learning method is optimal."

This is all fine and dandy. However, you still end up with a result that is probably true, right?

For me, this disagreement depends on one's definition of knowledge. The concept of "justified true belief" has been and remains disputed. How much "scientific knowledge" would pass the JTB test? As was pointed out earlier, both theoretical physics and mathematics hold many fields of "knowledge" which are suspect.

As to the conflict between science and religion, how could there be one? In my experience the conflict is between scientists (often hard atheists) and religionists (frequently creationists), but beyond these, there are the religious scientists (Robert Miller) and the scientific religionists (Dalai Lama) who find no conflict between the two.

For me, it's a question of what we call knowledge and non-knowledge, then how we deal with the expansion of knowledge into the borderland.

"Incidentally, I am not sure what you mean by different problems of induction. They may be worded differently, but ultimately refer to the same logical problem."

In my view there have always been two separate problems of induction. There is A.) the problem of extrapolating consequences for the unobserved from the observed and B.) the problem of circularity, i.e., inductive reasoning is not self-justifying. The former is a problem that is, indeed, unique to induction. The latter is a problem for any system of reasoning.

"As far as we can determine, deductive inference is valid in the sense that true premises will always lead to true conclusions."

But it is interesting that this conclusion itself cannot be analytically deduced without employing the same circular logic that is considered a fatal problem for induction. But regardless, that something will always lead to true conclusions is unnecessary to the validity of inductive inference.

"However, there is no equivalent procedure for inductive inference. What would be required would be an inductive principle P that would enable the reliable inference from true particulars to a true universal."

But once again, it is an unstated assumption that "reliable inference" is equivalent to "entails/necessitates the inference". For instance, we have the guaranteed truth of the statement "Socrates is mortal if Socrates is a man" if we have some a priori knowledge that "all men are mortal", but if instead we only have knowledge like "99 percent of all men are mortal", we are still justified in accepting the conclusion. The method of inference is not as reliable, but that doesn't equate to not reliable at all.

"Thus, it will not do to simply label Hume's argument a non sequitur in order to defeat it."

But Hume is making a specific claim that isn't justified by his premises. He argues that there is no reason to accept any contingent proposition about the unobserved if that proposition cannot be analytically deduced. He assumes that all logic is deductive.

"This is all fine and dandy. However, you still end up with a result that is probably true, right?"

Yes, in the sense that the sequence is described by a probability distribution.

Shouldn't the "ways of knowing" referred to really be "ways of articulating" what we (think we) know. I know much more than I have ever articulated. So it seems to me that what I know is whatever my brain knows, most which has never found its way into language.

But if I articulate what I know and assert that the language I have used not only captures the truth of it but is binding for others also, those others should ask for evidence. Not reasons, not story line, but evidence.

But if I assert only that the language I have used captures the truth of it, leaving off the binding claim, we can have a conversation and go our separate ways.

So science and religion both assert truths in their own languages, and make claims that those truths are binding on me. Show me the evidence.

By PiffleTosh (not verified) on 19 Sep 2009 #permalink

"He argues that there is no reason to accept any contingent proposition about the unobserved if that proposition cannot be analytically deduced."

Minor self-nitpick: "contingent" shouldn't be in the above sentence, I changed my wording halfway through and forgot to change it.

It is all very simple.

The bible and all religious works are works of fiction.

Josh Rosenau wants the bible to be true.

He then comes up with a half-assed scheme to justify fiction as non-fiction.

This tactic is either malevolent or insane.

By NewEnglandBob (not verified) on 19 Sep 2009 #permalink

Poor confused Dr. Rosenhouse,

Picard is a much better captain than Kirk. I would attempt to argue this intellectually, but if you have sided with Kirk, then you have already chosen brawn over brains. So, let's settle this Kirk-style by arm wrestling. You know where to find me!

I am going to assume that you were just trying to provoke people by putting Picard on the bottom. You are clearly too intelligent for that. If by chance you were being serious though it will have to be pistols at dawn. I will not have Patrick Stewart's greatness questioned.

By Neill Ra[er (not verified) on 19 Sep 2009 #permalink

Silly people.

Sisko was best, followed closely by Kirk.

The bravest and most honest of all was, of course, Quark.
------------------

On this knowledge thingy.

All of us, Jason included I suspect, are taught by others and accept many things on authority.

That is, unless someone has personally reproduced and confirmed all the scientific tests of accepted science to their own satisfaction.

So is it knowledge? Do you know it or do you just accept it because a scientist or group of scientists outside your speciality say it is so?

On ways of knowing, we have mathematics and science that most posters will accept, but science isn't mathematics. Is mathematics a way of knowing?

Most accept the statement 1+1=2, but most will think of that as 1 apple + 1 apple = 2 apples but is that what the mathematician means?

Note: I'm using apple to say that you have to add the same type of things, feel free to use Natal Lemon if you prefer.

By Chris' Wills (not verified) on 19 Sep 2009 #permalink

"In my view there has always been two separate problems of induction. There is A.) the problem of extrapolating consequences for the unobserved from the observed and B.) the problem of circularity, i.e., inductive reasoning is not self-justifying. The former is a problem that is, indeed, unique of induction. The latter is a problem for any system of reasoning."

Well, problem A.) is what Hume is referring to and what prompted Popper to devise his methodology of falsificationism. As to problem B.), I believe that there is a misunderstanding on your part involved, see below.

"But it is interesting that this conclusion itself cannot be analytically deduced without employing the same circular logic that is considered a fatal problem for induction. [...] But once again, it is an unstated assumption that "reliable inference" is equivalent to "entails/necessitates the inference"."

OK, let's do some basic logic.

A deductive inference is characterized inter alia by the fact (or assumption, strictly speaking) that true premises will always lead to true conclusions.

Why is that? Because a valid deductive inference is tautologous, i.e. if you compile its truth table it will always have the same value, i.e. "true", no matter what the truth values of the premises are. Or in other words: the conclusion of a valid deductive inference is necessarily true (logically true, that is) due to the formal structure of the inference.

Another way to look at it is to state that deductive inferences move from the generic to the specific. They retain the information. The conclusion does not add any new information that was not already contained in the premises.

Now, is it conceivable that what I stated above is incorrect and that deductive inference from true premises would lead to a wrong conclusion? I would say, theoretically yes. But what would be required for this is to show how our concept of tautology is somehow flawed. Presently, I do not see a way to achieve this.
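
To make the truth-table point concrete, here is a minimal sketch (my own illustration, in Python, not anything from the thread) that compiles the table for modus ponens, ((A → B) ∧ A) → B, and checks that it comes out true under every assignment - which is all that "tautologous" means here.

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Modus ponens written as a single formula: ((A -> B) and A) -> B
modus_ponens = lambda a, b: implies(implies(a, b) and a, b)

# Compile the truth table over every assignment of A and B.
rows = [(a, b, modus_ponens(a, b)) for a, b in product([True, False], repeat=2)]
for a, b, value in rows:
    print(f"A={a!s:<5} B={b!s:<5}  ((A->B) & A) -> B = {value}")

# "Tautologous" here just means: true on every row of the compiled table.
print("Tautology:", all(value for _, _, value in rows))  # True
```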

"But regardless, that something will always lead to true conclusions is unnecessary to the validity of inductive inference."

Not if you use "validity" in the logical sense, since the necessary transfer of truth from true premises to the conclusion is precisely what makes an inference "valid".

"For instance, we have the guaranteed truth of the statement "Socrates is a moral if Socrates is a man" if we have some a priori knowledge that "all men are mortal", but if instead we only have knowledge like "99 percent of all men are mortal", we are still justified in accepting the conclusion."

I think you are conflating an inference being valid with being sound.

Deductive logic will guarantee that if the premises "Socrates is a man" and "All men are mortal" are true, the conclusion "Socrates is mortal" will also be true. The reason is that such a deductive inference is logically true due to its logical form.

However, that does not mean that the inference is necessarily sound, i.e. factually correct. The reason is that one of the premises might be false. To say it again: a valid deductive inference merely guarantees transmission of truth from the premises to the conclusion - but only in the case that all the premises are true.

"The method of inference is not as reliable, but that doesn't equate to not reliable at all."

I am not sure what you mean here. Either something is reliable or it is not. Of course that does not mean that an inductive inference will always yield a false conclusion. But the point is that you have no way of knowing whether the conclusion in your particular case is true or not.

"But Hume is making a specific claim that isn't justified by his premises. He argues that there is no reason to accept any contingent proposition about the unobserved if that proposition cannot be analytically deduced. He assumes that all logic is deductive."

Again, this is not a defeater to Hume's argument. We "know" (as good as anything we know) that deductive inference is valid.

Now, what you are basically saying is that there might exist a version of valid inductive inference, but we have not found it yet. However, unless it is found, inductive reasoning remains unreliable. And as I already mentioned, Popper has argued that such a thing as a valid inductive principle P can not exist, since it leaves you only the two options of an infinite regress or apriorism.

"Yes, in the sense that the sequence is described by a probability distribution."

But then it does not constitute a defeater to Hume (and Popper), because a result which is probably true might just as well be false. Consequently, it does not constitute a method of valid inductive inference ("valid" in the sense used above).

Of course inductive arguments are never deductively valid. A good inductive argument is referred to as "cogent" or "inductively strong". Such arguments can certainly give good reasons for believing their conclusions to be true, and it is certainly possible to form relevantly justified true beliefs based on nothing but inductive reasoning (which includes hypothetico-deductive reasoning). Knowledge is not the same as certainty.

Re Chris Wills
You ask "Most accept the statement 1+1=2, but most will think of that as 1 apple + 1 apple = 2 apples but is that what the mathematician means?

Note: I'm using apple to say that you have to add the same type of things, feel free to use Natal Lemon if you prefer."

Actually, that is what a mathematician means - well, sort of. What a mathematician means is that yes, 1 apple plus 1 apple = 2 apples, and 1 pencil + 1 pencil = 2 pencils, and 1 lemon + 1 lemon = 2 lemons, and so on.

Mathematics is the study of the relationships between objects without regard to what the objects are; that's why mathematics is so powerful in science. Mathematics makes no claims of the form "X exists"; all mathematical claims are of the form "if X then Y". The way we define number is an abstraction on sets, where we hold the elements of the set as irrelevant and look only at how many elements there are. Addition is the function that looks at the number of elements in the (disjoint) union of the two sets we are trying to add. The abstractive principle is what leads mathematics to be so dense: once we have abstracted out the objects to make integer addition, we can abstract once more and make integer multiplication, then abstract again to make real number addition and multiplication, and we can keep on abstracting to create very powerful functions and formulae that have no 'existence' in the sense of a rock or an apple. Mathematics is not a subject you can use to say "I know that X exists"; however, it is very powerful when you want to say "If we assume that X exists, then I can confidently say that Y, Z, P, Q, R, et al. also must exist, because if they didn't exist then X could not exist."
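
A quick toy illustration of that set-theoretic picture of addition (my own sketch, nothing canonical): tag the elements so the two sets cannot collide, take the union, and count.

```python
def add_as_sets(m, n):
    """Define m + n as the number of elements in the disjoint union of an
    m-element set and an n-element set; the tags keep the sets disjoint."""
    left = {("left", i) for i in range(m)}    # stand-in for m apples
    right = {("right", j) for j in range(n)}  # stand-in for n apples (or Natal Lemons)
    return len(left | right)

print(add_as_sets(1, 1))  # 2, regardless of what the elements "are"
print(add_as_sets(3, 4))  # 7
```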

"A deductive inference is characterized inter alia by the fact (or assumption, strictly speaking) that true premises will always lead to true conclusions."

But it is still an assumption, and I would argue that it is not demonstrated by appealing to the notion of a tautology.

"Why is that? Because a valid deductive inference is tautologous, i.e. if you compile its truth table it will always have the same value, i.e. "true", no matter what the truth values of the premises are."

But not all valid deductive inference is tautological. Tautological formulas are true for any valuation of their propositions, i.e., "A or not A". Most deductive inference is not achieved this way, most of it relates propositional values, i.e., "if A contains X, then A must necessarily contain a subclass contained in X" (which would, among other things, rely on the axiom of choice).

"Not if you use "validity" in the logical sense, since the necessary transfer of truth from true premises to the conclusion is precisely what makes an inference "valid"."

I'm using it in a less formal sense than that. What I mean is that the method of inference is justified, I'm not referring to the guaranteed transitivity of truth values.

"I am not sure what you mean here. Either something is reliable or it is not. Of course that does not mean that an inductive inference will always yield a false conclusion."

And this is the primary fallacy of Popper and others. Reliability is not a discrete concept, much less a binary one. Bayesian probability combined with Solomonoff's universal distribution provides a way to say that induction will lead to true conclusions in most instances. We do not have to sacrifice the validity of inductive inference if we simply admit that it's not the best.

"But then it does not constitute a defeater to Hume (and Popper), because a result which is probably true might just as well be false."

This isn't true. The missing premise here is that "might be true" and "might be false" are of equal status if there is nothing in our premises that entails one or the other. What we know about probability shows us that we can know which conclusion is more likely.

Tyler,
Do you think Popper was entirely in error? Might it be that the scope of application for his principles was limited and cannot be applied to every type of test?

Collin,

I'm not sure I understand your question. As I've said, I agree with Popper that falsifiability is an important heuristic for distinguishing science from non-science, since it implies testability. Where I disagree with Popper is where he says that we can't confirm certain hypotheses but can only know which hypotheses have survived attempts at disconfirmation. If we take Popper's reasoning to its logical conclusion, scientific knowledge as we understand it is basically impossible, since we can't assign higher or lower probabilities to any given hypothesis (we couldn't, for instance, even make the claim that the chance of a baby being born male is 50 percent, which is absurd).

(Can everybody who notices pleeeeeeaaase, pretty please overlook the fact that I used "i.e." where I should've used "e.g." in my response above? :-) I just noticed that after obsessively reading over my response and it's irritating me.)

"But it is still an assumption, and I would argue that it is not demonstrated by appealing the notion of a tautology."

I do not understand what you mean here. How would you go about demonstrating that a tautology always has the truth value "true"? After all, this is how a tautology is defined. And it makes eminent sense to me.

Now, if you want to claim that this definition is somehow suspect/incomplete/inadequate, I think the onus is on you to show exactly where the problem is.

"But not all valid deductive inference is tautological."

I do not see how it could be otherwise, since the "validity" of the inference presupposes the transitivity of truth from premises to conclusion, which in turn is guaranteed due to the tautological nature of the inference.

"Most deductive inference is not achieved this way, most of it relates propositional values, i.e., "if A contains X, then A must necessarily contain a subclass contained in X" (which would, among other things, rely on the axiom of choice)."

I do not follow what you are trying to say here. What are "propositional values" supposed to be? Are they different from truth values?

"I'm using it in a less formal sense than that. What I mean is that the method of inference is justified, I'm not referring to the guaranteed transitivity of truth values."

But then you are not addressing Hume's argument. After all, Hume himself stated that we are practically justified to use inductive inference since it is ingrained into our nature, despite the fact that we would rationally be obliged to reject it.

"And this is the primary fallacy of Popper and others. Reliability is not a discrete concept, much less a binary one."

What good is a method for separating truth from falsehood if it is not reliable? For instance, if a person holds 10 beliefs and knows from inductive inference that his beliefs have a 95% chance of being true, how does that give him any guarantee or even merely confidence that his beliefs are in fact true? How do you logically go from "Proposition X has Y% probability of being true." to "Proposition X is true."?

"This isn't true. The missing premise here is that "might be true" and "might be false" are of equal status if there is nothing in our premises that entails one or the other. What we know about probability shows us that we can know which conclusion is more likely."

That's great. However, the problem is that you need to provide a logical connection between "likely true" and "true". If you can not do this, you have no reliable way of excluding the possibility of your proposition being false. Unless you move from a concept of absolute truth to relative truth or allow for more truth values than "true" or "false", I do not see how any form of "probably true" could ever be a substitute for "true".

"Where I disagree with Popper is where he says that we can't confirm certain hypotheses but can only know which hypothesis have survived attempts at disconfirmation."

But Popper does not say this. What he maintains is that instances of confirmation of a theory can give us no certainty or even confidence about whether the theory in question is true or not, regardless of how many instances of confirmation we have accumulated. In contrast, one instance of falsification is sufficient to label a theory "false".

"If we take Popper's reasoning to its logical conclusion, scientific knowledge as we understand it is basically impossible, since we can't assign higher or lower probabilities to any given hypothesis (we can't, for instance, make the claim that the chance of a baby being born male is 50 percent, which is absurd)."

Of course you can make such a claim. Popper merely says that such a claim is worthless if you want to know whether the baby will in fact be a male or not.

Generally speaking, what is claimed by critical rationalists like Popper and Miller is that the goal of science is to separate true propositions about our reality from false ones and that inductive inferences that produce propositions which are "likely true" fall short of helping in this goal, since nobody has come up with a method to get from "likely true" to "true".

Furthermore, they maintain that inductive reasoning is not required for the process of separating truth from falsehood, since the deductive process of falsification is sufficient for this.

@Galen Evans (83)

Fair enough.
So is mathematics knowledge?
It isn't science, just something scientists sometimes use.

By Chris' Wills (not verified) on 20 Sep 2009 #permalink

"I do not see how it could be otherwise, since the "validity" of the inference presupposes the transitivity of truth from premises to conclusion, which in turn is guaranteed due to the tautological nature of the inference."

You're using circular reasoning here. You're essentially saying that things that are true are true by definition. For some propositions that is true, such as "if A, then A". But a much larger class of logical formulas are not true by definition, they are gleaned analytically by the relation of those definitions.

"I do not follow what you are trying to say here. What are "propositional values" supposed to be? Are they different from truth values?"

Perhaps "values of the clauses" would be more appropriate here. Consider the satisfiability problem: a sequence of boolean clauses joined by logical operators, with the goal being to find an assignment of truth values under which the expression evaluates to "true". This provides a clear example of a deductive inference that is not true regardless of the values of the clauses (the way "A OR ~A" is), and therefore isn't tautological.
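
For what it's worth, the difference between "satisfiable" and "tautologous" is easy to exhibit with a brute-force check over assignments - a toy sketch only; real SAT solvers are far cleverer.

```python
from itertools import product

def assignments(variables):
    """Yield every True/False assignment to the named variables."""
    for values in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, values))

def satisfiable(formula, variables):
    """True if at least one assignment makes the formula evaluate to True."""
    return any(formula(**a) for a in assignments(variables))

def tautology(formula, variables):
    """True if every assignment makes the formula evaluate to True."""
    return all(formula(**a) for a in assignments(variables))

excluded_middle = lambda A: A or not A              # true no matter what A is
some_clauses    = lambda A, B: (A or B) and not A   # true only when A=False, B=True

print(satisfiable(excluded_middle, ["A"]), tautology(excluded_middle, ["A"]))      # True True
print(satisfiable(some_clauses, ["A", "B"]), tautology(some_clauses, ["A", "B"]))  # True False
```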

"What good is a method for separating truth from falsehood if it is not reliable? For instance, if a person holds 10 beliefs and knows from inductive inference that his beliefs have a 95% chance of being true, how does that give him any guarantee or even merely confidence that his beliefs are in fact true? How do you logically go from "Proposition X has Y% probability of being true." to "Proposition X is true."?"

And this is the mistake Hume made. He was correct that inductive inference is never deductively valid; I'm not disputing that. Where he was wrong was to say that the only way we can be given good reason to believe in a proposition is if it is deductively valid. Simply because high probability doesn't entail a conclusion doesn't mean that it doesn't provide us with good reason to believe it.

"That's great. However, the problem is that you need to provide a logical connection between "likely true" and "true". If you can not do this, you have no reliable way of excluding the possibility of your proposition being false."

But we already have a strong conceptual distinction between high probability and low probability. We exclude the falsehood of a proposition when that falsehood has a distinctly low probability.

"Unless you move from a concept of absolute truth to relative truth or allow for more truth values than "true" or "false", I do not see how any form of "probably true" could ever be a substitute for "true"."

These kinds of propositions have already been formalized. Ever heard of fuzzy logic/sets?

"But Popper does not say this. What he maintains is that instances of confirmation of a theory can give us no certainty or even confidence about whether the theory in question is true or not, regardless of how many instances of confirmation we have accumulated. In contrast, one instance of falsification is sufficient to label a theory "false"."

I fail to see how this is distinguishable from what I said. I said that Popper claimed that instances of apparent confirmation confer no knowledge whereas instances of falsification do. This is what I'm disputing.

"Of course you can make such a claim. Popper merely says that such a claim is worthless if you want to know whether the baby will in fact be a male or not."

That's a red herring. What is clear from the example is that, if Popper's claims are taken to their logical conclusions, we can't make justified claims about perfectly ordinary probabilities. That's what I claimed was absurd.

I'm sorry I didn't notice this post until this late stage of the discussion. I was happy to note that Jason fleetingly mentioned historical knowledge as a non-theological alternative to scientific knowledge, though for some reason poetic truth and similar notions seem to have attracted much more comment. If somebody mentioned legal reasoning, I missed it. I didn't pick up much discussion of the various venues in which people think about right and wrong or politics.

Mathematicians, or so I've been told, once found it rather daring to bring up "pathological" functions like the Weierstrass function as exceptions to nice, well-behaved polynomial and trigonometric functions. The problem is the overwhelming majority of functions turn out to be pathological. I think the situation is similar in this discussion. The people who promote what I myself have called scientism think of non-scientific knowledge as some sort of freakish case when it is scientific knowledge that is the tiny exception. Recognizing this fact isn't a way of bad mouthing science, and it has nothing to do with promoting theology except to the extent that religious apologists seize on absolutely any available argument including this one--when Chomsky shot down B.F. Skinner, various nuns wrote theses explaining how transformational grammar proved that St. Thomas had been right all along. Anyhow, to revert to my mathematical analogy, it may pay to linearize, as the engineers recommend, even if there are almost no spherical cows.

"You're using circular reasoning here. You're essentially saying that things that are true are true by definition."

No, I am saying that the truth value of a tautology will always be the same, i.e. "true", for otherwise it would not be a tautology. This is not a matter of "circular reasoning", because I do not infer anything. It is simply how a tautology is defined.

"But a much larger class of logical formulas are not true by definition, they are gleaned analytically by the relation of those definitions. [...] Consider the satisfiability problem, a sequence of boolean clauses separated by logical operators, with the goal being to find an expression that evaluates to "true". This provides a clear example of a deductive inference that is not true invariant of the values of the clauses, like "A OR ~A", and therefore isn't tautological."

I believe you have misunderstood me and/or I phrased it unclearly. I did not claim that any deductive inference per se is valid. What I tried to say is that if a deductive inference is valid, it will necessarily lead to transmissibility of truth from true premises to the conclusion due to its tautological nature.

The satisfiability problem you mention is a case in point. At least in case of classical, bivalent propositional logic it is precisely the goal to elucidate whether a given logical inference is tautologous, i.e. whether its negation is unsatisfiable. An algorithm is used to compile the truth values of a given proposition A and thus of the inference which is equivalent to said proposition. If the result is positive, i.e. if the truth value of A is always of the class "true" for all possible combinations of truth value assumptions over A, said inference is deemed "valid".

"Where he [Hume] was wrong was to say that the only way we can be given good reason to believe in a proposition was if it is deductively correct. Simply because high probability doesn't entail a conclusion doesn't mean that it doesn't provide us with good reason to believe it."

But why? If you say that a high probability gives a good reason to classify a proposition as "true", what is your logical basis for this statement?

To repeat: in order for your statement to make sense, you would have to show the logical connection between "probably true" and "true"; or in other words: since you lack any way of knowing whether a "probably true" statement is not false, you may end up with a bunch of probably true, but in fact false propositions.

"But we already have a strong conceptual distinction between high probability and low probability. We exclude the falsehood a proposition when that falsehood has a distinctly low probability."

On what logical grounds? What is your justification for doing so? Is it not merely a pragmatic decision?

"These kinds of propositions have already been formalized. Ever heard of fuzzy logic/sets?"

Certainly. But I believe that, special cases apart, the widespread use of fuzzy logic (or multivalent systems of logic in general) in science would be problematic. Anyway, my comments were restricted to the classical, bivalent system of propositional logic.

"I said that Popper claimed that instances of apparent confirmation confer no knowledge was instances of falsification do. This is what I'm disputing."

We have to be careful with terminology here.

Popper does not claim that a successful confirmation of a theory does not give us any knowledge at all. Rather, it tells us that the theory in question has not yet been falsified and remains a contender for the truth.

What he does say is that positive confirmation or corroboration does not help us to classify a theory as true. Now, you already conceded that you do not dispute the fact that inductive inference is not as valid as deductive inference, i.e. it is not logically possible to validly conclude from an instance of confirmation to the truth of the theory in question. So the logical aspect of Popper's argument remains undefeated.

It is another issue as to whether confirmation of a theory gives us a "good reason" to believe that the theory is true. This would open up an entirely different discussion altogether with regard to what constitutes "good reasons", what they are good for and if they are in any way necessary.

I hesitate somewhat to open another can of worms, but what the heck: so what would constitute a "good reason" in your eyes and why is it preferable to have one?

"What is clear from the example is that, if Popper's claims are taken to their logical conclusions, we can't make justified claims about perfectly ordinary probabilities."

Why? As far as I know, Popper does not dispute the mathematical validity of probability calculus. He just points out the uselessness of it for the task of classifying propositions as true or false.

Iapetus,

As you know from our discussions elsewhere I'm not a philosopher, but let me present my take on what Tyler is saying in my more clumsy "layman's" way of thinking.

Regarding deductive reasoning, it seems obvious to me that this can be formally shown to work. It starts with a larger set of truth values (the initial propositions) and derives a subset of them that are also true when taken in combination. The only way I can see that this would fail is if either the initial propositions were not overlapping in the necessary way to derive the desired combination (in which case this should be able to be shown) or one of the "logical rules" (think "operators" in equations) used in deriving the result somehow manages under some "conditions" to invert "true" to "false". This seems exceptionally unlikely (to be polite), as it would imply turning the philosophy world upside-down!

I think a problem with Tyler's posts is that he has effectively moved the goal posts. I'm not saying this was his intention, just the effect of how his posts have progressed. He initially appeared to be writing about logical induction, but then moved to probabilistic induction, a quite different thing.

I would have thought philosophers are mostly interested in the former, being about logic in the usual sense.

The latter I see as a pragmatic tool that actually acknowledges that logical induction can't be formally proven, but offers a useful partial work-around: give a numerical likelihood of some event (induction) being true given a model for the system. An obvious catch is that the model of the system can itself be wrong, and the "truth" of that model would have to be shown. In practical use these models are tested against a "known to be true" system, which of course introduces a further query: how do we know the "true system" is actually correct, and so on? In actual use, "the real world" in some form is used, but invariably only approximately, and limited to the key questions at hand. Even if the model does fit the "test" case(s), you don't really know whether this is because the model is a faithful representation of the reality, or whether it is "right for the wrong reasons" as it were (i.e. it generates the right answers for the test data, but the model is actually wrong).

Saying that probabilistic induction is reliable doesn't, to me, say that logical induction is reliable; all it really does is introduce a model that is numerical (as opposed to "only" logical), so that probabilities can be derived from it. In effect, it lets you shift the target from logical proof (as in "true") to dealing with likelihoods. This is pragmatic and useful as a practical tool, but it doesn't say "induction works" in the sense of knowing whether any individual prediction (inference) is true, because you cannot know whether an individual prediction fell into the "happened as predicted" bin or the "didn't happen as predicted" bin; you can only know the likelihood that it fell into each of the two bins. (A hidden fallacy of sorts in Tyler's approach is that the "reliable" he refers to only works at a "population" level - it's how good the system is if you "run" it many times and score its "typical" behaviour. That doesn't tell you whether any individual prediction fell into the "correctly predicted" bin or not. It's a common error among those not familiar with statistics, so I'm a little surprised to see it rearing its head here.)
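
Here is a bare-bones simulation of that population-level point, with a made-up 95% figure: the hit rate converges nicely over many runs, but nothing in the method tells you which bin any single prediction landed in until after the fact.

```python
import random

random.seed(1)
P_CORRECT = 0.95  # assumed (made-up) per-prediction reliability of our inductive method

def one_prediction_outcome():
    """Whether a single prediction later turned out to be right."""
    return random.random() < P_CORRECT

outcomes = [one_prediction_outcome() for _ in range(10_000)]
print("Population-level hit rate:", sum(outcomes) / len(outcomes))  # close to 0.95

# For any individual prediction, all the method gives us *beforehand* is P(correct) = 0.95;
# which bin trial #137 actually fell into is only knowable after the fact.
print("Trial #137 turned out correct?", outcomes[137])
```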

By Heraclides (not verified) on 20 Sep 2009 #permalink

"No, I am saying that the truth value of a tautology will always be the same, i.e. "true", for otherwise it would not be a tautology."

The definition of a tautology is not simply a logical formula that evaluates to "true"; it's a logical formula that is true regardless of truth assignment. "A OR ~A" is true regardless of whether we postulate that A is true or that A is false, and therefore is a tautology. Saying that all deductive inference is tautological is either something that you claim to be true by definition, or something that is contradicted by experience and thus false.

"An algorithm is used to compile the truth values of a given proposition A and thus of the inference which is equivalent to said proposition. If the result is positive, i.e. if the truth value of A is always of the class "true" for all possible combinations of truth value assumptions over A, said inference is deemed "valid"."

But verifying that a given boolean expression is a tautology is a computationally distinct problem from the SAT, which is to find out if given a boolean expression as input, we can produce a truth assignment of the variables that makes it evaluate to true. Verifying that an expression is tautologous would be equivalent to finding out that it is true regardless of the truth assignment we make.

"But why? If you say that a high probability gives a good reason to classify a proposition as "true", what is your logical basis for this statement?"

Because probability can be defined as a likelihood. Given a probability distribution we can define the expected values. Truth may not be guaranteed in this sense, but it doesn't have to be.

What you seem to keep pushing me for is a deductive proof that induction yields true conclusions. I don't see why that is required to make inductive inferences justifiable.

"I hesitate somewhat to open another can of worms, but what the heck: so what would constitute a "good reason" in your eyes and why is it preferable to have one?"

Taking after Solomonoff, I believe the concept of compression is the most important reason. A "good reason" to believe in any given explanation for a phenomenon is that it compresses data about past observations as well as the data predicted for future observations. This also has the benefit that it formalizes the notion behind Occam's razor (simpler theories are more likely).
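
Crudely, the compression criterion can be sketched as a two-part code: charge each hypothesis for the bits needed to state it plus the bits needed to encode the data given it, and prefer the smallest total. The hypotheses, the complexity charges, and the data below are all invented for illustration.

```python
from math import log2

data = [1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1]  # mostly ones, with one exception

def total_bits(p_one, data, model_bits):
    """Two-part code length: bits to state the model plus -log2 P(data | model)."""
    eps = 1e-12  # keeps the deterministic hypothesis from hitting log(0)
    data_bits = sum(-log2(max(p_one if bit == 1 else 1.0 - p_one, eps)) for bit in data)
    return model_bits + data_bits

# Made-up complexity charges for three candidate "explanations" of the sequence.
candidates = {
    "always one (p=1.0)":  total_bits(1.0, data, model_bits=1),
    "biased coin (p=0.9)": total_bits(0.9, data, model_bits=4),
    "fair coin (p=0.5)":   total_bits(0.5, data, model_bits=2),
}
for name, bits in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{name}: {bits:.1f} bits")
# The biased coin compresses this data best; "always one" is cheapest to state but
# pays an enormous penalty for the single 0 - Occam's trade-off in miniature.
```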

"Why? As far as I know, Popper does not dispute the mathematical validity of probability calculus. He just points out the uselessness of it for the task of classifying propositions as true or false."

Mathematical validity is not what I'm talking about, I'm talking about being able to apply probability to make statements about real world phenomena. As far as I can tell, making a claim that a baby has a fifty percent chance of being born male requires us to sample births and see if they obey a certain probability distribution. But that would be an instance of confirmation, which according to Popper we can't do.

Heraclides,

I don't see where I've moved the goalposts here. In elaborating on my reasons for accepting inductive inference as justified (I've decided to stop using the word "valid" to avoid confusion with concepts specific to deduction), I've used notions of probability. I'm not sure how you can do any kind of "logical" induction without employing probabilistic concepts.

By the way, I've noticed that both of you have employed pragmatic arguments in favor of our assumptions behind deductive inference. To me that seems a bit like special pleading, seeing as pragmatic arguments in favor of inductive inference are (rightly) seen as weak.

I don't see where I've moved the goalposts here.

I didn't say you had, I said the effect of your sequence of posts was to.

I'm not sure how you can do any kind of "logical" induction without employing probabilistic concepts.

You must have a different idea of what logic is to everyone I know. Logic is "true" or "false", either/or. It's not probabilistic.

I've noticed that both of you have employed pragmatic arguments in favor of our assumptions behind deductive inference.

Speaking for myself only (Iapetus can speak for himself), illustrating the problem can sometimes be a sight quicker, clearer and more to the point than mucking around with "concepts".

To me that seems a bit like special pleading, seeing as though pragmatic arguments in favor of inductive inference are (rightly) seen as weak.

Trying to disparage or dismiss the argument rather than take on board what is being said seems weak to me ;-)

By Heraclides (not verified) on 20 Sep 2009 #permalink

"You must have a different idea of what logic is to everyone I know. Logic is "true" or "false", either/or. It's not probabilistic."

That may be true in traditional Boolean logic, but there are other formalisms for which it isn't. I've already mentioned fuzzy logic, but there is also defeasible logic and other forms of non-monotonic logic.

If I were to think of logical induction, I would probably employ concepts from fuzzy logic to classify the degree of conclusiveness of an argument from proposition A to proposition B. That would better articulate what I'm trying to communicate, which is that conclusions can be justified even if not entailed by our premises.
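
A bare-bones illustration of what I mean (standard min/max/complement connectives; the particular degrees are made up): once truth values live in [0, 1] instead of {true, false}, a "degree of conclusiveness" has somewhere to live - and note that even excluded middle stops being a tautology.

```python
# Minimal fuzzy connectives: degrees of truth in [0, 1] rather than {True, False}.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

# Made-up degrees: how strongly the evidence supports A, and how strongly A carries over to B.
support_for_A = 0.9
a_carries_to_b = 0.7

# One (of several possible) ways to grade the conclusion: an inference is only as
# strong as its weakest link.
support_for_B = f_and(support_for_A, a_carries_to_b)
print(support_for_B)                               # 0.7
print(f_or(support_for_B, f_not(support_for_B)))   # 0.7, not 1.0: "B or not B" is no longer a tautology
```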

Also, statements about probability can be true or false. We can use the Schroedinger equation in quantum physics to say "the probability that a particle will be located in a given interval is X", or to use a less abstruse example, we can say that "the probability that a fair coin will turn up heads is 1/2". Both are true/false propositions, and both are deductive (and notably, neither are tautological).

@Chris Wills #89,

Somewhat; mathematics is more a way of inference than a way of knowing. Let us not forget Bertrand Russell's epigram: "Pure mathematics is the subject in which we do not know what we are talking about, or whether what we are saying is true."

Tyler,

"The definition of a tautology is not simply a logical formula that evaluates to "true", it's a logical formula that is true regardless of truth assignment."

I agree. This is exactly what I stated, so I am not sure why you write this as if it somehow contradicts anything I said.

"Saying that all deductive inference is tautological is either something that you claim to be true by definition, or is something that is contradicted by experience and thus false."

If you look at my previous post, you will find that I explicitly tried to clarify this misunderstanding. Here it is again:

I believe you have misunderstood me and/or I phrased it unclearly. I did not claim that any deductive inference per se is valid. What I tried to say is that if a deductive inference is valid, it will necessarily lead to transmissibility of truth from true premises to the conclusion due to its tautological nature.

And let me add that no equivalent procedure for inductive inference exists, which is basically what I am trying to communicate and which you already agreed to.

"But verifying that a given boolean expression is a tautology is a computationally distinct problem from the SAT, which is to find out if given a boolean expression as input, we can produce a truth assignment of the variables that makes it evaluate to true. Verifying that an expression is tautologous would be equivalent to finding out that it is true regardless of the truth assignment we make."

Again, you start your statement with "But" as if the paragraph you are replying to contains some contradiction to what you say, when it really does not.

To repeat: I agree with you that not every deductive inference is tautologous. However, the point is that any deductive inference that is a tautology is "valid" and thus guarantees transmissibility of truth.

"Because probability can be defined as a likelihood. Given a probability distribution we can define the expected values."

Yes, and you end up with propositions that are "likely true", while the possibility exists that they might be false. And what Popper is saying is that instead of employing a procedure which produces "likely true", but potentially false propositions, why not use a deductive process that does not have this problem?

"What you seem to keep pushing me for is a deductive proof that induction yields true conclusions. I don't see why that is required to make inductive inferences justifiable."

The reason why I entered this discussion in the first place was because I was curious about your statement that "Popper was wrong". I took this to mean that his arguments for the logical invalidity of inductive reasoning were somehow flawed, implying that you would be in possession of a form of valid inductive inference.

As you conceded, this is not the case. So if I understand you correctly, your (weaker) position is that although inductive inference is formally invalid, we are nonetheless justified in using it. Which brings us to a discussion as to what constitutes sufficient justification.

"A "good reason" to believe in any given explanation for a phenomena is that it compresses data about past observations as well as data predicted by future observation. This also has the benefit that it formalizes the notion behind Occam's razor (simpler theories are more likely)."

Hmm, can you elaborate on this? As it is, I do not see how this differs from Popper's position. After all, he would agree with you that a scientific theory must account for past observations and be amenable to falsification by virtue of having a logical consequence which can be contradicted by an observation statement.

Are you saying that a theory is preferable if, in addition to the above, it produces predictions with a "high probability" of being true?

"As far as I can tell, making a claim that a baby has a fifty percent chance of being born male requires us to sample births and see if they obey a certain probability distribution. But that would be an instance of confirmation, which according to Popper we can't do."

Why not? Of course you can confirm a probabilistic hypothesis, just as you can confirm any other scientific hypothesis.

Popper merely maintains that such a confirmation will not tell you whether your hypothesis is true or not and furthermore that it is of no use for the prediction of a particular outcome.

"By the way, I've noticed that both of you have employed pragmatic arguments in favor of our assumptions behind deductive inference."

You have lost me here. Can you point to an example where I have based the validity of deductive reasoning on pragmatic grounds?

"Also, statements about probability can be true or false."

Who ever said otherwise? What I did say is that "probably true" is in no way logically connected to "true".

Heraclides,

I agree to most of what you say.

My personal opinion is that although inductive reasoning is formally invalid, it is simply impossible for human beings to avoid it in our day-to-day lives. As Hume said, we are psychologically justified in using it due to our very nature, i.e. we literally can not help it. In this sense, Popper argued for an unrealistic ideal.

On the other hand, it would not hurt if scientists and philosophers of science would be a bit more candid about affirming the nature of inductive inference, rather than trying to obfuscate it and implying that scientific knowledge is somehow immune to Hume's argument.

Iapetus,

My personal opinion is that although inductive reasoning is formally invalid, it is simply impossible for human beings to avoid it in our day-to-day lives.

Oh, I agree.

On the other hand, it would not hurt if scientists and philosophers of science would be a bit more candid about affirming the nature of inductive inference, rather than trying to obfuscate it and implying that scientific knowledge is somehow immune to Hume's argument.

Not being a philosopher, I don't fully know Hume's argument, but your comment strikes a chord in that my current feeling is that some scientists do overplay their hand when "defending" induction in science in terms of formal logic. I wouldn't fully extend this to the more pragmatic approaches used in practice (it's a long story).

By Heraclides (not verified) on 21 Sep 2009 #permalink

@Galen Evans #100,

...way of inference than a way of knowing.

Hardly, it is deduction from axioms, no inference involved.

We know that the proofs of mathematics are true, assuming the axioms.

Bertrand Russell's epigram, "Pure Mathematics is the subject in which we do not know what we are talking about, or whether what we are saying is true."

Mr Russell may, of course, speak for himself (well, he could if he were alive). He may not have known what he was talking about, but he was smart, so I'm sure he meant something. It would be interesting to know what he meant by "true", and also what mathematics he considered pure.

Does 1 + 1 = 2? (Yes, I know that's arithmetic.) We know that is true in its normal meaning.

A or not A is true.

Are these inferences?

If all mathematics is inference, and science uses this inference, what truth value do you then assign to mathematical models in science? Are the models explanatory/predictive by luck, or do you not consider the mathematics used by scientists to be pure?

By Chris' Wills (not verified) on 21 Sep 2009 #permalink

Tyler,

I find it a bit hard to deal with your arguments because, to my reading, you keep presenting new, revised notions of what logic is to you: a moving target! If you know you have a different idea of what logic is than most people, perhaps it would be more useful to define it from the outset? I would think that unless you define it, most people will assume you are referring to "ordinary" logic, akin to classic formal logic or Boolean logic. I'm still uncertain just what you consider "logic".

Either way, I can't see that what you have presented counters what I wrote. Fuzzy logic doesn't seem to let you off; it merely introduces probability in a different way and, if anything, I would have thought it reinforces my point. (I'm a bit out of time to explain it; it's late at night here.) I don't think that non-monotonic logic is relevant to this discussion, really.

Regarding your final paragraph: of course probabilities can happen to be 50:50 or be construed as statements about "truth", but that doesn't give probabilistic systems the character of formal logic.

In particular, I feel you should tackle the point I raised that the "reliability" you referred to is at a different level to the individual predictions. At the level of individual predictions you have no idea at all if they are right or not. You can't infer their state, you can only assign a likelihood that an individual prediction is correct, you can't know that it is correct. If you run a large number of predictions, you might be able to ascertain that the model reliably infers x% of the individual cases correctly, assuming that the model is correct and applies to the dataset you are using (which raises further issues).

By Heraclides (not verified) on 21 Sep 2009 #permalink

Pseudonym: Yet there are other matters, such as justice, morality, ethics, that are important, but cannot be empirically tested.

All of these can be tested empirically; or more formally, multiple experience-based descriptions competitively considered under the criterion of Minimum Description Length Induction. Test via designed experiment is not the only possibility within science as philosophical discipline.

Iapetus: However, probability falls short of certainty, so the problem is still unresolved.

It's not resolved in a way you find satisfying. You're stuck with the slight uncertainty as to whether or not you are a cabbage. However, it does provide a resolution. Mathematically, probability is certainty, only in terms of a dense set of values from 0 to 1, rather than only the two extremes.

Collin Brendemuehl: There is no empirical testing of historical evidence. Yet the fossil record is studied with the same methods as happen in courtrooms every day. There is no empirical testing of theoretical physics, like string theory, or theoretical mathematics. Yet all of these are accepted under the umbrella of "science".

Technically, mathematics is only science as an anthropological practice; philosophically, it's a precursor discipline, in that mathematics is only concerned with the real world as a special case of a more general problem class.

The other cases are examples where multiple experience-based descriptions are competitively considered under the criterion of Minimum Description Length Induction, rather than tested solely by designed experiment. MDLI is a test; when used to test experience, it is empirical.

Iapetus However, there is no equivalent procedure for inductive inference. What would be required would be an inductive principle P that would enable the reliable inference from true particulars to a true universal. But no such principle has ever been found. And as Popper argued (quite convincingly, IMO), it can not exist, since you would either have to justify P via induction (leading to an infinite regress) or via postulating it to be true a priori (as Kant unsuccessfully tried to do).

1) MDLI (and the related Solomonoff induction) is such a method.
2) It is reliable, in so far as reliability is possible.
3) The argument you cite misses an option. It is justified by induction from a more fundamental assumption: that there is "a pattern" to experience. Mathematically, "a pattern" translates to "enumerable set".

Philosophically, you can also assume a non-enumerable set (one with higher cardinality than the integers). In that case, you apparently have no hope of telling a hawk from a handsaw. So, most are willing to take the minimal assumption.

Iapetus However, you still end up with a result that is probably true, right?

Yes. You may be a cabbage. The probability is low enough that this shouldn't bother you.

Tyler DiPietro: But it is interesting that this conclusion itself cannot be analytically deduced without employing the same circular logic that is considered a fatal problem for induction.

Not quite correct. As noted, it follows from the more basic assumption of enumerability of the pattern of experience; THAT assumption cannot be analytically deduced. Philosophically, you can take the Refutation as an assumption; however, that seems to rule out making any inferences whatsoever... save of pure mathematics. Since most self-described "philosophy" types are culturally closer to English majors than to mathematicians, this is considered unacceptable by the anthropological practice of philosophy, and (at least until a better option comes out of the math department) the assumption is taken.

Tyler DiPietro: The method of inference is not as reliable, but that doesn't equate to not reliable at all.

Presuming, of course, you are willing to use a (preferably Boolean) lattice having more than two elements for "reliable". Those who aren't, are assigning a different meaning to that word. EG:
Iapetus: I am not sure what you mean here. Either something is reliable or it is not.

...in which case, there is no reliable information outside pure mathematics, and furthermore there is no reliable information even within mathematics as anthropologically practiced. Since it relies, as a matter of anthropological practice, on the axiom of the excluded middle (P OR NOT P), the claim "either something is reliable or it is not" is also unreliable; and thus the claim is discarded. =)

Chris' Wills: On ways of knowing, we have mathematics and science that most posters will accept, but science isn't mathematics.

Actually, Science-as-philosophy is a branch of mathematics; in that it takes the assumptions of mathematics as valid, adds "enumerability of experience", and continues work.

Galen Evans: once we have abstracted out the objects to make integer addition, we can abstract once more and make integer multiplication

...unless you're using Presburger arithmetic, which is too weak.

Tyler DiPietro: Reliability is not a discrete concept, much less a binary one.

While it's not a binary one, it may be a discrete one once you've taken the assumption of enumerability. (However, the reliability of such reliability is undecidable, in a Gödelian sense.)

Tyler DiPietro: Where I disagree with Popper is where he says that we can't confirm certain hypotheses but can only know which hypothesis have survived attempts at disconfirmation.

The root of Popper's error is that he was an anthropologist, not a mathematician. He thus ascribed the reliance on Simplicity to how it anthropologically allows Falsification, rather than recognizing that mathematically (and thus philosophically), Falsification is a specific form of failure for the general expression of Simplicity.

Iapetus: What good is a method for separating truth from falsehood if it is not reliable?

Perfect reliability is not a necessity for being useful (and thus "good"). Some reading about the mathematics of "Arthur-Merlin Proofs" might help you understand.

Heraclides: He initially appeared to be writing about logical induction, but then moves to probabilistic induction, a quite different thing.

Some reading about the mathematics of "Arthur-Merlin Proofs" might help you, too. Logical induction is a subcase of the more general problem.

Iapetus: Generally speaking, what is claimed by critical rationalists like Popper and Miller is that the goal of science is to separate true propositions about our reality from false ones and that inductive inferences that produce propositions which are "likely true" fall short of helping in this goal, since nobody has come up with a method to get from "likely true" to "true".

This makes the mistaken presumption that Science believes achieving perfect truth is possible. Science may be more generally viewed (anthropologically and philosophically) as an attempt to sort propositions in order of truth probability (which thus includes the Popper and Miller sense as a sub-case). As such, inductive inferences producing "likely true" facilitate the more general sorting.

Heraclides You must have a different idea of what logic is to everyone I know. Logic is "true" or "false", either/or. It's not probabilistic.

Even Boolean algebra allows lattices which have more than two elements. (One element is also possible, but uninteresting.)
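A minimal illustration (my own, in Python, and only a sketch): the power set of a two-element set is a perfectly respectable Boolean algebra with four elements, not two.

    from itertools import chain, combinations

    universe = frozenset({"a", "b"})
    elements = [frozenset(s) for s in chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))]

    join = lambda x, y: x | y            # OR
    complement = lambda x: universe - x  # NOT

    print(len(elements))  # 4 elements, not just "true" and "false"
    # Excluded middle still holds in this larger Boolean lattice:
    print(all(join(x, complement(x)) == universe for x in elements))  # True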

Iapetus: And what Popper is saying is that instead of employing a procedure which produces "likely true", but potentially false propositions, why not use a deductive process that does not have this problem?

The answer is the lack of the necessary starting point: absolute propositions about experience.

Iapetus: On the other hand, it would not hurt if scientists and philosophers of science would be a bit more candid about affirming the nature of inductive inference, rather than trying to obfuscate it and implying that scientific knowledge is somehow immune to Hume's argument.

Such candor is only appropriate for the segment of the public who have learned how to handle probability and enumerable complexity classes, so as to correctly understand the implications.

Heraclides You can't infer their state, you can only assign a likelihood that an individual prediction is correct, you can't know that it is correct.

Yes. Again; you can't be sure you're not a cabbage, even if it is more likely that you're a human.

Comment #3 is right-on.

Literature does not convey truths or falsehoods. "It is not even wrong".

Literature conveys values - which are essentially arbitrary.

For example, a piece of literature may convey the Golden Rule. But who says the Golden Rule is "good", much less truthful?

The First Culture (philologists, theologists, artists, cultural anthropologists, poets, etc.) slice and dice VALUES and not FACTS. True and False simply does not apply to them.

Values are ESSENTIALLY ARBITRARY. That is why literature and myths have evolved to become an instrument of mass control.

By Hamidreza (not verified) on 21 Sep 2009 #permalink

Hamidreza: Values are ESSENTIALLY ARBITRARY.

You're referring to the is-ought problem (another of Hume's toys). However, there's at least one resolution via a Cantor-style diagonalization argument. Since it involves mutual information, though, those who are dissatisfied with probabilities won't be any happier.

It's not resolved in a way you find satisfying. You're stuck with the slight uncertainty as to whether or not you are a cabbage. However, it does provide a resolution. Mathematically, probability is certainty, only in terms of a dense set of values from 0 to 1, rather than only the two extremes.

Congratulations to today's Santayana Award winner. It was precisely this kind of thinking (understating the limits of induction) that was a significant cause of the current economic crisis. Multiple claims of "probability being certainty" were the justification for a huge amount of risk. The alleged certainty turned out to be anything but, to catastrophic effect.

abb3w - However, there's at least one resolution via Cantor-style diagonalization argument. However, since it involves mutual information,

But aren't values sort of "the rules of the game" which are "choose as you go"?

So please explain to me how a diagonalization can arrive at values. I understand that a value system has to be coherent, etc. But my point is that since a value system by definition is not based empirically and is rooted in thin air, how can it ever claim to be good or bad in an absolute sense? The issue is not probabilities, or indeterminacy. The issue is the meaningfulness of a good or bad value system, no matter what the probabilities.

By Hamidreza (not verified) on 21 Sep 2009 #permalink

abb3w,

Some reading about the mathematics of "Arthur-Merlin Proofs" might help you, too.

I've only time for a (very!) quick skim of the general notion, but I can't see that this changes what I've written; rather, it seems to reinforce it. Maybe I'm reading you the wrong way. By "might help you, too", do you mean this will further reinforce my point?

Even Boolean algebra allows lattices which have more than two elements.

This seems to be merely shifting away from what I was writing about. FWIW, I intensely dislike "but this, but that" approaches, as they are often used to try to "win" rather than either to explain why you think there is a problem or to try to understand what the person is saying first. (Creationists employ this approach to "escape" from having said something stupid an awful lot, in an admittedly more disingenuous fashion!)

As far as I can see, "grouping" (in any way) does not change what I was trying to say; rather, it in effect tacitly admits there is a problem at the individual-element level and seeks to provide a pragmatic "solution" to it. (Solution in inverted commas as it's not an ideal solution in the sense of "perfect" logic, but an approximation that is useful in practice.)

Again; you can't be sure you're not a cabbage, even if it is more likely that you're a human.

I don't think this example refers to specifically what I was saying, it's more a reference to perception than to probabilistic reasoning. My point was the same as one of the points I believe Iapetus raised: as long as at least some of the time the prediction of individual events is incorrect, you can't know if your individual predictions are correct.

By Heraclides (not verified) on 21 Sep 2009 #permalink

Iapetus,

"I agree. This is exactly what I stated, so I am not sure why you write this as if it somehow contradicts anything I said."

I think my misunderstanding of you is that I took you as saying that the only valid deductive inferences we can make are tautologies. Rather, you seem to be saying that the very idea of valid deductive inference is tautological. I apologize, I really should've known better than that.

However, assuming that the statement that is supposed to be a tautology is something like "true antecedents always entail true consequents", I don't see how that can take a tautological form. The statement is logically contingent in that there exists the possibility of it being false. It does, however, seem to be something that we take to be true by definition, which again leads to the problem of circularity.

"Yes, and you end up with propositions that are "likely true", while the possibility exists that they might be false. And what Popper is saying is that instead of employing a procedure which produces "likely true", but potentially false propositions, why not use a deductive process that does not have this problem?"

Because a purely deductive theory doesn't tell us which among competing hypotheses is the most likely without auxiliary criteria (that is, just "not falsified" doesn't get you anywhere). MDL gives you a formal theory about which hypothesis is the best.

"The reason why I entered this discussion in the first place was because I was curious about your statement that "Popper was wrong.". I took this to mean that his arguments against the logical impossibility of inductive reasoning were somehow flawed, implying that you would be in possession of a form of valid inductive inference."

That was a mistake of terminology. I used "valid" in a less formal sense than "guaranteed transitivity of truth values". But my argument is essentially unaltered, that being that our conclusions don't have to be entailed by our premises to be justified by them, thus there is no reason to accept Hume's conclusion that we have no reason to believe any contingent proposition about the unobserved.

"Are you saying that a theory is preferable if, in addition to the above, it produces predictions with a "high probability" of being true?"

I'm saying that a theory is ideally preferable if it is the shortest program that extrapolates future observations, encoded in bits.
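As a crude illustration of the intuition (my own sketch, using a general-purpose compressor as a stand-in for description length, rather than the actual MDL or Kolmogorov machinery): data generated by a simple rule can be described far more briefly than patternless data, and that difference is what the "shortest program" criterion is tracking.

    import random
    import zlib

    patterned = ("01" * 500).encode()  # 1000 symbols generated by a two-character rule
    random.seed(0)
    noisy = "".join(random.choice("01") for _ in range(1000)).encode()  # no short rule

    # Compressed size is a rough proxy for how short a description suffices.
    print(len(zlib.compress(patterned)))  # small: the pattern is cheap to state
    print(len(zlib.compress(noisy)))      # much larger: little beats listing the data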

"Popper merely maintains that such a confirmation will not tell you whether your hypothesis is true or not and furthermore that it is of no use for the prediction of a particular outcome."

But that outcome is obedience to a particular probability distribution, so essentially what I'm saying is true: an implication of Popper's theory of scientific reasoning is that we can't have positive knowledge of such outcomes.

"You have lost me here. Can you point to an example where I have based the validity of deductive reasoning on pragmatic grounds?"

I jumped the gun here. I apologize.

Heraclides,

"I would think that unless you define it, most people will assume you are referring to "ordinary" logic, akin to classic formal logic or Boolean logic. I'm still uncertain just what you consider "logic"."

What I mean by logic is any theory that allows us to explicate the relations between propositions.

"Fuzzy logic doesn't seem to let you off, it merely introduces probability in a different way and, if anything, I would have thought it re-enforces my point."

This isn't correct. Fuzziness and probability have distinct interpretations. To use a common example, a liquid can have 0.9 membership in the (fuzzy) set of drinkable liquids, or it can have a 0.9 chance of being a drinkable liquid. They're two different concepts.
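A small sketch of the distinction (my own example, purely illustrative):

    import random

    # Fuzzy reading: this particular, fully inspected glass of pond water is
    # drinkable *to degree* 0.9 -- the 0.9 is a graded property of the liquid.
    fuzzy_membership = 0.9

    # Probabilistic reading: an unidentified glass has a 0.9 *chance* of turning
    # out to be (fully) drinkable -- once we check, the answer collapses to yes/no.
    random.seed(1)
    turned_out_drinkable = random.random() < 0.9

    print(fuzzy_membership)      # stays 0.9 no matter how closely we look
    print(turned_out_drinkable)  # True or False once the uncertainty is resolved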

"I don't think that non-monotonic logic is relevant to this discussion, really."

Non-monotonic reasoning gives us plenty of examples of where "may be true" is a valid consequent. We could, for instance, reason by default subject to possible belief revision, which we couldn't do with a monotonic consequence relation.

"In particular, I feel you should tackle the point I raised that the "reliability" you referred to is at a different level to the individual predictions. At the level of individual predictions you have no idea at all if they are right or not. You can't infer their state, you can only assign a likelihood that an individual prediction is correct, you can't know that it is correct."

The refutation to this is simple: "knowledge" is not the same thing as "absolute certainty".

The refutation to this is simple: "knowledge" is not the same thing as "absolute certainty".

Reading this, I'm not going to continue, as you appear to be avoiding what I am actually saying in order to recast things as suits yourself. At no point did I say "this is knowledge", which is an entire other argument. There's no constructive discussion to be had if you write like that. I understand where your comment leads; I've been there before at some length. You seem to be completely unaware that I accept that it is pragmatic to use the approaches you refer to in "real world" applications, and in that context only I would accept what you are saying. However, that is a completely different context from the one I was referring to. I do understand where you are coming from, but you do not appear to see that what I am saying differs, nor why, nor do you seem to want to make an effort to. You appear to be focusing on trying to find something that "contradicts" or "refutes" me, instead of realising that your context is different and thus that what you are saying, while interesting in its own context, doesn't relate to my points (AFAIC). Neither of your remarks about fuzzy logic or non-monotonic reasoning really relates to what I was getting at.

Anyway, I have better things to be doing.

By Heraclides (not verified) on 21 Sep 2009 #permalink

Well I'm sorry if I've angered you, Heraclides, but I don't see any other interpretation of the comment that I quoted than that we don't have knowledge about individual predictions due to the fact that we are not certain of them.

Tyler,

"However, assuming that the statement that is supposed to be a tautology is something like "true antecedents always entail true consequents", I don't see how that can take a tautological form. The statement is logically contingent in that there exists the possibility of it being false. It does, however, seem to be something that we take to be true by definition, which again leads to the problem of circularity."

Ah, I think now I understand what you are getting at: the assumption that a valid deductive inference will guarantee transmissibility of truth from premises to the conclusion, while being a necessary result within propositional logic, lacks an external justification via some sort of meta-logical calculus.

I agree. And as I stated, being a fallibilist, I do not exclude the possibility that in the future we will have to revise this notion. That being said, I find the reasoning behind said result rationally compelling, which is why I believe it is incumbent on the sceptic to advance an argument as to why we should expect a valid deductive inference from true premises to yield a false conclusion. Otherwise, I believe it is rational to accept this concept.

Furthermore, it seems that a sceptical attitude with regard to deductive inference does not help in justifying its inductive counterpart. At least in deductive inference we have some mechanism which seems to ensure (although we may be mistaken about it) transmissibility of truth. In contrast, inductive inference lacks any such mechanism.

"Because a purely deductive theory doesn't tell us which among competing hypotheses is the most likely without auxiliary criteria (that is, just "not falsified" doesn't get you anywhere). MDL gives you a formal theory about which hypothesis is the best."

To which Popper would reply: I am not interested in which theory is the most likely (and therefore "the best"). I am interested in which theory is true.

According to Popper, if we have two (or more) theories which have not yet been falsified, we should provisionally accept the one which has the higher empirical content and which has been more extensively corroborated. However, the reason for this is not that any amount of corroboration would tell us that the theory is true, but because extensive corroboration means that it has withstood repeated attempts at falsification. Furthermore, a theory with high empirical content is preferable since it makes bolder predictions which are easier to falsify.

Of course, this is only a coarse sketch of Popper's philosophy I am painting here. I would recommend his "Logic of Scientific Discovery" for a detailed account.

Now, I realize that this Popperian ideal is hard to follow in the real world and furthermore that we find inductive thinking intuitively compelling due to our psychological constitution. But from a purely logical/methodological point of view, I find his arguments convincing.

"But my argument is essentially unaltered, that being that our conclusions don't have to be entailed by our premises to be justified by them, thus there is no reason to accept Hume's conclusion that we have no reason to believe any contingent proposition about the unobserved."

I think we both agree that (at least at present) there is no inductive mechanism equivalent to a valid deductive inference. So from a logical point of view we have no reason to accept any result of an inductive inference. Unless you want to take the position that ALL logical systems are not ultimately justified and therefore forfeit. But then you would have to stop making any arguments at all. Extreme scepticism alone leads to silence.

So what could be a "reason" to accept an inductive inference? Could it be a pragmatic one? I think so, because as I stated repeatedly, it is simply not possible for humans to give up inductive thinking.

If I understand you correctly, you are also offering a pragmatic reason by saying "If some form or other of probabilistic inductive inference gives me a proposition that is true with a probability of X%, that is good enough for me and sufficiently reliable in practice.". I can sympathize with such a notion. However, you should keep in mind that:

1.) the "fundamentalists" will (quite rightly) point out that "probably true" is in no way, shape or form connected with "true", our instinctive intuitions notwithstanding; meaning that

2.) Hume's argument is not formally defeated thereby.

"I'm saying that a theory is ideally preferable if it is the shortest program that extrapolates future observations, encoded in bits."

I see, although I think this is basically what I was trying to say. Dedicated Popperians will disagree with you on this, for the reasons alluded to above.

"But that outcome is obedience to a particular probability distribution, so essentially what I'm saying is true: an implication of Popper's theory of scientific reasoning is that we can't have positive knowledge of such outcomes."

If you equate "positive knowledge" with "know with certainty (or merely confidence) that said distribution will also hold in the future", then you are correct, since this would entail a valid inductive inference from particulars to a universal. And Mr. Hume is wagging his finger here.

abb3w,

Due to time constraints (and because, frankly, I could not make much sense of some of your comments), I will just briefly address some of them.

"Mathematically, probability is certainty, only in terms of a dense set of values from 0 to 1, rather than only the two extremes."

I adopt a concept of absolute truth. Furthermore, I think science should (and does) do the same. Thus, a proposition that is "true with a probability of 50%" is not what I am aiming for.

"Yes. You may be a cabbage. The probability is low enough that this shouldn't bother you."

So you are also arguing on pragmatic grounds? See above as to why this does not refute Hume's argument.

"This makes the mistaken presumption that Science believes achieving perfect truth is possible."

Who ever said anything about "perfect truth"? Since I have no idea what this is supposed to be, I settle for the truth, plain and simple.

"Since it relies from anthropological practice on the axiom of excluded middle (P OR NOT P), the claim "either something is reliable or it is not" is also unreliable; and thus the claim discarded. =)"

The law of excluded middle is not a claim, it is a tautology in propositional logic. Moreover, I can define "reliable" as I see fit. If you use a different definition, fair enough.

"The answer is the lack of the necessary starting point: absolute propositions about experience."

I do not know what "absolute propositions" are, but I assume that you are referring to "being known to be true with certainty". However, Popper is a fallibilist, thus he does not believe that our experiences convey certain truths. Consequently, scientific knowledge is likewise not deemed to be certainly true.

To illustrate my point above....

If we take the daily percentage changes in the Dow Jones Industrial Average during last October (2008) and ask how frequently such changes should occur assuming that, as is commonly done in finance models, these events are normally distributed, the results are truly astonishing. There were two daily changes of more than 10% during the month. With a standard deviation of daily changes of 1.032% (computed over the period 1971-2008), movements of such a magnitude should occur only once every 73 to 603 trillion billion years. Yet it happened twice during the same month. A truly miraculous event. Indeed, there were four other price changes during that same month that, probabilistically, should not have happened in our lifetimes. Induction didn't work out too well in that instance. (A rough sketch of the arithmetic follows the link below.)

The supporting concepts:

http://www.fooledbyrandomness.com/EDGE/index.html
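For anyone who wants to check the flavour of the arithmetic, here is a rough Python sketch (my own, with an assumed 252 trading days per year and a two-sided Gaussian tail; the exact figure depends on such assumptions, but the order of magnitude makes the point):

    from scipy.stats import norm

    sigma = 1.032        # std. dev. of daily % changes, per the figure quoted above
    move = 10.0          # a daily change of more than 10%
    trading_days = 252   # assumed trading days per year

    p = 2 * norm.sf(move / sigma)   # P(|daily change| > 10%) under the normal model
    print(p)                        # vanishingly small
    print(1 / (p * trading_days))   # expected years between such moves, if the model held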

Iapetus,

I think it's time for me to start winding down my participation in this thread, but before I leave I will say that I think you are still misinterpreting my argument. What I am conceding is emphatically not that "probably true" has no logical connection to "true", what I am conceding, and in fact stated at the outset, is that "probably true" doesn't entail "true". Unlike Hume I don't see this as a fatal problem for induction.

Hume's argument essentially has two steps:

1. It is impossible to make an inductive inference that is valid in the sense of our premises entailing our conclusions.

2. Therefore, we have no reason to believe any contingent proposition about the unobserved.

What I dispute is not proposition 1, what I'm disputing is the consequent in number two. Ironically, it's not a valid conclusion.

Solomonoff induction does what a robust theory of induction should do: formalize our notions of inductive inference and conclusiveness, the latter being more than a binary relation. So what I'm saying is not "induction is good enough in practice". I'm saying we have a rational basis for accepting inductive inferences even if they are not deductively valid.

Hamidreza But aren't values sort of "the rules of the game" which are "choose as you go"?

It would be closer to characterize them as "choose or you're gone".

Hamidreza So please explain to me how a diagonalization can arrive at values.

Consider a philosophical entity X embedded in space-time such that there may be an information/mass/energy flow across the perimeter, with otherwise arbitrary defining perimeter.

The inflow corresponds to information received from the external universe; the outflow corresponds to the "choice" function of the entity, expressing de facto values.

Choices may have the character of having some X-prime "more or less like" X being around in the future. The meaning "more or less like" is more formally a value on a [0,1] interval defined via mutual information; it therefore includes the possibility that X and X-prime are what is colloquially considered "the same thing".

So, one of the attributes of X is whether its choices tend to have something like X around in the future. Thus, the diagonalization: if X chooses so as to not have anything at all like X, this means any future X-prime will not have that trait.

Thus, the root of "ought" is "choice such that there will likely be something like me around in the future". (Thus as I noted above: choose or you're gone.) Some confirmation of this may be observed in that the human ethical flavors (empirically identified by Haidt, and including both liberal and conservative) of FAIR, HARM, INGROUP, AUTHORITY, and PURITY may be expressed as particular approximations to this precept.

Heraclides I've only time for a (very!) quick skim of the general notion, but I can't see that this changes what I've written; rather, it seems to reinforce it.

The point is that the approach you are using is a more specific example of a more general case. Mathematical axioms (or any other foundational premise) are treated as if they have p=1.0; however, this treatment became particularly questionable after Paul Cohen showed ZF¬C was as consistent as ZFC.

Heraclides I don't think this example refers to specifically what I was saying, it's more a reference to perception than to probabilistic reasoning.

No, because human perception itself has a probability of error. (Cue optical, tactile, and auditory illusions web search....) The roots of Hume's Induction problem are in Epicurus's Deduction problem.

Heraclides My point was the same as one of the points I believe Iapetus raised: as long as at least some of the time the prediction of individual events is incorrect, you can't know if your individual predictions are correct.

Only if you consider knowledge as absolute, rather than degree. You do not know absolutely (p=1.0) that the ball won't land on 00 at the roulette wheel, but you also are not absolutely ignorant (p=0.0); you have information, corresponding to a partial degree (p=0.97368...) of knowledge.
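(For the curious, the 0.97368 above assumes an American wheel with 38 pockets; a one-line check, my own:)

    pockets = 38                        # 1-36 plus 0 and 00 on an American wheel
    p_not_double_zero = 37 / pockets    # chance the ball does NOT land on 00
    print(round(p_not_double_zero, 5))  # 0.97368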

Heraclides At no point did I say "this is knowledge", which is an entire other argument.

What you said was "no idea". Contrariwise, you do have some idea; merely not an absolute philosophical certainty.

Iapetus I adopt a concept of absolute truth. Furthermore, I think science should (and does) do the same.

That it does is observably an error. (EG: Peter Rubba's work on "The Myth of Absolute Truth".) That it "should"... gets us back to the is-ought problem. Simply put: since that is mathematically impossible, science should settle for finding what best approaches that absolute truth, even if it may not necessarily reach it.

Iapetus Who ever said anything about "perfect truth"? Since I have no idea what this is supposed to be, I settle for the truth, plain and simple.

Perfect in the sense of p=1.0 exactly. When you say you "settle for", I say you are demanding more certainty than the universe entitles you to.

Iapetus The law of excluded middle is not a claim, it is a tautology in propositional logic.

Heh. In one mathematical sense, it's potentially worse than that. If you're using a Robbins Axiomatization of logic (to get a Boolean equivalent), P OR NOT P is the definition of TRUE.

On the other hand, the Boolean equivalents aren't the only option. Instead of a Boolean Lattice, you can work with a Heyting Lattice, where (P IMPLIES P) is the definition of TRUE, but where (P IMPLIES P) is sometimes something different from (P OR NOT P).

So, in a mathematical sense, if you're working with a boolean equivalent, you're working from certain axioms for the logic; and in a philosophical sense, all axioms are human assertions, and thus claims of uncertain ultimate nature (even if the philosophers all decide to pretend otherwise). You can question these underlying assumptions; much as you can question the assumption that experience has a pattern. I consider both equally pointless, but if you're willing to question one such primary assumption, I feel obliged to show how wide that can of worms opens.
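If a concrete toy helps: the three-element chain 0 < 0.5 < 1 is a standard small Heyting algebra (my own sketch of it, in Python) in which (P IMPLIES P) always lands on the top element while (P OR NOT P) need not.

    TOP, MID, BOT = 1.0, 0.5, 0.0

    def join(a, b):   # OR: least upper bound on the chain
        return max(a, b)

    def imp(a, b):    # Heyting implication on a chain
        return TOP if a <= b else b

    def neg(a):       # NOT a, defined as a IMPLIES bottom
        return imp(a, BOT)

    p = MID
    print(imp(p, p))        # 1.0 -- (P IMPLIES P) is the top element, i.e. "true"
    print(join(p, neg(p)))  # 0.5 -- (P OR NOT P) falls short: excluded middle fails here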

Sinbad The alleged certainty turned out to be anything but, to catastrophic effect.

And like those economic idiots you rightly criticize, you evidently don't understand the difference between zero and 0.02; the implications to your two cents worth are left as an exercise.

And like those economic idiots you rightly criticize, you evidently don't understand the difference between zero and 0.02; the implications to your two cents worth are left as an exercise.

While I would note that a significant number of those you characterize as "idiots" won Nobel Prizes (not that having done so insulates them from error), I readily grant the difference between zero and 0.02. My point is simply that induction is an imperfect, though valuable tool.

Sinbad I readily grant the difference between zero and 0.02.

But you fail to distinguish individuals mapping propositions to [0,1] and those who map to {0,1}? To repeat: Mathematically, probability is certainty, only in terms of a dense set of values from 0 to 1, rather than only the two extremes. To rephrase:

Certainty is mapping to {0,1}.
Probability is mapping to [0,1].
Save for the difference of the target set, the process is in either case the same: a mapping.
And, since 0 ∈ [0,1] and 1 ∈ [0,1], certainty is merely a specific form of probability.
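Put as a pair of toy functions (my own sketch):

    def certainty(settled: bool) -> int:
        # Maps a settled proposition into the two-element set {0, 1}.
        return 1 if settled else 0

    def credence(chance_of_heads: float) -> float:
        # Maps "the next flip lands heads" into the interval [0, 1].
        return chance_of_heads

    print(certainty(2 + 2 == 4))  # 1, an element of {0, 1}
    print(credence(0.5))          # 0.5, an element of [0, 1], which still contains 0 and 1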

Sinbad My point is simply that induction is an imperfect, though valuable tool.

That is far from your only point. EG: But such claims [subject to empirical analysis] are far from the only ones that we need answers to in order to live a full and meaningful life.

You also appear to be trying to claim that the limits of that imperfection preclude effective use in some areas, and to advocate the use of alternative tools despite the fact that those alternatives are subject to the same underlying limits, and despite the fact that they are provably even further from perfect at giving answers.

(For those interested in the formalisms: "effective" answers.)

abb3w,

Perhaps it would be better to say that certainty is a topological property, i.e., the boundary of the closed set constituting the probability interval. That might be more intuitive.

A couple of quick responses.

Then I must confront your limited definition of science. There is no empirical testing of historical evidence. Yet the fossil record is studied with the same methods as happen in courtrooms every day. There is no empirical testing of theoretical physics, like string theory, or theoretical mathematics. Yet all of these are accepted under the umbrella of "science".

Science does admit historical evidence, as in the case of the fossil record, but there's still a key point: the theory of evolution would have a far more "provisional" status than it does were it not testable in the field and in the lab.

Mathematics is a bit different (and, indeed, some don't feel that mathematics counts as "science" proper), but still, proofs can be checked. As for string theory, you certainly have some prominent physicists like Lee Smolin who would concur that until there's a test, it isn't physics. Whatever you think about that, it's still mathematics, and hence it's still science.

abb3w:

All of these can be tested empirically; or more formally, multiple experience-based descriptions competitively considered under the criterion of Minimum Description Length Induction. Test via designed experiment is not the only possibility within science as philosophical discipline.

Of course not all testing has to occur via designed experiment. But still, I'm curious about this claim that "all of these can be tested empirically". How would you propose to test empirically whether or not murder is wrong?

By Pseudonym (not verified) on 22 Sep 2009 #permalink

Tyler,

"I think it's time for me to start winding down my participation in this thread, [...]"

No problem. Thanks for the discussion. I will give some final remarks.

"What I am conceding is emphatically not that "probably true" has no logical connection to "true", what I am conceding, and in fact stated at the outset, is that "probably true" doesn't entail "true"."

But I do not see where you have provided this logical connection.

The equation "nearly certain (probable) truth = truth attained with near certainty (probability)" is false, because a theory may be probably true but not be true. Or to spell it out more elaborately: Getting near to certifying a theory as true is one way of getting near to classifying it as true. But it is not a way of classifying it as true, or of classifying it as nearly true or anything like that.

If you concede that "probably true" does not entail "true", what is left? If not truth and falsity, what does "probably true" refer to?

"Hume's argument essentially has two steps:

1. It is impossible to make an inductive inference that is valid in the sense of our premises entailing our conclusions.

2. Therefore, we have no reason to believe any contingent proposition about the unobserved.

What I dispute is not proposition 1, what I'm disputing is the consequent in number two. Ironically, it's not a valid conclusion."

Well, you have left out a premise:

1.1 It is not rational to believe the result of an invalid inference.

Now, you deny this premise, and point to Solomonoff induction. Which brings us back to the topic of whether any form of a probabilistic concept of truth is acceptable or not. I think it is not if our aim is to search for true theories and descriptions of reality, for the reasons stated above and in previous posts, while you think otherwise. Fair enough.

abb3w,

"That it does is observably an error."

This is observably an assertion in need of an argument (and no, citing a book title will not do).

"Simply put: since that is mathematically impossible, science should settle for finding what best approaches that absolute truth, even if it may not necessarily reach it."

The concept of absolute truth is mathematically impossible? I look forward to your mathematical proof of this...

What you probably have in mind is the question of whether it is possible for us to attain it or the further question whether we could ever know that we have attained it. But these issues are distinct from the concept of absolute truth.

"Perfect in the sense of p=1.0 exactly. When you say you "settle for", I say you are demanding more certainty than the universe entitles you to."

Again, you are conflating a concept with the epistemic questions of whether it can be satisfied and whether we can know that it is satisfied. The concept of absolute truth has nothing whatsoever to do with epistemic certainty.

"So, in a mathematical sense, if you're working with a boolean equivalent, you're working from certain axioms for the logic; and in a philosophical sense, all axioms are human assertions, and thus claims of uncertain ultimate nature (even if the philosophers all decide to pretend otherwise). You can question these underlying assumptions; much as you can question the assumption that experience has a pattern. I consider both equally pointless, but if you're willing to question one such primary assumption, I feel obliged to show how wide that can of worms opens."

I am not sure what your point is. As stated repeatedly, I am a fallibilist and thus do not consider that any part of our knowledge, be it empirical or logical (or mathematical, which you seem to be rather fond of) is in any way infallible and certain. Moreover, the above seems to undermine your own point about "mathematical impossibility".

However, I fail to see the relevance to what I said. I am advancing a concept of absolute truth, i.e. that truth is the same for everyone and that it does not come in degrees. This is a definition, so epistemological questions do not enter here.

Tyler DiPietro : Perhaps it would be better to say that certainty is a topological property, i.e., the boundary of the closed set constituting the probability interval. That might be more intuitive.

Somewhat; but then you have to get into a discussion of limit points, and whether any points we have are limit points. I'm seeing enough misunderstanding from probability; throwing topology into the mix does not look helpful.

Pseudonym: How would you propose to test empirically whether or not murder is wrong?

Well, first you need a very careful definition of what is meant by "murder"; compare homicide, manslaughter, assassination, mortal duel, and deaths during combat in time of war.

Second, you need a definition of "wrong", which gets back to the is-ought problem. Embedding a definition of "wrong" in an empirical framework is not impossible; my diagonalization provides an absolute, but empirical inquiry as to subjective perception (philosopherese for "ask a bunch of people how they feel about it") is another alternative (with a different source of uncertainty). As a secondary issue, you need to consider "wrong for whom" -- the question is subtly different for individuals and for societies; let's focus on the former for now.

After that gather data from history (since experiment seems right out); and, yes, you find out that murder is usually a bad choice ("wrong") compared to doing nothing, since other people tend to react by attempting imitation, which reduces chances of something like the murderer being around for long. There are exceptions, which usually get socially justified, resulting in the society's definition of "murder" getting gerrymandered.

Iapetus: I adopt a concept of absolute truth. Furthermore, I think science should (and does) do the same.
abb3w: That it does is observably an error. (EG: Peter Rubba's work on "The Myth of Absolute Truth".)
Iapetus: This is observably an assertion in need of an argument (and no, citing a book title will not do).

In addition to finding references to Rubba's work on the topic (that was a paper, not a book) with a very quick Google Scholar search, I also checked with a couple of local professors in the field of history and philosophy of Science, confirming what I've encountered elsewhere. Science, as anthropologically practiced, is not in the business of absolute truth. That's left for the mathematicians -- and many of them will (as a historical spin-off of proven ambiguities inherent in the Axiom of Choice) reject "absolute truth", too. (We could also get into digression on how the burden of proof rests on assertion of existence, but that ties back to the MDLI which you're already having problems with.)

While you might like an argument, I'm simply informing you that you're exhibiting gross ignorance of a basic fact, and giving you a pointer to research the topic. (If it might make you feel any better, Rubba's work is on how common that particular ignorance is outside the scientific community, so you're far from alone.)

Iapetus: I am not sure what your point is.

That the means you use to discard conclusions (as imperfect), can also be used to discard all foundations of reasoning whatsoever. I don't think you want to allow the latter; therefore, you ought reconsider the means for the former.

There's other points implicit, but explicit insult and sufficient effort to avoid same both look a waste of time.

Iapetus: I am advancing a concept of absolute truth, i.e. that truth is the same for everyone and that it does not come in degrees. This is a definition, so epistemological questions do not enter here.

Since we seem to be splitting hairs, I'm not raising an epistemological (knowledge) question, but an ontological (existence) question; which enters when asking if the alleged concept you are defining can be meaningfully said to exist, or whether you are assuming a spherical cube.

To add insult to injury, the assertion "This is a definition, so epistemological questions do not enter here" involved inference ("so") from premise ("This is a definition") to conclusion ("epistemological questions do not enter here")... and thus, may also be discarded on a similar ontological basis.

"While you might like an argument, I'm simply informing you that you're exhibiting gross ignorance of a basic fact, and giving you a pointer to research the topic."

So you have no argument, but are really, really sure that others have made it and have talked to still others that you feel agree with it. Noted.

Could you at least give the names of those philosophers of science who you feel support your position? The only possibility I can readily think of would be people who endorse some form of constructive empiricism à la Bas van Fraassen.

"(If it might make you feel any better, Rubba's work is on how common that particular ignorance is outside the scientific community, so you're far from alone.)"

Since you have no idea what my background is, I suggest you abstain from speculating about it. And stop this patronizing, it is unbecoming.

"Science, as anthropologically practiced, is not in the business of absolute truth."

So then, what are the truths that science talks about relative to? An example would be much appreciated.

"That the means you use to discard conclusions (as imperfect), can also be used to discard all foundations of reasoning whatsoever. I don't think you want to allow the latter; therefore, you ought reconsider the means for the former."

Although I have explained this very issue several times already, apparently it still has not sunk in. So let's go on another round:

It is rationally preferable to accept the conclusion of a valid deductive inference over an inductive inference because the former guarantees, as far as we can determine, the transmissibility of truth.

For your convenience, I have highlighted the important part. As you will (hopefully) see, the point here is not that a deductive inference has somehow been shown to be "perfect" in the sense of "could not possibly come out wrong", but that we presently have no indication that this might be the case.
In contrast, inductive inference has no such mechanism (which, like in the case of deductive inference, might ultimately turn out to be flawed) in the first place.

To repeat (hopefully for the last time): I am a fallibilist, i.e. I do not consider any piece of our knowledge to be perfect/certain/incorrigible. However, this position does not mean I endorse full-blown scepticism.

"There's other points implicit, but explicit insult and sufficient effort to avoid same both look a waste of time."

I am sure that sentence sounded good in your head.

"Since we seem to be splitting hairs, I'm not raising an epistemological (knowledge) question, but an ontological (existence) question; which enters when asking if the alleged concept you are defining can be meaningfully said to exist, or whether you are assuming a spherical cube."

Since we seem to be splitting hairs, a "concept" can not exist, since it is abstract (unless you endorse some form of Platonism, which I do not); it can only be instantiated or satisfied (or not).

Furthermore, what I am advancing is a thesis about "truth" as understood in the correspondence theory of truth. I am not postulating the existence of some kind of Platonic entity.

So do you want to argue now that the thesis "truth is absolute" is incoherent? By all means, the floor is yours...

"To add insult to injury, the assertion "This is a definition, so epistemological questions do not enter here" involved inference ("so") from premise ("This is a definition") to conclusion ("epistemological questions do not enter here")... and thus, may also be discarded on a similar ontological basis."

What could you possibly be talking about here? A "similar ontological basis"? What "ontological basis"? And what is it "similar" to?

Finally, as some more food for thought for you, an empirical argument:

Scientists usually accept a form of metaphysical realism, implying that "truth" corresponds to a world outside their minds. Furthermore, they use a natural language (augmented by mathematics) to formulate their theories and communicate with each other.

Now, the combination of these two facts entails that scientists assume truth to be absolute, since it is not dependent on the personal viewpoint of the scientist. However, since the meaning of terms is determined by convention, it is an empirical fact that the concept of "truth" is "absolute" in the sense I defined it.

Iapetus: So you have no argument, but are really, really sure that others have made it

More that the suggestion that a claim of nonexistence needs argument indicates you lack sufficient common premises for an attempt to qualify as "argument" rather than "argument clinic".

I'll freely admit I'm not an expert. I was willing to check Google Scholar for a verification of my understanding. I was willing to ask two experts who were nearby to waste five minutes of their time. Both provided some confirmation.

I don't think anything I say would lead you to change your mind. I do think that I've given enough for anyone interested in the topic to start looking on their own. Thus: no point to further effort... beyond limiting the spin you put on my laziness.

Iapetus: Could you at least give the names of those philosophers of science who you feel support your position?

Incidentally: one philosopher, one historian; with the caveat that I asked about the specific point of whether "science is in the business of absolute truth". (The reason I asked the historian is because the question is rooted in the anthropological practice; EG, what scientists do.)

So... to what point? They're two who were handy, but hardly titans in the field. Feel free to harass local experts of your own.

Iapetus: So then, what are the truths that science talks about relative to? An example would be much appreciated.

"Relative" here means relative in terms of probability, a topic about which you seem to have some fundamental philosophical issues.

Consider flipping an unfair coin. You flip it; it lands heads. Again; heads. HHHHHHHH.... damn, it's a really unfair coin. Perhaps perfectly unfair? But no finite number of observations can lead you to absolute confidence in this; merely asymptotic approach. Similarly, with any observation process for science.

Scientists are interested in getting ever closer to that limit point. However, they do not consider any of the knowledge of science as being at the limit point; there is always the option to ask a question by re-running some experiment that might provide new information. They do not ascribe particular descriptions to reality with absolute certainty. The limit point itself is not an element of the set interval.
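
To make the asymptotic-approach point concrete, here is a minimal sketch (added for illustration; the two candidate coins and the 50/50 prior are assumptions, not anything in the exchange) of how confidence that the coin is perfectly unfair climbs toward, but never reaches, 1 under a straightforward Bayesian update:

```python
# Illustrative sketch only: posterior belief that a coin is "perfectly unfair"
# (always heads) versus fair, after observing n heads in a row. The 50/50
# prior and the two candidate hypotheses are assumptions for the example.

def posterior_always_heads(n_heads, prior=0.5):
    """P(coin always lands heads | n_heads consecutive heads observed)."""
    like_unfair = 1.0            # an always-heads coin always yields heads
    like_fair = 0.5 ** n_heads   # a fair coin yields n heads with prob 0.5^n
    numerator = like_unfair * prior
    return numerator / (numerator + like_fair * (1.0 - prior))

for n in (1, 5, 10, 20, 50):
    print(n, posterior_always_heads(n))
# The posterior climbs toward 1 but never reaches it for any finite n:
# the limit point of absolute confidence is approached, not attained.
```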

They usually treat pure mathematics as sitting at the limit point; and use its languages accordingly for the descriptions they ascribe only with uncertainty. However, mathematics started getting out of the business of absolute truth about the time the negation of the parallel postulate was shown to be equally valid. Mathematicians now settle for absolute results from the chosen absolute beginnings, without requiring "absolute" truth for the beginnings, merely a lack of absolute falsehood - which they allow may be different things. (A Boolean lattice may have more than two elements.)
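
The closing parenthetical is easy to gloss over, so here is a minimal sketch of a Boolean lattice with more than two elements (the choice of example, the algebra of subsets of a two-element set, is an assumption of mine, not the commenter's):

```python
# Illustrative sketch: the powerset of a two-element set is a Boolean lattice
# with four elements, not just TRUE and FALSE. (The choice of example is mine.)
from itertools import chain, combinations

top = frozenset({"p", "q"})
bottom = frozenset()

def powerset(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

elements = powerset(top)                              # {}, {p}, {q}, {p, q}
print(len(elements))                                  # 4
print(frozenset({"p"}) | frozenset({"q"}) == top)     # join (OR) of the middle elements
print(frozenset({"p"}) & frozenset({"q"}) == bottom)  # meet (AND) of the middle elements
print(top - frozenset({"p"}))                         # complement of {p} is {q}
```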

Iapetus: So let's go on another round: It is rationally preferable to accept the conclusion of a valid deductive inference over an inductive inference because the former guarantees, as far as we can determine, the transmissibility of truth. For your convenience, I have highlighted the important part.

And "as far as we can determine" corresponds to a higher probability of correctness; but not an absolute probability.

Iapetus: In contrast, inductive inference has no such mechanism (which, like in the case of deductive inference, might ultimately turn out to be flawed) in the first place.

Inductive inferences may be mathematically restated as absolute deductive inference statements regarding probability. Which means, it is ultimately the same mechanism.
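
One way to cash out that claim (this gloss and the numbers below are added for illustration, not the commenter's): given explicitly stated priors and likelihoods, the "inductive" conclusion follows deductively from the axioms of probability via Bayes' theorem; what remains non-deductive is only the choice of those inputs.

```python
# Sketch: an "inductive" conclusion restated as a deduction about probability.
# Given the stated prior and likelihoods (assumed numbers, purely for
# illustration), the posterior follows deductively from the probability axioms.

def bayes_posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Deduce P(H | E) from P(H), P(E | H) and P(E | not-H)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

# The evidence does not deductively prove H, but *given* these assumed inputs
# it deductively fixes how probable H is.
print(bayes_posterior(prior_h=0.5, p_e_given_h=0.9, p_e_given_not_h=0.2))  # ~0.818
```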

Iapetus: Since we seem to be splitting hairs, a "concept" can not exist, since it is abstract (unless you endorse some form of Platonism, which I do not); it can only be instantiated or satisfied (or not).

Ah, that seems to be the problem then. Absolute truth is an abstract, a limit point on the set, but not an element of the set. Since you reject abstractions, it looks like you're projecting your worldview onto scientists. Scientists may focus on tangibles, but they don't categorically reject abstractions.

Iapetus: Now, the combination of these two facts entails that scientists assume truth to be absolute, since it is not dependent on the personal viewpoint of the scientist.

But this doesn't mean scientists think they can get to such absolute truth via science, rather than merely get ever closer.

"More that the suggestion that a claim of nonexistence needs argument indicates you lack sufficient common premises for an attempt to qualify as "argument" rather than "argument clinic"."

I already told you that "existence" or "non-existence" is not the issue here. But I am getting used to the fact that you need things explained repeatedly before they sink in.

"I'll freely admit I'm not an expert."

No, you are not. This is clearly evidenced by the nonsense you wrote and continue to write.

Now, ignorance as such is nothing to be ashamed of. Unless, that is, said ignorance is used as a basis to attack me and try to portray me as a clueless rube, when it is painfully obvious that you have fundamentally failed to understand what I am even talking about. The resultant embarrassment is entirely of your own making.

"I don't think anything I say would lead you to change your mind."

Why should it? Nothing you have said thusfar even addresses the point I am making.

"So... to what point? They're two who were handy, but hardly titans in the field. Feel free to harass local experts of your own."

So in addition to your inability to produce an argument, you can not even name any philosopher of science who would support your stance. I already pointed you in the direction you might find this support, but due to your misunderstanding which overshadows this whole discussion, I think you might not consider his views palatable (unless you want to discard metaphysical realism).

Here is a reality check: the thesis that "truth is absolute" is by and large completely uncontroversial among philosophers of science who hold to some variant of metaphysical realism (i.e. the vast majority) and has been around since the time of Aristotle. It is its negation that is the new kid on the block and needs arguments as to why science should adopt it. At present, I do not see what they could be (and you certainly have not provided any).

"Relative is in terms of probability, a topic you seem to have some fundamental philosophical issues about."

Why should I? What I have fundamental philosophical issues about is when "probability" is erroneously conflated with "relative truth".

Care to tell me how the statement "Event X has a probability of Y% of occurring." is in any way, shape or form in conflict with my thesis that "truth is absolute" as I defined it? You do remember this definition, don't you?

And while you're at it, you might also want to think about the difference between "Statement X is true with a probability of Y%" and "Statement X is Y% true".

One of the many things I stated repeatedly is that the problem with "probably true" propositions is that they are of no help in separating true from false propositions, unless you come up with a mechanism that ties "probably true" to "true".

"Scientists are interested in getting ever closer to that limit point. However, they do not consider any of the knowledge of science as being at the limit point; there is always the option to ask a question by re-running some experiment that might provide new information. They do not ascribe particular descriptions to reality with absolute certainty. The limit point itself is not an element of the set interval."

And there we have it in all its glory: the conflation of "absolute truth" with "certain knowledge".

I repeatedly warned you about this conflation, but (as I am getting used to by now) to no avail. In your patented knee-jerk style, you completely disregarded what I wrote and continue to attack a position I not only do not hold, but have explicitly repudiated.

In other words: all this time you have been labouring mightily to tilt at a straw-man.

"And "as far as we can determine" corresponds to a higher probability of correctness; but not an absolute probability."

I do not really care what kind of "probability" it corresponds to; I will leave this up to you and your fascination with probabilities.

For my purposes it is sufficient that thusfar it has not been shown to be flawed.

"Inductive inferences may be mathematically restated as absolute deductive inference statements regarding probability. Which means, it is ultimately the same mechanism."

Your obsession with mathematical probability is a good example of the old adage "To a hammer, everything looks like a nail".

If you can point me to a logic textbook or an academic article which comes to the conclusion that there is no difference in validity between deductive and inductive logic, we might have something to talk about.

"Ah, that seems to be the problem then. Absolute truth is an abstract, a limit point on the set, but not an element of the set."

No, the problem is that you fail to understand what is plainly written.

"Absolute truth" is a thesis about "truth" as understood in the correspondence theory of truth. Nothing more, nothing less.

"Since you reject abstractions, it looks you're projecting your worldview onto scientists. Scientists may focus on tangibles, but they don't categorically reject abstractions."

I am sure you can cite where I said that I "reject abstractions".

"But this doesn't mean scientists think they can get to such absolute truth via science, rather than merely get ever closer."

I am afraid you completely miss the point here.

If you ask a scientist whether he believes there exists a reality which is independent of his mind, he will answer in the affirmative. If you ask him further whether he believes his theories correspond to this reality, he will again answer in the affirmative. Thus, he uses "truth" in an absolute sense. It really is very simple.

@abb3w #105

Chris' Wills: On ways of knowing, we have mathematics and science that most posters will accept, but science isn't mathematics.

Actually, Science-as-philosophy is a branch of mathematics; in that it takes the assumptions of mathematics as valid, adds "enumerability of experience", and continues work.

But science isn't a philosophy.

There may be philosophers of science, I'm sure there are, but that doesn't change science from a method to a philosophy.

Unless, of course, you want to change the nature of science from a method of investigating nature to something else.

Considering science as a branch of mathematics is rather limiting.
Field data is essential in science; that's how hypotheses are tested and in many cases how they are initially arrived at.
Scientific theories are often not formulated as mathematical models, this may be because it is too difficult/complex or that suitable mathematics hasn't been discovered.
There are no absolute proofs in science (we just test and see if the test results match the prediction), there are proofs in mathematics.

By Chris' Wills on 25 Sep 2009

Chris' Wills: But science isn't a philosophy.

The term "science" is used to refer to both the anthropological practice, and to the underlying branch of reasoning within certain assumptions; the latter qualifies as a branch within philosophy (and thus, more accurately a class of philosophies).

The former is indeed a separate entity, and not a philosophy.

Chris' Wills: Considering science as a branch of mathematics is rather limiting.

Not really. As part of the process of showing the incompleteness of all self-consistent systems able to handle arithmetic, Gödel showed how any self-consistent system could be reduced to an arithmetically expressible system.

So, it limits Science a bit (it can't be complete), but not significantly (geometry suffers the same problem).

Of course, in terms of the anthropological practice, since humans aren't very good at math, trying to do anything but the testing that way is very limiting. On which note....

Chris' Wills: Field data is essential in science; that's how hypotheses are tested and in many cases how they are initially arrived at.

Technically, hypotheses are not tested by the gathering of the data, but by mathematical evaluation of the properties of data gathered. These data properties may be expressed as numbers; the test, a function to at least the binary boolean set. (You may be testing multiple hypotheses competitively, so a metric score might also result.) Ultimately, the test boils down to competitive testing on probable correctness.

The choice of experiments is a design decision, intended to get easier math. EG: the anthropological practice of minimizing number of independent variables.

But yes; data are essential, because that defines which function you are seeking to describe.
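
A concrete, purely illustrative version of "the test is a function to at least the binary boolean set": a textbook one-sample test reduces the gathered data to a summary statistic and maps it to a reject/don't-reject boolean. The data and the 1.96 cutoff below are assumptions for the sketch, not anything from the discussion.

```python
# Illustrative sketch: a hypothesis test as a function from properties of the
# gathered data to a boolean. The measurements and the 1.96 cutoff are
# assumptions, not anything from the discussion.
import math

def reject_null(data, mu0, z_crit=1.96):
    """Crude one-sample z-style test: does the sample mean differ from mu0?"""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    z = (mean - mu0) / math.sqrt(var / n)
    return abs(z) > z_crit          # the boolean output of the "test function"

measurements = [9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 10.3, 9.7]
print(reject_null(measurements, mu0=12.0))  # True: the data tell against mu0 = 12
print(reject_null(measurements, mu0=10.0))  # False: consistent with mu0 = 10
```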

Chris' Wills: Scientific theories are often not formulated as mathematical models, this may be because it is too difficult/complex or that suitable mathematics hasn't been discovered.

And here's the heart of it. Yes, humans mostly use words, because humans aren't very good with numbers; the models thus often are not expressed in mathematical terms. However, self-consistency philosophically implies a mathematics.

As to "suitable mathematics hasn't been discovered", that's rather imprecise. Set theory is sufficient (Gödel's trick); merely not very usable (especially by simple enumerative techniques). Beyond that, it's a question of which mathematical alternative is MOST suitable (which may depend on the specific purpose you seek to suit).

Chris' Wills: There are no absolute proofs in science (we just test and see if the test results match the prediction), there are proofs in mathematics.

However, there are not only "absolute" proofs in mathematics; there are also Arthur-Merlin type proofs. The lack of "absolute proof" in science merely indicates that a specialized branch of mathematics is involved, akin to the study of AM proofs. (Science may not be entirely relying on the p>2/3 threshold.)
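
For readers who have not met Arthur-Merlin-style probabilistic verification, a standard toy cousin (added here as illustration, not part of the exchange) is the Fermat primality test: each passed random trial shrinks the error probability geometrically, yet no finite run yields an "absolute" proof. Carmichael numbers are the well-known caveat for this particular test.

```python
# Toy probabilistic verification in the spirit of the A-M remark: repeated
# random trials make "probably prime" ever more probable, but any finite run
# still leaves a nonzero error probability. (Fermat's test, used here purely
# as an illustration; it is known to be fooled by Carmichael numbers.)
import random

def probably_prime(n, trials=20):
    """Fermat test: True if n passes `trials` random base checks."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:    # found a witness: n is certainly composite
            return False
    return True                      # no witness found: n is *probably* prime

print(probably_prime(104729))  # True: 104729 is prime
print(probably_prime(104730))  # False with overwhelming probability (composite)
```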

Iapetus: Why should it? Nothing you have said thusfar even addresses the point I am making.

It does, merely not very effectively.
Perhaps more clearly, I should have phrased that as I'm doubtful anything that anyone can say would lead you to change your mind.

Iapetus: So in addition to your inability to produce an argument, you can not even name any philosopher of science who would support your stance.

I can; I might, should reason for it be presented. As I said, I don't see the point; the most likely result would be leading to you irritating them as well as myself.

Iapetus: Here is a reality check: the thesis that "truth is absolute" is by and large completely uncontroversial among philosophers of science who hold to some variant of metaphysical realism

However, Philosophers of Science are a distinct anthropological group from Scientists; it is the latter that I was making the point about. The former may claim to be in the business of absolute truth, and may even claim the latter are as well. However, my understanding is that the latter group observably do not in general consider themselves to be in the business of absolute truth. (There doubtless are some minority who do.)

Iapetus: Care to tell me how the statement "Event X has a probability of Y% of occurring." is in any way, shape or form in conflict with my thesis that "truth is absolute" as I defined it?

THAT statement is not the only one under consideration for potential conflict with your thesis.

What does appear to conflict is "Event X has a probability of Y% of having occurred". Some good illustrations of the difference come from the weirdness of QM, via the electron slit experiment; there is a 100% chance the particle went through one of the two holes, but only a 50% chance of having gone through either hole... and the two 50%-probable pasts interacted with each other to produce the observed present. It's been replicated for other particles as well as electrons, as large as buckminsterfullerene molecules.

This may not be the weirdest thing in QM (CPT reversal may be weirder), but it's pretty easily in the top five. One can also use relativity's ties of space and time to move such multiplicity of pasts to a multiplicity in space, but that calls for aspirin.
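
The "two 50%-probable pasts interacted with each other" remark can be made concrete with the standard textbook arithmetic (a sketch added here; the amplitudes and phases are illustrative assumptions): the two paths contribute complex amplitudes that are added before squaring, which is not the same as adding the two probabilities.

```python
# Sketch of why the two "50%-probable pasts" are not mere classical ignorance:
# the two slit paths contribute complex amplitudes that are added and *then*
# squared, so they interfere. Example amplitudes and phases are assumptions.
import cmath

def relative_intensity(phase):
    """Relative detection intensity where the two path amplitudes differ by `phase`."""
    amp_slit1 = 1 / cmath.sqrt(2)                    # each path alone: |amp|^2 = 0.5
    amp_slit2 = cmath.exp(1j * phase) / cmath.sqrt(2)
    return abs(amp_slit1 + amp_slit2) ** 2           # add amplitudes, then square

print(relative_intensity(0.0))        # 2.0: constructive interference
print(relative_intensity(cmath.pi))   # ~0.0: destructive interference
# Simply adding the two 50% probabilities would give 1.0 at every point.
```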

Iapetus: One of the many things I stated repeatedly is that the problem with "probably true" propositions is that they are of no help in separating true from false propositions, unless you come up with a mechanism that ties "probably true" to "true".

Truth of "probability X"; plus the limit point on the lattice.

I'll also note that the field of A-M proofs is fundamentally about how "probably true" can be used to separate true from false.

Iapetus: For my purposes it is sufficient that thusfar it has not been shown to be flawed.

"For my purposes" looks to my amateur eye like you're taking advantage of Constructive Empiricism there, while trying to disavow having made the assumption later on.

Iapetus: If you can point me to a logic textbook or an academic article which comes to the conclusion that there is no difference in validity between deductive and inductive logic, we might have something to talk about.

I did not say there was no difference; I said it was the same mechanism underlying.

Iapetus: I am sure you can cite where I said that I "reject abstractions".

Perhaps I was reading too much into your rejection of Platonism and concepts, insisting on instantiation. However, your phrasing made it seem you would reject the existence of the abstraction "my son" when I am childless. As a philosophical possibility of being instantiated (I'm not sterile), the relationship would appear to exist in a platonic sense of a potential, not an instantiation. In this sense, you would appear to reject such abstractions.

Most integers "exist" in this sense, as a potential character of a relationship, but lacking particular instantiation.

(Of course, my idea of the abstraction is instantiated; but my idea of the relationship and the relationship are distinct philosophical entities. The best illustration of that distinction is from Through the Looking Glass in Chapter 8, with the White Knight's Song.)

Iapetus: If you ask a scientist whether he believes there exists a reality which is independent of his mind, he will answer in the affirmative. If you ask him further whether he believes his theories correspond to this reality, he will again answer in the affirmative. Thus, he uses "truth" in an absolute sense.

If, however, you ask whether he believes this correspondence is absolute, he will generally deny that, in favor of considering the correspondence an approximation.

@abbw3 #130

The word science, as commonly used, has at least four distinct meanings: it denotes an intellectual endeavor aimed at a rational understanding of the natural and social world; it denotes a corpus of currently accepted substantive knowledge; it denotes the community of scientists, with its mores and its social and economic structure; and, finally, it denotes applied science and technology.
http://www.senseaboutscience.org.uk/PDF/AlanSokalLecture2008.pdf

Now, Sokal may include technology; I put that in Engineering (applied Science if you like).

Yes, there are underlying pre-suppositions within Science (rules exist, the rules are understandable by man, as below so above, rules don't change much over time etc); that doesn't make it a philosophy. Unless you wish to expand philosophy to include all things as it did in the middle ages. More Scientia than Science.
But Science is the method of finding out what those laws are, it is a method (or set of methods perhaps).

abbw3
Technically, hypotheses are not tested by the gathering of the data, but by mathematical evaluation of the properties of data gathered.

Not true. Make prediction see if prediction is correct. No mathematics required, except perhaps in the prediction part.
Not all science is physics.
Unless you want to expand mathematics to include who won a race.

abbw3
As to "suitable mathematics hasn't been discovered", that's rather imprecise.

If it can't be used by humans then it isn't suitable; no matter that in theory a function exists.

abbw3
However, there are not only "absolute" proofs in mathematics; there are also Arthur-Merlin type proofs. The lack of "absolute proof" in science merely indicates that a specialized branch of mathematics is involved, akin to the study of AM proofs. (Science may not be entirely relying on the p>2/3 threshold.)

There are some things that aren't True/False in some logics, so what?

Science predicts then tests. Pass test firms up hypothesis, fail test try again a few times, if continue to fail re-visit hypothesis etc....

It may use specialised parts of mathematics, but that is akin to an engineer using a pencil instead of a pen (most suitable tool for the particular task). Engineering isn't a specialised form of writing instrument.

Physical data required.

Mathematics doesn't require physical data; science's only reason is to understand the physical data.

I'm not sure why you want to have science as a sub-set of mathematics, but the differences between the two disciplines are large, mostly because one is based on axioms that are taken to be self evident and the other isn't.

By Chris' Wills on 26 Sep 2009

Chris' Wills: Yes, there are underlying pre-suppositions within Science (rules exist, the rules are understandable by man, as below so above, rules don't change much over time etc); that doesn't make it a philosophy.

Strictly, a class of philosophies; depending on which exact rules are ultimately taken as axiomatic, it's a slightly different philosophy.

Chris' Wills: Unless you wish to expand philosophy to include all things as it did in the middle ages.

M-W's first definition remains "All learning exclusive of technical precepts and practical arts". So, it's not "philosophy" as anthropologically practiced in college philosophy departments (who anthropologically seem to exclude any hint of mathematics, perhaps because that would require them to work for a living). However, in the broader sense, it's still philosophy, in concern with the relation of ideas.

Chris' Wills: But Science is the method of finding out what those laws are, it is a method (or set of methods perhaps).

Note that "method" is one synonym for "algorithm", which brings it back to mathematics.

Chris' Wills: Not true. Make prediction see if prediction is correct. No mathematics required, except perhaps in the prediction part.

Like a snake in tall grass, Mathematics is hiding in your test. To see if the prediction is correct, you need to examine whether prediction equals result.

Since you are mapping input information (such as "prediction" and "result") to a set of values (such as "correct" or "incorrect"; perhaps with further nuances), it ultimately has a mathematical algorithm, regardless of whether humans think of it that way or not.

(Agreed, math in terms of anthropological practice is not necessary for prediction, although it seems to get more frequent and reliable results than awaiting seraphim choir. But that doesn't mean the prediction is not mathematical.)
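
A deliberately non-numerical sketch of the same point (added here, using the race example from the earlier comment with made-up entrants): even a wordy prediction check is, formally, a function from (prediction, outcome) pairs into the boolean set.

```python
# Even a "no mathematics required" check is a function into a boolean set.
# The race example from the earlier comment, with made-up names.

def prediction_correct(predicted_winner, actual_winner):
    """Map a (prediction, outcome) pair to an element of {True, False}."""
    return predicted_winner == actual_winner

print(prediction_correct("Alice", "Alice"))  # True
print(prediction_correct("Alice", "Bob"))    # False
```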

Chris' Wills: If it can't be used by humans then it isn't suitable; no matter that in theory a function exists.

Again, "suitable" depends on what your purpose is. Once you know it exists (even if unusable), you can then begin search for an equivalent function (or a functional approximation) which humans can use. Contrariwise, if you know it's impossible, you might then jump directly to considering approximations which might be possible.

It also may depend on how you demarcate "human"; change the demarcation from "human" to "human with what tools humans may build", and what can be useful may change.

Chris' Wills: There are some things that aren't True/False in some logics, so what?

So: neither "it's not true/false" and "it's not 'absolute' proof" are enough to make science something other than a branch within mathematics. Mathematics is too damn big.

Chris' Wills: Science predicts then tests.

Actually, it observes, recognizes, and describes first (see back post #13), then tests; and usually follows up by designing means to look for more data in ways that are likely to make for easier math in the testing. Prediction is usually part of the last, aka "experiment design", and design marks the demarcation into the sub-branch of engineering (which the anthropological practice of science routinely crosses).

Chris' Wills: It may use specialised parts of mathematics, but that is akin to an engineer using a pencil instead of a pen (most suitable tool for the particular task).

Actually, use of mathematics by science seems more akin to an engineer working with mass-energy in space-time; we really don't have any alternative, because mathematics is too powerful to avoid.

Chris' Wills: Physical data required.

Absolutely. The function trying to be described is defined by which data are within experience, after all.

Chris' Wills: I'm not sure why you want to have science as a sub-set of mathematics

Just calling it as I see it. Science tries to describe the input and function (reality and laws thereof) from the output (experience, aka "data"). Math, dude.

Chris' Wills: one is based on axioms that are taken to be self evident and the other isn't.

Science takes the axioms of mathematics (or at least, those axioms necessary to get to basic arithmetic) as self-evident, too. It also requires at least one additional axiom; roughly "experience has a pattern". Alternatively, more powerful axioms are sometimes used; for example, in anthropological practice, the algorithmic validity of the experimental method is effectively taken as axiomatic by most scientists. (This is not necessary, as it may be derived from a more basic premise.)

The question is not whether you take some things as self-evident; reasoning requires a starting point. (This was one of the side-effects of Gödel's work.) The question is what gets taken as self-evident.

To conceptually illustrate this, picture a scientist who has just said "I believe _________", and an annoying philosopher who asks "But on what basis do you hold that belief?" and who further continues doing so each time he is answered, in the repetitious manner of a three-year-old. (There may be the occasional digression of "how do you define the term '____'?" at points.) Assuming the scientist does not have enough engineer in them to solve the problem by strangling the philosopher, you will eventually reach a belief that is taken as "self-evident", and as an axiom. EG: "reality has a pattern". Or a circular belief, which is effectively the same. (Any element of the cycle may be treated as an axiom, producing the same class of results.)

"It does, merely not very effectively."

Only if you equate "not very effectively" with "not at all"...

"Perhaps more clearly, I should have phrased that as I'm doubtful anything that anyone can say would lead you to change your mind."

This is really rich, coming from someone who persistently ignores any attempt to help him clear up the misunderstanding his whole argument is based upon.

"I can; I might, should reason for it be presented. As I said, I don't see the point; the most likely result would be leading to you irritating them as well as myself."

Evasion noted.

I am not interested in the names of your pals down the hall. I ask for the name of an eminent philosopher of science who you think endorses your view of "relative truth" that science allegedly uses, in the forlorn hope that this will help you resolve your basic misunderstanding, since I am apparently unable to get through to you.

I already pointed you to a possible candidate. Do you conform to his view of science?

"The former may claim to be in the business of absolute truth, and may even claim the latter are as well. However, my understanding is that the latter group observably do not in general consider themselves to be in the business of absolute truth."

Oh, they do, since the overwhelming majority of scientists endorses some form of metaphysical realism, which is incompatible with "relative truth". But as long as you fail to understand the difference between "absolute truth" and "certain knowledge", you will not see this.

I am starting to feel like a broken record, but I can only repeat what I stated before:

The thesis of "absolute truth" is a thesis about "truth" as understood in the correspondence theory of truth. It is completely silent on epistemic questions.

"THAT statement is not the only one under consideration for potential conflict with your thesis."

It was in response to your "Toss a coin etc." example. Since you fail to elaborate where exactly this "potential conflict" lies and switch to another example, I assume you retract it.

"What does appear to conflict is "Event X has a probability of Y% of having occurred". Some good illustrations of the difference come from the weirdness of QM, via the electron slit experiment; [snip]"

Do you want to argue that the double slit experiment is an example of "relative truth"? It would mean that QM forces us to abandon metaphysical realism; few scientists these days will be willing to follow you on that path.

However, I have the strong suspicion what you are trying to say here is something completely different, namely that QM introduces indeterminism into physics and thus we sometimes have to settle for probabilistic explanations. Which, of course, has no bearing on the question at hand, unless you conflate "probability" with "relative truth".

"Truth of "probability X"; plus the limit point on the lattice."

What is the connection between this sentence and the paragraph you are responding to?

"I'll also note that the field of A-M proofs is fundamentally about how "probably true" can be used to separate true from false."

Of course you can arbitrarily define that a "probably true" proposition shall be deemed "true" if it exceeds a certain threshold of probability. However, this does nothing to change the underlying problem that a "probably true" proposition is not necessarily identical to a "true" proposition.

Try as you might, you can never exclude the possibility that a proposition which was certified as true with a probability of X% was erroneously classified as true. Given this, Popperians maintain, what is the use of attaching any kind of probability onto the truth-value of a proposition in the first place, when our goal is to separate true from false propositions and not "probably true" from "probably false" ones?

As I said, I believe Popper argued for an unrealistic ideal here. Human beings are simply prone to inductive reasoning, i.e. we can not help but prefer a proposition with a probability of 99% over one with 5%. Furthermore, sorting propositions according to their probability values may be useful in ranking them if we have no other options available. But all these considerations do not refute the argument made by Popper and his followers.

""For my purposes" looks to my amateur eye like you're taking advantage of Constructive Empiricism there, while trying to disavow having made the assumption later on."

I have no idea where you get this notion from. In what way is a mildly sceptical position like fallibilism connected to constructive empiricism?

"For my purposes" was meant to indicate that I considered your ruminations about what kind of "probability" was allegedly involved here irrelevant. According to the fallibilistic methodology, propositions are tentatively classified as "true" or "false", but not certified. Thus, their status can be revoked any time in light of new evidence/arguments, irrespective of any "probabilities".

"I did not say there was no difference; I said it was the same mechanism underlying."

So there is a difference in validity between deductive and inductive logic. Thus, we have a rational reason to treat the results of both procedures differently, which was the point I made and you seemed to contest.

"Perhaps I was reading too much into your rejection of Platonism and concepts, insisting on instantiation. However, your phrasing made it seem you would reject the existence of the abstraction "my son" when I am childless. As a philosophical possibility of being instantiated (I'm not sterile), the relationship would appear to exist in a platonic sense of a potential, not an instantiation. [...] Most integers "exist" in this sense, as a potential character of a relationship, but lacking particular instantiation."

Why did you put "exist" in scare quotes in the last sentence, but not the one preceding it? Do you believe that abstracta have an independent ontological status or not? If you do, you will have to explain the nature of these entities and also address the epistemological problem of how we can have knowledge about them.

As for me, I am sceptical about any form of Platonism and its cluttered ontology containing an infinite amount of abstracta. However, since this is not really related to the topic under discussion, I will not go into this any further. Suffice to say that I do not consider abstracta to be meaningless or incoherent, so I have no objection to their use in science.

"If, however, you ask whether he believes this correspondence is absolute, he will generally deny that, in favor of considering the correspondence an approximation."

There you go again, dragging in an epistemological issue where it does not belong.

Yes, scientists will consider their theories to be imperfect, tentative models of something which is epistemically more distant, i.e. reality "out there". But that is not relevant here. What is relevant is that if you ask them whether they provisionally consider their current, best theory(ies) to be true, they will naturally answer "Yes", since otherwise they would not have adopted it (them).

Now, if they have accepted a realist metaphysics (as most scientists have), it follows necessarily that the truth-aptness of the model/theory is dependent on the structure of reality and not on any kind of relativizer like personal perspective or societal opinion, consensus of the majority or whatever. In other words, "truth" is deemed to be "absolute" in that it is the same for everyone, because we all share the same external reality.

Iapetus: Only if you equate "not very effectively" with "not at all"...

That would be one subset.

Iapetus: This is really rich, coming from someone who persistently ignores any attempt to help him clear up the misunderstanding his whole argument is based upon.

Your assertion of which objective you attempt is duly noted.

Iapetus: I ask for the name of an eminent philosopher of science who you think endorses your view of "relative truth" that science allegedly uses

Endorsement by philosophers is not the central point. Anthropological practice by scientists is the point. Popper is the only name of "eminence"; and as you note, his work has issues, and is now dated to boot.

It would appear you are using "absolute truth" as synonymous with "objective truth" (as per Popper in "Truth, rationality, and the growth of knowledge", "that is truth as correspondence with the facts"). I would agree that science does accept "objective truth". If made synonymous by definition, "Absolute truth" as term-of-art is not absolute in the sense a mathematician would require.

So, probably, I'm misunderstanding the philosophy jargon, much as most people misunderstand the scientific sense of "theory".

Iapetus: Do you want to argue that the double slit experiment is an example of "relative truth"? It would mean that QM forces us to abandon metaphysical realism; few scientists these days will be willing to follow you on that path.

I don't see that it does; however, I'm also not able to find a concise definition of "metaphysical realism". I find pointers to the work of Hilary Putnam on the topic (and what looks to be disavowal of MR due to QM), but I have more productive ways to waste my time than seeking to extract conciseness from Realism and Reason. I suppose I might annoy the local philosophers more by asking one, but I try to ration that habit.

The closest I've come is Britannica, and the remark there that The metaphysical realist's truth is, as Putnam also puts it, "radically nonepistemic," potentially outstripping not only what scientists actually believe but also what they would believe were they to form their beliefs perfectly rationally under evidentially ideal conditions... leaves me with deeper doubts yet about your claim.

Iapetus: However, I have the strong suspicion what you are trying to say here is something completely different, namely that QM introduces indeterminism into physics and thus we sometimes have to settle for probabilistic explanations.

No; the universe isn't that simple. The electron slit doesn't merely require probabilistic explanations; those are inadequate. Because the two probable electron possibilities interact with each other as if separate entities, we have to accept probabilistic causation. Which is a bit weird, to say the least.

Iapetus: One of the many things I stated repeatedly is that the problem with "probably true" propositions is that they are of no help in separating true from false propositions, unless you come up with a mechanism that ties "probably true" to "true".
abb3w: Truth of "probability X"; plus the limit point on the lattice.
Iapetus: What is the connection between this sentence and the paragraph you are responding to?

Relative truth may be expressed as a lattice; which is to say, a set and ordering relationship. By definition, the universal maximal element of the lattice is TRUE. (If you're using the trivial lattice, it's also the minimal element FALSE, and you have an inconsistent system.) The lattice positions may be interpreted as corresponding to probability values. If you are at a position such that p>0.5, repeated testing allows shifting to a lattice point nearer to TRUE. The sequence allows asymptotic approach to within an arbitrary non-zero bound epsilon of TRUE using a finite number of iterations delta.
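
A minimal sketch of the asymptotic-approach claim in that paragraph (the starting probability and the 3:1 likelihood ratio per confirming test are assumptions added for illustration): each test moves the proposition's probability closer to the top element 1.0, and for any epsilon > 0 some finite number of tests gets within epsilon, though none reaches 1.0 exactly.

```python
# Sketch: starting above 0.5, each confirming test (assumed 3:1 likelihood
# ratio in favour) pushes the proposition's probability toward the lattice
# top TRUE = 1.0; any epsilon is reached in finitely many steps, 1.0 never is.

def steps_to_within(epsilon, p=0.6, likelihood_ratio=3.0):
    """Number of confirming tests needed before 1 - p < epsilon."""
    steps = 0
    while 1.0 - p >= epsilon:
        odds = p / (1.0 - p) * likelihood_ratio   # one more confirming test
        p = odds / (1.0 + odds)
        steps += 1
    return steps

for eps in (1e-2, 1e-4, 1e-8):
    print(eps, steps_to_within(eps))   # 4, 9, 17 steps respectively
```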

Iapetus: Of course you can arbitrarily define that a "probably true" proposition shall be deemed "true" if it exceeds a certain threshold of probability.

That's not what A-M proofs do. They sort by probability; thus, literally "separate" TRUE from FALSE by putting intermediate propositions (in order) into a distance metric between.

Iapetus: However, this does nothing to change the underlying problem that a "probably true" proposition is not necessarily identical to a "true" proposition. Try as you might, you can never exclude the possibility that a proposition which was certified as true with a probability of X% was erroneously classified as true.

In what sense is that a "problem"?

Iapetus: I have no idea where you get this notion from. In what way is a mildly sceptical position like fallibilism connected to constructive empiricism?

"For my purposes". That looks suspiciously like the use of "empirically adequate" of Van Fraassen.

The philosophy may be disjoint; your anthropologically held position appears less so.

Iapetus: According to the fallibilistic methodology, propositions are tentatively classified as "true" or "false", but not certified. Thus, their status can be revoked any time in light of new evidence/arguments, irrespective of any "probabilities".

Ah. So, since all propositions are tentatively classified, there are none that would appear "absolute" as I understand the colloquial term.

Iapetus: Do you believe that abstracta have an independent ontological status or not?

Depends what you mean by "exist". I don't consider abstracta (particularly: integers) all immanent; merely valid as philosophical potentials.

Iapetus: If you do, you will have to explain the nature of these entities and also address the epistemological problem of how we can have knowledge about them.

The resolution to the problem involves how a Universal Church-Turing Automaton can simulate any Church-Turing Automaton, even another particular case of itself (or multiple cases simultaneously). I doubt you want or would accept the full explanation. On one hand, some of the abstracta (particularly: the concept of "set") are mathematically and philosophically taken as primary and without prior. Having such primaries is necessary to avoid circularity. As such, the nature cannot be explained; there is nothing more fundamental to explain them in terms of. On the other hand, you appear to be very familiar with philosophical terminology. This most often means someone uncomfortable working with mathematics.

Iapetus: As for me, I am sceptical about any form of Platonism and its cluttered ontology containing an infinite amount of abstracta

Integers are abstracta. Unless you care for a maximal integer, you're rather stuck with infinite abstracta.

For purposes of constructing integers Von Neumann style, "Set of X" may be roughly translated as "philosophical concept of X". One may then define the idea "nothing" (the empty set) as "all things that are not the same as themselves" (All X not equal X). One may have "the idea of nothing"; then, "the idea of the idea of nothing", "the idea of the idea of the idea of nothing", "the idea of the idea of the idea of the idea of nothing" et cetera ad infinitum-- literally, to infinity. Humans may not be able to grasp all such abstracta to immanentize them as human ideas, but they remain philosophically valid.

Strictly speaking, the Von Neumann construction uses an incrementing step of "X union set-of-X"; thus, the set for "five" has five elements: those Von Neumann sets corresponding to sets having zero, one, two, three, and four elements. However, translating even { {}, {{}}, {{},{{}}} } into "idea of" terms gets befuddling and requires too many parentheses for any hope of clarity, so I didn't. Even more strictly speaking, "five" is actually the class of sets with bijective functions to the fifth Von Neumann integer, but I doubt anyone left reading this cares.
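
Since the construction described in the last two paragraphs is easy to mis-picture, here is a literal (and purely illustrative) rendering of the von Neumann integers as nested sets, using the "X union set-of-X" incrementing step just described:

```python
# The von Neumann construction described above, rendered literally: zero is
# the empty set ("nothing"), and each successor is X union set-of-X. Python
# frozensets stand in for the abstracta; this is an illustration, not a claim
# about how integers "really" exist.

def von_neumann(n):
    """Return the von Neumann set representing the integer n."""
    x = frozenset()                   # zero: the empty set
    for _ in range(n):
        x = x | frozenset({x})        # successor: X union set-of-X
    return x

five = von_neumann(5)
print(len(five))                                       # 5, as stated above
print(all(von_neumann(k) in five for k in range(5)))   # True: its elements are 0..4
print(von_neumann(2))   # {empty set, {empty set}} -- the von Neumann "two"
```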

Iapetus: Yes, scientists will consider their theories to be imperfect, tentative models of something which is epistemically more distant, i.e. reality "out there". But that is not relevant here. What is relevant is that if you ask them whether they provisionally consider their current, best theory(ies) to be true, they will naturally answer "Yes", since otherwise they would not have adopted it (them).

On the contrary; that seems relevant. The former seems to indicate that the latter sense of true is not "absolute" truth; that is, at least equally or more true than "P OR NOT P" (for arbitrary philosophical proposition P).

Iapetus: In other words, "truth" is deemed to be "absolute" in that it is the same for everyone, because we all share the same external reality.

Again, I would consider this as "objective truth", rather than "absolute truth".

Of course, Ayn Rand gave a bad connotation to "objective", which might underlie why you use "absolute" as the term; to avoid confusion, I'll note I reject most of her philosophy as resulting from erroneous premises, erroneous inferences, or both. Objectively, Rand was mistaken. =)

This will definitely be my final installment here, since I do not have the impression it is really going anywhere.

"Your assertion of which objective you attempt is duly noted.."

No, no, do not just "note" it; try to understand what I am talking about, in order that this poor thread can finally rest in peace...

"Endorsement by philosophers is not the central point. Anthropological practice by scientists is the point."

Most scientists do not think about such matters very often. They simply talk about "truth"; philosophers quibble about its nature. Moreover, as I have been stating incessantly, scientists usually hold to some form of realism in their "anthropological practice", which necessarily implies a concept of "absolute truth".

"Popper is the only name of "eminence"; and as you note, his work has issues, and is now dated to boot."

Excuse me?

First, there are many other well-known and widely influential philosophers of science; examples would be Kuhn, Lakatos, Feyerabend, Laudan or van Fraassen.

Second, saying that his work has "issues" and is "dated to boot" is not a refutation of his arguments. Btw, many of said arguments are not seriously contested anymore, e.g. the impossibility of achieving certain knowledge and of finding an inductive principle whose truth is certified.

"It would appear you are using "absolute truth" as synonymous with "objective truth" [...] I would agree that science does accept "objective truth"."

Hallelujah.

"If made synonymous by definition, "Absolute truth" as term-of-art is not absolute in the sense a mathematician would require."

This is not my problem. I explicitly and repeatedly defined "absolute" and used it consistently throughout. The fact that there are other definitions of "absolute" is totally irrelevant.

"I don't see that it does; however, I'm also not able to find a concise definition of "metaphysical realism". I find pointers to the work of Hilary Putnam on the topic"

If QM implied "relative truth", it would force us to abandon realism, since the former is incompatible with the latter.

There are many different flavours of realism. What they all have in common, though, is that they posit the existence of a reality that is independent of the individual and his perception of it. Most realists also hold that said reality is (at least to some extent) cognizable (which is where the correspondence theory of truth comes in).

As to Putnam, he has changed his position on this and other topics so frequently that it is hard to tell exactly what his views are. He switched from "classic" realism to something he labelled "internal" realism (which contained more than a whiff of relativism) and currently espouses "direct" realism. But be that as it may, I think he never gave up the two fundamental aspects of realism I cited above.

Now, a scientist who wants to deny "absolute truth" would either have to give up realism and posit that the structure of reality is influenced and shaped by some kind of relativizers. Or he would have to give up any claim to "truth" and regard his theories and models merely as useful fictions, i.e. embrace some form of instrumentalism.

"The electron slit doesn't merely require probabilistic explanations; those are inadequate. Because the two probable electron possibilities interact with each other as if separate entities, we have to accept probabilistic causation. Which is a bit weird, to say the least."

I do not have the time or motivation to discuss this in detail with you. The relevant point here is that if QM requires a concept of "relative truth", we would have to give up realism. The Copenhagen interpretation is thought by some to go in this direction, although I do not see the necessity. Incidentally, Heisenberg likewise denied the notion that QM introduces a subjective component into our scientific theories.

"That's not what A-M proofs do. They sort by probability; thus, literally "separate" TRUE from FALSE by putting intermediate propositions (in order) into a distance metric between."

This is useless if your purpose is to end up with "true" or "false" propositions and not with "probably true" or "probably false" ones.

Look, if you believe that A-M-proofs or any other mathematical machinery is capable of solving the logical problem of induction, I am sure the philosophical world will be eager to hear from you.

"In what sense is that a "problem"?"

In that a "probably true" proposition might be false. I have covered this issue repeatedly.

""For my purposes". That looks suspiciously like the use of "empirically adequate" of Van Fraassen."

I already explained the sense of this phrase and why it has nothing to do with constructive empiricism.

"So, since all propositions are tentatively classified, there are none that would appear "absolute" as I understand the colloquial term."

Your inadequate understanding of the terms I use (despite my repeated and explicit definitions and explanations) is the sole reason for this whole "discussion". It is shown at work here again.

Propositions are tentatively classified since they are not deemed to be certain. However, they well may be "absolute", i.e. absolutely true or absolutely false, if they refer to properties of or entities located in the external reality.

"Depends what you mean by "exist". I don't consider abstracta (particularly: integers) all immanent; merely valid as philosophical potentials."

Although I have no idea what you mean by "immanent" (is it in opposition to "transcendent"? And if so, "transcendent" to what?), I take this to mean that you do not consider abstracta to have an independent ontological status.

"The resolution to the problem involves how a Universal Church-Turing Automaton can simulate any Church-Turing Automaton, even another particular case of itself (or multiple cases simultaneously) [snip]."

I should have anticipated an answer like this.

Let me spell out the philosophical problem more clearly:

If you posit that abstracta have an independent ontological status, the question arises as to their nature and how we can generate knowledge about them. Presumably, there would be some place where abstracta reside (sometimes ironically dubbed "Platonic Heaven").

Now, the ontological realist (not to be confused with a metaphysical realist as discussed above!) will postulate that e.g. a mathematician does not invent or construct, but literally discovers something when doing mathematics. But how can he do that? Which mechanism of knowledge-acquisition is at work here? It is most certainly not empirical. So what the ontological realist has to assume is that our mathematician somehow forges a connection with this Platonic realm and gathers knowledge about it; an assumption that is not unproblematic, to say the least.

"Integers are abstracta. Unless you care for a maximal integer, you're rather stuck with infinite abstracta."

Yes, but the question is whether these integers, along with an infinity of other abstracta, reside somewhere within our reality, waiting to be discovered.

"For purposes of the constructing integers Von Neumann style, "Set of X" may be roughly translated as "philosophical concept of X". [...] Humans may not be able to grasp all such abstracta to immanentize them as human ideas, but they remain philosophically valid."

I do not see how this addresses the ontological question at hand. I am also not sure what you mean by "immanentize". If it is meant in the sense of "becoming conscious of an existent", it would seem to require an independent ontological status for abstracta and all the problems this entails.

"On the contrary; that seems relevant. The former seems to indicate that the latter sense of true is not "absolute" truth; that is, at least equally or more true than "P OR NOT P" (for arbitrary philosophical proposition P)."

That last part does not seem to make sense.

Again, "absolute truth" has nothing to do with epistemic issues. A scientist does not have to know which parts of his theory/model are true (if any) and which are not. The point is that the true and the false parts are deemed "absolutely true" and "absolutely false" in the sense I defined.

"Again, I would consider this as "objective truth", rather than "absolute truth"."

Grr.

I defined what I mean by "absolute truth" early on. If you do not like the term, you may call it "ultra-super-hyper truth" or anything else; as long as it describes the same thesis, I could care less.

And now I am outta here.

abb3w: "Your assertion of which objective you attempt is duly noted."
Iapetus: No, no, do not just "note" it; try to understand what I am talking about, in order that this poor thread can finally rest in peace...

You misread that. In less civil terms, I'm saying that I'm not convinced that is your real objective, and that if it is I regard you as a miserable incompetent at achieving what you're attempting.

Iapetus: Moreover, as I have been stating incessantly, scientists usually hold to some form of realism in their "anthropological practice", which necessarily implies a concept of "absolute truth".

I'm not sure if you're saying "implies", or "presupposes". In both cases, it looks like you haven't studied enough physics to wrap your head around the idea of a 37% probability being the actual absolute reality.

abb3w: "Popper is the only name of "eminence"; and as you note, his work has issues, and is now dated to boot."
Iapetus: First, there are many other well-known and widely influential philosophers of science; examples would be Kuhn, Lakatos, Feyerabend, Laudan or van Fraassen.

"Jane, you ignorant slut." I specified ANTHROPOLOGISTS of science. While Popper's renown is as a philosopher of science, some of his early work was (IIR) explicitly anthropological... but checking back, even he did not have an anthropology degree. You might make a case for Kuhn; he also was arguing from what scientists DO, rather than presupposing why they do it; not the other examples, however.

There remain some scholars who study the anthropology of science; U. Arizona has a group, and Berkeley has a couple in it as well. There hasn't been anyone objectively qualifying as "eminent" in the broader areas that I know of.

abb3w: If made synonymous by definition, "Absolute truth" as term-of-art is not absolute in the sense a mathematician would require.
Iapetus: This is not my problem. I explicitly and repeatedly defined "absolute" and used it consistently throughout. The fact that there are other definitions of "absolute" is totally irrelevant.

It is relevant, if you failed to give the definition at onset; and it is indeed your problem, if you are sincere in the attempting of the goal you assert.

Iapetus: If QM implied "relative truth", it would force us to abandon realism, since the former is incompatible with the latter.

You're also rejecting that the relative condition may be the absolute truth; eg, that the absolute is not an electron, but a 37% probability of an electron. There remains "objective truth"; it's merely not embodied in the objects you are used to.

Both math and science can handle this, even if a classical philosopher is guaranteed a headache.

Iapetus: There are many different flavours of realism. What they all have in common, though, is that they posit the existence of a reality that is independent of the individual and his perception of it. Most realists also hold that said reality is (at least to some extent) cognizable (which is where the correspondence theory of truth comes in).

Presumably this is "metaphysical realism", since you later refer to that being "discussed above".

Iapetus: Now, a scientist who wants to deny "absolute truth" would either have to give up realism and posit that the structure of reality is influenced and shaped by some kind of relativizers. Or he would have to give up any claim to "truth" and regard his theories and models merely as useful fictions, i.e. embrace some form of instrumentalism.

"Uncertain Approximations", rather than "fictions".

Iapetus: I do not have the time or motivation to discuss this in detail with you. The relevant point here is that if QM requires a concept of "relative truth", we would have to give up realism.

Only for things like electrons, and not the probabilities nor experience of electrons.

abb3w: That's not what A-M proofs do. They sort by probability; thus, literally "separate" TRUE from FALSE by putting intermediate propositions (in order) into a distance metric between
Iapetus: This is useless if your purpose is to end up with "true" or "false" propositions and not with "probably true" or "probably false" ones.

If that is your purpose, you must presuppose having absolute truths about reality before you can get there, or your purpose simply cannot be achieved.

However, your purpose is not necessarily that of science, which may be willing to settle for a progression toward "more probable" truths.

Iapetus: Look, if you believe that A-M-proofs or any other mathematical machinery is capable of solving the logical problem of induction, I am sure the philosophical world will be eager to hear from you.

I doubt it. I'm merely noting someone else's results (Vitanyi and Li's). Most philosophers don't have the math background to handle the precursors; they'd need about four to six semesters of classes to even understand the point.

abb3w: In what sense is that a "problem"?
Iapetus: In that a "probably true" proposition might be false. I have covered this issue repeatedly.

Repeatedly failing to grasp my point. The "might be" is an inherent problem of any initial philosophical proposition; and thus, you are stuck with it thereafter, howsoever you may pretend that you aren't.

Yes, a "probably true" proposition is also "probably false" in inverse degree. However, I am asking what makes this a "problem". Leaving out the obsolete, historical, and grossly irrelevant options from the OED, we have:

-- 2. a. A question proposed for academic discussion or scholastic disputation. Now rare.
-- 3. a. A difficult or demanding question; (now, more usually) a matter or situation regarded as unwelcome, harmful, or wrong and needing to be overcome; a difficulty.
-- 4. Math. and Science. A proposition in which a specified action is required to be done (originally in Geom., contrasted with a theorem, where something is proved). Later also: a phenomenon or situation to be physically explained or mathematically described or solved. Freq. with distinguishing word.

In simple terms, the meta-problem resolves to "there is no solution to this problem." At some point, you are stuck taking "probably true" propositions as absolute; you can then explore the consequences of that branch of probabilities, but if you forget what "probably true" statements you've taken as absolute assumptions, your philosophy becomes sloppy.

So, yes, you can discuss it as (2a); however, since there is no solution without taking some probable proposition as an absolute, there's not much point. You may regard it per (3a) as "unwelcome, harmful, or wrong", but the objective universe is not altered by such regard; and treating it as something that "needs to be overcome" may simply be an erroneous presumption, since overcoming it absolutely may be absolutely impossible.

abb3w: So, since all propositions are tentatively classified, there are none that would appear "absolute" as I understand the colloquial term.
Iapetus: Your inadequate understanding of the terms I use (despite my repeated and explicit definitions and explanations) is the sole reason for this whole "discussion".

"The beginning of wisdom is the definition of terms." - Socrates.

Iapetus: Propositions are tentatively classified since they are not deemed to be certain. However, they well may be "absolute", i.e. absolutely true or absolutely false, if they refer to properties of or entities located in the external reality.

OK. So, there may be objectively "absolute" propositions; we just can't tell if we have one, unless we start by assuming that we have some... which assumption may not be correct.

In this sense, science does deal with "objective" propositions, in that they may refer to properties of or entities located in the external reality. However, such reference is by approximation (and so not absolute), and dependent on the validity of the prior assumptions.

abb3w: Depends what you mean by "exist". I don't consider abstracta (particularly: integers) all immanent; merely valid as philosophical potentials.
Iapetus: Although I have no idea what you mean by "immanent" (is it in opposition to "transcendent"? And if so, "transcendent" to what?)

In your terms: concrete entities located in the external reality.

Iapetus: I take this to mean that you do not consider abstracta to have an independent ontological status.

Their existence is independent of the external reality, although concrete entities may manifest them as attributes.

Iapetus: If you posit that abstracta have an independent ontological status, the question arises as to their nature and how we can generate knowledge about them. Presumably, there would be some place where abstracta reside (sometimes ironically dubbed "Platonic Heaven").

Which highlights the problem with those philosophers who have not been paying attention to the mathematicians. Such an abstraction containing all other abstractions leads to Russell's Antinomy, discovered over a hundred years ago; and thus, your "Platonic Heaven" becomes inextricably Platonic Hell. The resolution is that THAT is an invalid (self-inconsistent, and thus FALSE) abstraction to posit.
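
For readers who want the antinomy spelled out, it takes its usual textbook form: unrestricted comprehension lets you form the collection of everything that is not a member of itself, and asking whether that collection contains itself contradicts either answer.

    % Russell's antinomy, in its standard form:
    \[
      R = \{\, x \mid x \notin x \,\}
      \quad\Longrightarrow\quad
      R \in R \iff R \notin R ,
    \]
    % so an abstraction meant to contain all abstractions is not well-formed.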

That contemporary philosophers still commit errors which mathematicians have understood as errors for so long does not speak well of the philosophers.

Also, I wouldn't consider that status fully "independent", in that all things that are validly manifest are necessarily also validly abstract; thus, it's more akin to a subset-superset relationship.

Iapetus: Now, the ontological realist (not to be confused with a metaphysical realist as discussed above!) will postulate that e.g. a mathematician does not invent or construct, but literally discovers something when doing mathematics. But how can he do that? Which mechanism of knowledge-acquisition is at work here? It is most certainly not empirical.

The abstract remains validly abstract even when manifest in the empirical. This gets into the difference between "the idea of the empty set" and "the empty set".

So, yes, it's both empirical and abstract.

In terms of physics, the discovery involves a mass-energy-entropy flow across the perimeter of your mathematician, or a redefinition of the perimeter of the mathematician so the flow remains zero.

abb3w: Integers are abstracta. Unless you care for a maximal integer, you're rather stuck with infinite abstracta.
Iapetus: Yes, but the question is whether these integers, along with an infinity of other abstracta, reside somewhere within our reality, waiting to be discovered.

The universe contains roughly 10^80 baryons; I suggest you read up on Graham's number, and consider the implications. Put simply, it is philosophically valid as abstracta, since it does not self-contradict, but our universe is too small for it to be immanentized herein.
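
For scale (a standard back-of-the-envelope comparison, not a figure from the thread): even an early rung of the up-arrow towers used to define Graham's number already swamps that baryon count.

    % Knuth up-arrow arithmetic:
    \[
      2 \uparrow\uparrow 5 \;=\; 2^{\,2^{2^{2^{2}}}} \;=\; 2^{65536}
      \;\approx\; 10^{19728} \;\gg\; 10^{80},
    \]
    % while Graham's number starts at g_1 = 3 \uparrow\uparrow\uparrow\uparrow 3,
    % already incomparably larger than this.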

And then there's the XKCD semi-pointless call of the Ackermann function using Graham's number as both arguments....
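
For readers who haven't met it, a minimal sketch of the two-argument Ackermann-Péter function follows (my own illustration; the call with Graham's number as both arguments is, of course, not something any machine will ever finish).

    # Minimal sketch of the two-argument Ackermann function A(m, n).
    # It always terminates, but grows faster than any primitive recursive
    # function; evaluating it with Graham's number for both arguments is
    # utterly infeasible.
    def ackermann(m, n):
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61; m = 4 is already astronomically expensive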

Iapetus: That last part does not seem to make sense.

So, add "lattice theory" to your mathematics reading list; you might find an introduction to set theory that covers that as well, at least to the point of Boolean Lattices. Wikipedia covers some of it, under "Boolean algebra (structure)" and "Lattice (order)".

Wow, it took you a week to come up with this?

Let's see:

- conflation (again) of "truth", "knowledge" and "certainty";
- failing to understand the difference between classifying and certifying statements;
- irrelevant "Maths is sooo awesome, m'kay?" ramblings in a pitiful attempt to address metaphysical problems (Russell's Paradox as breaking news for philosophers - that particular bit of inanity made my day)
- a hilarious excuse of your own incompetence by trying to blame me for being unclear;
- confusing (for the umpteenth time) "probability" with "relative truth";
- gratuitous insults and arrogance in a transparent attempt to cover your muddled thinking

etc. etc.

IOW: Nothing new or interesting here.

But if it makes you feel better, consider your mental outpourings above a complete and utter demolition of anything and everything I said...

Bye.

People professing "other ways of knowing" are demonstrating eisegesis, projection of their own opinions, and sophistry after the fact.

I liken it to "the emperor's new art" -- where you put a small black dot in the middle of a white canvas, label it "untitled", and display it. Listen to what 'sophisticated' art lovers say about it. You'll laugh!

Same thing.