Designing a new Turing Test

OK, so the Turing Test is fraught with problems. First and most annoying is the apparent fact, though I will not stick up for this fact if challenged, that whatever you think the Turing test is, it isn’t. See Wikipedia for more on this. Aside from that, though, it’s totally dumb.

The test, as most people conceive it, is to convince a normal person that they are communicating with a real human being when they are actually communicating with a computer. But the context of the test is all wrong by modern standards. In the original test this was done by passing fortune cookies back and forth, or something like that. In those days there was no Twitter, email, IMing, etc. Wouldn’t it be more appropriate for the Turing test to be “A Twitter account run entirely by an AI gets over 10K followers and everybody thinks it’s real”?

Or, if a computer actually managed to post comments on 9 out of 10 blogs it tries to post comments on, without getting tossed in the spam folder… unlike this one…

Just want to say your article is as astounding. The clearness on your put up is simply great and that i can suppose you are knowledgeable in this subject. Well along with your permission allow me to snatch your RSS feed to stay updated with approaching post. Thanks one million and please carry on the enjoyable work.

or

Pretty section of content. I simply stumbled upon your weblog and in accession capital to say that I acquire in fact enjoyed account your weblog posts. Anyway I’ll be subscribing for your feeds and even I fulfillment you get right of entry to persistently rapidly.

… then carry on a conversation with anyone who responds… that could be a Turing test.

I’m pretty sure the only reason a computer has not yet passed the Turing Test (despite rumors to the contrary) is that no one is really trying. When IBM finally decided it would make a computer that could beat a chess master at his own game, they just went and made one (Deep Blue, which beat Kasparov in 1997). Took piles of effort and money, but I don’t think there was ever any doubt it could eventually be done. Surely, Big Blue or Big Somebody Else (I’m ogling you, Google) can make an AI implementation that could stand in for technical support and get away with it, or perform well in a presidential debate (starting with the GOP to make the task more achievable), or get a job as a radio talk show host.

But what would the test be exactly, a test that would not look silly in the modern world?

Place your answer below, and please try to make it look real.

Comments

  1. #1 Buck Field
    June 9, 2014

    Silly change of criteria indicating distracted thinking in this blog post.

    Example: “Totally dumb” is pretty strong criticism, yet it is only backed up by innuendo regarding the mode of communication, which is irrelevant – be it a fortune cookie passed through a box or an android sexbot’s ability to consummate a normal marriage without detection by an average human.

    The criterion of successful imitation of a human stands regardless of some individuals’ constantly changing tastes as to what one might regard as “silly”.

    The Turing Test will be discussed long after everyone’s activity on this blog is forgotten.

  2. #2 Brainstorms
    Los Angeles
    June 9, 2014

    Problem is, in today’s interconnected, cosmopolitan world, where the Internet (and all its associated apps) makes global communications a reality, and English is often the Lingua Franca (so to speak), it’s not unusual to find yourself in an email/IM/Twitter/whatever conversation with someone whose first language is not English…
    And then, in those situations, you get responses that too often sound something like your above examples.
    Toss in a misunderstanding (esp. a technical one) or two, and you really start wondering if you’re taking an impromptu Turing Test with someone’s junior ‘Big Blue’.
    This situation, I expect, should really complicate the traditional approaches to Turing Tests. We’re long past the age where most people can be expected to communicate with good grammar & good diction. Turing probably envisioned (or took for granted) that the machine would be communicating in the King’s English; deviations from proper elocution would be a giveaway (as would a lack of logical cohesion). None of this applies in the 21st Century, so I agree: The traditional Turing Test is dumb; it fails to account for what passes as Internet vernacular these days.
    How about a machine that spits out accurate TXT-speak? How many teenagers would quickly agree it’s a person (or just as rapidly spot it as a fake)? Meanwhile, the older generation would think it’s a machine gone off the rails in either case.
    They already have machines that do Tech Support AI; I’ve “spoken” to (actually, “at”) them myself… But to carry on a *conversation* — that’s where you need powerful AI.
    (Me, I’d like to see a program analyze what I just wrote. :^)

  3. #3 dean
    June 9, 2014

    Greg, you aren’t the only one with doubts about this most recent story. It seems the results are not as strong as reports would make them seem.

    Professor Murray Shanahan of the Department of Computing at Imperial College London had some rather strong things to say about it.

    It is a great shame the test was reported as passed. The 30% pass boundary was never set by Turing, who just said the test would be passed if ‘the interrogator decides wrongly as often when the game is played between a computer and a human as he does when the game is played by a man and a woman’. It (the 30% figure) was his belief of what the pass rate would be in the year 2000, not the pass boundary.

    He also stated it was simply a chatbot, not a computer, and that “the rules were bent from the beginning, since the character was designed as a 13-year-old Ukrainian, sending the message of limited communication skills”.

    Joshua Tenenbaum from MIT seems to agree.

    There’s nothing in this example to be impressed by. It’s not clear that to meet that criterion you have to produce anything better than a good chatbot, plus a little luck or other incidental factors on your side.

    Several others have noted that the “judges” were hand-picked by the folks doing the test.

    So it seems that the claims of a success are based on a faulty statement of the challenge and overly excitable media, rather than anything resembling rigor.

  4. #4 Greg Laden
    June 9, 2014

    Buck: I’m calling it. No, you did not pass!

  5. #5 Greg Laden
    June 9, 2014

    Brainstorms, yes, interesting, and by implication we really need to first give any new Turing Test to a large and diverse sampling of humans just to make sure…

  6. #6 Buck Field
    June 9, 2014

    Brainstorms, your request is granted:

    Word frequency for your post = the a to in would turing as its is that and where machine i english be you with or like this conversation so it test good really just not agree of ai often for an internet how what traditional situation see situations should responses program passes past people out one older on person powerful quickly rails rapidly proper someone probably problem reality tech txt-speak unusual vernacular two toss todays too took were which your youre yourself wrote world while whose wondering those think spot start support spoken spits something sound speak talking technical then these they them thats teenagers tests someones many communications complicate cosmopolitan communicating communicate case century cohesion days deviations envisioned esp examples emailimtwitterwhatever elocution diction do dumb carry can age all already actually accurate about above account analyze applies big blue but at associated approaches apps expect expected logical long machines lingua language junior kings lack makes 21st need no none myself most matter me misunderstanding ive interconnected from generation get franca first fails fake find giveaway global id if impromptu have granted gone grammar off
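
    (A ranking like that takes only a few lines of code. A rough Python sketch of the same trick, assuming simple lowercasing and punctuation-stripping; the sample string is illustrative, not the post above:)

        import re
        from collections import Counter

        def word_frequency(text):
            # lowercase, strip punctuation, split into words
            words = re.findall(r"[a-z0-9]+(?:['-][a-z0-9]+)*", text.lower())
            # most_common() returns words sorted by descending count
            return [w for w, _ in Counter(words).most_common()]

        # illustrative sample, not the post above
        sample = "The test is the test, and the test is dumb."
        print(" ".join(word_frequency(sample)))
        # -> the test is and dumb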

  7. #7 Buck Field
    June 9, 2014

    @Greg,

    Perhaps not, but one might hope that opinions thought to be in error would somehow be met with legitimate criticism, or at least something better than empty snark. I stated the facts leading to my assessment. If you believe a condemnation of “totally dumb” is justified by criteria that are not part of even a narrow definition of the Turing Test, please explain.

    Insulting a critic who refuses to provide reasons is one thing; using insult to dismiss legitimate (if mistaken or poorly informed) objections is not what we probably want.

    This would seem especially true if the critic intends to associate themselves with good science, don’t you think?

  8. #8 Brainstorms
    Los Angeles
    June 9, 2014

    The sobering thing is, a not insignificant percentage of that diverse sampling would fail the test. And some well-crafted soft AI could potentially pass it (if the test were limited enough in time, quantity, or subject).

    I don’t think we’re quite to the point where we take a “normal” interaction with other people to be limited to a stream of text coming out of a computer console, tablet, or smartphone. Would a well-made system that could “chat” via smartphone be more convincing? Potentially less?

    And are we now heading towards the time when such tests will be performed by a life-like robot? There’s a Japanese group who’ve come close enough to be just on either side of the “Uncanny Valley” already.

    Perhaps that’s the “media” that such 21st Century tests should be based on…

  9. #9 Brainstorms
    June 9, 2014

    @Buck, Okie-dokie. Yes, I should have qualified my use of the term “analyze”. (I might have hoped for at least one of those analyzers that rates your grade level or some such. I forget what the term for those is.)
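
    (Those grade-level raters are readability formulas; Flesch-Kincaid is the common one. A crude Python sketch, with syllables approximated by counting vowel runs, which is only a rough stand-in for real syllable counting:)

        import re

        def fk_grade(text):
            # Flesch-Kincaid grade level:
            # 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            words = re.findall(r"[A-Za-z']+", text)
            # crude syllable estimate: runs of vowels, at least one per word
            syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                            for w in words)
            return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

        print(round(fk_grade("I simply stumbled upon your weblog. "
                             "Pretty section of content."), 1))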

  10. #10 anthrosciguy
    June 9, 2014

    You want a bot that no one can say is not a person? Just have it reply “F__k you!” to anything presented to it. Prove that’s not a person. Perfect score, Turing test-wise. Which points out just one of the limitations of Turing’s idea.

  11. #11 Brainstorms
    June 9, 2014
  12. #12 Peter Smith
    June 10, 2014

    Present the Turing machine with a set of student essays on a complex subject that you have given marks ranging from poor, through average, to excellent. Ask the Turing machine to analyze the arguments in the papers, showing their strengths and their weaknesses, offering suggestions for improvement, and finally grading them.

  13. #13 Greg Laden
    June 10, 2014

    Brain: I have had a couple of conversations with what I’m pretty sure were chatbots, at tech support.

    They were better than Eliza’s Doctor, but really, Doctor, a human, and a chatbot are the same for the first few questions. The chatbot simply lasts longer.
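
    (For anyone who never looked inside Doctor, the whole trick is pattern-match, reflect the pronouns, fall back on canned prompts. A minimal sketch in that style, in Python; this is not Weizenbaum’s actual script:)

        import random
        import re

        # swap person so the echo sounds like a question
        REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my"}

        RULES = [
            (r"i need (.+)", "Why do you need {0}?"),
            (r"i am (.+)", "How long have you been {0}?"),
            (r".*\bmother\b.*", "Tell me more about your mother."),
        ]

        FALLBACKS = ["Please go on.", "How does that make you feel?", "I see."]

        def reflect(phrase):
            return " ".join(REFLECT.get(w, w) for w in phrase.split())

        def respond(line):
            for pattern, template in RULES:
                m = re.match(pattern, line.lower().rstrip(".!?"))
                if m:
                    return template.format(*(reflect(g) for g in m.groups()))
            return random.choice(FALLBACKS)

        print(respond("I am worried about my cat"))
        # -> How long have you been worried about your cat?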

    Peter: Excellent idea!

  14. #14 Brainstorms
    June 10, 2014

    Present the Turing machine with a list of current geopolitical problems throughout the world. Ask the Turing machine to analyze the situations and present workable solutions.

    If it can solve any of these problems… Well, there you are — it can’t be human! :^D

  15. #15 Wesley Dodson
    June 10, 2014

    Start grilling the test subject on its ontological views.

  16. #16 rob
    June 10, 2014

    The greatest trick Skynet ever pulled was convincing the world it was a failed Turing Test.

  17. #17 Richard Chapman
    June 10, 2014

    This is just a problem that will eventually fall to hardware advances in speed and capacity. And, to a certain extent, software. It doesn’t matter if it takes the huge resources of a corporation to do it. Eventually those resources will be available to the average consumer to hold in her hand. We’ve seen this happen over and over since the advent of the digital computer back in the 1940s. I see no reason why the average person can’t have access to Expert Systems sometime in the not-too-distant future. An Expert System being something like a Turing Machine on steroids.

  18. #18 jane
    June 10, 2014

    I can’t see most humans being capable of meaningfully grading essays on a complex subject. After all, the ones who wrote the crappy essays were human, and indeed supposed to be students of that subject.

  19. #19 Brandon Tearse
    United States
    June 10, 2014

    The Turing Test is about having a piece of software be able to converse with a human in such a realistic manner that the human isn’t aware that it’s a piece of software. This does not also mean that the computer will be able to replicate other human capabilities (like grading essays or whatever else).

    That being said, there’s no reason the original Test couldn’t be administered via a webpage, by text message, with slips of paper, or even through a script provided to a human “translator”. The language used is also of little importance, since it’s simply another domain of knowledge that needs to be handled by the software.

    If you’d like your new test to be about tractability, put up some limitations to make a proof-of-concept system feasible (such as using proper English, actually attempting to have a conversation, etc.). By asking for more than the kernel, you’re asking for an engineering solution, posing an arbitrarily difficult problem that proves we can achieve some specific thing. This is a perfectly fine request, but at what point are you asking a second question? A “new” test requiring that the computer be able to generate a video stream and perform in a live video chat would certainly satisfy the original Turing Test, but isn’t that adding a lot of unnecessary extra complexity?

    If you ask me, the original test only needs revamping as far as its interface is concerned, not its content. Text based conversations are ubiquitous now so the Chinese Room construct is superfluous but aside from that I don’t see why any other changes are needed to get at the original question: ‘Can machines think?’.

  20. #20 Buck Field
    June 10, 2014

    @Brandon

    Well done – I wish there were a thumbs-up button next to your post. You rightly point out that requirements which were never part of the Turing Test are being invented and used to support criticisms on the level of “the Test is totally dumb”.

    The genius of the criterion is that it does not define or depend on the interface.

  21. #21 Brainstorms
    June 10, 2014

    @Brandon,

    Perhaps we first need to properly define what we mean by ‘a machine thinking’. Does the ability to reason equate to thinking? I doubt it; we have many examples of machines that can reason — some even reasoning “better” than (some) humans.

    How about the ability to parse input from a human, analyze it, and form a satisfactory response (even a creative one)? Again, no, because, again, we already have examples of that.

    If we allow a narrowing of field, then “expert systems” can be uncanny (especially those coupled with natural language processing). Are they “thinking”? Most would still say no…

    How about the ability to recall facts, assimilate information, and associate facts, then make inferences & judgment calls? Again, there’s software that can do that.

    I agree that the interface (text, voice, a robot, et al.) is merely a distraction from answering the question, and shouldn’t really be considered in the Turing Test…

    Can machines be creative? Perhaps; a randomization element can introduce (what looks like) that element — if coupled with some reasoning software, the end result can be consistent, thematic, and make (aesthetic) sense.

    What are the essential characteristics of human thinking that we can nail down that equate to “true” thinking? And then we can ask, “Can we make a machine that can do this?”

    I suspect that with soft AI and a sufficiently large amount of memory and processing, we can end up with something that can operate like a brain. Would it need to match a human brain?

    I don’t think so… My cat has a brain, and it definitely thinks — it reasons, it plans, it reacts intelligently, it has emotions, it solves problems, it’s at times creative — and its intellect is no match for mine. But it *does* think.

    How would we Turing Test a cat? It would have to be via an interface not traditionally envisioned for a Turing Test, but you would need to determine “real cat” from “programmed cat emulator”.

    Where do you draw the line? Can you? Or is “the line” really a nice concept, but in reality things are so blurred that you can’t be sure except that you think you’re there (or not quite yet)? A lot of this is a matter of scale, and that includes the scale of the software (and its host hardware), just as we have the scale of a cat and a human.

    Here’s an interesting question raised by all this: Suppose we develop a machine that does a very, very convincing job of being human — a large majority interacts with it and agrees that it passes the test…

    Does it have a consciousness (at that point)? And if we might say (even tentatively) “yes”, then would it be a sin to unplug it??

  22. #22 Peter Smith
    June 11, 2014

    Brandon,
    The purpose of the Turing test was to test the ability of a machine to exhibit intelligent behaviour. The method of the test proposed was question and answer, simulating a verbal interchange. The method was not the purpose.

    Can the simulation of a conversation measure thinking behaviour? That depends on the depth of the conversation, not on some totally trivial 13-year-old Ukrainian kid example.

    Read someone’s paper and you quickly form an impression of their level of knowledge, their clarity of expression, the coherence and logical structure of their ideas, their capacity to formulate novel ideas and come to novel or insightful conclusions.

    This is much more than symbol recognition and sentence parsing. It requires real understanding. And that is what we mean by intelligent behaviour, the ability to exhibit real understanding, not just information processing.

    If you really want to stick to the conversation model I would ask questions such as “Discuss the implications of Chinese foreign policy in the South China Sea for global stability, confidence and trade. How do you recommend that the US should respond to these challenges? Discuss the weaknesses and strengths of the various options available to the US. What do you think is motivating Chinese behaviour, and given this knowledge of their motivation, how do you think we should respond in the short, medium and long term? Does our policy adequately take into account the interests of other East Asian nations? Why do you think China exhibits such hostility? Trace it back to the historical and cultural roots of Chinese society. Given the economic and cultural trends in their society, can they sustain their growth while reconciling the competing pressures of individualism and communitarianism? What does the frequency of suppressed unrest and violence say about this?”

    Any good history student can reply fluently and intelligibly to these questions with real insight. An intelligent computer should be able to do no less.

  23. #23 Dan H.
    June 11, 2014

    To be honest, it is actually quite easy to come up with text which looks at first glance to be human-generated. Humans on forums often talk a load of complete garbage, and duplicating this along with spelling mistakes, bad grammar and so on is quite easy, especially when the audience is frankly stupid and easily led.

    An example of this came several years ago. At this time, Bluetooth systems were common in mobile phones, but were used solely for wireless headsets and the like; the message-sending functionality was not used at all. However, it was there and some journalists decided to exploit this in the traditional British “Silly season” that occurs when the politicians are on holiday.

    They came up with the practice of “Toothing”, which was organising covert sexual liaisons using the Bluetooth messaging system, chiefly by people on trains. To support this claim, a forum filled with a purported history of a year or so of the usual semi-literate burblings was generated. It looked quite convincing; the reading age was about what you’d expect for a retarded teenager, complete with crap spelling, abysmal grammar, text-speak abbreviations and occasionally trolling.

    It was all completely bogus.

    Most of this was a Perl script’s doing, plus minimal input from the site developers. The journalists this non-story was fed to swallowed it completely, and ran with the story. Once the story broke, the forum filled with exactly the sort of moron users the inventors had used Perl to simulate.

    So, on this score a collection of Fleet Street journalists failed the Turing Test. This isn’t particularly surprising; journalists regularly flunk intelligence tests, but it demonstrates just how easily fooled the average person is.
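
    (One plausible way to mass-produce that sort of semi-literate burbling, though not necessarily how the original Perl script did it, is a word-level Markov chain trained on a few genuine posts. A Python sketch of the idea; the tiny corpus here is invented:)

        import random
        from collections import defaultdict

        def train(corpus, order=2):
            # map each run of `order` words to the words seen right after it
            chain = defaultdict(list)
            words = corpus.split()
            for i in range(len(words) - order):
                chain[tuple(words[i:i + order])].append(words[i + order])
            return chain

        def babble(chain, order=2, length=25):
            state = random.choice(list(chain))
            out = list(state)
            for _ in range(length):
                followers = chain.get(tuple(out[-order:]))
                if not followers:
                    break
                out.append(random.choice(followers))
            return " ".join(out)

        # stand-in corpus; the real thing would be scraped posts, typos and all
        corpus = ("ne1 up 4 toothing on the 7.40 train lol "
                  "i am so up 4 it m8 c u on the train lol")
        print(babble(train(corpus)))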

  24. #24 Greg Laden
    June 11, 2014

    Or, if the machine came up with something like a “Silly Season” like the Brits have, it wins!

  25. #25 Buck Field
    June 11, 2014

    @Peter Smith

    One of your essay questions seems quite astonishing. The only study I know of on military deployments by admitted nuclear powers (which excludes Israel) indicated that China was *by far* the most restrained. That study came up during an analysis I did for a client regarding the “Hainan Island incident” in 2001.

    Certainly, they’ve never used the ultimate WMDs on civilians of a defeated nation which at the time was attempting for the 5th time to surrender, as in the case of Japan. Therefore, appealing to “China’s hostility” seems baffling.

    It’s off-topic, but I’d be very interested to learn to what you’re referring. I’m assuming there’s more recent activity I know nothing about. Emailed links, etc., appreciated.

    Thanks.

  26. #26 Peter Smith
    June 11, 2014

    Buck,
    congratulations, you have just passed the reverse Turing test and proven that you are not a computer.

    More seriously though, please don’t take my questions too seriously. They are, after all, meant for illustrative purposes.

    As for your reference to WMDs, the most apposite thing I can say is that the future is not what it used to be. These are new times, new pressures, new challenges and new opportunities. The reference to Nagasaki/Hiroshima has no bearing on this discussion.

    But yes, I do think there are severe challenges ahead and I also think that in consequence the US has made a terrible strategic mistake by alienating Russia.

  27. #27 Buck Field
    June 11, 2014

    Peter,

    Agree 100% on Russia and the nearsighted pursuit of Cold War(!) thinking in strategy. All empires have their day. Now, managing a gentle landing to more international equality and avoiding severe collapse seems a more appropriate strategic goal, IMO, but maintaining overwhelming military power, and the tactic of projecting the 1960s-era desired image for the US of “irrationality and vindictiveness”, still maintain a prominent grip in planning circles. :(

  28. #28 G
    June 12, 2014

    Brainstorms @ 21 got it, missed it, and got it again.

    Got it: by listing out various characteristics that we attribute to “thinking.”

    Missed it: by missing one of the most important ones and its implications: _emotion_ (though you mentioned it in passing w/r/t your cat). About which more below.

    Got it again: with the question about machine consciousness. Consciousness is truly the issue that’s at stake here, for reasons that go deeper than the pure science questions involved.

    Emotion:

    There are numerous methods of “computation” (utilizing and processing information to produce an observable behavior or other output) that exist in nature, on every scale from subatomic to astrophysical. Some are quantum, some classical; some are analog, some digital; some are binary, some are multi-valued. Some operate at the level of chemical interactions, some are electrical, some are kinetic.

    Key point: Brains have evolved to use selected subsets of these that are appropriate to the tasks they have to perform. Brains utilize multiple methods of computation, and no one of these is a complete substitute for any other.

    Humans have succeeded wildly at producing binary digital computers embodied in silicon hardware. Thus our attention has lately been focused very narrowly on this subset of methods and types of computation.

    Keep in mind that we have also produced successful analog computers, for example in weather analysis systems, weapons systems (e.g. analog anti-aircraft gun directors), aircraft control systems, submarine control systems, etc. etc. Many of these were developed for military applications in WW2. Some are used to this day because they are superior for their applications. None the less, when you say “computer,” most people don’t even know that analog computers exist.

    Key point: Emotions are a chemical computing system used in brains. An emotion is essentially the subjective sensation of certain neurochemicals interacting with neurons.

    By analogy, “blue” is the subjective sensation of photons of a given wavelength interacting with the eye and the visual processing areas of the brain. You can know all there is to know about photons, the eye, and the brain, and that information does not give you the basis to predict or understand “the sensation of seeing blue.” There is no substitute for the first-hand experience itself.

    So it is with emotions. Consider a pre-adolescent child who sees a film clip of adults kissing in a sexy manner. The child’s reaction is likely to be “eww, germs” or something similar, because the child does not yet have the hormones and neurochemicals to have the subjective sensation of “ooh, sexy,” and therefore can’t recognize behaviors associated with the emotions of sexuality.

    Now back to Turing:

    We build binary digital computers in silicon hardware, and expect to be able to refine them to have human-equivalent intelligence. That is a task that is doomed to failure, precisely because we have limited the architecture in a manner that has omitted other elements that are essential to human cognition.

    Binary digital computers in silicon don’t have emotions, nor can they. Emulation does not count, for a range of reasons that should be obvious.

    To get emotional processing into the system, it needs to embody chemical computation.

    Further, to get creativity into the system, a random number input is not sufficient. I’m not sure we even understand creativity well enough to get at this issue yet. But in any case it’s more than merely added randomization: at minimum there are selection and optimization functions involved, that entail emotional weighting and processing: in other words, once again, chemicals, and silicon doesn’t do neurochemistry.

    Machine consciousness:

    Agreed, there are substantial moral issues involved. Assume for a moment that we can produce hardware that embodies a sufficient number of the diverse computational methods used by human brains, as to produce actual consciousness (as with porn, we really will “know it when we see it”).

    The moral implications of that are the same as those of having a baby.

    This is what Ray Kurzweil and his pals miss entirely. They envision a world of intelligent robots at our beck and call, serving our every desire. We had something like that once, implemented in mundane biology. We called it slavery. Today we recognize it as a moral evil on the same level as genocide.

    I have compassion for Ray since he’s trapped between the fear of his own mortality, and the belief that there is not any kind of “afterlife,” but he has not made it to the next step which is to be able to contemplate _nothingness_ without fear. None the less, his dream of Singularity, in the end, is a dream in which we can “make babies” in the form of AIs, and then turn them into slaves or into vessels of our own desire for eternal life.

    We shall have to content ourselves with the finitude of our own embodied existences, and with the necessity of performing our own work for ourselves, and with the moral limitations whereby we cannot create other persons as mere means to other ends. Somewhere in the mix, we shall also have to find unconditional love and compassion, creativity and meaning, and purpose and goals. These things we shall have to do for ourselves, in the absence of a deity of our own making that could do them for us.

  29. #29 Peter Smith
    June 12, 2014

    G,
    I largely agree with all you say; especially, you are right about emotions and subjective experience. But the problem is even more difficult than that. The CPU in my computer ‘experiences’ its temperature. It ‘knows’ what its temperature is and even tells me this fact (36 deg C right now). And yet no one would claim that my computer ‘feels’ the temperature. It ‘knows’ it must slow down when it gets to about 80 deg C (by doing a simple table lookup) but does not understand the ‘experience’ of being too hot. All it does is follow a mechanical rule. I am using temperature as an analogue for the chemistry of the brain, to illustrate that chemistry still does not have explanatory power.
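
    (The whole mechanical rule fits in a few lines. A Python sketch of the kind of lookup meant here, with made-up thresholds:)

        # a CPU's entire "understanding" of heat: a bare table lookup
        THROTTLE_TABLE = [      # (threshold in deg C, fraction of full clock)
            (0, 1.0),
            (80, 0.5),          # "too hot": drop to half speed
            (100, 0.0),         # emergency shutdown
        ]

        def clock_fraction(temp_c):
            # take the policy for the highest threshold reached; nothing is "felt"
            fraction = 1.0
            for threshold, f in THROTTLE_TABLE:
                if temp_c >= threshold:
                    fraction = f
            return fraction

        print(clock_fraction(36))   # 1.0
        print(clock_fraction(85))   # 0.5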

    The problem is a very basic one. We have not the slightest way of deriving semantics from syntax. My CPU stores the temperature (which it undeniably ‘experiences’) in a register as a binary number. But that number has no meaning at all. This is the well-known semantics-from-syntax problem. The program in my computer can manipulate the number, but the program itself does not ‘experience’ the temperature.

    Your mention of existential dread is interesting, since it raises the question of how we could program the experience of existential dread. While you feel the singularity people are motivated by existential dread, it is equally possible they are motivated by intense curiosity. Now try to think of how we would create the experience of intense curiosity in a program. I mention existential dread and curiosity because they are emotions, but they are probably not chemical. What I am trying to say is that we can also experience emotions at a higher, cognitive level which is not chemical.

    In short, from syntax we cannot derive semantics, emotion, awareness or intent. This seems to be a very fundamental problem. Some people do lots of hand waving about complexity and emergence, using varieties of ‘it must be’ arguments. Look closely and all one finds is sophisticated obfuscation.

  30. #30 G
    June 12, 2014

    Peter @ 29: Very interesting, you raise good points.

    Agreed, computers respond to temperature via programmed behaviors, and those programs do not constitute anything like awareness of a subjective sensation.

    I would say that chemistry does have explanatory power _in so far as_ it provides a hypothetical mechanism for the sensations of emotions that in turn are inputs to consciousness. That still leaves open Chalmers’ “hard problem” of what consciousness _actually is_, about which there is not yet a viable scientific consensus.

    Personally I think Hameroff is on the right track re. quantum computation in the tubulin proteins in neurons, which could explain how neurons can function as processors in neural networks. Though, whether that provides a route to an explanation of consciousness is purely speculative at this point.

    I don’t know that I’d use a phrase as “dramatic” as “existential dread” as a necessary descriptor for the awareness of the finitude of embodied existence. Though, I’ve been contemplating these issues in depth for decades, and familiarity has taken the scary edge off them.

    What I think you’re getting at is, one can contemplate the issue of finitude of existence without being overwhelmed with aversive emotions, and that one can also engage the issue with pure curiosity. YES, I definitely agree with that, and my experiences are convergent with that.

    I didn’t mean to imply that all the Singularitarians are motivated by fear; as a group, their motivations are probably as diverse as those of any other group with a set of beliefs about these issues (and here I reach for my Gaussian distribution, as I’m an unabashed frequentist ;-). But it’s been widely reported that Kurzweil himself is terrified of his prospective end (for which reason I have compassion for the guy despite philosophical opposition), and that some of his high-profile followers in Silicon Valley are similarly motivated. The ones who are motivated by curiosity, philosophical interests, and the like haven’t gotten that kind of publicity yet.

    However, I’m always interested in pursuing hypotheses that differ from or contradict the ones I presently hold or prefer, so I’m going to keep my eyes open for any possible hint of a wide range of motives among Singularitarians, and update my outlook accordingly.

    Very interesting point about “experience emotions at a higher cognitive level which is not chemical.” I’m going to have to test that one experientially and see if it checks out. I would agree that people can have experiences in which content that is ordinarily emotionally-charged is instead experienced in a manner that is more dispassionate. That’s one of the benefits of meditation.

    Though, that’s different to the idea that emotions themselves are not chemical. So far I have no basis to agree with the latter assertion. When someone is able to engage a given item of content in a dispassionate meditative state, I would say that they are reducing the degree to which the content provokes the chemical responses of strong emotions. But even the meditative state itself has an emotional component if one looks closely. The “feeling of calm detachment” is after all a feeling, as is the “feeling of curious objectivity.”

    It may be possible to get at that more closely via improved technology for monitoring brain activity. But monitoring endogenous neurochemicals would seem to be a very difficult task unless there’s some way to highlight the molecules short of radioactively tagging them. Interesting problem, and I’ll keep my eyes open for any findings that may bear on it.

    Agreed that syntax doesn’t translate to other levels of function. Also agreed about “emergence,” which seems to me as much a black box as “immortal souls.” I’m inclined to believe that some of this hand-waving is the direct and indirect result of seeking to eliminate anything that might refer back (directly or otherwise) to deities and immortal souls, and then sweeping with too broad a broom, and then trying to put back some necessary pieces that got swept up by accident.

    But if someone truly believes that deities and immortal souls don’t exist, they shouldn’t need to keep swatting them away, much less going on a sweeping expedition with a broad broom. Just remove the unwanted entities from the explanatory mechanism, and then see what still works. The same reasoning can of course be used in the other direction by those who do believe in deities and immortal souls. But in any case, my preference is to go about these things empirically and see where the data take us. Right now they take us into a bit of a cloud of unknowing, but if there’s anything in which a rational person can have faith, it’s faith in methodology to disclose new facts.
