Chatbots are boring. They aren't AI.

You know I'm a bit sour on the whole artificial intelligence thing. It's not that I think natural intelligences are anything more than natural constructions, or that I think building a machine that thinks is impossible -- it's that most of the stories from AI researchers sound like jokes. Jon Ronson takes a tour of the state of the art in chatbots, which is entertaining and revealing.

Chatbots are kind of the lowest of the low, the over-hyped fruit decaying at the base of the tree. They aren't even particularly interesting. What you've got is basically a program that tries to parse spoken language, and then picks lines from a script that sort of correspond to whatever the interlocutor is talking about. There is no inner dialog in the machine, no 'thinking', just regurgitations of scripted output in response to the provocation of language input.

It's most obvious when these chatbots hit the wall of something they can't interpret -- all of a sudden you get a flurry of excuses. An abrupt change of subject, 'I'm just a 15 year old boy', 'sorry, I missed that, I was daydreaming' -- all lies, all revealing more about the literary skills of the programmer (usually pretty meager) than about any machine trying to model the world around it.
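To see just how little is going on, here's a minimal sketch of the pattern in Python. Everything in it -- the keyword table, the canned deflections -- is my own illustration of the general ELIZA-style scheme, not any particular bot's code:

```python
import random

# A keyword-to-canned-lines script, in the spirit of ELIZA-style bots.
# All keywords and responses here are made up for illustration.
SCRIPT = {
    "mother": ["Tell me more about your family."],
    "sad": ["Why do you feel sad?", "How long have you felt that way?"],
    "computer": ["Do machines worry you?"],
}

# Stock excuses for input the bot can't match at all.
DEFLECTIONS = [
    "Sorry, I missed that, I was daydreaming.",
    "I'm just a 15 year old boy.",
    "Anyway, what's your favorite movie?",
]

def reply(utterance):
    """Return a scripted line keyed on a keyword, or deflect.

    There is no inner dialog here, no model of the world: just
    string matching and regurgitation of pre-written output.
    """
    words = utterance.lower().split()
    for keyword, lines in SCRIPT.items():
        if keyword in words:
            return random.choice(lines)
    return random.choice(DEFLECTIONS)  # hit the wall: make an excuse

print(reply("i had a fight with my mother"))  # scripted hit
print(reply("what do you make of spinoza"))   # flurry of excuses
```

Real chatbots dress this up with fancier parsing and bigger scripts, but the shape is the same: input triggers lookup, lookup triggers regurgitation.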

Which would be OK if the investigators recognized that they were just spawning more bastard children of Eliza, but no…some of their rationalizations are delusional.

David Hanson is a believer in the tipping-point theory of robot consciousness. Right now, he says, Zeno is "still a long way from human-level intellect, like one to two decades away, at a crude guess. He learns in ways crudely analogous to a child. He maps new facts into a dense network of associations and then treats these as theories that are strengthened or weakened by experience." Hanson's plan, he says, is to keep piling more and more information into Zeno until, hopefully, "he may awaken—gaining autonomous, creative, self-reinventing consciousness. At this point, the intelligence will light 'on fire.' He may start to evolve spontaneously and unpredictably, producing surprising results, totally self-determined.... We keep tinkering in the quest for the right software formula to light that fire."

Aargh, no. Programming in associations is not how consciousness is going to arise. What you need to work on is a general mechanism for making associations and rules. The model has to be something like a baby. Have you noticed that babies do not immediately start parroting their parents' speech and reciting grammatically correct sentences? They flail about, they're surprised when they bump some object and it moves, they notice that suckling makes their tummy full, and they begin to construct mental models about how the world works. I'll be impressed when an AI is given no pre-programmed knowledge of language at all, and begins with baby-talk babbling and progresses over months or years to construct its own competence in comprehending speech.

Then maybe I'll believe this speculation about an emergent consciousness. Minds aren't going to be produced by a sufficiently large info dump, but by developing general heuristics for interpreting complex information.
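For what it's worth, the "associations strengthened or weakened by experience" mechanism Hanson describes is trivially easy to caricature -- which is rather the point. Here's a toy sketch (entirely my own construction, not Hanson's actual software) of co-occurring events being strengthened while everything else decays:

```python
from collections import defaultdict
from itertools import combinations

# A toy of the "dense network of associations" idea: pairs of events
# that co-occur in an experience get strengthened; all existing
# associations decay slightly. Purely illustrative -- and, per the
# post's argument, nothing here resembles a world model or a mind.
strength = defaultdict(float)

def experience(events, decay=0.99, boost=1.0):
    """Weaken every stored association a little, then strengthen
    the associations among events that just co-occurred."""
    for pair in strength:
        strength[pair] *= decay
    for pair in combinations(sorted(set(events)), 2):
        strength[pair] += boost

experience(["suckle", "full-tummy"])
experience(["suckle", "full-tummy"])
experience(["bump", "object-moves"])

print(max(strength, key=strength.get))  # ('full-tummy', 'suckle')
```

Stacking ever more entries into a table like this never yields a general mechanism for forming new kinds of associations; it just makes the table bigger.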


How the fuck do babies do it? Will a machine have to be aware in order to gain intelligence? It's going to need a brain.

By Wesley Dodson (not verified) on 11 Sep 2014 #permalink

IMHO a big problem with creating a human sort of AI has to do with agency, intention, and motivation. I talk to people because (worst case) they have something I want, or (best case) I'm curious and/or want company and want to get to know them.

The problem I see is not the humans thinking the AI is boring but rather that the AI will find humans boring. Exactly what would an AI want from a human? If the AI doesn't feel fear, desire, greed, it may not feel curious about humans or see any advantage to dealing with us.

Wasn't there an SF story about this? The story was about scientists struggling to create a sentient robot. They succeed, but the robot finds humans entirely predictable and boring, so it launches itself out into space, figuring that solitude and space had to be better than hanging with humans.

"Art" -- an AI program, as PMZ's article made clear, cannot experience boredom---indeed, idiots who trumpet the marvels of robotics often insist on that factor. "Robots! They never tire, they never get bored!" So, maybe you're a "bot"--you're thinking like one.

By proximity1 (not verified) on 12 Sep 2014 #permalink

@Proximity - please think before typing; robots are not AI computers. If (a big if) a computer were to become self-aware, I would think there is a very good chance it could become bored without sufficient stimulation, which, given the potential to think faster without needing rest or sleep, could mean a lot of stimulation.
I'm fairly dubious about PZ's post; I don't think the amount of starting data matters as much as the brain development. Babies do start with some hard-wired rules (look at faces, listen to sounds from faces, etc.). I suspect that if evolution had provided a mechanism to pass on language, babies would be able to communicate at an understandable level quite quickly. I do agree that trying to program responses to pass the Turing test achieves nothing interesting.

I'm a big fan of Steve Grand's approach to AI, which is essentially "bottom up" instead of "top down". In other words, his creations look dumb, but they get there on their own!

His current project, "Grandroids", does indeed show creatures flailing around and bumping into things.

@ 5: "@Proximity – please think before typing, robots are not AI computers. If (a big if) a computer was to become self aware I would think there is a very good chance it could become bored without sufficient stimulation, which given the potential for thinking faster without a need for rest or sleep could mean a lot of stimulation."

Though I didn't claim that robots and AI computers were one and the same thing, the logic of my point is the same whether they are or aren't. Robots require a "program" (in the broadest sense of that term) of some sort in order to operate. The program might or might not be part of some AI design--whether it is or not is really beside my point. AI functionality, like non-AI programming, has as a common feature--trumpeted by techno-optimists--that "these programs / devices don't tire, don't get bored."

So, really, I have thought about it--perhaps more than you have.

RE "If (a big if) a computer was to become self aware I would think there is a very good chance it could become bored without sufficient stimulation, which given the potential for thinking faster without a need for rest or sleep could mean a lot of stimulation."

But what you refer to as "need" is merely your anthropomorphic error--attributing your humanistic notion of "need" to a machine. First, your big "If" isn't just big, it's a self-contradiction. Unless a machine is independently evolving from inception, all its subsequent evolutions are premised on prior interactions with an environment in which the "machine's" "reactions" are necessarily preordained. If they weren't--if the reactions' range and scope were really open--the machine would no longer qualify for that term.

Machines don't "need" a power source (electrical or other), don't "need" maintenance or tuning, and don't "need" repair or replacement parts if a component breaks or fails. In the absence of any of these, the machine simply sits there, dumb, insensible and idle, "needing" nothing. It's only the machine operator who views the machine as in "need" of something. Thus, your terminology betrays your erroneous presuppositions about the matter.

PZ's remark was entirely apt: He'll ... "be impressed when an AI is given no pre-programmed knowledge of language at all, and begins with baby-talk babbling and progresses over months or years to construct its own competence in comprehending speech."

Until then, we are concerned with dumb collections of wires and condensers, resistors, transistors, and capacitors, which, at most, send and receive electrical impulses that are notionally "ones" and "zeros". That is not, and can never be, "thinking."

By proximity1 (not verified) on 12 Sep 2014 #permalink

Interestingly enough, as I thought about and composed my response @ 7, just a few feet away, lying in a stroller, is a living, breathing, very animated example of a real intelligence in the works. A nine-month-old boy is cooing, babbling, ooohing and ahhhing, screeching, practicing "bub-bub-bub, bahhhh, eeeeeeee, ooohh, owwww" sounds. He's vocally expressing indications of his present physical comfort or discomfort, pushing out little non-urgent groans and sighs and ummphs. His upward gaze is continually shifting, his ears reacting to the ambient noises around him; he turns his head toward incoming sounds. When I look in on him, he immediately smiles at me and wiggles his arms and legs.

Also, in my immediate vicinity are a number of computer terminals. They sit there, doing nothing, aware of nothing. Throughout the life and growth of that nine-month-old boy, those machines, if left alone and unattended, would continue to do just that--nothing--until the infant is dead and gone and the building collapses around them.

By proximity1 (not verified) on 12 Sep 2014 #permalink

Douglas Hofstadter voices similar criticism in the last part of "Fluid Concepts and Creative Analogies".

Additionally, I think attempts to develop a human-like AI with software alone are doomed. "I" am not just a brain. "I" wouldn't be myself if I had lived with a different body, much less if I had lived my whole life as a brain in a jar--which is what trying to create human consciousness purely in software amounts to, it seems to me.

However, if we strip away the AI hype, good domain-specific chatbot-like software can have cool applications. I had to call Apple support a couple of years ago, started by asking my question in spoken English, and while the details are gone from my memory now, I did get a resolution to my problem without interacting with another person. Unless Apple was using Amazon's Mechanical Turk service behind the scenes, of course ... :)

So... what you're saying is that there is a lonely attic somewhere, and there's a table, and on the table there's a shape under a sheet? And lightning flashes light the room with an eerie glow? And the AI Programmer... sorry, the Creator is rubbing his hands together and cackling with glee?

Dude, that's how it's done.

I think that the strained effort to imagine the essential difference between natural intelligence and something referred to as "AI" betrays our stubborn failure to recognize just how deep and connected our inherent biological traits are. Though we cannot locate all the elements throughout it, the inheritance stream among living organisms (with, perhaps, a number of broken starts over untold eons) is seamless, and our intelligence's debts go back to the simplest life-forms from which we eventually arose. That means that "intelligence" evolved with living organisms and did not just suddenly spring up in or among some forms and skip others. The phylogenetic chain runs unbroken from single-cell life to mammals; so, in a real sense, not merely a figurative one, our "intelligence" is the product of, and developed from, the simplest life forms, inherited first as some simple stimulus-response mechanisms which themselves gradually evolved and were passed on.

The original premise for that was and remains an open-ended, naturally-occurring evolvability in organic cellular matter which no one created out of whole cloth in a step-process. It was a potential characteristic of cell-matter. But there is no reason so far to suppose that a seemingly intelligent collection of wires and silicon, with all the construction that goes into them, could, when infused with electrical streams for calculation of charge-impulses, one day spontaneously gain "awareness."

Phylogeny and ontogeny present a single life-stream which can be looked at from either "end" of the spectrum--"populations" and "individuals" are different locations along a single extensive line which incorporates all life forms--except, theoretically, those which sprang from a separate life-origin episode, and without a distinctive sort of DNA for it, we can't identify those. Though Carl Woese's and George E. Fox's archaea may be a case in point of such distinct living things.

See, e.g.: Jean-Jacques Kupiec et al., L'ontophylogenèse, Éditions Quae (2012); "Stochastic Gene Expression" ("SGE"); and Kupiec and Sonigo, Ni Dieu ni gène : pour une autre théorie de l'hérédité, Paris: Seuil, coll. Science ouverte (2000).

By proximity1 (not verified) on 13 Sep 2014 #permalink