SyNAPSE: In Pursuit of the Cognitive Platform

"What we're seeking is not just one algorithm or one cool new trick - we're seeking a platform technology. In other words, we're not seeking the entirety of a collection of point solutions, what we're seeking is a platform technology on which we can build a wide variety of solutions."

Dharmendra Modha, manager of cognitive computing at IBM Research Almaden, discusses the Systems of Neuromorphic Adaptive Plastic Scalable Electronics ("SyNAPSE") project. Mad scientist eyes are also on display:


Video after the jump:

Hat tip to Dave Jilk for pointing this out. See also his eCortex work.





Silly!!!
Platform of WHAT?
In any field premature optimization is worthless.
And Dharmendra Modha has been exposed before.
Actually, I think both Modha and Markram are wrong. I have been surveying AI for more than 20 years and it is the same story again and again:
The true incentive of these projects is not to "solve" anything but to GET FUNDING; in that respect they are all quite successful, but they bring only very marginal "progress" toward the supposed goal of understanding and implementing intelligence.

P.S. What do you think the "cute" jingles interspersed in the video are for?
:-D

I retired from academic life over 20 years ago, having worked on the problem of people and computers working together. I recently decided to try to catch up with what has happened since then, and as I had been thinking that neural nets might be relevant to what I had been doing before retirement, I watched Dharmendra Modha's video.

Dharmendra (with a well funded team) is taking a bottom-up approach to how humans think, based on the latest brain research. When I started over 40 years ago I was a lone, unfunded scientist crying in the wilderness that the stored program computer model was incompatible with how people think, and that there were simpler possible alternatives. The relevance to neural nets is that I was looking at the opposite end of the problem - the handling by humans of large quantities of only partially understood information - and at how a suitable information processing architecture might provide a human-friendly "white box" approach, in stark contrast to the human-incompatible "black box" approach of the stored program.

What I would like to know is: has anyone published work on how neural nets can be related to non-trivial information processing tasks where human intelligence is important because of the presence of known unknowns, etc., which cannot be defined a priori? Virtually all the research I have seen takes a bottom-up approach, working upwards, often using a simple closed task as the starting point.

It may help if I give a brief summary of what I did. (I can provide more details if you are interested.)

The idea started with an unintentionally "way outside the box" suggestion that a vast sales accounting system (250,000 customers, 5,000 product lines) in the 1960s could be run with sales staff and the computer working symbiotically, using a simple TWO-WAY language. The idea was that sales staff provided the contract details and the computer used the same language to explain what it was doing and why. To a very large extent the high priests of the stored program computer (programmers and systems analysts) were bypassed as unnecessary. The idea moved to a computer manufacturer, where it was generalised to handle a wide range of tasks involving human involvement in a dynamically open-ended situation. It was supported by the leading UK computer pioneers John Pinkerton and David Caminer, which resulted in the generous funding of the first prototype computer simulation of the idea, while patents were taken out.

The key difference between what was proposed and the established way of supporting applications was the "elimination" of application programs. The "intelligence" of the system actually lay in the user language and the way the items of information were interconnected, which is why neural nets appear to be relevant. There was a scratch area called "The Facts" (= short term memory) which contained the currently "active" items (rarely more than a human could usefully manage), and this was overseen by a "decision making unit" which simply scanned the stored information, comparing items or adding new "active" items to the Facts. While anyone indoctrinated with the idea of conventional programming will find it hard to believe, the answer to a task often just seems to drop into the Facts.

The idea was developed over the years and tested on a wider range of applications, from artificial intelligence related tasks to representing incompletely known historical information, and the experimental software was used for computer-aided teaching with classes of up to 125 students. The important thing was to keep the whole simple and uncluttered, so that normal people could feel at home with it. To demonstrate how undemanding the approach was in conventional computer terms, a portable version was produced for a BBC microcomputer in 1985, which was about 5 orders of magnitude smaller than the typical university computer used in, for example, artificial intelligence research. Three papers were published in the top UK Computer Journal, the most recent in 1990. (Many other papers describing particular applications, but deliberately playing down the underlying architecture, were also published.)
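(For concreteness, the toy sketch below shows roughly what I mean by the Facts and the decision making unit: a small working set plus a loop that keeps scanning the stored items and adding whatever they imply. It is only an illustration invented for this comment - the rule format and every name in it are made up, and the real system was considerably richer.)

    # Purely illustrative: a toy version of "The Facts" (the set of currently
    # active items) and the "decision making unit" (a loop that scans the
    # stored information and adds anything it implies). All names and the
    # rule format are invented for this sketch.

    KNOWLEDGE = [
        # (items that must already be active, item that then becomes active)
        ({"customer known", "product listed"}, "contract acceptable"),
        ({"contract acceptable", "quantity over limit"}, "needs manager approval"),
    ]

    def decision_making_unit(facts, knowledge, limit=20):
        """Scan the stored information against the active Facts, adding any
        newly implied items, until nothing new appears or the Facts grow
        beyond a humanly useful size."""
        facts = set(facts)
        changed = True
        while changed and len(facts) < limit:
            changed = False
            for required, implied in knowledge:
                if required <= facts and implied not in facts:
                    facts.add(implied)  # the answer "drops into" the Facts
                    changed = True
        return facts

    print(decision_making_unit(
        {"customer known", "product listed", "quantity over limit"}, KNOWLEDGE))
    # prints a set that also contains 'contract acceptable' and 'needs manager approval'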

The problem was that the underlying philosophy, though very simple, was counter-intuitive. It was based on the idea that man and computer have to work together because the world is unpredictable, and that for non-trivial tasks it is not practical to "fully specify" them in advance. In effect what I was trying to do was to build an "intelligent" information mirror which would not only reflect the user's "thought image" of a problem, but would actually use the user's own logic, implicit in the "information image", to make further deductions. OK, I would be the first person to agree that my model was a long way from being perfect, but at least it was a start.

The initially funded research was dropped because of a company merger, and the unconventional (and counter-intuitive) nature of the underlying philosophy made it impossible to get research grants from an establishment which took the stored program computer philosophy as the best possible answer because it made so much money. (Anyone trying to do blue-sky research in an area dominated by a rapidly expanding and very "successful" technology is likely to be faced with the same "go away and stop wasting our time" reaction.) The combination of sheer exhaustion fighting the "stored program computer" establishment, a family suicide, and a bullying professor who complained about the lack of grants led to me throwing in the towel in order to retain my health.

By HertfordshireChris (not verified) on 28 Jul 2010 #permalink

I didn't see Kevenbuangga's comments until after I had posted mine, and I am most interested in his views on projects and funding.

My upbringing and later studying for a Ph.D. taught me quite a lot about science and the scientific method, about the fact that there was a lot of poor quality science (not necessarily wrong) around, and that the establishment view was not always right. For myself, I decided that the most important things were to be objective, to look for the most appropriate question to ask (rather than rushing forward to develop the first idea that came into my head), and to make a contribution to society, either directly or by providing greater understanding. All I was looking for was personal satisfaction in a scientific investigation well done. In my book the rat race to grab funds for scientific research is inimical to being honest and objective and can lead to good ideas being suppressed.

If I had been an aggressive self-publicist with a firm proposal for a human-friendly white box information processing system, I would almost certainly have got funds, and by now my research might be well known.

The snag is that if I had been such an individual there is no way I would have ever considered such a blue-sky counter-intuitive approach, as I would be looking for something that would give me a quick return, either financially or as a career advance.

By HertfordshireChris (not verified) on 29 Jul 2010 #permalink

HertfordshireChris:
The snag is that if I had been such an individual there is no way I would have ever considered such a blue-sky counter-intuitive approach, as I would be looking for something that would give me a quick return, either financially or as a career advance.

Yes, yes, there are many such conundrums around; this is where the (positive) black swans come from, when an improbable feat makes it through by some kind of tunneling effect.

I am most interested in his views on projects and funding.

Unfortunately I have zero views on projects and funding; I am only watching the field.
This came out of my frustration (as a software engineer) at being promised, more than 40 years ago, that "computers will manage by themselves a few years from now".
That is strangely akin to the goals of the personal experience you report, so I am not as optimistic as you about this "white box / neural nets" idea.
You might be interested in Monica Anderson's Artificial Intuition.