Explanation of How the Emotiv Headset Works

So I got this comment on my post about the "Video Game Mind Control," aka the EEG headset made by Emotiv. Thought I would post it to get a bit more visibility.

Hi guys,
I work at Emotiv and stumbled upon this site by chance.
As you guys seem to have the most knowledge in this area compared to other forums, I can tell you the following:

1. It is an EEG system (new technology that requires no gel).

2. It's developed based on the latest breakthroughs in neuroscience. Emotiv has developed a highly sophisticated algorithm (patent pending) to "unfold the cortex" (which is individualized across people, like fingerprints) and map a thought signal to its source.

3. Current and past EEG systems have not gone very far because they attached well-known brain states (relaxed, attentive, meditative, etc.) to actions in their applications. They do not let you think about the actions the way you naturally would; in other words, you train your brain to learn how to perform an action. At Emotiv, we have built an extensive machine learning system that allows US to train the machine how we think. Hence, we are able to differentiate between different thoughts (neat, huh?).

4. Because of that, training takes as little as 7 seconds.

5. We are working on applications to help disabled people, but if we are successful with our product launch, within two years disabled people will be able to get the system for a few hundred dollars instead of tens of thousands of dollars. Pervasiveness is the key to the success of BCI. We hope to have your support on this. And to disabled people around the world: PLEASE bear with us for another couple of years; we promise to get you the best systems very cheaply.

6. For the research community, we are announcing a research kit for universities to use for non-commercial activity in the near future. It would take as little as 20 seconds to set up a patient or subject for research (again, no gel). And the good news is, it's likely to be free for meaningful research projects. We really want to help this field grow.

7. To see how a game can be controlled by the Emotiv headset, check out a video someone posted on YouTube.
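
As an illustration of the kind of pipeline points 2-4 describe, here is a minimal sketch of EEG thought classification: band-power features per channel, plus a tiny nearest-centroid classifier trained on a few seconds of labeled data. This is purely illustrative, not Emotiv's actual (proprietary) algorithm; the band choices, channel count, and classifier are all assumptions.

```python
import numpy as np

def band_power(epoch, fs, lo, hi):
    """Mean spectral power of one EEG channel in the [lo, hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def features(epochs, fs):
    """Alpha (8-12 Hz) and beta (13-30 Hz) power per channel.
    epochs: array of shape (n_epochs, n_channels, n_samples)."""
    feats = []
    for ep in epochs:
        f = [band_power(ch, fs, 8, 12) for ch in ep] + \
            [band_power(ch, fs, 13, 30) for ch in ep]
        feats.append(f)
    return np.array(feats)

class NearestCentroid:
    """Tiny classifier: one mean feature vector ("template") per thought."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = np.array([X[np.array(y) == c].mean(axis=0)
                                   for c in self.labels])
        return self

    def predict(self, X):
        # Squared distance from each sample to each centroid.
        d = ((X[:, None, :] - self.centroids[None, :, :]) ** 2).sum(axis=2)
        return [self.labels[i] for i in d.argmin(axis=1)]

# Synthetic demo: "relax" epochs dominated by 10 Hz alpha,
# "push" epochs dominated by 20 Hz beta (2 channels, 1 s at 128 Hz).
rng = np.random.default_rng(0)
fs, t = 128, np.arange(128) / 128.0

def make(freq, n):
    return np.array([[np.sin(2 * np.pi * freq * t)
                      + 0.1 * rng.standard_normal(128)
                      for _ in range(2)] for _ in range(n)])

X = features(np.concatenate([make(10, 5), make(20, 5)]), fs)
y = ["relax"] * 5 + ["push"] * 5
clf = NearestCentroid().fit(X, y)
print(clf.predict(features(make(20, 1), fs)))  # → ['push']
```

On clean synthetic data a handful of one-second epochs per "thought" is enough to fit the templates, which at least makes the claimed few-second training time plausible in principle; real EEG is far noisier and needs artifact rejection and better features.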

And finally, now that we've told you quite a bit of info, we'll have to hire you ;)

Thank you for your support, really appreciated!

I'm relieved s/he didn't end it with the usual "I'm going to have to ______ you" line.

As before, I'm not up on current literature, but I'd be interested in hearing a quick yet reasonably detailed summary of how they do the machine training.

Thanks Shelley!

Orb- Good work on the project so far, I definitely think that moving BCIs into the mainstream is the best way to further the technology.

You'll have to forgive my skepticism, but I'm not convinced that you can get detailed, low-latency, high-fidelity feature extraction from the device. The YouTube video is pretty telling in that it doesn't really show any state choices, just activity detection with high latency.

That being said, I think that BCIs are going to be among the most important developments of the 21st century. I'm extremely excited to see this kind of system being readied for market. With a great toolset, you'll have an open source following that will make the Roomba run to its charger in shame.

So, two questions: What can you tell us about the thought process behind the electrode placement? And how many different states can the system reliably differentiate between in an unconstrained system? (The YouTube example shows one constraint at a time; two-dimensional cursor movement requires two.)

The #2 point about "a highly sophisticated algorithm to unfold the cortex" sounds like a phrase from a PR department, not a scientist. EEG can barely be used to spatially localize activity (especially with only 18 or 19 surface electrodes). Dipole localization methods can help, but they are far from anything resembling unfolding the cortex. The reason this is impossible is that it's a classic mathematical inverse problem: if you collect data from only 18 locations, you cannot uniquely identify the sources from among combinations of hundreds of candidate locations.
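
The underdetermination is easy to demonstrate numerically. In this toy sketch (random numbers, not any real head model), a "lead field" maps 200 hypothetical cortical sources to 18 sensors; any component in its null space can be added to the sources without changing a single sensor reading:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 18, 200

# Toy "lead field": how strongly each source projects onto each sensor.
L = rng.standard_normal((n_sensors, n_sources))

# One true source configuration and the readings it produces.
s_true = rng.standard_normal(n_sources)
readings = L @ s_true

# Any vector in the null space of L is invisible to the sensors.
_, _, Vt = np.linalg.svd(L)
null_vec = Vt[-1]                 # lies in the (200 - 18)-dim null space
s_alt = s_true + 10.0 * null_vec  # a very different source pattern...

print(np.allclose(L @ s_alt, readings))      # → True  (same measurements)
print(np.linalg.norm(s_alt - s_true) > 1.0)  # → True  (different sources)
```

With 18 measurements and 200 unknowns, a 182-dimensional family of source patterns fits the data exactly, which is precisely why extra assumptions are unavoidable.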

What they are probably doing is making assumptions about the locations of the neural sources to make educated guesses about where the activity originates. Machine learning is then used to adapt the algorithm to how the electrodes fit on each person's head. That might be more scientifically accurate, but it's nowhere near as good PR.
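
One standard way to make such assumptions explicit (purely illustrative here; there is no indication this is what Emotiv actually does) is the minimum-norm estimate: among all source patterns that reproduce the sensor readings, pick the one with the least total energy, computed via the pseudoinverse of the toy lead field from before:

```python
import numpy as np

rng = np.random.default_rng(2)
L = rng.standard_normal((18, 200))   # toy lead field, as before
s_true = rng.standard_normal(200)
readings = L @ s_true

# Minimum-norm estimate: the smallest-energy source pattern
# that reproduces the sensor readings exactly.
s_mne = np.linalg.pinv(L) @ readings

print(np.allclose(L @ s_mne, readings))  # → True   (fits the data)
print(np.allclose(s_mne, s_true))        # → False  (not the true sources)
```

The estimate fits the measurements perfectly yet still differs from the true sources: the "least energy" prior is exactly the kind of educated guess that closes the gap the inverse problem leaves open.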

Beyond that, their machine learning methods do sound very good and the product does have potential. As for others getting there first: in this case, first doesn't matter. Each problem will use very different algorithms, and only time will tell which is best. Competition here is a very good thing.

Thanks for the kind words! The demo that is floating around YouTube was created for GDC last year, when we showcased it behind closed doors. The camera is pre-scripted so the person doesn't have to touch the keyboard/joystick (better for getting the idea across in a demo).
If you were at GDC and attended one of our private demonstrations, you would have seen that we showed 12 different actions and, at any one time, differentiated between 4. Hope that answers your question. Also, what we showed is what we have already shipped to developers, not what we have in the lab. This thing is coming faster than people expect, but hey, isn't that true of most disruptive technology?

Yes, we have a brief from the PR department regarding what we can and cannot say ;) so I'm really sorry I can't tell you in greater detail. Regarding your point that each problem will use different algorithms, I would like to slightly correct it: each suite of detections uses a different approach, and detections within the same suite use roughly the same algorithm.

EEG is a technique that has been around for more than half a century, and I am sure there have been many attempts to bring it to the consumer market. The key is to be "consumer ready." The way we see it, BCI is the next wave in technology, and it cannot be a significant wave unless it's pervasive.

Hope this helps and thanks again for the support!