Casual Fridays: We can identify "mystery faces" just 6 pixels wide!

How good are humans at identifying faces? Amazingly good, even with only a few sparse pixels' worth of information. Inspired by the research of Pawan Sinha, who had found that people can recognize faces using just 12 × 14 pixels' worth of information, we wondered if people can distinguish between faces and non-faces with even less information. So, last Friday, we asked CogDaily readers to try to identify faces as small as one-quarter the size of those used in Sinha's study: just 6 by 7 pixels. Readers rated 8 different photos in four different sizes ranging from 20 pixels wide to just 6 pixels. How'd they do?
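For readers curious how such tiny versions are produced: the usual approach is block-averaging, where each output pixel is the mean of a rectangle of source pixels. Here's a minimal sketch in Python (the exact resampling method used for the study images isn't stated, so treat this as an illustration, not a description of our procedure):

```python
# Block-average downsampling, using only the standard library.
# Illustrates how a photo can be reduced to a handful of pixels;
# the actual method used for the study images may differ.
def downsample(img, out_h, out_w):
    """Shrink a grayscale image (a list of rows of 0-255 values)
    to out_h x out_w by averaging each source block."""
    in_h, in_w = len(img), len(img[0])
    out = []
    for r in range(out_h):
        r0, r1 = r * in_h // out_h, (r + 1) * in_h // out_h
        row = []
        for c in range(out_w):
            c0, c1 = c * in_w // out_w, (c + 1) * in_w // out_w
            block = [img[y][x] for y in range(r0, r1) for x in range(c0, c1)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out
```

Fed a 12 × 14 image, `downsample(img, 6, 7)` halves each dimension, producing the 6 × 7 grids used here.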


Very well, thank you very much! Average accuracy was almost 90 percent, and only one photo, at the lowest resolution, troubled readers so much that they weren't significantly better than 50 percent accurate -- the level dictated by chance. Here's the picture:


Can you tell whether this is a face? I'll give the answer at the end of the post.

We showed readers one of two different versions of eight different photos -- four faces and four non-faces. We tried to find non-face pictures that were similar in complexity to the face pictures (I'll show you all the originals at the end so you can decide for yourself). The first task was simply to say whether or not the picture was a face. Then readers were asked to identify the photos in an open-ended response. Even in these open-ended responses, accuracy was impressive. For example, over 20 percent of respondents correctly identified this photo as either a church, cathedral, or Notre Dame:


An even larger number -- 23 percent -- successfully identified Napoleon Bonaparte from this picture, at an even lower 15-pixel resolution.


When the Napoleon picture was bumped to 20 pixels wide, over 50 percent could identify it:


One person even identified this 10-pixel photo as Sean Connery!


This study filled up so quickly that we decided to do a supplemental poll on Monday. Those results have been incorporated into the graph above, and they confirm what we learned from the Casual Fridays study: We are accurate at distinguishing faces from other similarly complex pictures, even at resolutions as low as 6 × 7 pixels. In case you took the test and are curious which pictures you saw, here they all are, along with the original on which each was based (each person only saw one version of each picture, and the pictures were not necessarily displayed in this order):









The photo of Sean Connery on the left above is the only picture in the entire group that wasn't identified at levels above chance. Only 46.5 percent of respondents identified it as a face.


If I may hazard a guess at why the Connery picture was so hard to recognize as a face, I'd say it's because of the lighting. The other faces are brightly lit, and all the facial features are clearly visible. Sean's face is heavily shadowed on his left side.
My two cents anyway.


You may be on to something, but I was very surprised that the lower-res versions of the Jim (toddler) photo and the SUV were both accurately recognized -- these seemed quite difficult to me.

I find it way easier to see the faces if you cross your eyes.

omg! me too! i totally thought bill murray....weird...maybe it rang the same memory of that photo for us...

I thought it was Bill Murray as well.

I wonder how much of it has to do with the fact that these specific images are already familiar to many people. Take the picture of Napoleon - maybe we recognize the image itself simply as an image we've seen before, and *then* make the connection (maybe) that it's of Napoleon.

Pretty amazing...I thought it was Bill Murray also.

Bill Murray here too

Another Bill Murray!

I thought it was Bill Murray also. Think it's the hair and facial shape.

By Ross Barber (not verified) on 02 Mar 2007 #permalink

going with the bill to the murray..freaky

I think Napoleon was the easiest because of two distinguishing traits in the photo -- the bit of hair that falls on his forehead and the dark collar around his neck. Your eyes use those as a basis since they are the things that stand out.

Had the photo of Angelina shown her wide open mouth, it might have been easier??

Does anyone else do the "blur" thing with their eyes to help them "see" the picture?

Did anyone else think that that one picture looked like the guy from Caddy Shack?

Heidi, I also find that it is easier to identify them as faces when you cross your eyes. Interesting! It gives me the illusion that if I were to saccade to where the image is, it would be high resolution. But there are no details - just the impression of a face.

By Brian Mingus (not verified) on 02 Mar 2007 #permalink

Fascinating. I think our brains are very good at, and predisposed to, recognizing faces, even in this seemingly challenging situation. Have you ever stared at the shapes on acoustically treated ceilings? Your brain cannot help but try to find faces and human figures in the randomness. This also explains the face on Mars. I think this explains supernatural phenomena also. Our brain "wants to" make sense of randomness. Sometimes it is wrong, sometimes it is right.

your mom might warn against crossing your eyes, in case the wind changes. it's safer and easier to simply squint and lo, the images become slightly more discernible.

By john maguire (not verified) on 02 Mar 2007 #permalink

I immediately knew that the Napoleon pic had to be old because of the color of the collar and the coat. It is a black collar and a white coat. Clearly not a modern look, but one that lends itself to thinking it is some sort of pre-1800s type of outfit. If it was the inverse (white collar, black coat), then it might look modern.

If you squint (old trick) to the physical limit at which your eyes will still remain open, your brain seems to fill in the gaps and smooth out the picture, making it a lot easier to recognize. I think, at least.

Another Bill Murray here. I wonder what percentage misidentified Napoleon as Bill Murray.

Another Bill Murray.

Probably because both images have receding hairlines...

This certainly explains humankind's strong tendency to pareidolia.

By svengalalia (not verified) on 02 Mar 2007 #permalink

Funny, I really thought it was Bill Murray too! Really the hair :)) ...and I guess the fact that I'm not familiar with Napoleon's face contributed to my mistake.

Squinting or looking at the images in the peripheral vision actually makes the pixelated images more clear. This is especially true of video images.

It's much clearer that they are faces if you blur your vision.

I recognized Napoleon and Notre Dame right away
but then I am French... Cultural background IS important. Definitely no Bill Murray here for me.
I also got Sean Connery right, although I second-guessed him as Abe Lincoln!

I saw Napoleon right off the bat, but I can see how folks might have thought it was Bill Murray. There's *something* about their faces that's similar, but I can't figure out what it is.

Yes, I squint too, but I do something in my head as well--maybe I go into alpha?? It's some kind of distancing maneuver, as if I'm no longer actively looking at the image but just allowing it to be there. Hard to describe.

By Swift Loris (not verified) on 02 Mar 2007 #permalink

Low rez is the way the entire world looks when I'm not wearing my glasses, as I am legally blind without them.

I think we recognize the faces based on the dark line of the recessed area for the eyes, in relation to the lighter areas of the forehead and cheekbones. Another dark line for the mouth and light patch for the chin makes the image even more "face like". I think the mouth/chin combination is less necessary for a successful identification because there is more variation in expression and structure below the nose than above it.

To test this, you may want to try faces of non-white people, as the contrasts between the shadows of the eyes and the highlights of the forehead and cheekbones are going to be less pronounced. Also try photos of people in less expected poses, such as lower or higher viewing angles.

By Fox Laughing (not verified) on 02 Mar 2007 #permalink

What was the low outlier at the 10px size?


A couple of vexing issues:

1. How many people replied "Bill Murray" to the Napoleon picture in the original study? Actually, just two! Many, many more replied "Abraham Lincoln." But I agree, it does sort of look like Murray. If we could find a way to make him look short, it would be interesting to cast Murray in a Napoleon biopic.

So why do so many of our commenters say they saw Bill Murray? Perhaps they were participating in the second part of the study, where the more ambiguous 6px version was shown.

2. What was the low outlier at the 10px size? It was the SUV -- just 62 percent got that one correct (still significantly better than chance, though!)

I would love to know the ages of participants in the study, and to see whether there was a breakdown done by age. For those who grew up with Atari and some of the first home PCs, I would suspect they're better than younger folks because they had to distinguish pixelated faces for a longer time -- making them more practiced, if you will. Interesting nonetheless...


No, there's no breakdown by age. However, I can tell you that most CogDaily readers are in their 20s, followed by the 30-39 age range. Those two groups account for well over half our readers.

I thought this survey was a lot of fun, tho I'm not sure exactly what we were trying to prove. My wife is very skeptical; she says that people are just picking out the faces using the general roundness of the shape. If the brain (as in our theory) is good at picking out faces based on the darkness and position of the eyes and mouth, perhaps we should do another survey to pick out faces from other roundish objects, like baseballs or pancakes.

Silly study. It's testing whether people can identify a HUMAN HEAD, FACE FORWARD, not how good they are at identifying "faces". I assume that the shape of the facial plane-- lighter color-- meeting the hairline and the darker shape of the hair is an important cue.

A human head & face compared to inanimate objects that have geometric properties (flat image) that are nothing at all like a human face.

You could probably find amazing performance for identifying blurry "automobiles" compared to trees, too.

Secondly, as someone pointed out, it's not remarkable to identify Sean Connery in the blurry pixels if you're already familiar with THAT PARTICULAR photo of him. Same goes for other promo shots or famous portraits.


I think it's more than just shape. The SUV is the same general shape as a face, too. I don't think many readers are familiar with that particular Connery photo or the Jolie photo. The point is well-taken with the Napoleon portrait, though.

I also agree that it's a little silly -- that's what Casual Fridays is all about. Don't you think it's okay to have some fun with psychology every now and then?

That said, there are applications for this sort of research -- the applied stuff comes into play in artificial intelligence, for example, designing robots that can distinguish between humans and non-humans, even in suboptimal lighting.
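As a toy illustration of that applied side (a hypothetical example, not the method of any real detector), a machine can score a tiny grayscale patch against a crude shading template -- bright forehead, dark eye region, dark mouth -- and call anything above a threshold a face:

```python
# Toy face/non-face scorer for tiny grayscale patches.
# Hypothetical illustration only: it correlates a 3x3 patch
# with a crude shading template of a frontal face.
TEMPLATE = [
    [ 1,  1,  1],   # bright forehead
    [-1,  1, -1],   # dark eye sockets, brighter nose bridge
    [ 1, -1,  1],   # bright cheeks/chin, dark mouth
]

def face_score(patch):
    """Higher scores mean the patch's shading is more face-like."""
    mean = sum(sum(row) for row in patch) / 9
    return sum(TEMPLATE[y][x] * (patch[y][x] - mean)
               for y in range(3) for x in range(3))

def is_face(patch, threshold=0):
    return face_score(patch) > threshold
```

A patch with face-like shading scores positive; the same patch with its shading inverted scores negative -- which is also why the heavily shadowed Connery photo would be the hard case for a scheme like this.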

Instant Bill Murray here.

Haha, I was positive it was Bill Murray too!

Another Bill murray here!

I clearly saw Bill Murray at first glance too.

Create a tiny aperture by making the thumb-and-forefinger "OK" sign with each hand and then putting the thumbs together and the forefingers together, so that the resulting hole exists between the two forefingers and two thumbs.

Then squint through the hole with one eye at the pictures while flexing your hands to vary the size of the hole.

You should find that most of the pictures become pretty clear at the next to lowest resolutions. This will happen when you make the hole really small.

This trick is also useful if you are nearsighted and don't have your glasses with you to see something far away. You are effectively making a pinhole camera to refocus the image.

I too thought it was Bill
knew Notre Dame Cathedral
and got the book

Bill Murray here too. Some of it's not really a fair test -- even at full resolution I still don't recognize the picture as Napoleon, since I'm American and haven't studied him in >20 years. The common feature with Bill Murray is the lock of hair coming down the center of their foreheads. And the woman? Not sure who she is from the hi-res picture either, although the mouth reminds me of Angelina Jolie. (You can probably guess I have problems with faces. Even though I have excellent vision, I tend to recognize people by their voice, not their face. To recognize a face I have to concentrate on a feature, hence the mouth on (Jolie?).)

One guy posted the 10-pixel image of Sean Connery to an internet forum. By looking at it without my glasses I was able to guess who it was.

But when it comes to the red car... without being able to see the full-resolution image, I would have thought it remotely looked like the face of a Cylon from the original Battlestar Galactica. Of course, I knew it would be very unlikely.

Another vote for Bill Murray before I read the comments.

Okay, am I the only one who thought Notre Dame was Chewbacca?
Probably. :(

James H (#48) - You may have a rare condition known as prosopagnosia, or "face blindness". Check out the wikipedia entry to see if it relates to you. At the very least, it's an incredibly interesting condition that is not unrelated to this topic.

I can understand all of the Bill Murrays because of the similar hair. But Richard Nixon and Phil Collins also have the same hair thing going on. So why did nobody pick them?

Is it because they are both evil?

By Ick of the East (not verified) on 03 Mar 2007 #permalink

Anyone who has ever written an OCR program will have experienced the frustrations of getting a computer to read letters. You can quickly get down to error rates of 1:1000, but that is still one error per A4 page. The errors are often absurd ones -- it will mistake '?' for '&'; you fix this, and something else goes wrong.

While I was remembering this, I thought of an earlier bit of work that might suggest a simpler experiment, one that avoids the complications of face recognition (or of language, with OCR).

I used to work in R&D for the printing industry. We used to produce very precise colour halftone patterns. These are big-dot halftones like you find in newspapers, not the stochastic screening you get on colour printers (look up 'halftone' on Wikipedia, and ignore the bizarre hexagonal halftone example). In those days, the halftone patterns were put out to big sheets of black-and-white film, called 'separations', which were then used to make the printing plates (you go straight to plates these days). Occasionally, the original artwork was lost, and you needed to scan the image back from the separations.

You could blur the image to hide the halftone pattern, but this lost too much image detail. You can actually get useful detail beyond the halftone screen frequency by making the dots lumpy. So we needed to know the position and pitch of the actual halftone screen to try to reconstruct the original image.

We found all kinds of problems getting computers to do this. If you had separate small dots, then you could guess their centres, but when they all ran together it was harder to analyze. Looking at huge areas and Fourier transforming for the screen pitches was tried, but this was no better.

Nevertheless, give someone who has got used to looking at halftone patterns an 8x8 pixel region of a real square halftone with a pitch larger than 8 pixels and at an unknown angle, and they can usually make a fair guess at the pitch and angle of the pattern, despite the interfering image data. You may have three small segments of lumpy dots, with two of them run together so as to be almost indistinguishable, but we seemed to be able to do this without effort.

Assuming most people have not looked at halftones closely for their job, it should be possible to train them in looking, then ask them to identify pitches and angles from small halftoned segments of images.

By RichardKirk (not verified) on 03 Mar 2007 #permalink
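The Fourier approach mentioned in the comment above can be sketched in one dimension: treat a scan line as a signal, and the screen pitch shows up as the strongest DFT bin. This is a deliberate simplification (the real separations problem is 2-D and must also recover the screen angle), using only the standard library:

```python
import cmath

def dominant_period(samples):
    """Estimate the pitch of a periodic 1-D pattern by picking the
    strongest DFT bin (ignoring the DC term). A 1-D stand-in for
    finding a halftone screen's pitch; the real problem is 2-D and
    must recover the screen angle as well."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(centered[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return n / best_k  # samples per screen period

# a halftone-like scan line: a dark dot every 8 samples,
# plus a small "image data" ripple on top
row = [(100 if t % 8 < 4 else 200) + (t % 3) * 5 for t in range(64)]
```

Here `dominant_period(row)` recovers the pitch of 8 samples despite the interfering ripple, because the screen's fundamental carries far more energy than the image detail riding on it.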

I am fascinated by your research. I was thinking: if we could analyse how the mind does it, in terms of an algorithm based on neural networks and interpolation... we sure have computers doing face recognition and all that, but this seems much more fascinating. I am sure this could be converted into some kind of computer program that can distinguish faces from other objects (maybe based on a prior database, for now). This could have amazing applications in security and warfare.

By Sidhartha (not verified) on 04 Mar 2007 #permalink

If Napoleon didn't have shoulders it might be harder at the lowest pixel level.

I find squinting always helps identify a pixelated face.

Less is more? (data) Why and How?

There didn't seem to be a way of explaining what it is about the size of the pixels until I remembered how Darwin used analogical mapping to go from selective breeding (known) to natural selection (unknown).

The obvious pairing here is a cartoon face and a pixelated photo.

Imagine a smiley on the left and the pixelated Napoleon on your right. N.B. the smiley has no ears!

You squint at the two together.

The cartoon face is so simple you cannot reduce the data any more, but Napoleon appears from the gloom. There are already three correspondences: [1] less info, [2] simpler representation, [3] you are dealing with both heredity and environment (built-in face recognition ability + memory).

This has to be deconstructed a bit: a set of drawings starting on the left with a circle -> [circle + dot] -> [circle + 2 dots] -> full smiley -> smiley with ears

From the one-dot circle, link down to a variation (two-dot circle rotated 90 degrees).

So, orientation counts too! Put Boney on his side (leave out the shoulders). That's harder...

The dotted line in the smiley series goes between the 1-dot circle and the 2-dot one. You know it's a face on the right of this line.

Grateful if someone could map out more correspondences.

What is it about blurring the boundaries between pixels by squinting that makes the face clearer? Lower light intensity: when you get older you need more light to pick out the words on the page. As soon as you shut your eyes a bit, the visual system can no longer clearly distinguish between pixel A and adjoining pixel B if the shade of grey (gray) is almost the same. On Napoleon's forehead, in both the 15- and 20-pixel resolutions, the forehead becomes simplified into one characteristically recognisable light blob (which cues for his hairline, etc.), except one white square right in the middle is still there in both 15 and 20.

So that's mappings 1 and 2, reduced data and simpler representation.

To run through, by logic, all the processes that go into recognising Napoleon would be a bit tedious, but essentially it has two stages which run in parallel: [it's a face + it's X's face]. Subjectively (with a bit of post-experience analysis thrown in), the two seem to be happening at the same time.

Digressively: cognitively, more slowly, your brain can be running along two tracks at the same time, thinking about faces in general and Napoleon's specifically, with all the associations they bring up, and linking each set.

Logically, the brain has to be saying "it's a face" (the simpler perceptual process) because we recognise Napoleon, but it will be doing this "it's a face" processing and then stopping. Once the brain says "it's a face", this part of the perceptuo-cognitive process is over. The brain knows it's representational, not specific. Except that the brain will almost certainly be remembering everything it knows about the face as concept and faces it has known. But it is important, because the question of where perception ends and cognition begins must have some significance: is face recognition (1) a lightning-quick primary perceptual process or (2) a secondary cognitive one? In the case of the smiley we might say (1), but we can't be sure. It needs testing.

Then a higher level of the visual cortex will be taking successively reduced data (which are all the elements of the concept of Napoleon), and at some point concept face will link to concept Napoleon. But because we can say 'face' to a simple anonymous cartoon, we can be pretty sure that these are two separate processes.

Mapping a smiley to a Napoleon, there are only three correspondences (four if the boundary is included): eye 1, eye 2 (and thus position) and mouth (and position). The fact that the smiley's mouth is an upcurve, while Napoleon's might be a downcurve of great displeasure, will not worry the brain.

There was some work on the visual characteristics of words which I was interested in because it might shed some light on my son's dyslexia. It was reported in Scientific American in the 70s by the then PhD student working on it, Chin Chance, but it is not on the web.

To understand what this face recognition is, one would need to go beyond it into something like how we manage to read words non-phonetically, from things like beginning letter, end letter and word length. This might be a way to separate the perceptual from the cognitive.

Another interesting avenue is the business of how colours appear different according to the surrounding colour. Green surrounded by yellow looks different from green surrounded by blue, say. Why should the brain need to perceive the green as different hues according to context?
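The squinting trick so many commenters describe is, optically, a low-pass filter: it averages neighboring pixels and softens the hard block boundaries of a pixelated image. A minimal sketch of that effect (a plain 3x3 box blur, assuming a grayscale image stored as a list of rows; the eye's actual defocus blur is smoother than this):

```python
def box_blur(img):
    """3x3 box blur with edge clamping -- a crude stand-in for the
    optical blur of squinting. Each output pixel is the average of
    its neighborhood, so hard block boundaries are smoothed away."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            row.append(sum(vals) // 9)
        out.append(row)
    return out
```

Run on a hard black-to-white edge, the abrupt 0/255 jump becomes a ramp of intermediate grays -- roughly what a squinted (or distantly viewed) pixelated photo delivers to the visual system.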

Napoleon, the building and the book were fairly simple (though better if I viewed from a distance). Connery, I couldn't tell it was a face unless I took my glasses off, at which point it blurred into an obvious face, though I still couldn't identify it.

By G. Shelley (not verified) on 04 Mar 2007 #permalink

Squinting definitely makes it much easier for me to discern who's in the image.

By jayteepee (not verified) on 04 Mar 2007 #permalink

I also saw Bill Murray.

By Shinigami (not verified) on 04 Mar 2007 #permalink

I just tried squinting while shaking my head back and forth rapidly. It made an amazing difference in the perception of the low-pixel images!

I really thought it was Bill Murray also, until I kept reading and saw that I was wrong; then I felt sort of dumb for thinking it was Bill Murray. I thought I was the only one, but I see that I'm not, so now I feel better. lol They should do a low-res pic of Bill Murray and have them side by side, next to Napoleon, and see if people get confused.

I'd heard previously that if you squint it is easier to identify faces that are pixelated. I think that's one of the reasons why now when you see someone on TV that needs to remain anonymous for whatever reason, they're now using different techniques for blurring out faces.

(I thought Bill Murray too!)

If Hollywood decides to do a movie called Waterloo Day, in which Napoleon relives the Battle of Waterloo over and over again, I think we know who is going to be playing him, right?

Another thing you can do is to look at the pixelated images from a distance. This is probably similar to squinting, but if you can get 10 to 15 feet away from the images, even the lowest resolution ones come into focus.

As for the experiment, aren't you really testing people's techniques for clarifying pixelated images? Many of the above posts indicate that people use some way to make the images clearer, so does that challenge the validity of the study?

Very interesting anyway.

I saw bill murray too :), and it was pretty hard for me to guess. I'm really bad with faces, and voices, maybe it isn't my style :)

It was immediately Bill Murray for me too!
This needs some explanation...

If you blur your eyes on the Angelina Jolie series, the pictures look the same!

I thought the 10px Sean Connery was Kermit the Frog.

I am quite myopic and astigmatic and didn't get my first spectacles till aged about 10, when I was quite shocked to be able to recognise people across the room! However, the lowest pixel-count pictures are all very obvious if I look without lenses (maybe because I am now middle-aged and needing readers too! -- mimicking my old visual limitation at short distance). I did not correctly identify WHO Sean Connery was, but was confident it was a face. You might find myopic people can decode on less visual information (as well as make impressionist artists!)