Cognitive Daily

If the human eye were a digital camera, how many megapixels would it have?

Clarkvision does the calculations.

The answer: 576 megapixels.

Impressive job — I wish I had thought to do that. Note that their calculations require a bit of fudging: the fovea actually covers just a tiny bit of the visual field; the eye must move from point to point in order to assemble an image this detailed. A digital camera records all the pixels at the same time. For the photographically inclined, the article also goes on to make several other camera/eye calculations.
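For the curious, the headline figure is easy to reproduce. A quick sketch, assuming (as the calculation does) roughly 0.3 arc-minute resolvable detail spread uniformly over a square 120° field of view:

```python
# Back-of-the-envelope version of the 576-megapixel calculation.
# Assumptions: ~0.3 arc-minute acuity (about 20/20 vision),
# applied uniformly across a 120-degree by 120-degree field.
acuity_arcmin = 0.3
field_deg = 120

px_per_deg = 60 / acuity_arcmin           # 200 "pixels" per degree
total_px = (field_deg * px_per_deg) ** 2  # square field
print(f"{total_px / 1e6:.0f} megapixels") # 576 megapixels
```

Note that the uniform-acuity assumption is exactly the "fudging" mentioned above: acuity that sharp exists only at the fovea.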

A separate question: could a 576 megapixel image “fool” your visual system into believing it was seeing the real thing? Assuming one eye was covered and you were not allowed to move, I think it could. But as soon as you viewed the image with both eyes or were allowed to move, then you would be able to detect the fact that the image was flat. Three-dimensional images look different when viewed from different perspectives, but flat images don’t.

Comments

  1. #1 Oskar Syahbana
    October 22, 2006

    The question should be, can we create a 576 megapixel image?

  2. #2 oku
    October 22, 2006

    The question should be, can we create a 576 megapixel image?

    Someone did: http://www.tawbaware.com/maxlyons/gigapixel.htm

  3. #3 Janne
    October 22, 2006

A bit disingenuous, I think; if you allow the eye to move and assemble an image over time, then of course you can do the same with a camera (and the gigapixel image linked to above is one example).

  4. #4 Chris Chatham
    October 22, 2006

As digicam freaks will know, there is a lot more to image quality than resolution. There is also bit depth – and the human visual system clearly has current consumer digicams soundly beat when it comes to that. At issue is the dynamic range of the images. You can get a feeling for how much consumer-grade photos lack by looking at pictures from large format cameras, but even then (I think) you’re missing the ultra-high dynamic range that is present in real-world imagery.
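Dynamic range is usually quoted in stops, i.e. doublings of light. A toy illustration with round-number contrast ratios (these are assumed, illustrative figures, not measurements):

```python
import math

def stops(contrast_ratio):
    # One "stop" is a doubling of light, so range in stops = log2(max/min).
    return math.log2(contrast_ratio)

# Illustrative, assumed contrast ratios:
print(stops(256))      # an 8-bit JPEG encodes at most 256:1 -> 8.0 stops
print(stops(100_000))  # a sunlit scene with deep shadow -> ~16.6 stops
```

The gap between those two numbers is why a single consumer exposure loses either the highlights or the shadows of a high-contrast scene.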

    Interesting analysis though!

  5. #5 sparc
    October 22, 2006

What about data compression during visual perception? I can’t recall the figures, but I guess that is quite impressive too.

  6. #6 Jonathan Dobres
    October 23, 2006

Chris has a point. One of the big giveaways in photography is that most photos represent just a tiny slice of the light that would be entering the human eye at the actual scene. There have been attempts in the world of computer graphics to correct for this limitation. High Dynamic Range Imagery, or HDRI, is a special image format that contains data from multiple exposures for one “picture”, so that you can get an idea not just of what’s bright or dark in a scene, but what’s emitting and absorbing the most light. Paul Debevec has been doing research into this for years, with special emphasis on reprojecting the HDRI data for use in special effects and scene simulation. His research has been used in the Matrix movies. See http://www.debevec.org. Pay attention to the old, but still very impressive, Fiat Lux, in which Debevec recreates the interior of St. Peter’s Basilica using sampled light data.
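The core idea of merging multiple exposures can be sketched per pixel. This is a simplified, hypothetical version that assumes a linear sensor response; Debevec's actual method also recovers the camera's nonlinear response curve:

```python
def merge_exposures(pixel_values, exposure_times):
    """Estimate relative scene radiance for one pixel from several
    aligned exposures, with pixel values normalized to [0, 1]."""
    num = den = 0.0
    for v, t in zip(pixel_values, exposure_times):
        w = 1.0 - abs(2.0 * v - 1.0)  # "hat" weight: trust mid-tones,
        num += w * (v / t)            # distrust clipped blacks/whites
        den += w
    return num / den if den else 0.0

# A pixel seen at 0.5 with a 1 s exposure and at 0.25 with a 0.5 s
# exposure implies the same underlying radiance:
print(merge_exposures([0.5, 0.25], [1.0, 0.5]))  # 0.5
```

Dividing by exposure time is what lets dim and bright exposures contribute to one consistent radiance estimate.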

  7. #7 Alan Horsager
    October 23, 2006

    How the human visual system ‘resolves’ visual information is only mildly related to how a camera captures a picture. Even at the level of the retina, the amount of computation involved in contrast adaptation is staggering. How visual information is represented in early visual processing is still a mystery. Not even mentioning how feedback such as spatial and global attention might influence individual channels of information.

In the lateral geniculate nucleus, for example, approximately 80% of all neurons representing the ‘visual image’ are feedback neurons from higher visual centers, while the other 20% are the feedforward neurons from the retina.

  8. #8 Harlan
    October 23, 2006

    Yeah, 576 million pixels of functional detail from only 6 million cones (the color-detecting photoreceptors) and 125 million rods (mostly for peripheral vision).

    On a related note, I find the work to create camera systems for blind people extremely interesting, but very premature. They’ve created cameras that sit on your glasses and send impressions to your tongue or chest or whatever, but to pan the camera you have to rotate your head. What they really ought to be doing is to have little eyetrackers (perhaps using little magnets in your eye or something easy to track) that can figure out where you’re looking and selectively show the foveal equivalent on the tongue/chest. No need to point the camera, just use a fisheye lens and take a different portion of the image. This would probably be more useful to users of this kind of device than increasing the resolution.
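The gaze-driven idea above needs no moving camera at all: given a gaze estimate, just slice the "foveal" window out of a fixed wide-angle frame. A hypothetical sketch (names and parameters invented here; a real system would also undo the fisheye distortion first):

```python
def foveal_window(frame, gaze_row, gaze_col, half_size):
    # frame: 2-D list of pixel values from a fixed wide-angle camera.
    # Returns the square patch around the gaze point, clamped so the
    # window never runs off the edge of the frame.
    rows, cols = len(frame), len(frame[0])
    r0 = min(max(gaze_row - half_size, 0), rows - 2 * half_size)
    c0 = min(max(gaze_col - half_size, 0), cols - 2 * half_size)
    return [row[c0:c0 + 2 * half_size]
            for row in frame[r0:r0 + 2 * half_size]]

frame = [[(r, c) for c in range(12)] for r in range(8)]
patch = foveal_window(frame, 4, 6, 2)  # 4x4 window centered near (4, 6)
```

As the eye tracker reports new gaze coordinates, only the slice changes; the camera itself never pans.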

  9. #9 VINDISHA
    May 7, 2008

How do you calculate 576 megapixels for the eye?

  10. #10 Douglas W Taylor
    January 20, 2009

    (Question)

How far does HD TV have to go to surpass the visual capability of the human eye? How does HD TV compare to other species’ visual ranges? Any takers on this?

  11. #11 albedo
    January 20, 2009

    The title of the article is a bit misleading.

    I would say the resolution of the eye is lower.

    Further away from the optical axis, acuity decreases rapidly. High visual acuity is present only at the fovea, which is only about 2° wide.
    Furthermore, color vision shifts from trichromatic to dichromatic as the viewed object moves farther from the optical axis, which should also reduce the effective resolution.

    What I mentioned applies only to the situation where an observer would stare only at one point, allowing only microsaccades.

    If we can view around freely then the construction of a full high resolution view kicks in, of course.

    The article should be titled “The resolution of the human visual system” (which would include saccades and all kinds of processing) or “The perceived resolution of the human eye”.

@ Harlan: You don’t need magnets to track eye movements. A simple IR light source and an IR-sensitive camera are enough to track eye movement. It’s done by analyzing the reflection on the cornea, which moves around according to the eye movement.

  12. #12 felix
    February 17, 2009

    576 MP is indeed misleading. Our eyes don’t process that much information in one instant like a camera does, but rather build the image over a succession of fixations. So I would say the actual MP is much, much lower. Anyway, we get the idea.
    When it comes to digital cameras, the point to note here is that the eyes can distinguish two point sources or pixels up to 0.3 arc-minutes apart, which works out to about 1300 dpi at a viewing distance of 220 mm. Meaning to say, any resolution higher than 1300 dpi is redundant, as our eyes cannot ‘see’ the difference. Therefore, ideally, say, a 4″ x 6″ photo would require about 40 MP for the eyes to see the ‘complete’ picture.
    In conclusion, MP does make a difference in digital photography, as today’s cameras are still way behind.
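Those numbers do check out. A quick verification, assuming the 0.3 arc-minute acuity and the 220 mm viewing distance given in the comment (the result lands slightly above the rounded 40 MP figure):

```python
import math

acuity_arcmin = 0.3
distance_mm = 220.0                       # viewing distance from the comment

theta = math.radians(acuity_arcmin / 60)  # resolvable angle in radians
dot_mm = distance_mm * math.tan(theta)    # smallest resolvable dot
dpi = 25.4 / dot_mm                       # dots per inch (25.4 mm per inch)
print(round(dpi))                         # ~1323, i.e. "about 1300 dpi"

mp = (4 * dpi) * (6 * dpi) / 1e6          # 4" x 6" print at that density
print(round(mp))                          # ~42, close to the quoted 40 MP
```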