Maps, directions, and video games: A model for how we perceive them

Nearly all video games that offer a first-person perspective -- where the view on-screen simulates what a real person would see as she navigates through the virtual environment -- also include a virtual map to help in navigation. Even my favorite golf game has one. Such maps can be indispensable, but they also invite a question: should the map rotate to align with the player's viewing angle, or should it remain at a constant orientation?

Aligning the map with the viewer's perspective makes it easier to find items, but constantly rotating the map might make it difficult for gamers to remember where those items are located when they move out of view -- when the object is needed, the map might be upside-down compared to when the object was first encountered.
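
To make the tradeoff concrete, here's a minimal sketch, in Python, of the transform a rotating ("heading-up") map applies: each world point is shifted relative to the player and rotated by the negative of the player's heading, so the view direction always points up, while a fixed ("north-up") map simply skips the rotation. The function name and conventions are my own illustration, not anything from the study or a real game engine.

```python
import math

def to_minimap(world_x, world_y, player_x, player_y,
               heading_rad, heading_up=True):
    """Project a world point into minimap coordinates.

    heading_rad is the view direction, measured counterclockwise
    from the map's "up" (+y) axis. With heading_up=True the map
    rotates so the player's facing always points up; with
    heading_up=False the map keeps a fixed, north-up orientation.
    (Hypothetical helper for illustration only.)
    """
    # Center the minimap on the player.
    dx, dy = world_x - player_x, world_y - player_y
    if heading_up:
        # Rotate by -heading so the facing direction maps to +y.
        c, s = math.cos(-heading_rad), math.sin(-heading_rad)
        dx, dy = dx * c - dy * s, dx * s + dy * c
    return dx, dy

# A player facing west (90 degrees CCW from north) sees a point due
# north of them land on the right side of a heading-up minimap:
# to_minimap(0, 10, 0, 0, math.pi / 2)  ->  (10.0, ~0.0)
```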

Of course, maps aren't just useful for games -- more and more cars are equipped with GPS navigation systems, and hikers like me still rely on the old-fashioned paper kind to find campsites in the wilderness. Creating maps that are easy to compare to the first-person viewpoint, whether in a video game or an Air Force jet, can mean the difference between life and death.

Research on map orientation has found that map readers locate items they see in a scene on the corresponding map at different rates. As you might expect, the more the map is rotated relative to the viewing angle of the scene, the longer an object takes to find, with upside-down maps taking the longest. But there is a secondary effect, which depends on the location of the object in the scene. Items directly in front of the viewer are located fastest, regardless of the orientation of the map. As items move to the left or right and farther away, they take longer to find. But items that are farthest away, near the back of the scene, are found nearly as quickly as items directly in front of the viewer.
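
The key variable here is the angular disparity between the map's orientation and the viewer's heading. As a toy illustration (my own code, not the authors'), the disparity is just the difference between the two angles wrapped into the 0-180 degree range, with 180 degrees corresponding to an upside-down map:

```python
def angular_disparity(map_up_deg, view_heading_deg):
    """Smallest rotation (0-180 degrees) needed to align the map's
    "up" with the viewer's heading; 180 means the map is upside-down
    relative to the scene."""
    diff = abs(map_up_deg - view_heading_deg) % 360
    return min(diff, 360 - diff)

# angular_disparity(0, 135) -> 135;  angular_disparity(0, 270) -> 90
```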

Take a look at this simple scene, made using the customizable video game Unreal Tournament:

[Image: a first-person scene rendered in Unreal Tournament, paired with an overhead map of the same area.]

Glenn Gunzelmann and John Anderson showed volunteers a series of scenes and maps like this one, asking them to indicate where on the map the red object would be found (the arrow on the map indicates the viewer's perspective). Here's a chart showing their response times:

[Chart: average response time to locate the target, plotted by the target's location in the scene.]

As you can see, the chart shows an "M" pattern. The same pattern has shown up in a wide variety of studies, leading researchers to hypothesize that people search for items on a map by mentally rotating their field of view, just as they do when comparing objects to see if they are the same or different. When the object is directly in front of the viewer -- even when it's far away -- no rotation is necessary.
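
That account predicts response times rising roughly linearly with the rotation required, plus a shortcut for targets that sit straight ahead. Here's a minimal sketch of the prediction -- the slope, intercept, and tolerance values are made up for illustration, not fitted to the study's data:

```python
def predicted_rt(map_rotation_deg, target_bearing_deg,
                 intercept_s=1.0, slope_s_per_deg=0.01,
                 straight_ahead_tol_deg=5.0):
    """Toy mental-rotation model: response time (in seconds) grows
    linearly with the rotation needed to align map and view, except
    that targets (nearly) straight ahead need no rotation at all."""
    if abs(target_bearing_deg) <= straight_ahead_tol_deg:
        return intercept_s  # straight ahead: no mental rotation needed
    return intercept_s + slope_s_per_deg * map_rotation_deg

# An upside-down map (180-degree rotation) costs the most:
# predicted_rt(180, 45) -> 2.8;  predicted_rt(180, 0) -> 1.0
```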

But in a second experiment, Gunzelmann and Anderson found that objects that were clustered with others took longer to locate, regardless of their location. Maybe mental rotation isn't the only thing going on when we're consulting a map.

In a third experiment, Gunzelmann and Anderson showed volunteers maps that required them to locate objects in a larger variety of locations:

[Map: the viewer's position and heading, with candidate target locations arranged at three fixed distances.]

The key to this arrangement is that objects can sit at one of just three distances from the viewer -- in the first experiment, every object was at a different distance. The effect of distance can therefore be separated from the effect of the object's offset relative to the viewer. When the response times are charted, a very different pattern emerges:

[Chart: response times by target distance and by offset from the straight-ahead view.]

As objects get farther from the viewer, they typically become more difficult to spot, and they also take longer to find the farther they sit from the straight-ahead view -- with one important exception: the farthest objects were found faster when they were far off to one side than when they were only slightly off to the side.

Gunzelmann and Anderson believe they can explain both this result and the results of the second experiment with a single model of how we map the locations of objects. Instead of merely mentally rotating the map to locate an object, we follow a two-step process. The first step is to describe the location of the object in relation to the self. In some cases, this is all that is necessary: it's right in front of you. But in other cases, some additional frame of reference must be found, and that can happen through a number of means -- locating nearby landmarks, separating the object from other similar objects, or, as in the final experiment, using the edge of the field of view itself as a "landmark."
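
In outline, that two-step account reads like a strategy selector: try the egocentric description first, and if it isn't diagnostic, recruit whatever reference frame is available. The sketch below is entirely my paraphrase of the model's logic, not the authors' code:

```python
def describe_location(bearing_deg, has_landmark_nearby,
                      is_visually_distinct, near_edge_of_view):
    """Choose a strategy for anchoring a target's location, loosely
    following Gunzelmann & Anderson's two-step account."""
    # Step 1: an egocentric description may suffice on its own.
    if abs(bearing_deg) < 5:
        return "egocentric: it's right in front of you"
    # Step 2: otherwise recruit some external frame of reference.
    if has_landmark_nearby:
        return "anchor it to a nearby landmark"
    if is_visually_distinct:
        return "separate it from other similar objects"
    if near_edge_of_view:
        return "use the edge of the field of view as a landmark"
    return "fall back on effortful mental rotation"
```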

The "M" pattern in the initial graph, Gunzelmann and Anderson argue, is nothing more than an artifact of the particular arrangement of the objects tested, along with the fact that the display included no readily identifiable landmarks.

Gunzelmann, G., & Anderson, J. R. (2006). Location matters: Why target location impacts performance in orientation tasks. Memory & Cognition, 34(1), 41-59.

Comments

What I want to know is: if the viewer is seeing a 3-D image (via any of the available technologies, including Crystal Eye) that is a projection of a 4-D virtual world, what does the human brain do as it learns to navigate around in 4-D?

This doesn't even take moving objects into consideration. In a first-person shooter, if something is moving, it's a lot easier to pick out, which makes it a whole lot easier to track other players.

Likewise, if the player is moving, it becomes easier to pick out stationary objects on the map -- I'd guess because they're moving in relation to you.

When you see a dot on the map moving toward you, for instance, you'll also see something in the scene getting bigger -- maybe a rocket flying at you. Assuming there's adequate contrast between the object and the scenery, that's enough to locate the object very quickly.