Get Inside the Body, in Four Dimensions

Surgeons may have a new tool at their disposal to aid in planning surgeries: an interactive "4-dimensional" model of the human body called CAVEman, which depicts more than 3,000 distinct body parts in life-sized detail. CAVEman is really just a huge computer image that can be viewed in a booth (giving the image height, width, and depth) and that changes over time to reflect age, disease state, etc. That progression through time is the "fourth dimension."

[Image: the CAVEman model]

CAVEman has the potential to help patients better understand their diagnosis and treatment by having it visually represented to them in layman's terms. For the non-layman, CAVEman can organize "the unique visuals of patients, such as magnetic resonance images, CAT scans and X-Rays, giving physicians high-resolution views of the inner workings of the body while it appears to float within arm's reach." It could also be used as a teaching tool for med students, possibly providing a much-appreciated alternative to the (icky) cadaver.

CAVEman is an offshoot of a 3D virtual reality "Cave", a C$6 million ($5.5 million) lab the Sun Center opened in 2002 in conjunction with Sun Microsystems Inc.

The model grew partly out of a desire among massage therapy teachers at a company in the central Alberta city of Red Deer for a more intricate picture of muscles and bones.

It cost somewhere between C$500,000 and C$2 million. "It's very hard to guess, because it has taken many years, especially in Red Deer, with at least one or two artists constantly employed," project leader Christoph Sensen said.

CAVEman, seen through 3D glasses in a booth, appears to stand in front of the viewer. As in a video game, the controller can manipulate it and focus on body parts -- skin, bones, muscles, organs and veins.

Also, the researchers gave the most amazing quote I've heard in a good long while:

"We say that killing monsters is fun, but curing cancer is more important," Andrei Turinsky, a mathematician and computer scientist, said as he moved the model around using a joystick.

SWEET!

I'm envisioning an ad campaign set to NIN....."I wanna see you from the in-side....." Heh.

Now, will hospitals bite?

(Hat tip Bob Abu.)


You might be interested to know that this "cave" technology is being applied to neuroscience as well. I'm starting neuroscience graduate school next year, and one of the people I interviewed with this spring described a grant proposal of his for funds to build a "cave" much like the one described, except that it would be used to visualize large-scale computational simulations of neural networks. His lab had already programmed the user interface, which would display the model's neurons all around the operator, probably color-coded in proportion to their membrane potentials, and would let the user select any neuron to see a graph of its activity, or place an LFP electrode wherever desired to see that kind of output.

That's all I remember about it, but it sounded really awesome. I omit the researcher's name and institution just because I'm not sure if future projects discussed in interviews are supposed to be public knowledge or what, and anyway a cursory google search reveals no information about the project so I don't think the name would help you learn anything else.
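The color-coding idea in the comment above is easy to picture in code. Here is a minimal, purely illustrative sketch (all names and voltage ranges are my own assumptions, not anything from the actual grant proposal): each simulated neuron's membrane potential is mapped onto a blue-to-red color, the kind of value a 3-D renderer could use to tint the neuron.

```python
# Hypothetical sketch: tinting simulated neurons by membrane potential.
# Hyperpolarized cells render blue, depolarized/spiking cells render red.
# The voltage range (-70 mV rest to +30 mV spike peak) is a common
# textbook convention, assumed here for illustration only.

def potential_to_rgb(v_mv, v_rest=-70.0, v_peak=30.0):
    """Map a membrane potential in mV to an (R, G, B) triple in 0-255."""
    # Normalize the potential to [0, 1], clamping out-of-range values.
    t = (v_mv - v_rest) / (v_peak - v_rest)
    t = max(0.0, min(1.0, t))
    # Interpolate from pure blue (rest) to pure red (peak).
    return (int(255 * t), 0, int(255 * (1 - t)))

# A toy "network state": three neurons at different potentials.
neurons = {"n0": -70.0, "n1": -20.0, "n2": 30.0}
colors = {name: potential_to_rgb(v) for name, v in neurons.items()}
```

A real cave interface would feed values like these to the graphics pipeline every frame; the point here is just that the mapping itself is a one-liner.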

And I suppose "Even a Caveman Can Do It?". Great. Let's continue the stereotype...

Hi Shelley,

In defence of the (icky?) cadaver, I can tell you that we have a very good virtual cadaver program that uses axial sections to create a 3-D body. Speaking only for myself and the members of my anatomy group, I can say that we learned almost nothing from it.

I would argue the only way to truly learn anatomy is to put gloves on and root around in a body. It's messy, smelly, and often belittled as an antiquated med school ritual, but the only way for some of us to learn is to pick up a scalpel and go looking.

Shelley,
I think tazo's comments affirm my experience: the cadaver is still the best way to learn anatomy; the only problem is that it's not always available to those who need it. Here comes the second best, the 3-D virtual cadaver, which accommodates the constraints of time and space. We will keep making technical progress, but certain basic things seem to remain the same.

AriSan

By AriSan in New York (not verified) on 25 May 2007 #permalink

Hmm, well, sounds like the cadaver is here to stay. I suppose I can sympathize, as I'm pretty certain that I couldn't learn my animal surgeries that way either. Perhaps the 4-D model can at least augment that understanding.

Hi Shelley,

Functional Magnetic Resonance Imaging (fMRI) is a similar 4-D technique for the living, especially for the brain.

There is also Positron Emission Tomography (PET).

A Harvard group has even adapted the imaging software for use in astronomy ("AstroMed").

Maybe particle physicists will one day adapt this software for use in their field ("StringMed" or "LoopMed")?

http://astromed.iic.harvard.edu/

when will there be a female model?