Knowledge Interoperability


Below, John Wilbanks answers our final question.


Cross-disciplinarity seems to work best when there's a problem with facets that appear unconnected, where the disconnect comes from the artificial way we divide up knowledge. In reality the problem is simply the problem, but scientists get trained into reductively narrow disciplines so they can become experts in those disciplines, get grants, and get tenure. Overcoming that narrow, reductive training is one of the challenges here: the scientists on cross-discipline teams spend a ton of time just learning each other's scientific language! But some of the work taking place around sensors at UCLA's Center for Embedded Networked Sensing is a good pointer to what it's going to be like.

Where it's not appropriate is harder to figure out. My instinct is that in places where the local knowledge is sufficient to create falsifiable hypotheses and experiments, the time required to learn the language of others doesn't get rewarded by results; gene sequencing doesn't need a physicist, for example. My hope is that we can get to enough technical standards that this kind of science can be harvested, aggregated, and mashed up by people and machines into a higher level of discipline traversal. Right now the problem is that we still think about cross-disciplinarity as a function of people choosing to work together. But the internet and the web give us a different model.

What's more cross-disciplinary than Google? But the language barrier among scientists is preserved, indeed made worse, by the lack of knowledge interoperability at the machine level. It's the Tower of Babel made digital. Until we can get past that one, we're going to be stuck doing human-speed knowledge construction on machine-speed data generation...
