The Frontal Cortex

Life as Music

At first glance, it sounds like a cheesy third-culture gimmick:

UCLA molecular biologists have turned protein sequences into original compositions of classical music.

“We converted the sequence of proteins into music and can get an auditory signal for every protein,” said Jeffrey H. Miller, distinguished professor of microbiology, immunology and molecular genetics, and a member of UCLA’s Molecular Biology Institute. “Every protein will have its unique auditory signature because every protein has a unique sequence. You can hear the sequence of the protein.”
“We assigned a chord to each amino acid,” said Rie Takahashi, a UCLA research assistant and an award-winning, classically trained piano player. “We want to see if we can hear patterns within the music, as opposed to looking at the letters of an amino acid or protein sequence. We can listen to a protein, as opposed to just looking at it.”

But when you actually listen to the modernist piano sonatas generated by amino acid sequences – I think the surface protein of the giardia parasite is my favorite; it’s like Satie on speed – you gain a new appreciation for biology’s intricate patterns. When you just look at a visual list of amino acids, the alphabetic clutter appears random. There doesn’t seem to be any logic governing the placement of glutamine (Q) as opposed to serine (S) or valine (V). But when you hear the amino acid structure, the melody is suddenly obvious. This isn’t the sound of chaos. There is no cacophony. Instead, you can hear vague tonal structures and recurring rhythms. It turns out that much of life lacks the atonal unpleasantness of early Schoenberg. By translating the data of biology into music, you can suddenly hear an order that isn’t apparent visually.
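The basic idea – one chord per amino acid letter, played in sequence order – is simple enough to sketch. The post doesn’t say which chords Takahashi and Miller actually assigned, so the mapping below is purely illustrative; the chord choices, note names, and function names are my own assumptions, not the UCLA scheme.

```python
# Minimal sketch of "protein as music": map each amino acid letter to a chord,
# then read a sequence into a list of chords. The specific chord assignments
# here are hypothetical -- the UCLA team's actual mapping is not given above.

AMINO_ACID_CHORDS = {
    "A": ("C4", "E4", "G4"),   # alanine   -> C major (illustrative)
    "G": ("D4", "F4", "A4"),   # glycine   -> D minor (illustrative)
    "Q": ("E4", "G4", "B4"),   # glutamine (illustrative)
    "S": ("F4", "A4", "C5"),   # serine    (illustrative)
    "V": ("G4", "B4", "D5"),   # valine    (illustrative)
    # ... a full mapping would cover all 20 standard amino acids
}

def sequence_to_chords(seq):
    """Translate a one-letter protein sequence into a list of chords,
    skipping any residue without an assigned chord."""
    return [AMINO_ACID_CHORDS[aa] for aa in seq if aa in AMINO_ACID_CHORDS]

# Because every protein has a unique sequence, every protein gets a
# unique chord progression -- its "auditory signature."
chords = sequence_to_chords("QSVGA")
```

Feeding the resulting chord list to any MIDI or notation library would then produce the kind of playable score the article describes.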

Hat tip: Afarensis.


  1. #1 Brian Thompson
    May 18, 2007

Interesting – apparently biologists are also using bacterial genome sequences as long-term data storage.

  2. #2 Alan
    May 31, 2007

    What one “hears” in this context would depend entirely on the initial assignment of notes or chords to the data. How was that decided upon? And by whom? A microbiologist? A musician? Random chance?
    Inquiring minds want to know.