Biological sonar systems


Echolocation - or biological sonar - can be thought of as an auditory imaging system that is used by organisms in environments where vision is ineffective. It involves the emission of vocalizations by the animal, and the detection of the echoes of those sounds, which are used to produce three-dimensional information about the environment.

Echolocating organisms understand the world largely via the interpretation of acoustic reflections, and possess specialized neural circuitry that performs the computations necessary for the perceptual organization of auditory information. This information is used for complex spatially-guided behaviours, such as navigation and determining the location of prey.

In those groups of organisms that use it, echolocation evolved independently as an adaptation to a particular environment, and is essential to the survival of those species. In humans, echolocation plays at most a minor role, but echolocation research has many potential applications.

Echolocation in bats 

Organisms that echolocate include the toothed whales, small mammals such as rats and shrews, and two groups of birds, but most of the research into echolocation has been carried out in bats, in which it has evolved to the most sophisticated level. From the fossil record it appears that bats have been echolocating for over 50 million years. Bat species that echolocate belong to the suborder Microchiroptera, within the mammalian order Chiroptera. Within the Microchiroptera suborder are about 800 bat species, most of which are insectivorous and use echolocation to catch prey.

Bats can emit up to 200 sounds per second, and the echoes are interpreted within several milliseconds before being translated into motor outputs that produce the changes in flight manoeuvres required for orientation, prey capture and the avoidance of obstacles. Echolocation therefore involves complex sensorimotor coordination.

Echolocation can provide highly detailed information about a target. It enables bats to discriminate the acoustic image of small flying insects in environments containing dense vegetation with thousands of surfaces which themselves reflect sound waves. The shape of a target can be perceived by the spectrum of echoes from different parts of the target.

Other data obtained from the echo spectrum include the relative velocity of a target, the flutter of the target, and the target's size and elevation. The time delay between emission of the call and reception of the echo provides information about the distance of objects. In air, sound travels at about 340 metres per second, so a delay of 2 milliseconds between emission and reception corresponds to a target 34 centimetres away (the sound covers 68 centimetres there and back).
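The delay-to-distance arithmetic can be sketched in a few lines of Python (the constant and function name are illustrative, not from the original text):

```python
SPEED_OF_SOUND_AIR = 340.0  # metres per second, the approximate value used above

def target_distance(echo_delay_s, speed=SPEED_OF_SOUND_AIR):
    """Distance to a target given the round-trip echo delay in seconds.

    The emitted sound travels to the target and back, so the one-way
    distance is half the total path length covered during the delay.
    """
    return speed * echo_delay_s / 2.0

# A 2 millisecond delay corresponds to a target roughly 0.34 m (34 cm) away.
print(target_distance(0.002))
```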

The auditory cortex of the bat carries out the computations which transform the echo spectrum into estimates of distance. Bats' auditory systems are similar to those of other mammals. In bats, however, audition is the primary sensory modality. Sound waves are directed by the ears, and sometimes the nose, to the inner ear, where they fall on the basilar membrane of the cochlea. The basilar membrane vibrates in response to the sound waves, which stimulates hair cells to produce nervous impulses that are sent to the auditory cortex via the brainstem.

Like the cochlea of other mammals, that of the bat contains hair cells, each of which is adapted to respond to sound waves of a specific frequency. The frequencies of sound waves produced and perceived by bats (roughly 14-100 kHz) extend well beyond the hearing range of humans; in the auditory cortex, too, neurons are tuned to respond to sounds of specific frequencies.

Bats have also evolved several specialized neuroanatomical structures for echolocation, and individual species have their own unique specializations. In all echolocating species, brainstem nuclei of the ascending auditory pathway are enlarged. Neurons in some nuclei of the lateral lemniscus are arranged in columns which receive tonotopically arranged inputs (i.e. inputs arranged according to sound frequency). These inputs are received via conspicuously large calyx synapses, which are also found in dolphins. Neurotransmission across these synapses occurs particularly quickly, and the neurons in these nuclei function as time markers. These cells process frequency-modulated components of echolocating signals, and are specialized for detecting interaural time differences (the difference in time between the arrival of a sound at one ear and the other).

A few bat species have the remarkable ability to compensate for the changes in frequency of sound waves produced by their own movement and that of prey. The frequency of the echoes that return while a bat is closing on a target is higher than the frequency of the emitted sounds, because the bat's motion towards the target 'squeezes' the peaks of the sound waves closer together (in the same way that the pitch of a siren rises as a police car approaches, then falls as it moves away). This Doppler shift compensation involves an adjustment by the bat of the frequency of its emitted vocalizations, so that the frequency of the echoes falls within the narrow range that can be perceived by its auditory system.
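As a rough illustration, the compensation can be modelled with the textbook Doppler formula for a moving emitter-receiver and a stationary target (the function names and the 5 m/s flight speed are assumptions made for the example; real bats achieve this through auditory feedback, not algebra):

```python
SPEED_OF_SOUND = 340.0  # m/s in air, approximate

def echo_frequency(f_emitted, bat_speed, c=SPEED_OF_SOUND):
    """Echo frequency heard by a bat flying at bat_speed (m/s) towards a
    stationary reflector: both the outgoing and returning legs of the
    sound's journey are Doppler-shifted upwards."""
    return f_emitted * (c + bat_speed) / (c - bat_speed)

def compensated_call(f_reference, bat_speed, c=SPEED_OF_SOUND):
    """Call frequency the bat must emit so that the echo returns at
    f_reference, the frequency its auditory system is sharply tuned to."""
    return f_reference * (c - bat_speed) / (c + bat_speed)

# A bat flying at 5 m/s that wants echoes at 80 kHz lowers its call slightly:
call = compensated_call(80_000.0, 5.0)  # a little under 78 kHz
echo = echo_frequency(call, 5.0)        # returns at 80 kHz, as required
```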

Sometimes being ugly pays off 

Echolocating bats have enlarged ears that help them detect faint echoes. Some, like the Chinese species Rhinolophus paradoxolophus, and the horseshoe bat Rhinolophus ferrumequinum (top), can also emit sound waves through their nostrils, and have evolved large noseleaves to focus the sound waves. It was long believed that these structures are involved in echolocation, but exactly how remained unclear until recently.

The role of noseleaves in echolocation was determined by computational physicists Rolf Muller and Qiao Zhuang of Shandong University in Jinan, China, who used three-dimensional X-ray scanning to generate a computer model (below) of the horseshoe bat's noseleaves, and then used the model to investigate how they interact with the emitted ultrasound pulses.


The pulses emitted by the horseshoe bat begin at a frequency of 60 kilohertz (kHz, thousands of cycles per second), then quickly increase to 80 kHz before dropping back down to 60 kHz towards the end of the pulse. The computer model showed that two horizontal furrows in the noseleaves resonate strongly in response to, and enhance, the 60 kHz pulses which occur at the beginning and the end of the ultrasound beams emitted by the bat.

Essentially, the furrows, which are open-ended cavities, are "beam-shaping devices". The higher frequency beams emitted by the bat are focused in an oval-shaped spot directly ahead of the bat, whereas the lower frequency beams are focused into a wider spot, part of which is aimed above the bat's head. When the grooves in the computer model were filled, the simulation showed that low frequency beams would instead be focused in the same way as higher frequency beams. The noseleaves therefore cause different frequencies of sonar beams to be emitted in different spatial patterns, enabling the bat to put to best use the limited energy it has available for echolocation.

Muller speculates that these mechanisms enable the bat to simultaneously use echolocation for different tasks, such as navigating through a complex environment and detecting flying insects. "If you think of the bat looking at the world with an ultrasonic flashlight," he says, the study suggests that the noseleaves produce "an entire array of flashlights, each shining a spotlight of different size, shape, and position on its surroundings." 

An evolutionary battle fought in the airwaves 

The yellow underwing moth (Noctua pronuba) is one insect species preyed upon by bats. N. pronuba has extremely simple ears, consisting of just two vibration-sensitive neurons attached to the tympanic membrane (eardrum). The ear is most sensitive to sounds with frequencies between 15 and 25 kHz (kilohertz, thousands of cycles per second); this is the frequency range of the ultrasonic signals generated by most bat species and, in fact, the moth ear appears to have evolved for a single purpose - to detect bats' signals. In response to the signals, moths behave in one of two ways - they either fly directly away from the source of the sound, or initiate a series of complex manoeuvres, consisting of loops, spirals and dives, in an effort to evade the predator.

The yellow underwing moth has evolved a behavioural adaptation that enables it to detect the echolocation signals used by bats to prey on it. A recent study led by Professor Daniel Robert of Bristol University's School of Biological Sciences, shows that the moth's ears can mechanically tune up, or change the frequency to which they are most sensitive. This makes the bat's high frequency ultrasonic pulses audible to the moth, which can then detect when a bat is homing in on it. The finding is somewhat surprising, because it shows that an apparently simple structure such as the insect ear is actually remarkably sophisticated. Robert's team also found that, following the initial detection of a bat's signal, the moth's ears remain tuned to that frequency for several minutes, in case the bat should return.

It is also now known that bats can alter the frequency of the signals they emit. Nachum Ulanovsky and his colleagues at the University of Maryland presented bats with recordings of echolocation signals. They found that, in response, the bats increased the frequency of their emitted ultrasound signals, so that it was different from that of the signals being played back to them. This constitutes a 'jamming avoidance response'; the frequency of the bat's call is modulated to prevent interference between the signals produced by other bats in the immediate area. This occurs within less than 200 milliseconds (one-fifth of a second).
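A toy model of such a jamming avoidance response might look like the sketch below (the 3 kHz separation and 500 Hz step are invented for illustration; they are not values reported in the study):

```python
def jamming_avoidance(own_freq_hz, interfering_freqs_hz,
                      min_separation_hz=3_000.0, step_hz=500.0):
    """Shift the emitted frequency upwards in small steps until it is at
    least min_separation_hz away from every interfering signal, mimicking
    the upward shifts seen in the playback experiments."""
    f = own_freq_hz
    while any(abs(f - other) < min_separation_hz for other in interfering_freqs_hz):
        f += step_hz
    return f

# With a playback at exactly the bat's own 25 kHz, the call shifts up to 28 kHz.
print(jamming_avoidance(25_000.0, [25_000.0]))
```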

The feeding buzz 

When foraging, bats emit signals of specific frequencies during the search phase. Once potential prey has been located, the bat increases the amplitude (loudness) and frequency (pitch) of its signals as it homes in on its target, so that it can assess the prey in greater detail.

This increase in loudness and pitch of the bat's vocalizations is called the "feeding buzz". Sperm whales also echolocate, and are now known to generate feeding buzzes while hunting prey (mainly squid). But until recently, very little was known about the foraging behaviour of sperm whales.

Using newly developed global positioning system (GPS) technology, Stephanie Watwood and her colleagues of the Woods Hole Oceanographic Institution, together with collaborators from the University of St. Andrews in Scotland, digitally tagged 37 sperm whales, enabling them to acoustically trace the whales and investigate their diving behaviour and vocalizations.

During the study, 198 individual foraging dives were observed in the Atlantic Ocean, the Gulf of Mexico and the Ligurian Sea. It was found that the whales spent about three-quarters of their time on foraging cycles, each lasting an average of 45 minutes. The whales re-surfaced for around 9 minutes between dives, and, in the Atlantic, could dive to depths of nearly 1,000 metres.

When embarking on foraging dives, the whales emit a series of clicks to help them locate prey. As the whales descend into the depths, the rate of clicking increases, enabling the whales to obtain detailed information regarding the position and movements of prey. Regular clicks function as a long-range biological sonar; the clicking rate continues to increase as the whales approach their prey, until the clicks are so close together that they sound like a continuous buzz. The whales were found to descend an average of 392 metres between the onset of regular clicking and the first buzz, which corresponds to the point at which they are about to catch a squid.

The FoxP2 gene and the evolution of bat echolocation

FoxP2 is a gene that has been implicated in the evolution and development of human language. Mutations in the human form of the gene are associated with deficits in the production and understanding of speech, and "knockout" mice that have had the gene deleted cannot produce the ultrasonic vocalizations that are normally elicited when they are taken away from their mothers.

Previously, it was thought that FoxP2 is highly conserved in all mammals (i.e. that there is very little variation in the protein sequence). For example, the human gene differs from that of chimpanzees by only two amino acids, and these changes occurred at around the time that language is thought to have evolved. It was therefore hypothesized that the gene played a role in the emergence of language.

However, new research, published last week in the open access journal PLoS One, shows that the FoxP2 gene is highly variable in echolocating bats.

The Anglo-Chinese team that carried out the study knew that FoxP2 is expressed in regions of the bat brain that are involved in echolocation, and so reasoned that the gene might be associated with the vocalizations that echolocating bats generate.

The researchers therefore sequenced the FoxP2 gene from 13 different species of bats (some of which echolocate, and others that don't), as well as from 22 other mammalian species, two birds, and a reptile. They found large numbers of mutations in the DNA sequences of those bat species that echolocate, but not in those that do not, nor in any of the other species they looked at.

Thus, the FoxP2 gene has undergone accelerated evolution in species of echolocating bats. It is likely that the variations in the DNA sequence have something to do with echolocation, as those bat species that produce similar echolocation signals tended to have similar variants of the gene, while non-echolocating bats have variants like those of other non-echolocating mammals.

Bat species capable of Doppler shift compensation were also found to have a FoxP2 mutation that other echolocating bats lacked, and some of the echolocating bat species had the same mutations that are associated with language deficits in humans.

So, the same gene implicated in the emergence of human language may also have played a role in the evolution and diversification of echolocation in bats. The new findings also support the idea that language and animal vocalizations such as those emitted in echolocation evolved from the same ancestral motor system. FoxP2 may have been recruited to coordinate the complex sensorimotor coordination required for both speech and echolocation.

Inspiration from dolphins 

The auditory signals used for echolocation can vary a great deal. Narrowband signals cover a narrow range of frequencies, have a long duration, and are well suited to detecting targets over long distances. Broadband signals cover a large range of frequencies, have a typical duration of less than 5 milliseconds, and are best adapted for localization.

Dolphins and other cetaceans also have highly sophisticated echolocation systems, but these are not as well understood as those of bats. Many dolphin species produce ultrasonic echolocation signals, but some toothed whales, such as the sperm whale, produce signals that are audible to humans. Because sound travels faster in water than it does in air, the signals produced by toothed whales are of a much shorter duration than those produced by bats.

Dolphins produce two main types of echolocation signals - short duration broadband signals or narrowband signals of longer duration. The signals consist of bursts of clicks with frequencies of up to 160 kilohertz (kHz). These are produced not in the larynx but in a series of air sacs within the nasal cavities above the brain. Associated with the air sacs are valves called bursae (or "monkey lips") which open into the blowhole passage. The signals are emitted from an air-filled cavity into water, and there is a mismatch between how sound waves are propagated in the two media.

This problem is overcome by a large deposit of fatty tissue, called the melon, located in the forehead. This is a sac-like pouch which consists mostly of fatty tissues and which extends into the nasal sac muscles. It acts to slow the transmission of sound waves as they are emitted from the nasal air sacs, thus making the transfer of the sound waves from one medium to the other smoother. The melon also acts as a lens, focusing the sound waves into a narrow beam which is projected forward. Echoes are believed to be received by the panbone in the dolphin's lower jaw; fatty tissue behind the panbone transmits the sound waves to the middle ear and then to the brain. Some researchers believe that the teeth are also involved in transmitting echoed sounds to the dolphin's brain.

Dolphins lack a complex external ear through which sound waves can be transmitted to the cochlea; the outer part of the ear consists simply of a small opening covered by a fibrous tissue. It is now widely accepted that the lower jaw is a major component of the echo-receptor in dolphins. It is through the lower jaw and surrounding structures that many of the acoustic signals to which dolphins are sensitive are transmitted to the middle and inner ear. Auditory stimuli delivered to the lower jaw evoke responses in the auditory regions of the brain stem. In behavioural experiments, bottlenose dolphins had hoods placed over their lower jaws, attenuating the reception of acoustic signals; this greatly hindered their ability to echolocate.

Like bats, dolphins use echolocation for navigation and for the accurate detection, localization and tracking of prey. And, like other biological sonar systems, that of the dolphin consists of a transmitter, receiver and processor. Using this system, targets as small as a sardine - which is 9-18 cm in length - can be located at ranges of zero to 100 metres or more. The echolocation system of dolphins far outperforms man-made sonar systems. It even works efficiently in shallow waters, unlike man-made systems, whose signals are confounded by water turbulence, suspended sediment and the increased reverberation of sound waves.

The hypothesis that the teeth are a component of dolphins' echolocation receiver was first proposed by Goodson and Klinowska in 1990, and is based on a number of observations about the arrangement of the teeth in the jaw. Firstly, whereas humans have different types of teeth - incisors for cutting, molars for chewing, and so on - dolphins are homodonts: all of their teeth are of the same type. A dolphin's teeth are evenly spaced along the jaw, in two straight lines which diverge at an angle of 10-20 degrees. And whereas acoustic signals received by the lower jaw are transmitted to the brain via the inner ear, each tooth is innervated by a nerve that projects directly to the brain. Delays in the propagation of nervous impulses in these fibres cause signals from all the teeth to arrive simultaneously at the auditory cortex, the region of the brain in which the acoustic signals are processed.


According to this model, the teeth are resonant pressure transducers - they form an array of receivers which vibrate in response to the changes in pressure caused by sound waves as they travel through the water. There is some experimental evidence that dolphins' teeth might be involved in receiving acoustic signals. One group of researchers recently used a technique called laser Doppler vibrometry to measure resonances in the teeth, and found that they do in fact resonate at frequencies of 115-135 kHz in response to sounds; this frequency range overlaps with that of the sonar signals used by dolphins. Flanking the jaw are structures called mandibular canals, which contain fat-filled channels whose acoustic properties closely match those of sea water. These channels may therefore transmit the resonances of the teeth towards the inner ear, but exactly how signals from the teeth might be transmitted to the channels is as yet unclear.

A better understanding of biosonar in dolphins could lead to the development of improved sonar systems. To this end, Peter Dobbins of SEA Group Ltd., an engineering company based in Bristol, U.K., has developed a model of the echolocation system of the bottlenose dolphin, Tursiops truncatus, based on the assumption that at least some of the echolocation signals are received through the teeth. SEA Ltd. has just been awarded a large contract by the British Ministry of Defence to develop advanced sonar systems. Any new technologies developed by SEA would be delivered to the MoD by Lockheed Martin, the world's largest defence contractor.

In developing his model, Dobbins assumed that most of the sounds detected by the dolphin would be coming from in front. He modelled the bottlenose dolphin's jaws as if they were two straight lines of receivers meeting at an angle of 10-20°, and assumed that sound waves would enter the jaw from the front. This is known as an endfire arrangement; it is used widely in radio and radar, but not in sonar systems, which employ a broadside system, whereby sound waves travel at right angles to the receivers. A drawback of broadside systems is the phenomenon of near-field degradation - that is, at close range, the broadside array cannot determine the direction of incoming sounds.
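The geometric difference between the two arrangements can be sketched as follows (the element spacing and sound speed are illustrative values, not parameters from Dobbins' model):

```python
import math

SOUND_SPEED_WATER = 1500.0  # m/s, approximate

def arrival_delays(element_positions_m, angle_deg, c=SOUND_SPEED_WATER):
    """Relative arrival times (s) of a distant plane wave at a line of
    receivers along the x-axis. angle_deg = 0 is endfire (sound travelling
    along the array axis, reaching each element in turn); angle_deg = 90 is
    broadside (sound arriving at right angles, reaching every element at
    essentially the same time)."""
    cos_a = math.cos(math.radians(angle_deg))
    return [x * cos_a / c for x in element_positions_m]

# Ten receivers at 1 cm spacing, standing in for one row of teeth.
positions = [i * 0.01 for i in range(10)]
endfire_delays = arrival_delays(positions, 0.0)    # staggered along the axis
broadside_delays = arrival_delays(positions, 90.0) # near-zero everywhere
```

Delay-and-sum beamforming exploits these delay patterns: compensating for the staggered endfire delays before summing reinforces sound arriving from dead ahead, which is where the model assumes most echoes come from.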

According to the model developed by Dobbins, endfire systems are less susceptible than broadside systems to near-field degradation, and are therefore far more efficient at close range. Dobbins also used his model to investigate other dolphin species with different arrangements of teeth. In river dolphins, for example, which usually live in shallow, murky waters, the teeth near the tip of the jaw are larger and closer together than those nearer the base of the snout. On the basis of his model, Dobbins predicts that this arrangement makes the echolocation system less susceptible to the frequency changes produced by the increased reverberation of sound waves in shallow water.

Whitlow Au, chief scientist on the Marine Mammal Research Program at the Hawaii Institute of Marine Biology, dismisses the idea that teeth are acoustic signal receivers as a "wild hypothesis." He points out that there are examples of captive dolphins that have lost all their teeth but are still capable of echolocation. Dobbins, however, says this is not important; he agrees that further investigation is needed to determine whether or not he has accurately modelled dolphin biosonar, but says that technologies based on his model could be developed regardless of whether or not dolphins' teeth are a component of the acoustic signal receiver. His aim is to develop compact, lightweight and high resolution sonar systems capable of operating effectively in shallow waters. These systems could be carried by divers or submersible vehicles during naval mine-clearing operations.

References & Further reading:

Dobbins, P. (2007). Dolphin sonar - modeling a new receiver concept. Bioinsp. Biomim. 2: 19-29.

Li, G., et al. (2007). Accelerated FoxP2 evolution in echolocating bats. PLoS One 2 (9). doi:10.1371/journal.pone.0000900. [Full text]

Gillam, E. H., et al. (2006). Rapid jamming avoidance in biosonar. Proc. R. Soc. B. doi: 10.1098/rspb.2006.0047 [Full text]

Grothe, B. (2003). New roles for synaptic inhibition in sound localization. Nat. Rev. Neurosci. 4: 1-11. [Full text]

Metzner, W., et al. (2002). Doppler-shift compensation behavior in horseshoe bats revisited: auditory feedback controls both a decrease and an increase in call frequency. J. Exp. Biol. 205: 1607-1616. [Full text]

Price, J. J., et al. (2004). The evolution of echolocation in swiftlets. J. Avian Biol. 35: 135-143. [Full text]

Surlykke, A. (1984). Hearing in Notodontid moths: A tympanic membrane with a single auditory neurone. J. Exp. Biol. 113: 323-335. [Full text]

Waters, D. A. & Jones, G. (1996). The peripheral auditory characteristics of Noctuid moths: Responses to the search-phase echolocation calls of bats. J. Exp. Biol. 199: 847-856. [Full text]

Yin, T. C. T. (2002). Neural mechanisms of encoding binaural localization cues in the auditory brainstem. In Oertel, D., Popper, A. N. & Fay, R. R. (Eds.), Integrative Functions in the Mammalian Auditory Pathway (Springer Handbook of Auditory Research), pp. 99-159. Springer-Verlag.

Zhuang, Q. & Muller, R. (2006). Noseleaf furrows in a horseshoe bat act as resonance cavities shaping the biosonar beam. Phys. Rev. Lett. doi: 10.1103/PhysRevLett.97.218701. [Abstract]


Is it possible that in primitive humans this phenomenon was very important for hunting, to locate wild animals? Or could they have learned to calibrate animal sounds for the same reason?

Thanks for posting this. Very fascinating that we are just starting to discover the more advanced capabilities of echolocation.

I recall a video I saw of a blind teenager who learned to navigate in his world via echolocation. That was fascinating.

It is indeed surprising how animals use ultrasound waves or infrared vision (snakes, for example) to locate their prey. Maybe we could learn a lesson or two from them. Take Lockheed M, for example. It may be just a few days before 'these animals' are PATENTED!

Very cool article!

Typo watch: However, new research, published last week in the open access journal PLoS One shows that echolocating bats have a high degree of variation in echolocating bats.

I suspect one of those bats ought to be a FoxP2....

By David Harmon (not verified) on 25 Sep 2007 #permalink