Daniel Kish sees more than you might expect, for a blind man. Like many individuals deprived of sight, he relies on his non-visual senses to perceive, map, and navigate the world. But people tend to find Kish’s abilities rather remarkable. Reason being: Kish can echolocate. Yes, like a bat.
As a child, Kish taught himself to generate sharp clicking noises with his mouth, and to translate the sound reflected by surrounding objects into spatial information. Perhaps you’ve seen videos like this one, in which Kish uses his skills to navigate a new environment, describe the shape of a car, identify the architectural features of a distant building—even ride a bike:
Impressive as his abilities are, Kish insists he isn’t special. “People who are blind have been using various forms of echolocation to varying degrees of efficiency for a very long time,” he says. What’s more, echolocation can be taught. As president of World Access For the Blind, one of Kish’s missions is helping blind people learn to cook, travel, hike, run errands, and otherwise live their lives more independently—with sound. “But there’s never been any systematic look at how we echolocate, how it works, and how it might be used to best effect.”
A study published Thursday in PLOS Computational Biology takes a big step toward answering these questions, by measuring the mouth-clicks of Kish and two other expert echolocators and converting those measurements into computer-generated signals.
Researchers led by Durham University psychologist Lore Thaler performed the study in what’s known in acoustic circles as an anechoic chamber. The room features double walls, a heavy steel door, and an ample helping of sound-dampening materials like foam. To stand inside an anechoic chamber is to be sonically isolated from the outside world. To speak inside of one is to experience the uncanny effect of an environment practically devoid of echoes.
But to echolocate inside of one? I asked Kish what it was like, fully expecting him to describe it as a form of sensory deprivation. Wrong. Kish says that, to him, the space sounded like standing before a wire fence in the middle of an infinitely vast field of grass.
This unique space allowed Thaler and her team to record and analyze thousands of mouth-clicks produced by Kish and the other expert echolocators. The team used tiny microphones—one at mouth level, with others surrounding the echolocators at 10-degree intervals, suspended at various heights from thin steel rods. Small microphones and rods were essential; the bigger the equipment, the more sound it would reflect, reducing the fidelity of the measurements.
Thaler’s team began the study expecting the acoustic properties of mouth-clicks to vary between echolocators. But the noises they produced were very similar. Thaler characterizes them as bright (a pair of high-pitched frequencies at around 3 and 10 kilohertz) and brief. They tended to last just three milliseconds before tapering off into silence. Here’s a looped recording of one of Kish’s clicks:
The researchers also analyzed the spatial path that the sound waves traveled after leaving the echolocators’ mouths. “You can think of it as an acoustic flashlight,” Thaler says. “When you turn a flashlight on, the light distributes through space. A lot of it travels forward, but there’s scattering to the left and right, as well.” The beam patterns for clicks occupy space in a similar fashion—only with sound instead of light.
Thaler’s team found that the beam pattern for the mouth-clicks roughly concentrated in a 60-degree cone, emanating from the echolocators’ mouths—a narrower path than has been observed for speech. Thaler attributes that narrowness to the brightness of the click’s pitch. Higher frequencies tend to be more directional than lower ones, which is why, if you’ve ever set up a surround sound system, you know that a subwoofer’s placement is less important than that of a higher-frequency tweeter.
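Thaler’s frequency-and-directionality point can be illustrated with a toy acoustics model. The sketch below uses an idealized uniform line aperture (the textbook sinc-shaped directivity pattern); the mouth-sized aperture width and the model itself are my assumptions for illustration, not measurements from the study:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
APERTURE = 0.03          # m; rough mouth-opening width (my assumption)

def off_axis_level(freq_hz, angle_deg):
    """Relative level (dB) at an off-axis angle for an idealized uniform
    line aperture -- a textbook toy model, not the paper's measured beams."""
    x = APERTURE * freq_hz * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    amp = abs(np.sinc(x))                # np.sinc(x) = sin(pi*x)/(pi*x)
    return 20 * np.log10(max(amp, 1e-12))

# The 10 kHz component falls off far more steeply off-axis than 3 kHz:
for f in (3000, 10000):
    print(f"{f} Hz at 60 degrees off-axis: {off_axis_level(f, 60):.1f} dB")
```

In this model the 3-kilohertz component loses under a decibel at 60 degrees off-axis, while the 10-kilohertz component drops by more than 10 decibels—the same physics behind the subwoofer-versus-tweeter rule of thumb.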
Thaler and her team used these measurements to create artificial clicks with acoustic properties similar to the real thing. Have a listen:
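For a rough sense of what such synthesis involves, here is a toy click generator built from the properties reported above—two bright components near 3 and 10 kilohertz, tapering off within about 3 milliseconds. The exponential envelope and its decay rate are my assumptions, not the paper’s actual model:

```python
import numpy as np

def synth_click(fs=44100, dur=0.003, freqs=(3000.0, 10000.0), decay=1500.0):
    """Toy synthetic mouth-click: two sine components under a fast
    exponential decay. The component frequencies (~3 and 10 kHz) and
    duration (~3 ms) follow the study's description; the envelope shape
    and decay rate are guesses for illustration."""
    t = np.arange(int(fs * dur)) / fs
    envelope = np.exp(-decay * t)                      # rapid taper into silence
    tone = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    click = envelope * tone
    return click / np.max(np.abs(click))               # normalize to +/- 1

click = synth_click()
print(f"{len(click)} samples, peak {np.abs(click).max():.2f}")
# prints: 132 samples, peak 1.00
```

A real synthetic click would be fit to the recorded waveforms rather than assembled from pure tones, but even this crude version shows why the signal is so brief and bright.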
These synthetic clicks could be a boon for studies of human echolocation, which are often restricted by the availability of expert practitioners like Kish. “What we can do now is simulate echolocation, in the real world with speakers or in virtual environments, to develop hypotheses before testing them with human subjects,” Thaler says. “We can create avatars, objects, and environments in space like you would in a video game, and model what the avatar hears.” Preliminary studies like these could allow Thaler and other researchers to refine their hypotheses before inviting echolocation experts in to see how their models match the real thing.
These models won’t be perfect. To keep measurements consistent, Kish and the other echolocators had to keep still while inside the chamber. “But in the real world, they move their heads and vary the acoustic properties of their clicks, which can help them gain additional information about where things are in the world,” says Cynthia Moss, a neuroscientist at Johns Hopkins University whose lab studies the mechanisms of spatial perception. (Thaler says her team is currently analyzing data from a dynamic study, the results of which they hope to publish soon.)
Still, Moss says the study represents a valuable step toward understanding how humans echolocate, and perhaps even building devices that could make the skill more broadly achievable. Not everyone can click like Kish. “I’ve worked with a guy who used finger-snaps, but his hand would get tired really fast,” Moss says. Imagine being able to hand someone a device that emits a pitch-perfect signal—one that they could learn to use before, or perhaps instead of, mastering mouth-clicks.
I asked Kish what he thinks of a hypothetical device that could one day produce sounds like he does. He says it already exists. About a third of his students are unable or unwilling to produce clicks with their mouths. “But you pop a castanet in their hands and you get instant results,” he says. “The sound they produce, it’s like ear candy. It’s uncanny how bright, clear, and consistent it is.”
But Kish says he’s all for more devices—and more research. “We know that these signals are critical to the echolocation process. Bats use them. Whales use them. Humans use them. It makes sense that those signals should be studied, understood, optimized.” With the help of models like Thaler’s, Kish might just get his wish.