When Paul Shuttleworth bought himself a very expensive pair of eyeglasses for Christmas last year, it was a peculiar gift. The glasses didn't improve his eyesight, weren't particularly stylish and pressed down uncomfortably on his nose.
But these glasses are a vital tool for him.
Shuttleworth, age 40, a resident of Manchester, England, is profoundly deaf in both ears and uses a mobile live audio transcription app as his main conversational tool. But the glasses have helped him talk and participate in conversations in ways he never could before. They automatically transcribe the words someone is saying and display the text as subtitles on the lenses in front of his eyes.
The buzz is building for such “live-captioning glasses,” and a slew of companies have rolled out their own versions in the past few years. These high-tech glasses have the potential to help people who are deaf or hard of hearing communicate more seamlessly with hearing people. But they aren’t perfect, and they may never fully replace human interpreters.
In July Tom Pritsky, co-founder of TranscribeGlass, put out a TikTok video that demonstrated his company’s live-captioning glasses providing subtitles for words that he spoke, and it went viral. These glasses are not just a technological fad for Pritsky, who has moderate to severe bilateral hearing loss; they actively improve his quality of life and his ability to converse with people. “It helps me fill in the blanks,” says Pritsky, a graduate student in biomedical informatics at Stanford University, who also wears hearing aids. “The hearing aids give me audio, but it’s unclear, and I miss words,” he says. “If I can fill in some of those words via captions and subtitles, then it helps me continue to understand the conversation and not lose the thread.”
Live-captioning glasses are now starting to hit the market, thanks to improvements in speech recognition technology and battery life. Nearly 2.5 billion people worldwide are projected to have some form of hearing loss by 2050. Experts say that older people with hearing loss are the most likely to benefit from these glasses. Many older people need hearing aids but do not wear them because of stigma and the difficulty of adapting to the technology. Live-captioning glasses often resemble regular glasses, which are already more normalized in society than hearing aids are.
Giving older people with age-related hearing loss access to live captioning may improve their social relationships with family and friends, says Thad Starner, a computer science professor at the Georgia Institute of Technology. In the early 2010s Starner helped develop Google Glass, a now-discontinued device resembling eyeglasses that projected information on a tiny prism in front of the wearer’s eye.
A Game Changer
Real-time captioning is not a totally new technology. People who are deaf or hard of hearing routinely use apps such as Otter to transcribe a conversation as they’re having it. But having to ping-pong their gaze between the person they’re talking with and their phone screen can be frustrating and exhausting. Having captions in your field of vision is a game changer, Pritsky says.
Most live-captioning glasses are composed of the glasses themselves, a tiny microphone, an onboard computer that processes speech, a battery and some way to display text. Some glasses contain all of these components, whereas other devices sit on top of a regular pair of glasses. Improvements in speech recognition software have enabled these advances, says Dan Scarfe, CEO of XRAI Glass, a company that makes a speech-processing app that can be used with an array of live-captioning glasses. “I don’t think we’re more than six months away from a killer piece of hardware that you can absolutely use for this. Then it’s just a question of finding people who want to use it,” he says.
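For readers curious about what happens under the hood, here is a minimal sketch, in Python, of the listen-transcribe-display loop those components carry out. It leans on the open-source SpeechRecognition library and a free web speech service as stand-ins; it illustrates the general approach, not the software any of these companies actually ships.

    # A minimal sketch of the capture -> transcribe -> display loop that
    # live-captioning glasses implement. Uses the open-source
    # SpeechRecognition library (pip install SpeechRecognition pyaudio)
    # and Google's free web speech API as stand-ins; real products use
    # their own microphones, recognition engines and lens displays.
    import speech_recognition as sr

    recognizer = sr.Recognizer()

    with sr.Microphone() as source:
        # Sample the room once to set a noise threshold. Crowded, noisy
        # spaces defeat this simple calibration, which is part of why
        # background noise remains hard for these devices.
        recognizer.adjust_for_ambient_noise(source, duration=1)
        while True:
            # Grab up to five seconds of speech at a time.
            audio = recognizer.listen(source, phrase_time_limit=5)
            try:
                text = recognizer.recognize_google(audio)
                # Glasses would render this on the lens; here we print it.
                print(text)
            except sr.UnknownValueError:
                pass  # nothing intelligible in this chunk of audio

On actual glasses, the print step is replaced by a text renderer on the lens display, and the recognition step runs on the onboard computer or a paired phone.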
Despite the media buzz, a relatively modest number of live-captioning glasses have been sold. Scarfe says more than 5,000 people are part of XRAI Glass’s pilot program, and Pritsky says “thousands” have ordered TranscribeGlass.
Nevertheless, these sales represent a significant step forward for the technology, says Starner, who has been a part of the field since its infancy. Starner, a wearable computing aficionado, has been wearing some kind of head-mounted display almost every day for 30 years. “It used to be that in order to have a head-borne display that had any sort of battery life, you ended up carrying around seven pounds of lead weight with you,” says Starner, who used to carry a battery in a shoulder bag. “Literally, my first battery was a motorcycle battery.”
Starner has seen a lot of technological failures over the past three decades. But now he believes that the current crop of devices has promise and that live-captioning glasses will soon be a normalized, household technology, thanks to more portable batteries and upgrades in speech recognition. For this to happen, society’s attitudes toward the glasses must continue to shift. The devices need to become unremarkable, he says. “Suppose I walk up to somebody at the airport, and I ask, ‘Can I get directions to the bathroom?’ And the person says, ‘Oh, is that the new Apple head-borne display?’ What you really want to do is get to the bathroom. You want the conversation to be about the conversation, not about the technology,” Starner says.
An Important but Imperfect Tool
While live-captioning glasses could be an incredibly powerful tool for millions of people, they are not a complete solution. For many people with hearing loss, understanding a hearing friend in a crowded restaurant is a nightmare. Similarly, live-captioning software struggles to transcribe conversation accurately in these spaces. Background noise remains a thorny problem that the glasses do not yet fully solve.
Many people who are deaf or hard of hearing are excited by this technology, but some worry that it could be used to justify denying requests for sign language interpreters. Captions do not convey a speaker’s identity, emotions or inflections; interpreters can convey all of that context.
These glasses also place the burden of communication on the people using them—expectations that Deaf people are tired of having to meet, says Raja Kushalnagar, a professor and director of the Information Technology program at Gallaudet University. “The hearing person might think that, you know, ‘What’s wrong with you? What’s wrong with the glasses?’” he says. “They think it’s not the tools but the [Deaf people] themselves who are the problem.”
These glasses are mainly a tool to interface with the larger hearing public—Deaf communities that primarily use sign language have no need for them. But Kushalnagar has already seen the technology’s impact on his everyday life when conversing with his hearing son. “Say I’m cooking now—chopping vegetables or something in the kitchen. I’ll be able to read the captions while I’m still cutting,” he says. “I would never have been able to have that conversation with my son before like that. Now I can walk, and I can speak with my son at the same time.”
When Shuttleworth took a test a few years ago to become an IT technician, he passed with distinction because he had been given the resources he needed to understand the material. Shuttleworth now uses his glasses alongside his mobile app at his job. He desperately wishes he had been given access to these tools growing up, tools that could have changed his whole life’s trajectory. “If I had that mobile app and the glasses, I wouldn’t be sitting here talking to you now,” he says.