However, one of the areas where its impact is most transformative, and often least discussed, is accessibility. For deaf and hard-of-hearing people, AI has opened doors that had long remained closed.
As an oral deaf person, I closely follow technological innovations aimed at hearing accessibility, and I can say that artificial intelligence has been an essential ally in overcoming barriers in communication and everyday life. With each new tool launched, I feel that we are one step closer to a world where inclusion is not just a promise, but a tangible reality.
Communication as the main challenge
For many people who are deaf or hard of hearing, communication in predominantly spoken environments is one of the biggest difficulties. Despite advances in Libras (Brazilian Sign Language) and subtitles in audiovisual productions, there is still a significant gap in live interaction contexts, such as the corporate environment, work meetings, classes, or even informal conversations.
The lack of automatic captioning and the absence of interpreters in most public settings make everyday life for many people more complicated than it should be. As an oral deaf person, I might be expected to have my needs met by a high-tech hearing aid; in practice, however, I rely day to day on lip-reading and on supporting technologies whenever attitudinal accessibility is missing (people talking without facing each other, not making eye contact, speaking softly, talking fast, holding a hand over the mouth, not vocalising well). This is exactly where artificial intelligence has been playing a crucial role.
AI-powered tools that facilitate communication
In recent years, several AI-driven tools have been developed to make communication more accessible for deaf people. Some of them are already part of my daily life and that of many other hearing impaired people.
Automatic captioning in virtual meetings
With the popularisation of remote working, platforms such as Google Meet, Microsoft Teams and Zoom have implemented real-time automatic captioning. This AI-powered functionality transcribes participants' speech almost instantaneously, ensuring that deaf or hard-of-hearing people can follow meetings without missing important details.
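Under the hood, these features follow a simple loop: capture a short chunk of audio, run it through a speech recognition model, and display the resulting text. The sketch below illustrates that loop in Python using the open-source SpeechRecognition library with Google's free web recogniser. It is a minimal approximation of the idea, not what Meet, Teams or Zoom actually run, and the pt-BR language choice is just an example.

```python
# Minimal live-captioning loop: listen in short chunks, transcribe, print.
# Illustrative only; commercial platforms use far more sophisticated models.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    # Calibrate for ambient noise so speech is easier to isolate
    recognizer.adjust_for_ambient_noise(source, duration=1)
    print("Listening... (Ctrl+C to stop)")
    while True:
        # Capture up to five seconds of speech per chunk
        audio = recognizer.listen(source, phrase_time_limit=5)
        try:
            # "pt-BR" requests Brazilian Portuguese transcription
            text = recognizer.recognize_google(audio, language="pt-BR")
            print(text)
        except sr.UnknownValueError:
            pass  # chunk was unintelligible; skip it
        except sr.RequestError:
            print("[recognition service unavailable]")
```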
In my experience, automatic captioning has been a game changer! Anyone who lives with a deaf person like me will have noticed how observant and reflective we are, eyes fixed on one point as if we were always analysing something. That is because our brain is rapidly processing what we just heard, second-guessing whether it really was what we heard, while the next sentence is already arriving (phew). So, over the course of a conversation, when a response depends on the oral deaf person, it sometimes comes with a little 'delay'.
A curiosity I like to mention, and one that also shows up in automatic subtitle transcriptions: in Portuguese, my mother tongue, I have more difficulty with attitudinal accessibility than in Spanish, a language I learned and speak with advanced fluency. This comes down to how sentences are constructed grammatically and how they are vocalised. Portuguese links words together very quickly, while Spanish, because of its linguistic positioning, allows the pauses between words to be respected a little more, even if only by milliseconds. And only those who live it know!
Curiosities aside, let's continue. It used to be common for me to miss a lot of crucial information during online meetings, and I depended on close colleagues to send me summaries or go over the main points afterwards to check whether I had caught everything. Now I can follow online meetings in real time, participating actively and having the opportunity to contribute more effectively. Here again there is a point of attention in the corporate world regarding career development: hearing and visually impaired employees who could not fully assimilate a conversation often never had the opportunity to contribute their ideas to the project.
While there is still room for improvement, such as the accuracy of subtitles in environments with background noise or regional accents, the progress is already remarkable and reflects the power of AI to democratise access to information.
Real-time transcription applications
One resource that has revolutionised the lives of deaf people is the use of live transcription applications. Tools such as Ava, Otter.ai and Live Transcribe (from Google) use AI to transcribe speech in physical environments, allowing the conversation to be followed via mobile phone.
These applications are especially useful in everyday situations, such as doctor's appointments, classes or even social gatherings. It is true that some are not yet available in Brazil! In April 2024, I took a trip to Europe and used Ava at a conference to follow a lecture. Even in a noisy environment, with many people talking simultaneously, I was able to understand the content presented, which gave me a complete learning experience without barriers.
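For situations without reliable internet, there are also recognition engines that run entirely on the device. As a hedged illustration (the internals of these commercial apps are proprietary), the sketch below streams microphone audio into the open-source Vosk engine, which transcribes offline once a language model has been downloaded:

```python
# Illustrative offline transcription with the open-source Vosk engine.
# This is not how Ava, Otter.ai or Live Transcribe are implemented; it
# only shows that speech-to-text can run fully on-device, with no network.
# Requires: pip install vosk pyaudio (plus an automatic model download).
import json

import pyaudio
from vosk import KaldiRecognizer, Model

model = Model(lang="en-us")  # small English model; pt-BR models also exist
recognizer = KaldiRecognizer(model, 16000)

audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1, rate=16000,
                    input=True, frames_per_buffer=4000)

print("Transcribing offline... (Ctrl+C to stop)")
while True:
    data = stream.read(4000)
    if recognizer.AcceptWaveform(data):
        result = json.loads(recognizer.Result())
        if result.get("text"):
            print(result["text"])
```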
Artificial Intelligence in the translation of Libras in Brazil
In addition to automatic subtitling and transcription applications, AI is also advancing in the area of machine translation of Libras. Innovative projects, such as Hand Talk, use artificial intelligence to translate text and speech into Brazilian sign language.
Hand Talk, for example, has a digital avatar that interprets and translates content, making it easier for deaf people who communicate in sign language to access information on websites, in videos and even in public places. This technology not only promotes inclusion, but also strengthens the presence of Libras as an official language, broadening its reach and visibility.
As an oral deaf person, I recognise the importance of valuing sign language, even though my primary communication is through orality. A diversity of tools builds accessibility pathways: among the countless variations of deafness, each deaf person can choose the solution that best suits their profile, making accessibility even more personalised, something only the rapid advance of technology can achieve!
Artificial Intelligence and Virtual Assistants
Another field in which AI has excelled is in the development of virtual assistants, such as Alexa (Amazon), Google Assistant and Siri (Apple). While these assistants were initially designed to respond to voice commands, many of them are now capable of interacting through text.
This evolution has been particularly beneficial for deaf people, as it allows them to control household devices, schedule tasks and even look up information without needing to hear or speak. For example, it is now possible to send text commands to Alexa (Amazon) to control lights, alarms and other devices, making the routine of a signing deaf person more practical and efficient.
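To make the idea concrete, here is a deliberately simplified, hypothetical sketch of what a text-command interface does behind the scenes: parse the typed sentence and map it to a device action. The device registry and names are invented for illustration; real assistants such as Alexa and Google Assistant rely on natural-language models and vendor APIs, not regular expressions.

```python
# Hypothetical text-command handler for smart-home devices.
# Everything here (devices, command grammar) is invented for illustration.
import re

# Invented device registry: device name -> current state
devices = {"living room light": "off", "bedroom light": "off", "alarm": "off"}

def handle_command(command: str) -> str:
    """Match a typed command like 'turn on the living room light'."""
    match = re.match(r"turn (on|off) (?:the )?(.+)", command.lower().strip())
    if not match:
        return "Sorry, I didn't understand that."
    state, name = match.groups()
    if name not in devices:
        return f"No device called '{name}'."
    devices[name] = state
    return f"OK, the {name} is now {state}."

print(handle_command("Turn on the living room light"))
# -> OK, the living room light is now on.
```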
Advances in hearing health
AI is also driving significant advances in the area of hearing health. New AI-enabled hearing aids, such as the ones I use from the Oticon More and Phonak Paradise ranges, can automatically adjust sound levels depending on the user's environment.
These devices use AI to distinguish between different types of noise, prioritising speech and reducing unwanted background sounds. Some functions adjust automatically, while others can be tuned in a smartphone app; and when it comes to apps that extend the AI functions of hearing aids, Apple smartphones still offer better options than Android ones.
Even apart from the additional smartphone adjustments, this AI functionality inside the hearing aids provides a more natural and less strenuous listening experience, reducing the physical wear on the oral deaf person, who otherwise strains to channel sound through the ear canal, and making communication in crowded places easier.
The first time I used an AI-powered hearing aid, I felt a noticeable difference in the clarity of sound. In crowded environments, such as restaurants or events, the technology helps filter conversations and reduce the impact of external noise, something conventional hearing aids cannot always do.
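Manufacturers do not publish their algorithms, but a classic signal-processing idea behind "prioritise speech, cut background noise" is spectral gating: estimate the noise floor per frequency band, then attenuate the bands dominated by steady noise. The toy sketch below shows that idea in a few lines of numpy; real hearing aids replace this hand-written rule with trained neural networks running in real time.

```python
# Toy spectral gating: not any manufacturer's algorithm, just the
# textbook idea of ducking frequency bands dominated by steady noise.
import numpy as np

def reduce_noise(frame: np.ndarray, noise_profile: np.ndarray,
                 floor: float = 0.1) -> np.ndarray:
    """Attenuate spectral bins close to the estimated noise floor."""
    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)
    # Keep bins well above the noise floor at full gain; duck the rest
    gain = np.where(magnitude > 2.0 * noise_profile, 1.0, floor)
    return np.fft.irfft(spectrum * gain, n=len(frame))

# Toy usage: a 440 Hz "voice" tone buried in white noise
rate = 16000
t = np.arange(rate) / rate
noise = 0.3 * np.random.randn(rate)
signal = np.sin(2 * np.pi * 440 * t) + noise
# Noise profile estimated from a speech-free stretch of audio
noise_profile = np.abs(np.fft.rfft(noise))
cleaned = reduce_noise(signal, noise_profile)
```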
Challenges and opportunities
Despite all these advances, digital accessibility still faces challenges. Many AI tools rely on internet connectivity, which can limit access in rural areas or regions with poor infrastructure. In addition, the accuracy of subtitles and automatic transcriptions still varies, especially in less widely spoken languages or languages with many dialects.
Another important point is the need for greater representation. For AI to continue to evolve in an inclusive way, it is essential that deaf and hard of hearing people are actively involved in the development of these technologies, bringing their real perspectives and everyday needs.
Achievements and progress
Artificial intelligence is playing a transformative role in the lives of deaf and hard of hearing people, offering innovative solutions that facilitate communication and promote greater independence. From automatic captioning to virtual assistants and smart hearing aids, each advance represents a step towards a more inclusive world. As an oral deaf person, I see AI as a powerful ally, capable of breaking down barriers and opening paths that once seemed insurmountable. The future of accessibility is being shaped by these technologies, and it is our role to continue to encourage and participate in this process, ensuring that no one is left behind in the digital revolution.