
Captioning service gives deaf people access to information

Real-time captioning brings equality to hearing-impaired students

“By nature, deaf youngsters will not communicate to people they don’t trust. But if we earn their trust, they will open their hearts.” 

So says Assistant Professor Pichain Junpoom, a lecturer in computer science at Sakon Nakhon Rajabhat University's Faculty of Science and Technology, about the hearing-impaired students under his supervision who have enrolled in an inclusive program that teaches hearing-impaired and hearing students together. The program is designed to enhance the skills and knowledge of people with disabilities so that they can live normal lives in society. 

Pichain says the program began three years ago as an educational alternative for the deaf, as the university believed deaf students should be able to study computer science and technology – or anything other than domestic science and fine arts. 

While hearing-impaired students are capable of studying at higher-education level, they face big challenges in the classroom because of their physical limitations. Sign-language interpreters are on hand to help them, but it is sometimes impossible for these interpreters to delve deeply into very specialized areas of scientific and technological knowledge. And by nature, the hearing-impaired do not easily open up to others, so even if they do not understand some lessons, they will not come forward to ask lecturers directly. 

Pichain refers to his students as “his kids”, and says he has tried to find tools to help them as much as possible. Fortunately, he came across the Thailand Captioning Service Center (TCC). 

This center has deployed a tech solution developed by Dr. Ananlada Chotimongkol and the Accessibility and Assistive Technology (AAT) research team. The AAT team is under the Assistive Technology and Medical Devices Research Center (A-MED), which, in turn, is part of the National Science and Technology Development Agency (NSTDA). 

For some years, the AAT team has researched and developed a real-time captioning system, with the aim of helping deaf people and others with hearing impairment to access information delivered in audio format from various sources. The system also supports the work of organizations and media outlets, especially TV stations, which are required by the National Broadcasting and Telecommunications Commission (NBTC) to provide captioning services so as to ease the obstacles deaf people face in accessing information. 

“Deaf people have problems accessing audio information, such as that delivered at seminars and in live broadcasts by state agencies, including live royal ceremonies, as well as in classes. They have to rely on sign-language interpretation. However, reading is another option, and that’s why a tech-enabled captioning service is an important tool for them,” Ananlada says.

She points out that the development of real-time captioning focuses mainly on two things: speed and accuracy. When it comes to speed, captions should appear no more than five seconds after the speech. Accuracy, meanwhile, must meet the minimum requirements of the NBTC.

Real-time captioning refers to a service that displays captions at the same time as the speech occurs. It has been developed to help not only the hearing-impaired but also the elderly. With such a service available, people with hearing problems are able to access information.

Ananlada says a real-time captioning service involves three steps. First, it receives an audio signal from the source of the speech. Second, the speech is transcribed. Third, the caption is displayed through a system linked to a monitor or a broadcasting device. She says the research team is responsible for integrating artificial intelligence (AI) into the latter two steps, with a strong emphasis on the second. The first phase of the project focused on simultaneous typing. The second phase, on which research is ongoing, focuses on ‘respeaking’.

The third phase will then focus on automatic speech recognition technology.  
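The three steps described above can be sketched in code. The following is a minimal, hypothetical illustration – the function names and the stand-in transcriber are assumptions for clarity, not the AAT team's actual implementation – showing how a chunk of speech flows from capture through transcription to a latency-checked display:

```python
from typing import Iterable, Iterator, Tuple

MAX_LATENCY = 5.0  # seconds: captions should appear within five seconds of speech


def transcribe(chunk: str) -> str:
    """Stand-in for step two. In the real service this is done by
    simultaneous typists (phase 1), respeaking (phase 2), or automatic
    speech recognition (phase 3)."""
    return chunk.strip().capitalize()


def run_pipeline(
    chunks: Iterable[Tuple[float, str, float]]
) -> Iterator[Tuple[str, bool]]:
    """Process (spoken_at, audio_text, displayed_at) triples.

    Step 1 is assumed to have delivered the audio as text chunks;
    step 2 transcribes each chunk; step 3 would push the caption to a
    monitor, so here we simply check it against the five-second target.
    """
    for spoken_at, audio, displayed_at in chunks:
        caption = transcribe(audio)                          # step 2
        on_time = (displayed_at - spoken_at) <= MAX_LATENCY  # step 3 check
        yield caption, on_time
```

For example, a caption displayed 3.2 seconds after the speech meets the target, while one displayed 6.5 seconds later does not.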

At present, the AAT team has successfully introduced an innovative service from the project’s first phase. Ananlada says it is now proceeding with the second phase, with the third phase in clear sight. The ultimate goal of the three phases is to deliver a captioning service that is not only accurate, but also in real-time. In other words, her team is expecting to perfect a system that displays captions immediately, or no more than five seconds after the speech.

“The current captioning system has been in use for about four or five years. It was designed with a strong emphasis on accuracy, because we have to comply with the NBTC’s rules, which require that the captioning accuracy rate be more than 90 percent, and that the captions be easy to read and not cause any misunderstanding. Our first phase features simultaneous typing. Three to four typists each type a small chunk of the speech they have heard as short text, which yields a high accuracy rate. To assist the typists, we have developed a system that enables them to type faster and with greater accuracy,” Ananlada explains.

The system also helps when displaying the captions, through what is called the ‘real-time text formatting’ feature. Ananlada says this is designed to make the captions easy to read. With this feature, the system ensures that captions appear at a suitable pace: not so fast that they are hard to read, nor so slow that they seem disjointed.
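The 90-percent rule can be made concrete with a standard metric. The article does not say how the NBTC actually measures accuracy, so the sketch below uses word error rate (word-level edit distance against a reference transcript) purely as a hypothetical proxy:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(len(ref), 1)


def meets_nbtc_accuracy(reference: str, caption: str,
                        threshold: float = 0.90) -> bool:
    """'More than 90 percent' is read as a strict inequality here."""
    return (1.0 - word_error_rate(reference, caption)) > threshold
```

A caption with one wrong word in ten scores exactly 90 percent accuracy and would fail a strict more-than-90-percent test.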

For TV display, the NBTC requires that each line carry no more than 35 characters and each screen display no more than two lines. Based on these rules, the researchers have developed features that separate captions into lines without breaking in the middle of a word, which can cause misunderstanding.  
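As a rough sketch of how such line-breaking might work – an illustrative reconstruction, not the researchers' actual code – the rules (at most 35 characters per line, at most two lines per screen, never splitting a word) can be implemented with a simple greedy word wrap:

```python
MAX_CHARS = 35  # NBTC: no more than 35 characters per line
MAX_LINES = 2   # NBTC: no more than two lines per screen


def wrap_caption(text: str) -> list:
    """Split text into screens of at most MAX_LINES lines, each at most
    MAX_CHARS characters, breaking only at word boundaries."""
    lines, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= MAX_CHARS:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word  # assumes a single word never exceeds MAX_CHARS
    if current:
        lines.append(current)
    # Group consecutive lines into two-line screens.
    return [lines[i:i + MAX_LINES] for i in range(0, len(lines), MAX_LINES)]
```

A production system would also have to handle words longer than 35 characters, and Thai text in particular, which is written without spaces between words and so needs a word-segmentation step before line-breaking.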

“By integrating AI, our display system works automatically. With its help, our real-time captioning service really provides real-time captions,” Ananlada says.

Synergy of human and tech powers

While the real-time captioning service takes full advantage of technology, its strength lies in its synergy of human and tech powers. Human typists handle the simultaneous typing to guarantee accuracy, but they are backed by tech solutions such as automatic word suggestions and autocorrection for hard-to-spell words. With these features, the typists can work faster and avoid mistakes. 

To date, AAT’s real-time captioning has been used on various occasions. But its highlights have been the live broadcast of King Rama X’s Coronation Ceremony in 2019; press conferences of the Center for COVID-19 Situation Administration; Digital Thailand Big Bang 2019; and computer classes at the Sakon Nakhon Rajabhat University.

Greater understanding leads to reduced inequality 

Before the introduction of real-time captioning to his program, Pichain says, it seemed as though lecturers and hearing-impaired students were speaking different languages. After all, deaf students could not hear what the lecturers were saying, while the lecturers did not know sign language. Even with sign-language interpreters present, teaching and learning did not run smoothly, because the lessons required deep academic understanding of the subjects being discussed, and interpreters were usually unable to help at this level. Real-time captioning has thus made a big difference. With this service, hearing-impaired students feel that they are truly sitting in the same class as the hearing students. They need rely only on themselves for their studies.

“When the kids saw or knew something, but could not communicate it to others, they often felt frustrated. And even if we explained, they might not understand. However, when we integrated real-time captioning into our program, deaf students were able to see the whole picture of what we were discussing. Their understanding of the subjects significantly changed their behavior. I have first-hand experience of this. Our hearing-impaired IT-major students were rather aggressive in their first and second years. They demanded attention, and so on. They even did unimaginable things. 

But after real-time captioning services were integrated one or two years ago, their aggressive behavior disappeared. It is as if real-time captioning has reduced inequality among our students. Hearing-impaired kids started to see themselves as equals in society. They too have knowledge and abilities,” Pichain says.

The big impact of real-time captioning is not restricted to the educational sector. It has also contributed greatly to a TV station’s mission to deliver information that is accessible to all audiences, especially in live programs. With a real-time captioning service in place, deaf people are able to keep abreast of the latest information from news programs and live broadcasts.

Yothin Sitthibodeekul, the Director of the Television and Radio Department at Thai PBS, says that as a public television station, the main mission of his channel is to give information to all audience groups in a comprehensive and equal manner. This includes the hearing-impaired. He says captioning problems occur mainly with live programs because there is no time to prepare captions in advance. 

“Captioning problems are mainly found in live programs such as news programs or live broadcasts of important national events. After we experimented for some time with the real-time captioning service, we became confident that this tech solution could really work. So, we told the Television Pool of Thailand that Thai PBS would broadcast live captions [on its pooled services]. That move marked the first time that real-time captioning was used for TV programs. The first event was the live broadcast of King Rama X’s Coronation Ceremony in 2019. The results showed that the real-time captioning service was really practical,” Yothin says.

Thanks to the tech-enabled real-time captioning service, it is now possible for Thai PBS to offer Live Captions – something that seemed far-fetched in the past, but that now enables the broadcaster to fulfil its Media for All mission. Real-time captioning gives hearing-impaired people access to information, just like hearing people, even at events such as live broadcasts. 

Ananlada points out that real-time captioning has also benefited hearing people. When they are in areas where audio reception is poor, they can still get accurate and clear information by reading the captions.

An alternative for the hearing-impaired

The president of the National Association of the Deaf in Thailand, Wityoot Bunnag (‘Ajarn J’) says real-time captioning gives the hearing-impaired a workable alternative. The deaf have big obstacles in accessing information from the media, so to them, captions are really important. 

“I hope captioning services become more widespread. I hope they will expand beyond news programs. I want to see captioning services used in various other aspects too, so that the deaf can access all information just like hearing people. I hope they will have plenty of choices. Today, their choices are very limited. I really feel that they have no equality. I hope captioning will give the deaf opportunities to take part in all kinds of activities in society. That’s what equality is,” Wityoot says.

However, even with captioning services, some hearing-impaired people will still face constraints, because only the literate can benefit from captions. In Thailand, there are more than 300,000 hearing-impaired people, and most of them cannot read, or have limited reading ability. In spite of this, Wityoot believes captioning will be really useful to the hearing-impaired. If it is widely available, the deaf will practice reading and become more fluent in using the Thai language. He goes as far as suggesting that captioning may be a way to boost the language skills of hearing-impaired people. 

As a researcher and developer of real-time captioning, Ananlada believes there is still room for improvement. Further upgrades should lower costs and make the system easier to use. She has set her sights on further development of the real-time captioning system in the hope that it will be more widely used. She emphasizes that real-time captioning benefits not only the hearing-impaired, but also hearing people who find themselves in situations where listening is restricted. Moreover, captions are very useful for archiving and data retrieval. 

Ananlada adds that the next goal of her team’s real-time captioning project is to turn the Thailand Captioning Service Center (TCC) into a social enterprise. Not only will captions give hearing-impaired Thais access to information, but they will also benefit foreigners who are trying to fit into Thai society or learn the Thai language. Such captioning services are already available in countries such as the United States, Japan, Britain and many European nations.

“I hope captioning services become more widespread in Thailand. I hope everyone will recognize that captioning services are necessary and useful. Join the efforts to back captioning services. Help to develop them if you can, or at the very least, use them. Demand for use of captioning will automatically help to expand the service,” Ananlada concludes. 
