
Talking Gloves Developed for Differently-Abled: IIT, AIIMS Jodhpur

by Hannah Joy on December 6, 2021 at 3:04 PM

'Talking Gloves' have been developed for people with speech impairments by a team of innovators from the Indian Institute of Technology (IIT) and the All India Institute of Medical Sciences (AIIMS), Jodhpur.


The device uses principles of Artificial Intelligence (AI) and Machine Learning (ML) to automatically generate speech. The generated speech is language independent, helping people with speech impairments communicate with others.

‘Talking Gloves have in-built sensors and can be worn on both hands. Hand movements generate electrical signals, which an audio transmitter then converts into an audio signal.’

It can also convert hand gestures into text or pre-recorded voices, allowing a differently-abled person to communicate a message independently and effectively. The device costs less than Rs 5,000, the institutes said in a statement.

"The language-independent speech generation device will bring people back to the mainstream in today's global era without any language barrier. Users of the device only need to learn once and they would be able to verbally communicate in any language with their knowledge," said Prof Sumit Kalra, Assistant Professor, Department of Computer Science and Engineering, IIT Jodhpur in a statement.

"Additionally, the device can be customised to produce a voice similar to the original voice of the patients which makes it appear more natural while using the device," he added.

The device with in-built sensors can be worn on both hands, and it generates electrical signals due to hand movements, which are then received at a signal processing unit.

The signal processing unit compares the magnitude of the received electrical signals with a set of pre-defined magnitude combinations stored in memory. Using AI and ML algorithms, these signal combinations are then translated into phonetics corresponding to at least one consonant or vowel.

For example, the consonant and the vowel can be drawn from Hindi phonetics. A phonetic is assigned to the received electrical signals based on this comparison.
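The matching step described above can be sketched in a few lines of Python. This is an illustrative assumption, not the institutes' actual algorithm: each glove reading is treated as a vector of sensor magnitudes, and the pre-defined combination closest to it (by Euclidean distance) determines the assigned phonetic. The sensor values and the gesture-to-phonetic table below are invented placeholders.

```python
# Hypothetical table of pre-defined magnitude combinations (one value per
# flex sensor) mapped to Hindi phonetics in transliteration. The gestures
# and values here are illustrative, not the device's real calibration.
PHONETIC_TABLE = {
    (0.9, 0.1, 0.1, 0.1, 0.1): "a",
    (0.9, 0.9, 0.1, 0.1, 0.1): "ka",
    (0.1, 0.9, 0.9, 0.1, 0.1): "ma",
}

def match_phonetic(reading, table=PHONETIC_TABLE):
    """Assign a phonetic by finding the stored magnitude combination
    closest to the sensor reading (squared Euclidean distance)."""
    def dist(combo):
        return sum((x - y) ** 2 for x, y in zip(reading, combo))
    best = min(table, key=dist)
    return table[best]

# A noisy reading near the "thumb extended" gesture still maps to "a".
print(match_phonetic((0.85, 0.15, 0.1, 0.1, 0.1)))  # → a
```

A production system would replace the lookup table with a trained classifier, which is where the article's AI/ML component would come in.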

An audio signal corresponding to the assigned phonetic is then generated by an audio transmitter, based on trained data about vocal characteristics stored in a machine learning unit.

Generating audio signals for phonetics that combine vowels and consonants produces speech, enabling people with speech impairments to communicate audibly with others. Because the speech synthesis works at the level of phonetics, the speech generation is independent of any particular language.
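The synthesis step can be pictured as simple concatenation: each recognised phonetic indexes a short audio clip, and the clips are joined in sequence to form the spoken output. The sketch below is a minimal stand-in, with placeholder tones instead of the real pre-recorded clips (which, as the article notes, could be recorded in the user's own voice); the sample rate and clip duration are assumptions.

```python
import numpy as np

SAMPLE_RATE = 16000  # assumed sample rate (Hz)
CLIP_SECONDS = 0.2   # assumed duration of each phonetic clip

def clip_for(phonetic):
    """Stand-in for loading a pre-recorded clip of this phonetic.
    Here we synthesise a distinct placeholder tone per phonetic."""
    n = int(SAMPLE_RATE * CLIP_SECONDS)
    t = np.linspace(0, CLIP_SECONDS, n, endpoint=False)
    freq = 200 + 10 * sum(ord(c) for c in phonetic)
    return 0.3 * np.sin(2 * np.pi * freq * t)

def synthesize(phonetics):
    """Concatenate per-phonetic clips into one output waveform."""
    return np.concatenate([clip_for(p) for p in phonetics])

speech = synthesize(["na", "ma", "s", "te"])
print(len(speech))  # 4 clips of 3200 samples each
```

Because the vocabulary is a fixed inventory of phonetics rather than the words of any one language, the same pipeline serves any language the user can spell out phonetically.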

The team is working to further improve the device's durability, weight, responsiveness, and ease of use.

The developed product will be commercialised through a start-up incubated by IIT Jodhpur.



Source: IANS
