A group at the University of Washington has developed software that for the first time enables deaf and hard-of-hearing Americans to use sign language over a mobile phone. UW engineers got the phones working together this spring, and recently received a National Science Foundation grant for a 20-person field project that will begin next year in Seattle.

This is the first time two-way real-time video communication has been demonstrated over cell phones in the United States, the group says. Since the group posted a video of the working prototype on YouTube, deaf people around the country have been writing in daily.

"A lot of people are excited about this," said principal investigator Eve Riskin, a UW professor of electrical engineering.

Deaf people currently communicate by cell phone using text messages. "But the point is you want to be able to communicate in your native language," Riskin said. "For deaf people that’s American Sign Language."

Video is much better than text messaging because it is faster and better at conveying emotion, said Jessica DeWitt, a UW undergraduate in psychology who is deaf and is a collaborator on the MobileASL project. She says a large part of her communication is through facial expressions, which are transmitted over the video phones.

Low data transmission rates on US cellular networks, combined with limited processing power on mobile devices, have so far prevented real-time video with enough frames per second to convey intelligible sign language. US cellular networks allow about one-tenth of the data rates common in places such as Europe and Asia (sign language over cell phones is already possible in Sweden and Japan).

Even as faster networks become more common in the United States, there is still a need for phones that operate on the slower systems.

"The faster networks are not available everywhere," said doctoral student Anna Cavender. "They also cost more. We don’t think it’s fair for someone who’s deaf to have to pay more for his or her cell phone than someone who’s hearing."

The team tried different ways to produce comprehensible sign language in low-resolution video. They found that the most important region to transmit at high resolution is the area around the face. This is not surprising, since eye-tracking studies have already shown that viewers spend most of their time looking at a person’s face while that person is signing.
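The story does not detail how MobileASL's encoder allocates bits, but the general idea of prioritizing the face region can be sketched. The snippet below is a hypothetical Python/OpenCV illustration, not the project's actual codec: it keeps detected face regions at full resolution while degrading the rest of the frame before compression. The face detector, background scale factor, and JPEG quality setting are all assumptions chosen for illustration.

```python
# Illustrative region-of-interest sketch (not the MobileASL codec):
# keep the area around the face sharp, compress the background heavily.
import cv2

# OpenCV's bundled Haar cascade, used here as a stand-in face detector.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def encode_with_face_priority(frame, background_scale=0.25, jpeg_quality=60):
    """Downsample the whole frame to mimic a low-bit-rate background,
    paste the original pixels back over detected face regions, then
    JPEG-encode the composite frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    h, w = frame.shape[:2]
    small = cv2.resize(frame, (int(w * background_scale), int(h * background_scale)))
    degraded = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

    # Restore full detail only where a face was found.
    for (x, y, fw, fh) in faces:
        degraded[y:y + fh, x:x + fw] = frame[y:y + fh, x:x + fw]

    # The quality parameter here stands in for the overall bit budget.
    ok, buf = cv2.imencode(".jpg", degraded, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    return buf if ok else None
```

In a real low-bit-rate video codec the same idea would be expressed by assigning finer quantization to macroblocks covering the face region rather than by compositing images, but the effect on where detail is spent is the same.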

For the full version of the story, visit Medical News Today.

[Source: Medical News Today]