Authors
Yi-Hao Peng, Ming-Wei Hsi, Paul Taele, Ting-Yu Lin, Po-En Lai, Leon Hsu, Tzu-chuan Chen, Te-Yen Wu, Yu-An Chen, Hsien-Hui Tang, Mike Y Chen
Publication date
2018/4/21
Book
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
Pages
1-10
Description
Deaf and hard-of-hearing (DHH) individuals encounter difficulties when engaged in group conversations with hearing individuals, due to factors such as simultaneous utterances from multiple speakers and speakers who may be out of view. We interviewed and co-designed with eight DHH participants to address the following challenges: 1) associating utterances with speakers, 2) ordering utterances from different speakers, 3) displaying optimal content length, and 4) visualizing utterances from out-of-view speakers. We evaluated multiple designs for each of the four challenges through a user study with twelve DHH participants. Our study results showed that participants significantly preferred speech bubble visualizations over traditional captions. These design preferences guided our development of SpeechBubbles, a real-time speech recognition interface prototype on an augmented reality head …
Total citations
2018: 3, 2019: 9, 2020: 12, 2021: 10, 2022: 18, 2023: 21, 2024: 12
Scholar articles
YH Peng, MW Hsi, P Taele, TY Lin, PE Lai, L Hsu… - Proceedings of the 2018 CHI Conference on Human …, 2018