Multimodal Behavior Analysis in the Wild: Advances and Challenges - Chapter 4: Designing Audio-Visual Tools to Support Multisensory Disabilities

Elsevier, Multimodal Behavior Analysis in the Wild: Advances and Challenges, Computer Vision and Pattern Recognition Series, 2019, pp. 79-102
Nicoletta Noceti, Luca Giuliani, Joan Sosa-García, Luca Brayda, Andrea Trucco, Francesca Odone

This chapter discusses the technologies devised in Glassense (Wearable technologies for sensory supplementation), a regional project whose aim was to develop a proof-of-concept prototype of a sensorized pair of glasses assisting users with limited technology skills and possibly multiple disabilities. The primary beneficiaries of this technology are older users with partial vision and/or hearing loss. For people with low vision, the glasses offer an object recognition functionality based on a collaborative principle: the user specifies an object category of interest with an audio command, the system then performs object-instance recognition, and it returns audio feedback to the user if a matching instance is found. To address hearing loss, the system provides a complementary audio input to existing acoustic prostheses: the device enhances speech only when the sound source is in front of the person wearing the glasses, thereby helping to mitigate the so-called cocktail party problem.
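The collaborative recognition principle described above can be sketched as a simple interaction round. All names here (the recognizer, the feedback function, the toy category database) are illustrative placeholders under assumed interfaces, not the actual Glassense implementation:

```python
# Minimal sketch of the collaborative loop: the user states a category,
# the system runs object-instance recognition on the current camera frame,
# and audio feedback is produced only when a matching instance is found.
# All interfaces are hypothetical, not the Glassense API.

def recognition_loop(spoken_category, frame, recognizer, speak):
    """One interaction round; returns the spoken feedback string."""
    matches = recognizer(frame, spoken_category)
    if not matches:
        return speak(f"No {spoken_category} in view")
    # Report the best-scoring instance back to the user.
    best = max(matches, key=lambda m: m["score"])
    return speak(f"{best['label']} ahead, confidence {best['score']:.2f}")

# --- toy stand-ins for the recognizer and the text-to-speech output ---
def toy_recognizer(frame, category):
    # A real system would match the frame against a database of
    # known object instances for the requested category.
    db = {"mug": [{"label": "blue mug", "score": 0.91}]}
    return db.get(category, [])

def toy_speak(text):
    return text  # a real system would route this to text-to-speech

print(recognition_loop("mug", None, toy_recognizer, toy_speak))
print(recognition_loop("keys", None, toy_recognizer, toy_speak))
```

The split between a user-supplied category and system-side instance recognition keeps the visual search space small, which is what makes the collaborative principle practical on wearable hardware.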