Census-based vision for auditory depth images and speech navigation of visually impaired users
- Source :
- IEEE Transactions on Consumer Electronics. 57:1883-1890
- Publication Year :
- 2011
- Publisher :
- Institute of Electrical and Electronics Engineers (IEEE), 2011.
-
Abstract
- In neuroscience and psychology, visual imagery is the subjective experience of seeing in the absence of visual stimulation; it can also evoke sensations of touch or sound. In this paper, a new visual image aid is proposed that gives visually impaired users a different way to visualize an image: a depth image is applied to an Image-To-Sound Mapping (ITSM) system. The proposed algorithm uses a sparse Census transform (SCT) and color segmentation to obtain an illumination-invariant depth image. The depth image is fed to the ITSM system, yielding a clear, simple sound output from which a mental image can be constructed. In addition, reliable three-dimensional (3D) data on close objects are extracted and interpreted as semantic speech output. Experimental results show that, with verbal description added to the visual image aid, visually impaired users can perceive the image easily and without training. Performance is 82% in well-illuminated environments and 80% in poorly illuminated ones, so the system is largely insensitive to lighting conditions. All subjects also commented that the system would be potentially useful.
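- The abstract names two concrete steps: a sparse Census transform for illumination-invariant stereo depth, and an ITSM stage that renders the depth image as sound. The paper's own parameters are not reproduced in this record, so the Python sketch below is a minimal, hypothetical reconstruction: the neighbour-offset pattern, disparity range, and vOICe-style pitch mapping (row height to frequency, pixel value to loudness) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sparse sampling pattern; the paper's actual offsets differ.
SPARSE_OFFSETS = [(-2, -2), (-2, 2), (2, -2), (2, 2),
                  (0, -3), (0, 3), (-3, 0), (3, 0)]

def sparse_census(img, offsets=SPARSE_OFFSETS):
    """Encode each pixel as a bit string of brightness comparisons against
    a sparse set of neighbours. Only the ordering of intensities matters,
    which is what makes the transform illumination-invariant."""
    code = np.zeros(img.shape, dtype=np.uint32)
    for dy, dx in offsets:
        neighbour = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        code = (code << 1) | (neighbour < img).astype(np.uint32)
    return code

def hamming(a, b):
    """Per-pixel Hamming distance between two 32-bit census codes."""
    bits = np.unpackbits((a ^ b).view(np.uint8), axis=-1)
    return bits.reshape(*a.shape, 32).sum(axis=-1)

def disparity_map(left, right, max_disp=48):
    """Winner-takes-all stereo matching on census codes; larger disparity
    means a closer object, which is the cue the speech stage would use."""
    lc, rc = sparse_census(left), sparse_census(right)
    h, w = left.shape
    best_d = np.zeros((h, w), dtype=np.int32)
    best_c = np.full((h, w), np.iinfo(np.int32).max)
    for d in range(max_disp):
        cost = hamming(lc[:, d:], rc[:, :w - d])
        better = cost < best_c[:, d:]
        best_c[:, d:][better] = cost[better]
        best_d[:, d:][better] = d
    return best_d

def image_to_sound(img, sr=16000, scan_s=1.0, f_lo=200.0, f_hi=4000.0):
    """vOICe-style scan: columns left-to-right in time, row mapped to
    pitch (top = high) and pixel value to loudness."""
    h, w = img.shape
    freqs = np.geomspace(f_hi, f_lo, h)[:, None]  # one tone per row
    n = max(1, int(sr * scan_s / w))              # samples per column
    t = np.arange(n) / sr
    norm = img.astype(float) / (img.max() + 1e-9)
    cols = [(norm[:, x:x + 1] * np.sin(2 * np.pi * freqs * t)).sum(0) / h
            for x in range(w)]
    return np.concatenate(cols)                   # mono waveform
```

- The semantic speech stage described in the abstract would then threshold the `disparity_map` output for near objects and pass a label to a text-to-speech engine; that step is omitted from this sketch.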
- Subjects :
- Visually impaired
Computer science
Speech recognition
Feature extraction
Image processing and computer vision
Image segmentation
Speech processing
Image (mathematics)
Visualization
Media Technology
Computer vision
Segmentation
Artificial intelligence
Electrical and Electronic Engineering
Mental image
Details
- ISSN :
- 0098-3063
- Volume :
- 57
- Database :
- OpenAIRE
- Journal :
- IEEE Transactions on Consumer Electronics
- Accession number :
- edsair.doi...........dd0f9775e080da2ddb52f92513d40459
- Full Text :
- https://doi.org/10.1109/tce.2011.6131167