Method for Generating Captions for Clothing Images to Support Visually Impaired People
- Author
Kei Sawai, Noboru Takagi, Hiroyuki Masuta, Kiri Tateno, and Tatsuo Motoyoshi
- Subjects
Visually impaired, Computer science, Deep learning, Feature extraction, Representation (arts), Clothing, Visualization, Human–computer interaction, Artificial intelligence
- Abstract
Visually impaired people can perceive the shape and texture of objects and recognize them by touch. However, visual information such as the color and pattern of clothing cannot be accessed this way. It is therefore difficult for visually impaired people to coordinate clothes without the assistance of a sighted person. In this study, we aim to develop a system that supports visually impaired people in choosing clothing. In this paper, we describe a method for converting visual information acquired from clothing images into verbal representations using Deep Neural Networks (DNNs). In our experiments, we developed a caption-generating model trained on our own dataset. In a computer experiment generating captions for 100 images, we observed subjectively acceptable captions for about 80% of the results.
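The abstract does not specify the architecture, but image-to-caption DNNs of this kind typically pair a visual encoder with a recurrent word-by-word decoder. The following is a minimal toy sketch of that pipeline shape only; the weights are random, the vocabulary is invented for illustration, and none of it reflects the authors' actual model or dataset.

```python
# Toy encoder-decoder caption sketch (hypothetical; NOT the paper's model).
# Random weights stand in for a trained CNN encoder and RNN decoder.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<start>", "<end>", "a", "red", "striped", "shirt"]  # invented toy vocabulary
FEAT_DIM = HID_DIM = 8

def encode(image: np.ndarray) -> np.ndarray:
    """Stand-in for a CNN mapping a clothing image to a feature vector."""
    W = rng.standard_normal((FEAT_DIM, image.size))
    return np.tanh(W @ image.ravel())

def decode_step(h, word_id, Wh, Wx, Wo):
    """One recurrent step: previous word + hidden state -> next word id."""
    x = np.zeros(len(VOCAB))
    x[word_id] = 1.0  # one-hot embedding of the previous word
    h = np.tanh(Wh @ h + Wx @ x)
    return h, int(np.argmax(Wo @ h))  # greedy decoding

def generate_caption(image: np.ndarray, max_len: int = 5) -> str:
    h = encode(image)[:HID_DIM]  # image features seed the decoder state
    Wh = rng.standard_normal((HID_DIM, HID_DIM))
    Wx = rng.standard_normal((HID_DIM, len(VOCAB)))
    Wo = rng.standard_normal((len(VOCAB), HID_DIM))
    word, words = VOCAB.index("<start>"), []
    for _ in range(max_len):
        h, word = decode_step(h, word, Wh, Wx, Wo)
        if VOCAB[word] == "<end>":
            break
        words.append(VOCAB[word])
    return " ".join(words)

caption = generate_caption(rng.standard_normal((4, 4)))
print(caption)
```

In a real system the encoder weights would come from a pretrained image network and the decoder would be trained on image-caption pairs such as the authors' clothing dataset; the greedy argmax loop is one simple decoding strategy among several.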
- Published
- 2020