Image Generative Semantic Communication with Multi-Modal Similarity Estimation for Resource-Limited Networks

Authors:
Hosonuma, Eri
Yamazaki, Taku
Miyoshi, Takumi
Taya, Akihito
Nishiyama, Yuuki
Sezaki, Kaoru
Publication Year:
2024

Abstract

To reduce network traffic and support environments with limited resources, a method for transmitting images with minimal data is required. Several machine learning-based image compression methods, which reduce the data size of images while preserving their features, have been proposed. However, in certain situations, reconstructing only the semantic information of an image at the receiver may be sufficient. To realize this concept, semantic-information-based communication, called semantic communication, has been proposed, along with an image transmission method based on it. This method transmits only the semantic information of an image, and the receiver reconstructs the image using an image-generation model. However, it relies on a single type of semantic information, and reconstructing an image close to the original from this information alone is challenging. This study proposes a multi-modal image transmission method that leverages various types of semantic information for efficient semantic communication. The proposed method extracts multi-modal semantic information from an original image and transmits only this information to the receiver. The receiver then generates multiple candidate images using an image-generation model and selects an output image based on semantic similarity. The receiver must make this selection using only the received features; however, evaluating similarity with conventional metrics is challenging. Therefore, this study explores new metrics for evaluating the similarity between semantic features of images and proposes two scoring procedures for evaluating semantic similarity between images based on multiple semantic features.
The results indicate that the proposed procedures can compare semantic similarities, such as position and composition, between the semantic features of the original and generated images.

Comment: 14 pages, 15 figures; this paper has been submitted to IEICE Transactions on Communications
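The receiver-side selection step described in the abstract can be illustrated with a minimal sketch. The feature names (`caption`, `layout`), the weighted-sum scoring rule, and the use of cosine similarity are all assumptions for illustration, not the paper's actual scoring procedures; in the real system each modality's features would come from dedicated extractors rather than toy vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def aggregate_score(received, candidate, weights):
    """Hypothetical scoring rule: weighted sum of per-modality similarities
    between the received semantic features and a candidate image's features."""
    return sum(weights[m] * cosine(received[m], candidate[m]) for m in received)

def select_best(received, candidates, weights):
    """Return the index of the candidate most similar to the received features."""
    scores = [aggregate_score(received, c, weights) for c in candidates]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy example with two modalities and two generated candidates.
received = {"caption": [1.0, 0.0], "layout": [0.0, 1.0]}
candidates = [
    {"caption": [0.0, 1.0], "layout": [1.0, 0.0]},  # dissimilar in both modalities
    {"caption": [1.0, 0.0], "layout": [0.0, 1.0]},  # matches the received features
]
best = select_best(received, candidates, {"caption": 0.5, "layout": 0.5})  # -> 1
```

In practice the weighting of modalities would itself be a design choice; the paper's two proposed scoring procedures address exactly how such multi-feature comparisons should be combined.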

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.11280
Document Type:
Working Paper