Image retrieval is a computer vision technique for effectively recovering target images from collections of hundreds or thousands of images. A well-organized image retrieval system should be able to retrieve images that are both relevant and cover various aspects of the query, i.e., the query's diversity. A methodology based on a Multi-Modal Approach was previously proposed to address the problem of relevance and diversification in social image retrieval; it used clustering techniques and an adaptive multi-modal relevance feedback algorithm. However, identifying the relevance between textual and visual feature descriptors in social media remains challenging for large collections of images. This paper proposes an Improved Multi-Modal Approach (IMMA) that combines text and visual descriptors using optimized deep learning and Canonical Correlation Analysis (CCA) to improve the social image retrieval process. Initially, an image database is formed from popular social networking sites, namely Facebook, Google+ and Twitter. Then, the textual features of the image descriptors and the visual features of the images are extracted using an optimized deep learning model, an optimized AlexNet. The extracted features are fused, and their dimensionality is reduced using CCA. The fused features are then fed to FCA, HCA and an adaptive relevance feedback algorithm for effective social image retrieval.
Volume 11, Special Issue 10
Pages: 1447-1456
DOI: 10.5373/JARDCS/V11SP10/20192990
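
To illustrate the fusion and dimensionality-reduction step described in the abstract, the following is a minimal sketch, not the authors' implementation, of projecting text and visual descriptors into a shared low-dimensional space with CCA. The feature matrices, their dimensions, and the component count are illustrative assumptions; scikit-learn's CCA stands in for whatever CCA implementation the paper uses.

```python
# Minimal CCA-based fusion sketch (illustrative; not the paper's code).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_images = 200

# Stand-ins for real descriptors: text embeddings of image tags/captions
# and CNN activations. Dimensions are assumptions; an AlexNet fc7 layer,
# for example, would yield 4096-dim visual vectors.
text_feats = rng.normal(size=(n_images, 300))
visual_feats = rng.normal(size=(n_images, 512))

# CCA learns paired projections that maximize correlation between the two
# modalities, yielding a shared, reduced-dimensionality representation.
cca = CCA(n_components=32)
text_proj, visual_proj = cca.fit_transform(text_feats, visual_feats)

# One common fusion choice: concatenate the projected views per image.
fused = np.concatenate([text_proj, visual_proj], axis=1)
print(fused.shape)  # (200, 64)
```

In the pipeline the abstract outlines, a fused representation of this kind would then feed the clustering (FCA, HCA) and adaptive relevance feedback stages of the retrieval process.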