[image 02993] Category-Based Deep CCA for Fine-Grained Venue Discovery from Multimodal Data

Yi Yu yiyu @ nii.ac.jp
Wed May 9 10:13:05 JST 2018


Dear colleagues,

We would like to draw your attention to our recent research "Category-Based Deep CCA for Fine-Grained Venue Discovery from Multimodal Data", which has been reported at
https://arxiv.org/pdf/1805.02997.pdf (where the full PDF version of this work can be found).

Any comments are welcome. Thank you.

Category-Based Deep CCA for Fine-Grained Venue Discovery from Multimodal Data


In this work, travel destinations and business locations are taken as venues. Discovering a venue from a photo is very important for context-aware applications. Unfortunately, few efforts have paid attention to complicated real-world images such as user-generated venue photos. Our goal is fine-grained venue discovery from heterogeneous social multimodal data. To this end, we propose a novel deep learning model, Category-based Deep Canonical Correlation Analysis (C-DCCA). Given a photo as input, this model performs (i) exact venue search (find the venue where the photo was taken), and (ii) group venue search (find relevant venues of the same category as the photo), based on the cross-modal correlation between the input photo and the textual descriptions of venues. In this model, data from different modalities are projected into the same space via deep networks. Pairwise correlation (between data of different modalities from the same venue) for exact venue search and category-based correlation (between data of different modalities from different venues with the same category) for group venue search are jointly optimized. Because a single photo cannot fully reflect the rich textual description of a venue, the number of photos per venue in the training phase is increased to capture more aspects of a venue. We build a new venue-aware multimodal dataset by integrating Wikipedia featured articles and Foursquare venue photos. Experimental results on this dataset confirm the feasibility of the proposed method. Moreover, the evaluation on another publicly available dataset confirms that the proposed method outperforms state-of-the-art methods for cross-modal retrieval between image and text.
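To give a rough idea of the joint objective described above, here is a minimal PyTorch-style sketch (not the authors' implementation, and not the exact CCA formulation from the paper): two projection networks map image and text features into a shared space, and a simple cosine-similarity surrogate stands in for the pairwise and category-based correlation terms. The network sizes, feature dimensions, and the `cdcca_loss` helper are all hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionNet(nn.Module):
    """Simple MLP that projects one modality into the shared space."""
    def __init__(self, in_dim, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, out_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

def cdcca_loss(img_emb, txt_emb, categories, alpha=0.5):
    """Surrogate for the joint objective: cosine similarity stands in for the
    correlation terms. img_emb[i] and txt_emb[i] come from the same venue;
    `categories` holds one category id per venue."""
    # Pairwise term: pull together image/text embeddings of the same venue.
    pairwise = (img_emb * txt_emb).sum(dim=-1).mean()

    # Category-based term: pull together image/text embeddings of *different*
    # venues that share a category (mask out the diagonal, i.e. the same venue).
    sim = img_emb @ txt_emb.t()                        # (B, B) cross-modal similarities
    same_cat = categories.unsqueeze(0) == categories.unsqueeze(1)
    off_diag = ~torch.eye(len(categories), dtype=torch.bool)
    mask = same_cat & off_diag
    category = sim[mask].mean() if mask.any() else sim.new_tensor(0.0)

    # Negate because both correlations are to be maximized.
    return -(alpha * pairwise + (1 - alpha) * category)

# Hypothetical usage: random tensors stand in for CNN image features and
# document-embedding text features of the venues in a mini-batch.
img_net, txt_net = ProjectionNet(2048), ProjectionNet(300)
img_feat, txt_feat = torch.randn(8, 2048), torch.randn(8, 300)
cats = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = cdcca_loss(img_net(img_feat), txt_net(txt_feat), cats)
loss.backward()

At inference time, exact venue search would rank venues by the similarity between the photo embedding and each venue's text embedding, while group venue search would aggregate that ranking over venues sharing a category; the weighting parameter alpha above is only an assumption for illustration.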

Best regards,

Yi Yu 
http://research.nii.ac.jp/~yiyu/

