Learning Deep Structure-Preserving Image-Text Embeddings

Liwei Wang1, Yin Li2, and Svetlana Lazebnik1

1University of Illinois at Urbana-Champaign

2Georgia Institute of Technology

CVPR 2016

Abstract:

This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained with a large-margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by the metric learning literature. Extensive experiments show that our approach achieves significant improvements in accuracy for image-to-text and text-to-image retrieval. Our method sets new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.

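The released implementation is in MATLAB (see the Code section below). Purely as an illustration, here is a minimal PyTorch sketch of the two pieces the abstract describes: the two-branch embedding network, and simplified versions of the cross-view ranking loss and the within-view structure-preservation loss. The feature dimensions, margins, loss weighting, and full-batch hinge formulation are placeholder assumptions for this sketch, not the tuned values or exact objective from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EmbeddingBranch(nn.Module):
        # One branch: two linear projections with a ReLU in between,
        # followed by L2 normalization into the shared embedding space.
        def __init__(self, in_dim, hidden_dim, embed_dim):
            super().__init__()
            self.fc1 = nn.Linear(in_dim, hidden_dim)
            self.fc2 = nn.Linear(hidden_dim, embed_dim)

        def forward(self, x):
            h = F.relu(self.fc1(x))
            return F.normalize(self.fc2(h), p=2, dim=1)

    class TwoBranchNet(nn.Module):
        # Image branch and text branch share the same structure but not
        # weights; all dimensions below are placeholder assumptions.
        def __init__(self, img_dim=4096, txt_dim=6000,
                     hidden_dim=2048, embed_dim=512):
            super().__init__()
            self.img_branch = EmbeddingBranch(img_dim, hidden_dim, embed_dim)
            self.txt_branch = EmbeddingBranch(txt_dim, hidden_dim, embed_dim)

        def forward(self, img_feats, txt_feats):
            return self.img_branch(img_feats), self.txt_branch(txt_feats)

    def cross_view_ranking_loss(x, y, margin=0.1):
        # Bi-directional large-margin ranking loss. Rows of x (images) and
        # y (sentences) are matched pairs, so positives sit on the diagonal.
        d = torch.cdist(x, y)                       # all pairwise distances
        pos = d.diag().unsqueeze(1)                 # matched-pair distances
        mask = torch.eye(d.size(0), dtype=torch.bool, device=d.device)
        # image-to-text: each image's matched sentence must be closer
        # than any non-matching sentence by the margin, and vice versa
        cost_i2t = F.relu(margin + pos - d).masked_fill(mask, 0.0)
        cost_t2i = F.relu(margin + pos.t() - d).masked_fill(mask, 0.0)
        return cost_i2t.mean() + cost_t2i.mean()

    def within_view_structure_loss(y, group_ids, margin=0.05):
        # Neighborhood-preserving constraint on one view: samples sharing a
        # group id (e.g. the five sentences describing one image) must stay
        # closer to each other than to samples from other groups.
        d = torch.cdist(y, y)
        same = group_ids.unsqueeze(0) == group_ids.unsqueeze(1)
        eye = torch.eye(len(y), dtype=torch.bool, device=y.device)
        neighbors = same & ~eye
        d_pos = d.masked_fill(~neighbors, float('-inf')).max(dim=1).values
        d_neg = d.masked_fill(same, float('inf')).min(dim=1).values
        valid = neighbors.any(dim=1)    # anchors with an in-batch neighbor
        return F.relu(margin + d_pos - d_neg)[valid].mean()

During training the two terms would be combined as, say, cross_view_ranking_loss(x, y) + 0.2 * within_view_structure_loss(y, ids), where the 0.2 weight is arbitrary here. The paper's objective sums hinge violations over sampled triplets rather than taking full-batch means, so treat this as a sketch of the constraint structure, not a reproduction of the released MATLAB code.
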
Paper:

Learning Deep Structure-Preserving Image-Text Embeddings.
L. Wang, Y. Li, and S. Lazebnik. [pdf]
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Code:

We have released the MATLAB implementation of our two-branch deep embedding method, for academic research use only. For instructions on using the code, please see the Readme.txt in the folder below.

[code] [data]

For any questions, please feel free to contact Liwei Wang at lwang97@illinois.edu.