      End-to-End Learning of Deep Visual Representations for Image Retrieval

Most cited references (47)

          CNN Features Off-the-Shelf: An Astounding Baseline for Recognition

            Video Google: a text retrieval approach to object matching in videos

              FaceNet: A Unified Embedding for Face Recognition and Clustering

Florian Schroff, Dmitry Kalenichenko, James Philbin (2015)
Despite significant recent advances in the field of face recognition, implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. Our method uses a deep convolutional network trained to directly optimize the embedding itself, rather than an intermediate bottleneck layer as in previous deep learning approaches. To train, we use triplets of roughly aligned matching/non-matching face patches generated using a novel online triplet mining method. The benefit of our approach is much greater representational efficiency: we achieve state-of-the-art face recognition performance using only 128 bytes per face. On the widely used Labeled Faces in the Wild (LFW) dataset, our system achieves a new record accuracy of 99.63%. On YouTube Faces DB it achieves 95.12%. Our system cuts the error rate by 30% compared to the best published result on both datasets. We also introduce the concept of harmonic embeddings, and a harmonic triplet loss, which describe different versions of face embeddings (produced by different networks) that are compatible with each other and allow for direct comparison.
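
The triplet objective summarized in this abstract can be sketched compactly. The snippet below is a minimal illustration in PyTorch, not the authors' code: it assumes the embeddings have already been produced by some network and L2-normalized, the margin value and helper names are hypothetical, and FaceNet's online triplet mining is not reproduced.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss over L2-normalized embeddings.

    Encourages the anchor-positive distance to be smaller than the
    anchor-negative distance by at least `margin` (the margin value here
    is an assumption, not taken from the record above).
    Inputs are (batch, dim) tensors of embeddings.
    """
    d_ap = (anchor - positive).pow(2).sum(dim=1)  # squared distance to matching sample
    d_an = (anchor - negative).pow(2).sum(dim=1)  # squared distance to non-matching sample
    return F.relu(d_ap - d_an + margin).mean()

# Usage sketch (names are hypothetical): `embed` is any CNN returning
# 128-D unit-length vectors, e.g. emb = F.normalize(net(images), dim=1).
# loss = triplet_loss(embed(img_a), embed(img_p), embed(img_n))
```

In this formulation the loss only penalizes triplets whose anchor-negative distance does not exceed the anchor-positive distance by the margin, which is why the choice of triplets, i.e. the mining strategy mentioned in the abstract, matters so much in practice.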

                Author and article information

Journal: International Journal of Computer Vision (Int J Comput Vis)
Publisher: Springer Nature America, Inc
ISSN: 0920-5691 (print); 1573-1405 (electronic)
Published online: June 5, 2017
Issue date: September 2017
Volume 124, Issue 2, pages 237-254
DOI: 10.1007/s11263-017-1016-8
© 2017
License/TDM: http://www.springer.com/tdm
