      Spectrum-based deep neural networks for fraud detection

Preprint

Abstract

In this paper, we focus on fraud detection on a signed graph with only a small set of labeled training data. We propose a novel framework that combines deep neural networks with spectral graph analysis. In particular, we use the projection of each node (called its spectral coordinate) in the low-dimensional spectral space of the graph's adjacency matrix as the input to the deep neural networks. Spectral coordinates capture the most useful topological information of the network. Because spectral coordinates have a small dimension (compared with the dimension of the adjacency matrix itself), training deep neural networks becomes feasible. We develop and evaluate two neural networks, a deep autoencoder and a convolutional neural network, within our fraud detection framework. Experimental results on a real signed graph show that our spectrum-based deep neural networks are effective for fraud detection.
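
The pipeline the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the random signed graph, the labels, the choice of k, and the scikit-learn MLP (standing in for the paper's deep autoencoder and CNN) are all assumptions made for this example. The spectral coordinates are simply the rows of the matrix formed by the k leading eigenvectors of the adjacency matrix.

```python
# Minimal sketch: spectral coordinates of a signed graph as classifier input.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_nodes, k = 200, 8  # k = dimension of the spectral space, k << n_nodes

# Toy signed adjacency matrix (hypothetical data):
# entries are +1 (positive edge), -1 (negative edge), 0 (no edge).
A = rng.choice([-1, 0, 0, 0, 1], size=(n_nodes, n_nodes)).astype(float)
A = np.triu(A, 1)
A = A + A.T  # make it symmetric (undirected graph)

# Spectral coordinates: project each node onto the k leading eigenvectors
# of the adjacency matrix ('LM' = largest-magnitude eigenvalues, which
# covers both ends of the spectrum of a signed matrix).
eigvals, eigvecs = eigsh(csr_matrix(A), k=k, which="LM")
X = eigvecs  # row i is the k-dimensional spectral coordinate of node i

# Hypothetical labels: a small labeled set of fraudsters vs. normal users.
y = (rng.random(n_nodes) < 0.1).astype(int)

# Any downstream model can consume X; a small MLP stands in here for the
# paper's deep autoencoder / convolutional neural network.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
clf.fit(X[:150], y[:150])            # train on the labeled subset
print(clf.score(X[150:], y[150:]))   # evaluate on held-out nodes
```

Because the network's input has dimension k rather than n_nodes, the downstream model stays small regardless of graph size, which is the feasibility argument the abstract makes.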

Author and article information

Journal: arXiv (preprint)
Published: 2017-06-02
Article ID: arXiv:1706.00891
Record ID: 7f17e4bd-e379-4a36-bb9f-ff79d9ec7d04
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Subject classifications: cs.CR, cs.LG, cs.SI
Keywords: Social & Information Networks, Security & Cryptology, Artificial Intelligence
