      Comparative Analysis of Deepfake Image Detection Method Using Convolutional Neural Network

      research-article


          Abstract

          Generation Z is a data-driven generation: everyone holds the entirety of humanity's knowledge in their hands, and the technological possibilities are endless. However, this blessing is also misused, for example to swap faces using deepfakes. Deepfake is an emerging subdomain of artificial intelligence technology in which one person's face is overlaid onto another person's face, and it is very prominent across social media. Machine learning is the main element of deepfakes, and it has allowed deepfake images and videos to be generated considerably faster and at a lower cost. Despite the negative connotations associated with the phrase “deepfakes,” the technology is increasingly being employed both commercially and by individuals. Although it is relatively new, the latest technological advances make it more and more challenging to distinguish deepfakes and synthesized images from real ones, and a growing sense of unease has developed around the emergence of deepfake technologies. Our main objective is to accurately distinguish deepfake images from real ones. In this research, we implemented several detection methods and performed a comparative analysis. Our models were trained on a Kaggle dataset containing 70,000 real images from the Flickr dataset and 70,000 images produced by StyleGAN. For this comparative study of convolutional neural networks (CNNs) for identifying genuine and deepfake pictures, we trained eight CNN models: three based on the DenseNet architecture (DenseNet121, DenseNet169, and DenseNet201), two based on the VGGNet architecture (VGG16 and VGG19), one on ResNet50, one on VGGFace, and one bespoke CNN architecture. The custom model incorporates techniques such as dropout and padding and serves as a reference for judging whether the other models meet their objectives. The results were assessed with five evaluation metrics: accuracy, precision, recall, F1-score, and area under the ROC (receiver operating characteristic) curve. Among all the models, VGGFace performed best with 99% accuracy, followed by ResNet50 and DenseNet121 at 97%, DenseNet201 at 96%, DenseNet169 at 95%, VGG19 at 94%, VGG16 at 92%, and the custom model at 90%.
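
          The study compares transfer-learning variants of standard CNN backbones on the 140,000-image real-vs-StyleGAN dataset. As a rough illustration of how such a pipeline can be assembled, the minimal TensorFlow/Keras sketch below fine-tunes one of the DenseNet variants as a binary real-vs-deepfake classifier. This is not the authors' published code: the directory layout (data/train, data/valid with real/ and fake/ subfolders), the 224x224 input size, batch size, dropout rate, optimizer, and epoch count are all illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed layout: data/train/{real,fake} and data/valid/{real,fake};
# 224x224 inputs, batch size 32, Adam, 5 epochs -- illustrative choices only.
IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/valid", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Apply the DenseNet-specific pixel normalisation expected by the pretrained weights.
preprocess = tf.keras.applications.densenet.preprocess_input
train_ds = train_ds.map(lambda x, y: (preprocess(x), y))
val_ds = val_ds.map(lambda x, y: (preprocess(x), y))

# ImageNet-pretrained DenseNet121 backbone, frozen for feature extraction.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),                    # regularisation, analogous to the custom model
    layers.Dense(1, activation="sigmoid"),  # real (0) vs. deepfake (1)
])

# Track the same kinds of metrics the paper reports
# (F1-score can be derived from precision and recall).
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.Precision(name="precision"),
             tf.keras.metrics.Recall(name="recall"),
             tf.keras.metrics.AUC(name="auc")])

model.fit(train_ds, validation_data=val_ds, epochs=5)

          Swapping tf.keras.applications.DenseNet121 for VGG16, VGG19, ResNet50, DenseNet169, or DenseNet201 would yield the corresponding backbones compared in the study; VGGFace and the bespoke CNN would require their own model definitions.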


          Most cited references (55)

          • FaceNet: A unified embedding for face recognition and clustering
          • Densely connected convolutional networks
          • Deep Face Recognition

                Author and article information

                Journal: Computational Intelligence and Neuroscience (Comput Intell Neurosci), Hindawi
                ISSN: 1687-5265 (print); 1687-5273 (electronic)
                Published: 16 December 2021
                Volume: 2021
                Article ID: 3111676
                Affiliations
                1. Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
                2. School of Computing and IT, Manipal University Jaipur, Jaipur, Rajasthan, India
                3. Department of Computer Science, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
                4. Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
                Author notes

                Academic Editor: Suneet Kumar Gupta

                Author information
                https://orcid.org/0000-0002-4334-4319
                https://orcid.org/0000-0001-7414-7951
                https://orcid.org/0000-0003-4499-8696
                https://orcid.org/0000-0002-5544-3701
                https://orcid.org/0000-0003-0779-8820
                https://orcid.org/0000-0003-4631-6601
                https://orcid.org/0000-0001-9519-3391
                https://orcid.org/0000-0002-6638-7039
                Article
                DOI: 10.1155/2021/3111676
                PMCID: PMC8702341
                PMID: 34956345
                Copyright © 2021 Hasin Shahed Shad et al.

                This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History
                Received: 3 September 2021
                Accepted: 30 November 2021
                Funding
                Funded by: Taif University
                Award ID: TURSP-2020/26
                Categories
                Research Article

                Neurosciences
