
      Prediction of age and sex from paranasal sinus images using a deep learning network

      research-article


          Abstract

          This study was conducted to develop a convolutional neural network (CNN)-based model to predict the sex and age of patients by identifying unique unknown features from paranasal sinus (PNS) X-ray images.

          We employed a retrospective study design using anonymized patient imaging data. A total of 4160 PNS X-ray images from 4160 patients aged ≥20 years were retrieved from our institution's picture archiving and communication system. Two CNN models, based on the ResNet-152 and DenseNet-169 architectures, were trained to predict sex (male vs female) and age group (20–39, 40–59, and 60+ years). Classification performance was assessed by the area under the curve (AUC), accuracy, sensitivity, and specificity, and class-activation maps (CAMs) were used to locate the image regions that determined each prediction.
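The reported performance metrics all follow from a binary confusion matrix plus prediction scores. A minimal sketch in plain Python (illustrative only; the function names and the rank-based AUC formulation are not taken from the paper):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Return (TP, FP, TN, FN) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp, fp, tn, fn

def accuracy(tp, fp, tn, fn):
    # Fraction of all predictions that are correct.
    return (tp + tn) / (tp + fp + tn + fn)

def sensitivity(tp, fn):
    # True-positive rate: correctly identified positives.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True-negative rate: correctly identified negatives.
    return tn / (tn + fp)

def auc(y_true, scores, positive=1):
    """Rank-based AUC: probability that a random positive outscores a
    random negative (ties count as half a win)."""
    pos = [s for t, s in zip(y_true, scores) if t == positive]
    neg = [s for t, s in zip(y_true, scores) if t != positive]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, with y_true = [1, 1, 1, 0, 0, 0] and y_pred = [1, 1, 0, 0, 0, 1], the counts are TP = 2, FP = 1, TN = 2, FN = 1, so accuracy, sensitivity, and specificity are all 2/3. The rank-based AUC is equivalent to the Wilcoxon-Mann-Whitney statistic.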

          For sex prediction, ResNet-152 performed slightly better (accuracy = 98.0%, sensitivity = 96.9%, specificity = 98.7%, AUC = 0.939) than DenseNet-169. CAM analysis indicated that the maxillary sinuses (in males) and the ethmoid sinuses (in females) were the major regions used to identify sex. For age prediction, the DenseNet-169 model was slightly more accurate at classifying age groups (77.6 ± 1.5% vs 76.3 ± 1.1%). CAM analysis suggested that the maxillary sinus and the periodontal area were the primary regions used to identify age group.
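A class-activation map of this kind is essentially a weighted sum of the network's final convolutional feature maps, using the classifier weights of the target class; regions with large positive values are the ones that drove the prediction. A minimal sketch with plain nested lists (illustrative; not the authors' implementation):

```python
def class_activation_map(feature_maps, class_weights):
    """Compute a CAM for one class.

    feature_maps: list of K maps, each an H x W nested list (the last
        convolutional layer's activations).
    class_weights: K weights from the fully-connected layer connecting
        each feature map to the class of interest.
    Returns an H x W map highlighting the regions supporting the class.
    """
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    # CAM(x, y) = sum_k  w_k * F_k(x, y)
    for fmap, wk in zip(feature_maps, class_weights):
        for y in range(h):
            for x in range(w):
                cam[y][x] += wk * fmap[y][x]
    return cam
```

With two 2x2 feature maps [[1, 0], [0, 0]] and [[0, 0], [0, 1]] and class weights [2.0, 3.0], the CAM is [[2.0, 0.0], [0.0, 3.0]]: each activated location is scaled by how strongly its feature map votes for the class.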

          Our deep learning models could predict sex and age from PNS X-ray images and may therefore help reduce the risk of patient misidentification in clinics.


          Most cited references (14)


          Deep learning.

          Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

            EPOS 2012: European position paper on rhinosinusitis and nasal polyps 2012. A summary for otorhinolaryngologists

            The European Position Paper on Rhinosinusitis and Nasal Polyps 2012 updates the similar evidence-based position papers published in 2005 and 2007. The document contains chapters on definitions and classification, and now also proposes definitions for difficult-to-treat rhinosinusitis and for control of disease, as well as better definitions for rhinosinusitis in children. More emphasis is placed on the diagnosis and treatment of acute rhinosinusitis. Throughout the document, the terms chronic rhinosinusitis without nasal polyps (CRSsNP) and chronic rhinosinusitis with nasal polyps (CRSwNP) are used to highlight differences in the pathophysiology and treatment of these two entities. There are extensive chapters on epidemiology and predisposing factors, inflammatory mechanisms, (differential) diagnosis of facial pain, genetics, cystic fibrosis, aspirin-exacerbated respiratory disease, immunodeficiencies, allergic fungal rhinosinusitis, and the relationship between the upper and lower airways. The chapters on paediatric acute and chronic rhinosinusitis have been completely rewritten. Finally, all available evidence for the management of acute rhinosinusitis and of chronic rhinosinusitis with or without nasal polyps, in adults and children, is analyzed and presented, and evidence-based management schemes are proposed. This executive summary focuses on the changes and issues most important to otorhinolaryngologists. The full document can be downloaded for free from the journal's website: http://www.rhinologyjournal.com.

              Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning

              Traditionally, medical discoveries are made by observing associations, making hypotheses from them and then designing and running experiments to test the hypotheses. However, with medical images, observing and quantifying associations can often be difficult because of the wide variety of features, patterns, colours, values and shapes that are present in real data. Here, we show that deep learning can extract new knowledge from retinal fundus images. Using deep-learning models trained on data from 284,335 patients and validated on two independent datasets of 12,026 and 999 patients, we predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg) and major adverse cardiac events (AUC = 0.70). We also show that the trained deep-learning models used anatomical features, such as the optic disc or blood vessels, to generate each prediction.

                Author and article information

                Journal
                Medicine (Baltimore)
                Publisher: Lippincott Williams & Wilkins (Hagerstown, MD)
                ISSN: 0025-7974 (print); 1536-5964 (online)
                Published: 19 February 2021
                Volume 100, Issue 7: e24756
                Affiliations
                [a ]Department of Otorhinolaryngology-Head and Neck Surgery
                [b ]Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital
                [c ]Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon
                [d ]Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang
                [e ]Division of Biomedical Informatics, Seoul National University Biomedical Informatics (SNUBI), Seoul National University College of Medicine, Seoul, Republic of Korea.
                Author notes
                Correspondence: Bum-Joo Cho, Department of Ophthalmology, Hallym University Sacred Heart Hospital, 22, Gwanpyeong-ro 170beon-gil, Dongan-gu, Anyang-si 14068, Gyeonggi-do, Republic of Korea (e-mail: bjcho8@gmail.com).
                Article
                Manuscript ID: MD-D-20-08357
                DOI: 10.1097/MD.0000000000024756
                PMCID: PMC7899822
                PMID: 33607821
                Copyright © 2021 the Author(s). Published by Wolters Kluwer Health, Inc.

                This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. http://creativecommons.org/licenses/by/4.0

                History
                Received: 22 August 2020
                Revised: 21 December 2020
                Accepted: 25 January 2021
                Funding
                Funded by: National Research Foundation of Korea
                Award ID: 2017M3A9E8033207
                Award Recipient: Dong-Kyu Kim
                Funded by: National Research Foundation of Korea
                Award ID: 2018R1D1A3B07040862
                Award Recipient: Not Applicable
                Categories
                Research Article
                Observational Study

                Keywords: artificial intelligence, deep learning, machine learning, neural networks, paranasal sinuses, sinus
