
      Artificial Intelligence and Pharmacometrics: Time to Embrace, Capitalize, and Advance?


          Abstract

Artificial intelligence (AI) has been described as the machine for the fourth industrial revolution. Without exception, AI is predicted to change the face of every industry. In the field of drug development, AI is employed to enhance efficiency. Pharmacometrics and systems pharmacology play a vital role in the drug-development decision process. Thus, it is important to recognize and embrace the efficiencies that AI can bring to the pharmacometrics community.

The Dartmouth summer research project on AI (1955) proceeded on the basis that every aspect of learning, or any other feature of intelligence, can be described so precisely that a machine can be created to simulate it.1 Deep learning (DL) and machine learning (ML) algorithms constitute the essential building blocks of AI systems. DL represents learning using multiple processing layers, whereas ML techniques use learning algorithms that self-learn and improve their performance over time.2 There have already been significant contributions of AI in medicine; notable examples include the diagnosis of tuberculosis,3 metastasis of breast cancer,4 and retinal changes as a result of diabetes.5 AI/DL/ML methods have made their way into a handful of areas in drug development. Some of these inroads, including pharmacometric modeling and drug repurposing, are described below. We believe that these methods, which have proven valuable in other fields, hold promise to make important contributions to a wide range of drug-development disciplines. This commentary describes some of the applications of AI/DL/ML in this space and calls for the examination of these methods in other areas.

ML Methods for Pharmacometrics

Model selection in pharmacometrics is often described as a linear process, starting with “structural” features and followed by random effects and covariate effects, each tested one at a time. In the optimization field, this is known as a “greedy” or local search algorithm.
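The contrast between such a local search and a global one can be illustrated with a toy sketch. Everything here is invented for illustration: the three "features" and the objective function stand in for real model hypotheses and a real NONMEM objective-function value plus a parsimony penalty.

```python
import random

random.seed(3)

# Toy model-selection space: each bit switches one hypothesis on or off
# (e.g., a 2nd compartment, a covariate on CL, a random effect). The
# objective stands in for -2*log-likelihood plus a parsimony penalty;
# features A and B only help jointly, which traps a one-at-a-time search.
def objective(bits):
    score = 100.0
    if bits[0] and bits[1]:
        score -= 30.0            # A and B improve the fit only together
    if bits[2]:
        score -= 5.0             # C gives a small individual gain
    return score + 2.0 * sum(bits)   # parsimony penalty per feature

def greedy(n):
    """One-at-a-time ('greedy') search: keep a feature only if adding it
    improves the objective immediately."""
    best = [0] * n
    improved = True
    while improved:
        improved = False
        for i in range(n):
            if not best[i]:
                trial = best[:]
                trial[i] = 1
                if objective(trial) < objective(best):
                    best, improved = trial, True
    return best

def genetic(n, pop_size=20, generations=40, p_mut=0.1):
    """Tiny genetic algorithm: tournament selection, one-point cross-over,
    bit-flip mutation -- a global search over the same space."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = [min(random.sample(pop, 2), key=objective)
                   for _ in range(pop_size)]
        pop = []
        for i in range(0, pop_size, 2):
            cut = random.randrange(1, n)
            for child in (parents[i][:cut] + parents[i + 1][cut:],
                          parents[i + 1][:cut] + parents[i][cut:]):
                child = [b ^ 1 if random.random() < p_mut else b for b in child]
                pop.append(child)
    return min(pop, key=objective)

g = greedy(3)
ga = genetic(3)
print(g, objective(g))    # greedy stalls at [0, 0, 1]: A or B alone never pays off
print(ga, objective(ga))  # the global search escapes the local minimum, finding A+B
```

The greedy search never adds A or B because neither improves the fit on its own, so it settles for the locally best model; the population-based search evaluates combinations and recovers the jointly beneficial pair.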
Similar to other greedy algorithms (such as the quasi-Newton methods used for parameter optimization), this approach is at risk of arriving at a local minimum. A global search method, the genetic algorithm (GA), can be proposed as a better alternative. GA creates a user-defined “search space” of candidate models representing all hypotheses to be tested (e.g., number of compartments, covariates, random effects). This set of possible hypotheses is then coded into a “genome” consisting of a string of 0s and 1s, with genes representing each hypothesis (Figure 1). GA then searches this space, using the mathematics of “survival of the fittest” with mutation and cross-over, for the optimal combination of “features” (e.g., compartments, random effects, covariate effects, initial parameter estimates) based on a user-provided function that describes the quality of the model, typically based on log-likelihood and a parsimony penalty. This user-provided function is known as the “fitness function.” Prototype software has been developed that implements GA using NONMEM for parameter estimation.6 Initial experience suggests that GA consistently finds a better model (based on the Akaike information criterion and a likelihood ratio test) than human pharmacometricians.7, 8 Such a global search algorithm offers an objective and robust method for identifying optimal pharmacometric models.

Figure 1. Coding of model options into a genome using a genetic algorithm. CL, clearance; BSA, body surface area; K23 and K32, micro rate constants; Q, intercompartmental clearance; VSS, steady-state volume of distribution; V2, volume of distribution.

DL for Drug Repurposing Efforts

Hit identification is the first and a crucial step in identifying a drug against a biological target of interest. Excelra has developed a multilayered AI-driven platform aimed at identifying novel chemical hits (Figure S1).
This platform screens drug candidates by passing them through a sequential filtration process using ML and DL techniques integrated with chemoinformatics. The filtered candidates are then passed through target-based data points to define potential drug candidates. Several key components are required to build such AI/ML models:

• An appropriate data set (including active and inactive compounds)
• An optimized set of target-specific descriptors
• DL and ML algorithms
• A well-matched combination of algorithms and parameters
• An unbiased validation set to authenticate model performance

An optimized set of normalized chemical features (descriptors) is the crucial component for building ML and DL models. Five different statistical algorithms were used to reduce noise and obtain a set of ~100 descriptors with few or no outliers. This information was subsequently used to construct ML (using ~30 algorithms) and DL models with their hyperparameterization, using an equal ratio of active (highly potent) and inactive compounds obtained from proprietary and public databases. For each target, 5 different reduction methods and more than 30 algorithms were employed to create 150 models, and a performance score was used to identify the “best model.” In addition, the multilayer ML and DL methods were integrated with traditional chemoinformatics and molecular-docking approaches to yield the best possible results. Finally, at the end of the pipeline, human intelligence is coupled with AI to achieve the best outcome from this multipronged approach. All models were validated using well-curated proprietary (GOSTAR) and public (ChEMBL and PubChem) libraries. For example, a pipeline for Bruton's tyrosine kinase was built using this approach. A total of 52 randomly selected compounds were passed through the pipeline to assess its efficiency in differentiating between actives and inactives.
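The descriptor-reduction step can be illustrated with a much-simplified sketch: a variance filter plus a pairwise-correlation filter standing in for the five statistical algorithms used in the actual platform. The descriptor names and data below are invented.

```python
import math
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    if sx == 0 or sy == 0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def reduce_descriptors(matrix, names, min_var=1e-6, max_corr=0.95):
    """Drop near-constant descriptors, then drop one of each highly
    correlated pair -- a simple stand-in for the statistical noise-reduction
    step described in the text."""
    cols = list(zip(*matrix))  # column-wise view: one sequence per descriptor
    keep = []
    for i, col in enumerate(cols):
        mean = sum(col) / len(col)
        var = sum((v - mean) ** 2 for v in col) / len(col)
        if var < min_var:
            continue                         # near-constant: carries no signal
        if any(abs(pearson(col, cols[j])) > max_corr for j in keep):
            continue                         # redundant with a kept descriptor
        keep.append(i)
    return [names[i] for i in keep]

# Toy descriptor table: a logP-like column, a near-duplicate of it,
# an independent TPSA-like column, and a constant flag.
names = ["logP", "logP_copy", "tpsa", "flag"]
rows = [[random.gauss(2, 1), 0.0, random.gauss(80, 10), 1.0] for _ in range(50)]
for r in rows:
    r[1] = r[0] * 1.001            # make column 2 a near-duplicate of column 1
print(reduce_descriptors(rows, names))   # -> ['logP', 'tpsa']
```

The duplicate and the constant column are discarded, leaving the informative, non-redundant descriptors, which is the intent of the reduction stage before model building.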
These compounds were subsequently tested in vitro using a specific Bruton's tyrosine kinase activity assay. The pipeline predicted actives and inactives with an accuracy of 79%, with 80% sensitivity and 78% specificity (Table S1). Similarly, models were built for two other targets, with accuracies ranging between 68% and 87% (data not shown).

Accelerating Therapeutics for Opportunities in Medicine (ATOM): A Multidisciplinary Effort for AI/ML/DL-Based Accelerated Drug Development

The convergence of recent advances in data science and computing is primed for applications in the pharmaceutical industry, and a multidisciplinary approach to integrating these fields will enable the rapid acceleration of drug development through applications of AI. Although pharmaceutical companies and clinical centers are transforming their data infrastructure to derive more value from the information collected, there is also an incredible opportunity to deliver more value to patients by sharing data, models, and cross-industry expertise in AI. ATOM is a public–private consortium formed to transform preclinical drug discovery into a rapid, parallel, patient-centric model through an in silico–first, precompetitive platform. The founding members of the consortium are the Department of Energy's Lawrence Livermore National Laboratory, GlaxoSmithKline, the National Cancer Institute's Frederick National Laboratory for Cancer Research, and the University of California, San Francisco. The consortium is open to other partners and is actively seeking new members. ATOM aims to develop, test, and validate a multidisciplinary approach to drug discovery in which modern science, technology and engineering, supercomputing simulations, data science, and AI are highly integrated into a single drug-discovery platform that can ultimately be shared with the drug-development community at large.
The ATOM platform integrates an ensemble of algorithms, built on shared public and private data, to simultaneously evaluate candidate drug molecules for efficacy, safety, pharmacokinetics, and developability. The current data set includes information on 2 million compounds from GlaxoSmithKline's drug discovery and development programs and screening collections. To date, ATOM has built thousands of models on these data, with an initial focus on pharmacokinetic and safety parameters that feed into human physiologically based pharmacokinetic and systems toxicology models. The ATOM active-learning workflow, highlighted in Figure 2, will drive the acquisition of additional experimental and computational data where needed to improve prediction performance. Ultimately, this integrated approach is expected to lower attrition, reduce unproductive preclinical experimentation, and improve clinical translation, resulting in better patient outcomes.

Figure 2. Multidisciplinary approach to the convergence of data science, drug-discovery efforts, and translational modeling at the ATOM (Accelerating Therapeutics for Opportunities in Medicine) Consortium. PK, pharmacokinetics.

ML-Based Methods in Regulatory Review

Applications of AI/DL/ML in the biomedical field are not limited to academia or industry. There are examples in which the US Food and Drug Administration has accelerated the approval of AI-based devices and algorithms for diagnostic purposes, e.g., detecting diabetes-related retinopathy and wrist fractures.9 The potential for application in regulatory reviews is also being explored.10 ML could serve as a powerful tool for pharmacometric analysis given its capacity to leverage high-dimensional data and describe nonlinear relationships. This was illustrated by a simulation case study employing ML-based techniques in exposure–response analysis.
The case study was based on a simulation system in which both drug clearance and treatment outcome were described by highly nonlinear functions. In addition, drug clearance was designed to be associated with treatment response through confounders, independent of drug exposure. The objective of this simulation was to assess whether ML-based techniques were able to estimate the causal relationship between drug exposure and treatment outcome, without bias, when data from only one dose level were available. Two analysis strategies involving ML were evaluated.

Strategy A was based on a marginal structural model with inverse probability weighting, in which ML was employed to improve model robustness. Pseudo-subjects were grouped into five quintiles based on individual exposure, and the propensity score for each subject was estimated. Here, the propensity score represents the probability of the subject being assigned to their observed exposure quintile given a set of covariates. An unbiased propensity score is required to correctly generate the inverse probability weights and assess the causal effects of exposure in marginal structural models. The simulation showed that, in a nonlinear system, ML proved more robust than traditional multinomial logistic regression in estimating the propensity score, thus correctly recovering the exposure–response relationship.

Strategy B estimated the exposure–response relationship by employing an artificial neural network as a universal function approximator to recover the data-generating mechanism without the need to accurately hand-craft the whole simulation system. A fully connected “feed-forward” network was trained on data simulated from one dose level. The results demonstrated that the trained network was able to correctly predict treatment effects across a range of adjacent dose levels. In contrast, traditional regression provided biased predictions even when all confounders were included in the model.
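Strategy A's weighting step can be illustrated with a deliberately simplified sketch: two exposure groups instead of quintiles, a single binary covariate, and empirical stratum frequencies in place of an ML propensity model. All numbers are invented; the point is only to show how inverse probability weighting removes confounding that biases the naive comparison.

```python
import random

random.seed(1)

# Simulated confounding: covariate z raises both the chance of landing in
# the high-exposure group (e.g., via clearance) and the outcome itself.
TRUE_EFFECT = 5.0
subjects = []
for _ in range(20000):
    z = random.randint(0, 1)
    p_high = 0.8 if z else 0.2          # z confounds exposure-group assignment
    grp = 1 if random.random() < p_high else 0
    outcome = TRUE_EFFECT * grp + 4.0 * z + random.gauss(0, 1)
    subjects.append((z, grp, outcome))

# Step 1: propensity score -- P(observed exposure group | covariates),
# estimated here by simple empirical frequencies within covariate strata.
# (The case study's point is that ML estimators handle this step more
# robustly when the covariate-exposure relationship is nonlinear.)
counts = {}
for z, grp, _ in subjects:
    counts.setdefault(z, [0, 0])[grp] += 1
prop = {z: (c[0] / sum(c), c[1] / sum(c)) for z, c in counts.items()}

# Step 2: inverse probability weighting builds a pseudo-population in
# which exposure is independent of the confounder.
wsum = {0: 0.0, 1: 0.0}
wout = {0: 0.0, 1: 0.0}
naive = {0: [], 1: []}
for z, grp, y in subjects:
    w = 1.0 / prop[z][grp]
    wsum[grp] += w
    wout[grp] += w * y
    naive[grp].append(y)

naive_est = sum(naive[1]) / len(naive[1]) - sum(naive[0]) / len(naive[0])
ipw_est = wout[1] / wsum[1] - wout[0] / wsum[0]
print(round(naive_est, 2))   # biased upward by the confounder (about 7.4)
print(round(ipw_est, 2))     # close to the true effect of 5.0
```

A biased propensity estimate would propagate directly into the weights, which is why the case study emphasizes unbiased propensity estimation as the prerequisite for a valid marginal structural model.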
Conclusion

Given the Bayesian framework of pharmacometrics, embracing AI/DL/ML as additional tools and opportunities for collaboration seems a natural evolution. It is critical to actively expand our understanding of AI/ML/DL to appreciate their impact at various stages of drug development. Ignoring or shying away from AI/DL/ML may not be an option. On the contrary, pharmacometricians are a natural fit for the AI/DL/ML space and are poised to advance this field to the next level.

Funding

No funding was received for this work.

Conflict of Interest

M.S. is an employee of Nuventra, N. Gattu is an employee of Excelra, and N. Goyal and S.C.-T. are employees of GlaxoSmithKline. All other authors declared no competing interests for this work.

Supporting information

Supplement: Figure S1 and Table S1.


Most cited references (5)


          Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer

          Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency.

            Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists

            Background Chest radiograph interpretation is critical for the detection of thoracic diseases, including tuberculosis and lung cancer, which affect millions of people worldwide each year. This time-consuming task typically requires expert radiologists to read the images, leading to fatigue-based diagnostic error and lack of diagnostic expertise in areas of the world where radiologists are not available. Recently, deep learning approaches have been able to achieve expert-level performance in medical image interpretation tasks, powered by large network architectures and fueled by the emergence of large labeled datasets. The purpose of this study is to investigate the performance of a deep learning algorithm on the detection of pathologies in chest radiographs compared with practicing radiologists. Methods and findings We developed CheXNeXt, a convolutional neural network to concurrently detect the presence of 14 different pathologies, including pneumonia, pleural effusion, pulmonary masses, and nodules in frontal-view chest radiographs. CheXNeXt was trained and internally validated on the ChestX-ray8 dataset, with a held-out validation set consisting of 420 images, sampled to contain at least 50 cases of each of the original pathology labels. On this validation set, the majority vote of a panel of 3 board-certified cardiothoracic specialist radiologists served as reference standard. We compared CheXNeXt’s discriminative performance on the validation set to the performance of 9 radiologists using the area under the receiver operating characteristic curve (AUC). The radiologists included 6 board-certified radiologists (average experience 12 years, range 4–28 years) and 3 senior radiology residents, from 3 academic institutions. We found that CheXNeXt achieved radiologist-level performance on 11 pathologies and did not achieve radiologist-level performance on 3 pathologies. 
The radiologists achieved statistically significantly higher AUC performance on cardiomegaly, emphysema, and hiatal hernia, with AUCs of 0.888 (95% confidence interval [CI] 0.863–0.910), 0.911 (95% CI 0.866–0.947), and 0.985 (95% CI 0.974–0.991), respectively, whereas CheXNeXt’s AUCs were 0.831 (95% CI 0.790–0.870), 0.704 (95% CI 0.567–0.833), and 0.851 (95% CI 0.785–0.909), respectively. CheXNeXt performed better than radiologists in detecting atelectasis, with an AUC of 0.862 (95% CI 0.825–0.895), statistically significantly higher than radiologists' AUC of 0.808 (95% CI 0.777–0.838); there were no statistically significant differences in AUCs for the other 10 pathologies. The average time to interpret the 420 images in the validation set was substantially longer for the radiologists (240 minutes) than for CheXNeXt (1.5 minutes). The main limitations of our study are that neither CheXNeXt nor the radiologists were permitted to use patient history or review prior examinations and that evaluation was limited to a dataset from a single institution. Conclusions In this study, we developed and validated a deep learning algorithm that classified clinically important abnormalities in chest radiographs at a performance level comparable to practicing radiologists. Once tested prospectively in clinical settings, the algorithm could have the potential to expand patient access to chest radiograph diagnostics.

              Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy

              To understand the impact of deep learning diabetic retinopathy (DR) algorithms on physician readers in computer-assisted settings.

                Author and article information

                Contributors
                ayyappach@gmail.com
                Journal
CPT Pharmacometrics Syst Pharmacol
10.1002/(ISSN)2163-8306
PSP4
CPT: Pharmacometrics & Systems Pharmacology
John Wiley and Sons Inc. (Hoboken)
2163-8306
05 June 2019
July 2019
Volume 8, Issue 7 (doiID: 10.1002/psp4.2019.8.issue-7), Pages 440-443
                Affiliations
[1] University of North Texas System College of Pharmacy, University of North Texas Health Science Center, Fort Worth, Texas, USA
[2] ATOM (Accelerating Therapeutics for Opportunities in Medicine) Consortium, GlaxoSmithKline, San Francisco, California, USA
[3] Office of Clinical Pharmacology, US Food and Drug Administration, Silver Spring, Maryland, USA
[4] Nuventra, Raleigh, North Carolina, USA
[5] Excelra, Hyderabad, India
[6] Clinical Pharmacology, GlaxoSmithKline, Collegeville, Pennsylvania, USA
                Author notes
[*] Correspondence: Ayyappa Chaturvedula (ayyappach@gmail.com)
Article
PSP412418
DOI: 10.1002/psp4.12418
PMCID: PMC6657004
PMID: 31006175
                © 2019 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of the American Society for Clinical Pharmacology and Therapeutics.

                This is an open access article under the terms of the http://creativecommons.org/licenses/by-nc/4.0/ License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.

                History
                : 22 January 2019
                : 20 March 2019
                Page count
                Figures: 2, Tables: 0, Pages: 5, Words: 2152
Categories
Commentary
Perspectives
