Enhancing computational fluid dynamics with machine learning

Nature Computational Science
Springer Science and Business Media LLC

Most cited references (127)

• Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations

• Inertial Ranges in Two-Dimensional Turbulence

• Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

  Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.
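
  The contrast this abstract draws between post-hoc explanation and inherent interpretability can be made concrete. Below is a minimal sketch, assuming scikit-learn and a bundled toy dataset (both illustrative choices, not from the cited paper): a depth-limited decision tree whose printed rules are the model itself, so no separate explanation step is needed.

```python
# Minimal sketch of an inherently interpretable model: instead of fitting
# a black box and explaining it post hoc, fit a model whose decision logic
# is directly readable. Dataset and depth cap are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# A shallow tree: every prediction follows a short, human-readable rule chain.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
# export_text prints the complete set of learned rules; this printout
# *is* the model, not an approximation of it.
print(export_text(model, feature_names=list(data.feature_names)))
```

  The design choice here is the depth cap: it trades some accuracy for a rule set short enough to audit end to end, which is the trade the abstract argues is often worth making in high-stakes settings.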

Author and article information

Journal
Nature Computational Science (Nat Comput Sci)
Springer Science and Business Media LLC
ISSN: 2662-8457
June 2022 (online June 27, 2022)
Volume: 2
Issue: 6
Pages: 358-366

Article
DOI: 10.1038/s43588-022-00264-7
PMID: 38177587
ScienceOpen ID: 2c622b00-cb2e-422c-9462-62493cc6b56a
© 2022

https://www.springer.com/tdm
