
A systematic review of personal thermal comfort models

Building and Environment
Elsevier BV


Most cited references (71)

A Coefficient of Agreement for Nominal Scales

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward - it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI


Author and article information

Journal
Building and Environment
Elsevier BV
ISSN: 0360-1323
January 2022
Volume: 207
Article number: 108502
DOI: 10.1016/j.buildenv.2021.108502
01b9f59d-eea1-40e7-b13f-426dc1d0d468
© 2022

https://www.elsevier.com/tdm/userlicense/1.0/
https://doi.org/10.15223/policy-017
https://doi.org/10.15223/policy-037
https://doi.org/10.15223/policy-012
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-004
