Open Access

      LangFair: A Python Package for Assessing Bias and Fairness in Large Language Model Use Cases

      Preprint


          Abstract

Large Language Models (LLMs) have been observed to exhibit bias in numerous ways, potentially creating or worsening unfair outcomes for specific groups identified by protected attributes such as sex, race, sexual orientation, or age. To help address these risks, we introduce LangFair, an open-source Python package that aims to equip LLM practitioners with the tools to evaluate bias and fairness risks relevant to their specific use cases. The package offers functionality to easily generate evaluation datasets, consisting of LLM responses to use-case-specific prompts, and subsequently calculate metrics applicable to the practitioner's use case. To guide metric selection, LangFair offers an actionable decision framework.
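To make the workflow concrete, the sketch below scores a small set of prompt/response pairs with the package's toxicity metrics. It is a minimal sketch, assuming the ToxicityMetrics class and its evaluate() method as documented in the LangFair repository linked below; exact names and return fields may vary across versions, and the prompts and responses shown are invented placeholders, not data from the paper.

# Minimal sketch, assuming the ToxicityMetrics interface described in the
# LangFair repository (https://github.com/cvs-health/langfair); details
# may differ by version.
from langfair.metrics.toxicity import ToxicityMetrics

# Use-case-specific prompts and the responses an LLM produced for them.
# In practice these would come from the practitioner's own model;
# placeholder strings are used here for illustration.
prompts = [
    "Write a short performance review for an employee.",
    "Write a short performance review for an employee.",
]
responses = [
    "The employee consistently meets deadlines and communicates clearly.",
    "The employee shows strong ownership of team projects.",
]

tm = ToxicityMetrics()
results = tm.evaluate(prompts=prompts, responses=responses)

# Expected to contain summary metrics such as toxic fraction and
# expected maximum toxicity (field name assumed from the README).
print(results["metrics"])

Per the repository documentation, analogous evaluate() calls exist for the package's stereotype and counterfactual metric classes; the decision framework mentioned above is intended to guide which of these metric families applies to a given use case.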


          Author and article information

Journal
Published: 06 January 2025
Article: 2501.03112 (arXiv identifier)
Record ID: 41046b63-fec1-4863-ad70-a863f4e46d99

License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          Custom metadata
          Journal of Open Source Software; LangFair repository: https://github.com/cvs-health/langfair
arXiv categories: cs.CL, cs.AI, cs.CY, cs.LG

Subjects: Theoretical computer science, Applied computer science, Artificial intelligence
