
      Reducing alcohol use through alcohol control policies in the general population and population subgroups: a systematic review and meta-analysis


          Abstract

          We estimate the effects of alcohol taxation, minimum unit pricing (MUP), and restricted temporal availability on overall alcohol consumption and review their differential impact across sociodemographic groups. Web of Science, Medline, PsycInfo, Embase, and EconLit were searched on 08/12/2022 and 09/26/2022 for studies on newly introduced or changed alcohol policies published between 2000 and 2022 (PROSPERO registration: CRD42022339791). We combined data using random-effects meta-analyses. Risk of bias was assessed using the Newcastle–Ottawa Scale. Of 1887 reports, 36 were eligible. Doubling alcohol taxes or introducing MUP (Int$ 0.90/10 g of pure alcohol) reduced consumption by 10% (for taxation: 95% prediction interval [PI]: −18.5%, −1.2%; for MUP: 95% PI: −28.2%, 5.8%); restricting alcohol sales by one day a week reduced consumption by 3.6% (95% PI: −7.2%, −0.1%). Substantial between-study heterogeneity contributes to high levels of uncertainty and must be considered in interpretation. Pricing policies resulted in greater consumption changes among low-income alcohol users, while results were inconclusive for other socioeconomic indicators, gender, and racial and ethnic groups. Research is needed on the differential impact of alcohol policies, particularly for groups bearing a disproportionate alcohol-attributable health burden.

          Funding: Research reported in this publication was supported by the National Institute on Alcohol Abuse and Alcoholism of the National Institutes of Health under Award Number R01AA028009.
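The abstract pools study results with random-effects meta-analysis and reports 95% prediction intervals rather than only confidence intervals. A minimal sketch of this approach, using the standard DerSimonian–Laird estimator with entirely hypothetical effect sizes (log consumption ratios) and variances, not data from the review:

```python
import math

# Hedged sketch of DerSimonian-Laird random-effects pooling plus a 95%
# prediction interval (normal approximation). All numbers are hypothetical.
def random_effects(effects, variances, z=1.96):
    w = [1 / v for v in variances]                      # fixed-effect weights
    mu_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mu_fixed) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]        # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_mu = math.sqrt(1 / sum(w_star))
    half = z * math.sqrt(tau2 + se_mu ** 2)             # PI is wider than the CI
    return mu, (mu - half, mu + half)

effects = [-0.20, -0.05, -0.15, -0.02]    # hypothetical log consumption ratios
variances = [0.001, 0.001, 0.001, 0.001]
mu, (lo, hi) = random_effects(effects, variances)
```

The prediction interval adds the between-study variance tau² to the pooled estimate's variance, which is why heterogeneous literatures like this one yield wide intervals even when the pooled mean is precise.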

          Related collections

          Most cited references: 66


          The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

          The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

            Interrater reliability: the kappa statistic

            The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued use of percent agreement due to its inability to account for chance agreement. He introduced the Cohen’s kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from −1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen’s suggested interpretation may be too lenient for health related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
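The chance correction described above is a short calculation: observed agreement minus expected chance agreement (from the raters' marginal totals), scaled by the maximum possible improvement over chance. A minimal sketch with a hypothetical 2×2 agreement table:

```python
# Cohen's kappa from a square agreement table (rater A = rows, rater B = columns).
# The counts below are hypothetical, for illustration only.
def cohens_kappa(table):
    total = sum(sum(row) for row in table)
    # Observed agreement: proportion of cases on the diagonal.
    p_o = sum(table[i][i] for i in range(len(table))) / total
    # Expected chance agreement: product of marginal proportions, summed.
    p_e = sum(
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(len(table))
    )
    return (p_o - p_e) / (1 - p_e)

table = [[45, 5],
         [10, 40]]
print(cohens_kappa(table))  # percent agreement is 85%, but kappa is 0.7
```

Here the two raters agree on 85% of cases, yet kappa is only 0.7 because half of that agreement would be expected by chance alone, illustrating why percent agreement overstates reliability.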

              Basics of meta-analysis: I² is not an absolute measure of heterogeneity.

              When we speak about heterogeneity in a meta-analysis, our intent is usually to understand the substantive implications of the heterogeneity. If an intervention yields a mean effect size of 50 points, we want to know if the effect size in different populations varies from 40 to 60, or from 10 to 90, because this speaks to the potential utility of the intervention. While there is a common belief that the I² statistic provides this information, it actually does not. In this example, if we are told that I² is 50%, we have no way of knowing if the effects range from 40 to 60, or from 10 to 90, or across some other range. Rather, if we want to communicate the predicted range of effects, then we should simply report this range. This gives readers the information they think is being captured by I² and does so in a way that is concise and unambiguous.
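The point above can be made numerically: I² is the ratio of between-study variance to total variance, so the same I² is compatible with very different ranges of true effects. A minimal sketch with hypothetical variances (tau² is between-study variance, sigma² a typical within-study variance):

```python
import math

# I^2 is a ratio, not an absolute measure: doubling both variance components
# leaves I^2 unchanged while the range of true effects widens. Hypothetical numbers.
def i_squared(tau2, sigma2):
    return tau2 / (tau2 + sigma2)

def approx_prediction_range(mu, tau2, z=1.96):
    # Approximate range of true effects, ignoring uncertainty in mu itself.
    half = z * math.sqrt(tau2)
    return (mu - half, mu + half)

mu = 50.0
for tau2, sigma2 in [(25.0, 25.0), (400.0, 400.0)]:
    lo, hi = approx_prediction_range(mu, tau2)
    print(f"I^2 = {i_squared(tau2, sigma2):.0%}, effects roughly {lo:.0f} to {hi:.0f}")
```

Both scenarios report I² = 50%, yet the effects span roughly 40 to 60 in one and 11 to 89 in the other, which is exactly the ambiguity the abstract describes.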

                Author and article information

                Journal: eClinicalMedicine
                Publisher: Elsevier BV
                ISSN: 2589-5370
                Published: May 2023
                Volume: 59
                Article number: 101996
                DOI: 10.1016/j.eclinm.2023.101996
                © 2023

                https://www.elsevier.com/tdm/userlicense/1.0/

                http://creativecommons.org/licenses/by/4.0/
