
      The reliability and discriminant validity of physical, technical, and perceptual-physiological measures during a game-specific basketball activity simulation protocol

      research-article


          Abstract

Activity simulation protocols offer useful applications in research and practice; however, the specificity of such protocols to basketball game-play is currently lacking. Consequently, this study aimed to develop a game-specific basketball activity simulation protocol representative of typical playing durations and assess its reliability and discriminant validity. The simulation protocol was modified from an original version (i.e., Basketball Exercise Simulation Test) to incorporate regular breaks indicative of time-outs, free-throws, and substitutions. Twelve competitive male and female adult basketball players competing in the fourth or fifth Spanish basketball division underwent repeated trials of the simulation protocol (min. 4 to max. 14 days apart) for reliability analyses. In turn, 13 competitive male (fifth division), 9 competitive female (fourth division), and 13 recreational male adult basketball players completed the simulation protocol to assess discriminant validity via comparisons between sexes (competitive players) and playing levels (males). A range of physical, technical, and perceptual-physiological variables was collected during and following the simulation protocol. Several physical and heart rate variables displayed the strongest reliability (intraclass correlation coefficient [ICC] = 0.72–0.96; coefficient of variation [CV] = 1.78–6.75%), with physical decrement, technical, blood lactate concentration, and rating of perceived exertion (RPE) variables having the weakest (ICC = 0.52–0.75; CV = 10.34–30.85%). Regarding discriminant analyses between sexes, males demonstrated significantly greater physical outputs in several variables and lower RPE compared to females (p < 0.05, moderate-to-large effects). Comparisons between playing levels revealed competitive males had significantly greater physical outputs across many variables, alongside higher mean heart rate and lower RPE than recreational males (p < 0.05, moderate-to-large effects). This study presents a novel game-specific basketball activity simulation protocol replicating actual playing durations and game configurations that might be successfully applied for both training and research purposes. Reliability statistics are provided for several variables to inform end-users on potential measurement error when implementing the simulation protocol. Discriminant validity of the simulation protocol was supported for several variables, suggesting it may hold practical utility in benchmarking or selecting players. Future research on this topic is encouraged, examining wider samples of male and female basketball players at different levels as well as additional forms of validity for the protocol.
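To illustrate how reliability statistics of the kind reported above can be derived from test-retest data, the sketch below computes an ICC and a coefficient of variation in Python. The data, the ICC(3,1) form, and the typical-error-based CV are illustrative assumptions for a generic test-retest design, not the authors' exact analysis.

import numpy as np

def icc_3_1(data):
    # ICC(3,1): two-way mixed effects, consistency, single measurement.
    # data: (n_subjects, k_trials) array of repeated scores.
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    trial_means = data.mean(axis=0)
    ss_total = ((data - grand) ** 2).sum()
    ss_subj = k * ((subj_means - grand) ** 2).sum()
    ss_trial = n * ((trial_means - grand) ** 2).sum()
    ms_subj = ss_subj / (n - 1)
    ms_error = (ss_total - ss_subj - ss_trial) / ((n - 1) * (k - 1))
    return (ms_subj - ms_error) / (ms_subj + (k - 1) * ms_error)

def cv_percent(trial1, trial2):
    # Typical error (SD of the differences / sqrt(2)) expressed as a percentage of the mean.
    t1, t2 = np.asarray(trial1, float), np.asarray(trial2, float)
    typical_error = (t2 - t1).std(ddof=1) / np.sqrt(2)
    return 100 * typical_error / np.concatenate([t1, t2]).mean()

# Hypothetical total distance covered (m) by 12 players in two trials of the protocol
trial1 = np.array([1480, 1520, 1455, 1600, 1575, 1510, 1490, 1545, 1620, 1500, 1470, 1555], float)
trial2 = np.array([1495, 1508, 1470, 1585, 1590, 1522, 1481, 1560, 1605, 1515, 1462, 1548], float)

print(f"ICC(3,1) = {icc_3_1(np.column_stack([trial1, trial2])):.2f}, CV = {cv_percent(trial1, trial2):.2f}%")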


Most cited references: 53


          Progressive statistics for studies in sports medicine and exercise science.

          Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.
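As a concrete illustration of the precision-of-estimation approach recommended above, the sketch below reports a mean change with its 95% confidence interval instead of a bare significance test. The change scores are hypothetical and SciPy is assumed to be available.

import numpy as np
from scipy import stats

# Hypothetical change scores from a small sample-based study
change = np.array([2.1, 3.4, -0.5, 1.8, 2.9, 0.7, 1.2, 2.5, 1.9, 0.3])

mean_change = change.mean()
se = change.std(ddof=1) / np.sqrt(change.size)  # standard error of the mean
lower, upper = stats.t.interval(0.95, change.size - 1, loc=mean_change, scale=se)

print(f"Mean change = {mean_change:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")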

            Effect size estimates: current use, calculations, and interpretation.

The Publication Manual of the American Psychological Association (American Psychological Association, 2001, American Psychological Association, 2010) calls for the reporting of effect sizes and their confidence intervals. Estimates of effect size are useful for determining the practical or theoretical importance of an effect, the relative contributions of factors, and the power of an analysis. We surveyed articles published in 2009 and 2010 in the Journal of Experimental Psychology: General, noting the statistical analyses reported and the associated reporting of effect size estimates. Effect sizes were reported for fewer than half of the analyses; no article reported a confidence interval for an effect size. The most often reported analysis was analysis of variance, and almost half of these reports were not accompanied by effect sizes. Partial η² was the most commonly reported effect size estimate for analysis of variance. For t tests, 2/3 of the articles did not report an associated effect size estimate; Cohen's d was the most often reported. We provide a straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis.
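The sketch below shows one common way to calculate Cohen's d for two independent groups using the pooled standard deviation; the jump-height values and group labels are hypothetical and serve only to illustrate the calculation described above.

import numpy as np

def cohens_d(group_a, group_b):
    # Cohen's d for two independent groups, using the pooled standard deviation.
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = a.size, b.size
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical jump heights (cm) for two playing levels
competitive = [42.1, 45.3, 39.8, 47.2, 44.0, 41.5, 46.8]
recreational = [36.4, 38.9, 35.1, 40.2, 37.7, 34.8, 39.5]
print(f"Cohen's d = {cohens_d(competitive, recreational):.2f}")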

              Statistical methods for assessing measurement error (reliability) in variables relevant to sports medicine.

              Minimal measurement error (reliability) during the collection of interval- and ratio-type data is critically important to sports medicine research. The main components of measurement error are systematic bias (e.g. general learning or fatigue effects on the tests) and random error due to biological or mechanical variation. Both error components should be meaningfully quantified for the sports physician to relate the described error to judgements regarding 'analytical goals' (the requirements of the measurement tool for effective practical use) rather than the statistical significance of any reliability indicators. Methods based on correlation coefficients and regression provide an indication of 'relative reliability'. Since these methods are highly influenced by the range of measured values, researchers should be cautious in: (i) concluding acceptable relative reliability even if a correlation is above 0.9; (ii) extrapolating the results of a test-retest correlation to a new sample of individuals involved in an experiment; and (iii) comparing test-retest correlations between different reliability studies. Methods used to describe 'absolute reliability' include the standard error of measurements (SEM), coefficient of variation (CV) and limits of agreement (LOA). These statistics are more appropriate for comparing reliability between different measurement tools in different studies. They can be used in multiple retest studies from ANOVA procedures, help predict the magnitude of a 'real' change in individual athletes and be employed to estimate statistical power for a repeated-measures experiment. These methods vary considerably in the way they are calculated and their use also assumes the presence (CV) or absence (SEM) of heteroscedasticity. Most methods of calculating SEM and CV represent approximately 68% of the error that is actually present in the repeated measurements for the 'average' individual in the sample. LOA represent the test-retest differences for 95% of a population. The associated Bland-Altman plot shows the measurement error schematically and helps to identify the presence of heteroscedasticity. If there is evidence of heteroscedasticity or non-normality, one should logarithmically transform the data and quote the bias and random error as ratios. This allows simple comparisons of reliability across different measurement tools. It is recommended that sports clinicians and researchers should cite and interpret a number of statistical methods for assessing reliability. We encourage the inclusion of the LOA method, especially the exploration of heteroscedasticity that is inherent in this analysis. We also stress the importance of relating the results of any reliability statistic to 'analytical goals' in sports medicine.
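For illustration, the sketch below computes the absolute reliability statistics discussed above (systematic bias, SEM expressed here as the typical error, CV%, and 95% limits of agreement) from a hypothetical test-retest pair. It is a minimal example, not a substitute for the full Bland-Altman plot and heteroscedasticity checks the article recommends.

import numpy as np

def absolute_reliability(test, retest):
    # Bias, SEM (typical error), CV%, and 95% limits of agreement for test-retest data.
    t1, t2 = np.asarray(test, float), np.asarray(retest, float)
    diff = t2 - t1
    bias = diff.mean()                      # systematic bias (e.g., learning or fatigue effects)
    sd_diff = diff.std(ddof=1)              # random error component
    sem = sd_diff / np.sqrt(2)              # standard error of measurement (typical error)
    cv = 100 * sem / np.concatenate([t1, t2]).mean()
    loa = (bias - 1.96 * sd_diff, bias + 1.96 * sd_diff)
    return {"bias": bias, "SEM": sem, "CV%": cv, "LOA_95": loa}

# Hypothetical mean heart rates (beats/min) from two trials of the same protocol
trial1 = [168, 175, 172, 180, 165, 178, 171, 174]
trial2 = [171, 173, 175, 182, 167, 176, 170, 177]
print(absolute_reliability(trial1, trial2))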

                Author and article information

                Contributors
URI: https://loop.frontiersin.org/people/829371/overview
URI: https://loop.frontiersin.org/people/653102/overview
URI: https://loop.frontiersin.org/people/638161/overview
URI: https://loop.frontiersin.org/people/638143/overview
URI: https://loop.frontiersin.org/people/639699/overview
Journal
Frontiers in Psychology (Front. Psychol.)
Publisher: Frontiers Media S.A.
ISSN: 1664-1078
Published: 21 June 2024
Volume: 15
Article number: 1414339
                Affiliations
[1] Department of Biomedical Sciences, Dental Sciences, and Morpho-Functional Imaging, University of Messina, Messina, Italy
[2] UCAM Research Center for High Performance Sport, UCAM Universidad Católica de Murcia, Murcia, Spain
[3] SCS—Strength & Conditioning Society, Murcia, Spain
[4] Facultad de Deporte, UCAM Universidad Católica de Murcia, Murcia, Spain
[5] NAR—Nucleus of High Performance in Sport, São Paulo, Brazil
[6] Department of Movement, Human and Health Sciences, University of Rome “Foro Italico”, Rome, Italy
[7] School of Health, Medical, and Applied Sciences, Central Queensland University, Rockhampton, QLD, Australia
                Author notes

                Edited by: Javier Courel-Ibáñez, University of Murcia, Spain

                Reviewed by: Francisco Tomás González-Fernández, University of Granada, Spain

                Feng Li, Beijing Sport University, China

                Corrado Lupo, University of Turin, Italy

*Correspondence: Davide Ferioli, davide.ferioli@unime.it
Article
DOI: 10.3389/fpsyg.2024.1414339
PMC: PMC11229049
PMID: 38979070
                Copyright © 2024 Ferioli, Alcaraz, Freitas, Trimarchi, Conte, Formica, Chung and Scanlan.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 08 April 2024
Accepted: 20 May 2024
                Page count
                Figures: 2, Tables: 2, Equations: 3, References: 53, Pages: 11, Words: 8131
                Funding
                The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
                Categories
                Psychology
                Original Research
                Custom metadata
                Movement Science

Clinical Psychology & Psychiatry
Keywords: team sport, testing, retest, shooting, RPE, heart rate, simulated match-play
