
Evaluation metrics in science: current status and prospects



The evaluation of science uses a variety of bibliometric indicators, mostly based on citations, despite the absence of an unequivocal relationship between citations and scientific quality. These indicators, however, encompass more than an indication of the visibility, relevance and impact of articles: in a researcher's career they may translate into prestige, job opportunities, promotion, awards, research grants and other rewards.

Indicators of scientific impact

One of the first and most widely used indexes ever created, the Impact Factor (IF), dates from 1975, when Eugene Garfield, founder of the Institute for Scientific Information (ISI), introduced it to support the selection of subscription journals by libraries 1 . Along the way, from this initial purpose to its ubiquitous use to rank publications, researchers and institutions, its characteristics and peculiarities were neglected for the sake of the convenience of an index that is easy to calculate and widely disseminated across all areas of knowledge worldwide. Until mid-2016, the Journal Citation Reports (JCR) database, which publishes the IF, and the Web of Science belonged to Thomson Reuters; they are now part of the product portfolio of Clarivate Analytics.

The IF had no serious competition until 2004, when the multinational publisher Elsevier created the Scopus bibliographic database; from it, the SCImago Journal & Country Rank (SJR) index was launched in 2008, available in open access, unlike the JCR, which requires a subscription. The ways of calculating the SJR and the IF differ in a few respects, but both basically count citations over a time window and are linearly related.
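
Although each database applies its own rules about citable items and coverage, the basic arithmetic behind a two-year, IF-style indicator is straightforward. The sketch below is only an illustration under assumed inputs: the function name, data layout and example figures are invented and do not reproduce Clarivate's actual procedure.

```python
# Illustrative two-year, IF-style calculation (assumed inputs; not Clarivate's pipeline).
# citable_items[year]: citable items the journal published in that year.
# citations[year]: citations received in the target year to items published in `year`.

def two_year_impact_factor(target_year: int,
                           citable_items: dict[int, int],
                           citations: dict[int, int]) -> float:
    """IF(Y) = citations in Y to items from Y-1 and Y-2, divided by items from Y-1 and Y-2."""
    window = (target_year - 1, target_year - 2)
    cited = sum(citations.get(year, 0) for year in window)
    published = sum(citable_items.get(year, 0) for year in window)
    return cited / published if published else 0.0

# Hypothetical journal: 120 and 135 citable items in 2015 and 2014, which received
# 300 and 240 citations during 2016, giving IF(2016) = 540 / 255 ≈ 2.12.
print(two_year_impact_factor(2016,
                             citable_items={2015: 120, 2014: 135},
                             citations={2015: 300, 2014: 240}))
```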

In 2005, physicist Jorge E. Hirsch, from the University of California, San Diego, created the h index to measure not only the impact but also the productivity of researchers. This indicator quickly gained popularity, is also applied to journals and institutions, and is often reported in curricula vitae, such as those on the Lattes Platform.
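
The h index itself is simple to compute from a list of citation counts: it is the largest h such that the researcher has at least h publications cited at least h times each. A minimal sketch, with an invented example, follows.

```python
# Minimal sketch of the h index: the largest h such that at least h publications
# have at least h citations each.

def h_index(citation_counts: list[int]) -> int:
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented example: five papers cited 10, 8, 5, 4 and 3 times yield h = 4.
print(h_index([10, 8, 5, 4, 3]))
```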

Additionally, there are indexes such as the Eigenfactor and the Article Influence score 2 , which are also citation-based, use elegant algorithms and are available in open access, yet are not frequently used or mentioned.

Despite their wide use in science evaluation processes, the limitations and precariousness of citation indicators are recognized by the global scientific community, given the peculiarities and biases involved when they are used to measure the performance of articles, journals, researchers, institutions and countries. Initiatives that aim to curb or discourage their misuse, such as the San Francisco Declaration on Research Assessment (DORA) 3 and the Leiden Manifesto 4 , are supported by researchers and institutions around the world, including scientific societies, universities, funding agencies and journals, among others 5 .

Understanding the nature of the indicators, how they are calculated, and their applicability and limits is essential not only for specialists in scientometrics and funding agency staff, but for the entire scientific community. After all, researchers themselves evaluate their peers in hiring and career progression processes; it is therefore advisable to go beyond a simple analysis of publication counts, journal IF or h index. The observation of Antonio Augusto P. Videira, Associate Professor of Philosophy of Science at UFRJ, is eloquent: "The fact that the use of an indicator makes one author or another eligible because he/she has published in a journal with a higher IF should be surprising, since more importance is given to where the work was published than to the reading of the work itself" 6 . All those involved in the evaluation of science and in reward systems must keep this in mind so as not to make snap judgments or be unjust.

One of the criticisms made of citation-based bibliometric indices is that the practice of citing articles is extremely complex and influenced by countless factors; the actual reasons for citing one article and not another may have nothing to do with the quality, validity or relevance of the studies 7 . In fact, in researchers' self-evaluations it was not possible to establish a relationship between their most cited study and what they considered their best study 8 . Normalizing citation metrics could level the indices by area of knowledge, publication age, type of document and the coverage of the database in which they were recorded, thus allowing for better balanced comparisons in evaluation processes 9 .
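
As an illustration of what such normalization might look like, the sketch below divides each paper's citation count by an assumed world average for papers of the same field, publication year and document type. The field labels, baseline averages and example data are invented for the illustration and do not correspond to any particular published normalization scheme.

```python
# Hedged sketch of field-normalized citation scores: each paper's citations are divided by
# an assumed average for papers of the same field, year and document type.
from statistics import mean

def normalized_scores(papers: list[dict], baselines: dict[tuple, float]) -> list[float]:
    """Each paper dict carries 'citations', 'field', 'year' and 'doc_type'."""
    scores = []
    for paper in papers:
        key = (paper["field"], paper["year"], paper["doc_type"])
        expected = baselines.get(key, 1.0)  # fall back to 1 citation when no baseline exists
        scores.append(paper["citations"] / expected)
    return scores

# Invented baselines: nursing articles from 2015 average 4 citations, physics articles 15.
baselines = {("nursing", 2015, "article"): 4.0, ("physics", 2015, "article"): 15.0}
papers = [
    {"citations": 12, "field": "nursing", "year": 2015, "doc_type": "article"},
    {"citations": 12, "field": "physics", "year": 2015, "doc_type": "article"},
]

# The same raw count (12 citations) sits well above average in one field and below in the other.
print(normalized_scores(papers, baselines))        # [3.0, 0.8]
print(mean(normalized_scores(papers, baselines)))  # mean normalized score: 1.9
```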

Alternative metrics or altmetrics

Social media are very efficient for sharing news, opinions and content in general. More recently, they have also been widely used as a source of science evaluation metrics, known as altmetrics or "alternative metrics" 10 .

Studies estimate that, by taking only formal citations into consideration, we disregard almost 50% of the scientific literature published worldwide. Altmetrics 11 have been gaining credibility in the evaluation of publications and researchers. The Altmetric index monitors the sharing of scientific articles across various channels: blogs, Twitter, Facebook, Mendeley, YouTube, ResearchGate, Google, Reddit, LinkedIn, print and online news, mentions in public policy documents, and others. One study 12 shows that altmetrics correlate with citation-based impact indices and can be used to complement them, along with peer review and usage measures such as accesses and downloads.
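
To make the idea of an aggregated attention score concrete, the sketch below combines mention counts from several sources using per-source weights. The weights and example counts are invented for illustration and do not reproduce Altmetric's actual scoring scheme.

```python
# Illustrative altmetric-style attention score: a weighted sum of mentions per source.
# The weights below are invented for this example and are not Altmetric's actual weights.

ILLUSTRATIVE_WEIGHTS = {
    "news": 8.0,      # assumption: mainstream news coverage counts for more
    "blog": 5.0,
    "policy": 3.0,    # mention in a public policy document
    "twitter": 1.0,
    "facebook": 0.25,
}

def attention_score(mentions: dict[str, int],
                    weights: dict[str, float] = ILLUSTRATIVE_WEIGHTS) -> float:
    """Sum mention counts per source, each multiplied by its source weight."""
    return sum(weights.get(source, 0.0) * count for source, count in mentions.items())

# Invented example: an article mentioned in 2 news outlets, 1 blog and 40 tweets scores 61.0.
print(attention_score({"news": 2, "blog": 1, "twitter": 40}))
```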

Like any new concept, altmetrics often generate doubts and questions about their legitimacy, especially because they use 'informal' tools to measure the impact of science, which is essentially formal. It is possible that the academic community's skepticism towards altmetrics is comparable to the reaction caused by the use of the Internet in the 1990s as a platform for publishing scientific journals.

It is important to consider new forms of scientific communication, which are already influencing how research results are published, disseminated and evaluated. A prominent example is electronic preprint repositories. The first, arXiv 13 , was created in 1991 to publish preliminary versions of articles in the areas of physics, astronomy, computer science and statistics. Authors post their articles before formally submitting them to a journal, in order to receive comments from the scientific community and to secure authorship of an idea or research result. Many articles, however, are never formally published, not because of a lack of relevance or quality, but because publication in the repository is in itself academically recognized, at least in physics, with the same weight as a journal article. Comments are posted online and authors can update their articles based on this post-publication peer review.

Based on the success of arXiv, preprint repositories for other disciplines are being created. BioRxiv was launched in 2013 for the life sciences and, in February 2017, held over 8,000 preprints. Preprint repositories in chemistry (ChemRxiv), psychology (PsyArXiv) and the social sciences (SocArXiv) are in the process of being implemented. The academic community's encouragement and recognition of this form of publishing is demonstrated by initiatives such as ASAP Bio 14 , which promotes the posting of preprints and post-publication peer review, as well as by the growing adoption and recognition of preprint repositories by institutions, international organizations and funding agencies 15 . This form of publishing is particularly suited to the fast and open dissemination of results, as required in public health emergencies such as the recent Ebola and Zika epidemics.

Scientific communication as we know it is evolving rapidly and, I believe, for the better: with more agility, transparency, responsibility, improved access and greater use of research for the benefit of individuals and society. We must follow these changes in order to extract the greatest benefit for everyone.

References

1. Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295(1):90-93.

2. Eigenfactor [database]. Washington: University of Washington; 2007. www.eigenfactor.org

3. Declaration on Research Assessment. European Association of Science Editors (EASE) Statement on Inappropriate Use of Impact Factors. 2017. http://am.ascb.org/dora

4. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: The Leiden Manifesto for research metrics. Nature. 2015;520(7548):429-431.

5. Corneliussen ST. Bad summer for the journal impact factor. Physics Today. 2016.

6. Videira AAP. Declaração recomenda eliminar o uso do Fator de Impacto na avaliação de pesquisa [Declaration recommends eliminating the use of the Impact Factor in research evaluation]. Estudos de CTS. 2013. https://estudosdects.wordpress.com/2013/07/29/declaracao-recomenda-eliminar-o-uso-do-fator-de-impacto-na-avaliacao-de-pesquisa

7. Nassi-Calò L. Study proposes a taxonomy of motives to cite articles in scientific publications. SciELO em Perspectiva. 2014. http://blog.scielo.org/en/2014/11/07/study-proposes-a-taxonomy-of-motives-to-cite-articles-in-scientific-publications

8. Ioannidis JPA, Boyack KW, Small H, Sorensen AA, Klavans R. Bibliometrics: Is your most cited work your best? Nature. 2014;514(7524):561-562.

9. Nassi-Calò L. Is it possible to normalize citation metrics? SciELO em Perspectiva. 2016. http://blog.scielo.org/en/2016/10/14/is-it-possible-to-normalize-citation-metrics

10. Spinak E. What can alternative metrics - or altmetrics - offer us? SciELO em Perspectiva. 2014. http://blog.scielo.org/en/2014/08/07/what-can-alternative-metrics-or-altmetrics-offer-us

11. Altmetric [database]. 2017. http://www.altmetric.com

12. Hoffmann CP, Lutz C, Meckel M. Impact Factor 2.0: Applying social network analysis to scientific impact assessment. 47th Hawaii International Conference on System Sciences; Hilton Waikoloa Village; 2014.

13. arXiv [database]. Ithaca, NY: Cornell University Library; 2017. http://arxiv.org

14. Nassi-Calò L. From the NY Times: Biologists went rogue and publish directly on the Internet. SciELO em Perspectiva. 2016. http://blog.scielo.org/en/2016/04/07/from-the-ny-times-biologists-went-rogue-and-publish-directly-on-the-internet

15. Velterop J. Preprints - the way forward for rapid and open knowledge sharing. SciELO em Perspectiva. 2017. http://blog.scielo.org/en/2017/02/01/preprints-the-way-forward-for-rapid-and-open-knowledge-sharing

Author and article information

Journal: Revista Latino-Americana de Enfermagem (Rev Lat Am Enfermagem), Escola de Enfermagem de Ribeirão Preto / Universidade de São Paulo. ISSN 0104-1169 (print), 1518-8345 (online).

Published: 05 June 2017; volume 25, article e2865.

Author affiliation: Coordinator of Scientific Communication in Health at BIREME/PAHO/WHO and a collaborator of SciELO. E-mail: calolili@paho.org

DOI: 10.1590/1518-8345.0000.2865

This is an open-access article distributed under the terms of the Creative Commons Attribution License.

Category: Editorial
