
      Open data to evaluate academic researchers: an experiment with the Italian Scientific Habilitation

      Preprint


          Abstract

The need for scholarly open data is ever increasing. While there are large repositories of open access articles and free publication indexes, there are still only a few free citation networks, and their coverage is partial. One consequence is that most evaluation processes based on citation counts rely on commercial citation databases. Things are changing under the pressure of the Initiative for Open Citations (I4OC), which campaigns for scholarly publishers to make their citation data freely available. This paper investigates the growth of open citations with an experiment on the Italian Scientific Habilitation, the national qualification procedure for university professors, which currently relies on data from commercial indexes. We simulated the procedure using open data only and explored similarities and differences with the official results. The outcome of the experiment shows that the amount of open citation data currently available is not yet sufficient to obtain results comparable to the official ones.
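The experiment rests on retrieving citation counts from open sources rather than from commercial indexes. As an illustration of the kind of lookup involved, the following sketch queries the OpenCitations COCI REST API for the number of open citations received by a DOI; the endpoint follows the public COCI documentation, and the DOI shown is a placeholder, not one from the study.

```python
import requests

# COCI: the OpenCitations Index of Crossref open DOI-to-DOI citations.
# Per the public API documentation, this endpoint returns a JSON list
# containing a single {"count": "<n>"} object.
COCI_COUNT_URL = "https://opencitations.net/index/coci/api/v1/citation-count/{doi}"

def open_citation_count(doi: str) -> int:
    """Return the number of open (COCI) citations received by `doi`."""
    resp = requests.get(COCI_COUNT_URL.format(doi=doi), timeout=30)
    resp.raise_for_status()
    data = resp.json()
    return int(data[0]["count"]) if data else 0

if __name__ == "__main__":
    print(open_citation_count("10.1000/xyz123"))  # placeholder DOI, illustration only
```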

Most cited references (7)


          The relationship between Recall and Precision
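Recall and precision are the standard measures for comparing a reconstructed set of outcomes against a reference set, which is presumably why this work is cited. As a reminder of the definitions, here is a minimal illustration (hypothetical sets, not data from either paper):

```python
def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
    """Precision = |retrieved ∩ relevant| / |retrieved|;
    recall = |retrieved ∩ relevant| / |relevant|."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# E.g. candidates deemed qualified by a simulation vs. the official list.
print(precision_recall({"a", "b", "c"}, {"b", "c", "d"}))  # (0.666..., 0.666...)
```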


            Does the Committee Peer Review Select the Best Applicants for Funding? An Investigation of the Selection Process for Two European Molecular Biology Organization Programmes

Does peer review fulfill its declared objective of identifying the best science and the best scientists? To answer this question, we analyzed the Long-Term Fellowship and the Young Investigator programmes of the European Molecular Biology Organization. Both programmes aim to identify and support the best postdoctoral fellows and young group leaders in the life sciences. We checked the association between the selection decisions and the scientific performance of the applicants. Our study involved publication and citation data for 668 applicants to the Long-Term Fellowship programme from the year 1998 (130 approved, 538 rejected) and 297 applicants to the Young Investigator programme (39 approved, 258 rejected) from the years 2001 and 2002. If quantity and impact of research publications are used as a criterion for scientific achievement, the results of (zero-truncated) negative binomial models show that the peer review process indeed selects scientists who, subsequent to application, perform at a higher level than the rejected ones. We determined the extent of errors due to over-estimation (type I errors) and under-estimation (type II errors) of future scientific performance. Our statistical analyses indicate that between 26% and 48% of the decisions to award or reject an application exhibit one of the two error types. Even though the selection committee did not correctly estimate the future performance of some applicants, the results show a statistically significant association between selection decisions and the applicants' scientific achievements.
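The error analysis described above can be pictured as a cross-tabulation of committee decisions against later performance. The sketch below is only schematic: it uses synthetic counts and a median split to define "high performance", whereas the study itself fits (zero-truncated) negative binomial models; the approval rate merely mimics the 130-of-668 figure quoted in the abstract.

```python
import numpy as np

def decision_error_rates(approved, performance):
    """Share of type I errors (approved, but low subsequent performance)
    and type II errors (rejected, but high subsequent performance).
    'High' is defined here, for illustration only, as above the median."""
    high = performance > np.median(performance)
    type_i = np.mean(approved & ~high)   # over-estimation of future performance
    type_ii = np.mean(~approved & high)  # under-estimation of future performance
    return type_i, type_ii

rng = np.random.default_rng(0)
approved = rng.random(668) < 130 / 668            # mimic the 130/668 approval rate
performance = rng.negative_binomial(2, 0.1, 668)  # toy citation-like counts
print(decision_error_rates(approved, performance))
```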

              Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data

In this study, we address the question of whether, and to what extent, altmetrics are related to the scientific quality of papers (as measured by peer assessments). Only a few studies have previously investigated the relationship between altmetrics and assessments by peers. In the first step, we analyse the underlying dimensions of measurement for traditional metrics (citation counts) and altmetrics, using principal component analysis (PCA) and factor analysis (FA). In the second step, we test the relationship between these dimensions and the quality of papers (as measured by the post-publication peer-review system of F1000Prime assessments), using regression analysis. The results of the PCA and FA show that altmetrics operate along different dimensions: Mendeley counts are related to citation counts, whereas tweets form a separate dimension. The results of the regression analysis indicate that citation-based metrics and readership counts are significantly more strongly related to quality than tweets. This result, on the one hand, calls the use of Twitter counts for research evaluation purposes into question and, on the other, points to the potential usefulness of Mendeley reader counts.
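The two-step analysis the abstract describes (dimension reduction over the metrics, then regression of quality on the extracted dimensions) can be sketched as follows. The data are synthetic stand-ins for citation, Mendeley, and Twitter counts, not the study's F1000Prime dataset, and plain PCA plus ordinary least squares stand in for the exact methods used.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 500

# Synthetic stand-ins for the three metrics (illustration only).
citations = rng.poisson(20, n)
mendeley = citations + rng.poisson(5, n)  # correlated with citations
tweets = rng.poisson(3, n)                # largely independent dimension
X = np.column_stack([citations, mendeley, tweets]).astype(float)

# Step 1: extract the underlying dimensions of measurement.
pca = PCA(n_components=2)
dims = pca.fit_transform(X)
print("explained variance ratios:", pca.explained_variance_ratio_)

# Step 2: regress a quality score (synthetic, standing in for F1000Prime
# peer assessments) on the extracted dimensions.
quality = 0.05 * citations + rng.normal(0.0, 1.0, n)
reg = LinearRegression().fit(dims, quality)
print("coefficients per dimension:", reg.coef_)
```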

                Author and article information

Posted: 08 February 2019
arXiv ID: 1902.03287
License: http://creativecommons.org/licenses/by/4.0/ (CC BY 4.0)

Custom metadata
12 pages, 1 figure, 6 tables, submitted to the 17th International Conference on Scientometrics and Informetrics (ISSI 2019)
cs.DL

Information & Library science
