13th International Conference on Evaluation and Assessment in Software Engineering (EASE)
20 - 21 April 2009
Background: One aspect of undertaking a systematic literature review is performing a quality evaluation of the primary studies. Most quality checklists adopted from medicine, psychology and the social sciences assume that the experimental unit in an experiment is a human being. However, in empirical software engineering studies the experimental unit may be a technology, an application or an algorithm.

Aim: This paper presents a checklist we are developing to evaluate the quality of empirical technology-centred testing studies.

Discussion points: The checklist was developed by considering the entities used in technology-centred testing studies and the validation problems associated with them. The planned validation process includes face validation, usability and reliability assessment, and external validation. The external validation has not yet been performed.

Conclusions: The checklists appear to be usable and, with some experience in applying them, to give consistent results. However, their validation is not yet complete. The method of developing and evaluating the checklist may be useful to other researchers who need to assess the quality of technology-centred software engineering studies.