Testers’ Experiences of Tools and Automation

Software testing is a vital yet expensive and time-consuming part of software development. Testers perform many repeated actions during testing, where automation and tools could reduce costs, timescales and human error. However, challenges to tool adoption, which block success with automation, have been identified in both academic research and industry. A survey of over 180 testing practitioners, run to collect testers’ experiences with tools and understand their tool challenges, uncovered rich stories of emotional stress, as well as evidence of ways in which usability and HCI techniques are misapplied in test tool design. This leads to suggestions for future research.


INTRODUCTION
Software is ubiquitous, and software testing is vital, yet expensive and time-consuming. This essential part of the software development process includes testers performing many repeated actions in test execution and management. Automation and tools could reduce costs and timescales and remove human error; however, there are challenges to successful tool adoption (Graham and Fewster 2012; Wiklund 2015) identified in academic research and in industry practice.
My research initially focused on collecting testers' experiences with tools, to understand their challenges. I discovered a much richer story, which told of emotional stresses and life experiences within the software testing community. I also identified ways in which usability and HCI techniques are misapplied in test tool and test automation design.
Evidence from survey responses of over 180 testing professionals provides data about their experiences with automation and the usability of testing tools. From analysis of that data, my findings to date are: (1) usability is necessary, but not sufficient, for successful test tool adoption; (2) test tool design could be improved by HCI/UX methods such as personas, to understand testers better; and (3) test automation, for all its benefits, affects motivation, causing disassociation of testers from their roles and affecting their job-task mix (Evans et al. 2020a,b). Following these findings, my next research studies will explore what is required to provide suitable UX guidelines to tool and automation builders, who may not have the necessary UX expertise.

BACKGROUND
The level and rate of change in the IT industry is a challenge which increases pressure on testing teams. They are asked to save time and costs, reduce time to release to the marketplace, and increase certainty about the quality of the software and services being released (Tassey 2002; Jones 2015). This has fuelled the move to agile development and DevOps, which, combined with time and cost pressures, encourages automation of repetitive tasks, including testing, as well as other tool support for testing. However, as well as the academic research about impediments to success with test tools (Wiklund 2015), there is a taken-for-granted assumption among testers that many software testing tools are "shelfware" (purchased but not used) because they are hard to implement and use successfully (Kaner 1998; Graham and Fewster 2012; Gamba and Graham 2018; Brockley 2018).

MY JOURNEY
In 2017, I set out on this PhD journey with the thought that, if there is a problem with shelfware, that raises important questions: Is this because the tools are flawed, and don't give the testers the support and information they need? Or is it because the testers need to become more technical and "step up" to the requirements of the tools? These questions matter because testing is time-consuming, difficult to do, expensive, and heavily relied on by teams and organisations to provide information affecting decisions about the readiness of software for its customers (Tassey 2002; Jones 2015).
During the last three years, the data I have collected and analysed has changed my perceptions of the challenges for testers, and the questions that I want to ask. The motivation for my research remains broadly the same as in 2017: to enable people doing testing to do a better job.
Based on these findings, the specific focus of the research is now to help the people designing and building test tools and automation. Improved tools would support people who test software to do a better job. The people who design and build the tools may be tool vendors, developers in the open-source tools community, or in-house automation specialists, and may also be the people doing the testing themselves. These different groups may have different needs and viewpoints which should be considered.
My approach is people-focused. Unlike much other research, it centres on the people doing the testing, rather than on the technologies or the tools. I discuss people, rather than humans or users, because this keeps my focus holistic, including the personal and emotional, in an empathetic and sympathetic way, as well as the technical and organisational. To do this, I will need HCI and UX approaches. My work so far has been data-driven, using secondary sources from academic literature and industry publications, including practitioners’ websites and blogs. I have also collected primary data via interviews and surveys. In preparing for the next part of the research, I have taken a researcher-driven approach, using influence diagrams to map what I already know from industry experience and previous research (Aurini et al. 2016).

METHOD
I used a mixed-methods approach to explore testers' interactions with their automation tools, seeking to understand what problems hinder successful tool adoption. Following preliminary observations during conversations at conferences, I interviewed several testing and automation experts and carried out a literature review, which informed my research question: "What are the experiences of testers with automation?" I ran a series of workshops and surveys to collect testers' stories about automation and tools. The data were analysed using frequency counts and thematic analysis, with themes drawn initially from the literature review and then emerging from analysis of the interviews and surveys. Workshop data were excluded, because some workshop participants told me afterwards that they would have responded differently if answering privately; I therefore focused on the anonymous surveys and the interviews. Responses from a total of 180 participants were analysed. Survey responses that did not answer the question "Tell me a story about an experience you have had with a test tool" were excluded.
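The frequency-count step of such an analysis can be sketched in code. This is a minimal illustration only: the theme labels and coded responses below are hypothetical stand-ins, not data from the actual study.

```python
from collections import Counter

# Hypothetical coded survey responses: during thematic analysis each
# response is tagged with one or more theme labels.
coded_responses = [
    {"id": 1, "themes": ["usability", "frustration"]},
    {"id": 2, "themes": ["shelfware"]},
    {"id": 3, "themes": ["usability", "shelfware"]},
]

# Frequency count: how often each theme appears across all responses.
theme_counts = Counter(
    theme for response in coded_responses for theme in response["themes"]
)

print(theme_counts.most_common())
```

Counts like these complement, rather than replace, the qualitative reading of each story: they show which themes recur, while the thematic analysis itself captures why.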

PROBLEM AREA
In work so far, I have identified three problem areas:
• The Testers’ Experience (TX) (Evans et al. 2020b): TX is the testers’ lived experience (LX). I found that testers and others are emotionally invested in their work and emotionally affected by the tools and automation they use. Some of these emotions are positive; however, I realised that the automation also caused frustration, anger and other negative emotions which could lead to demotivation.
• The Illusion of Usability (IoU) (Evans et al. 2020a): IoU is the misapplication of usability methods. Flawed attempts to solve usability problems can misfire, which is potentially both wasteful and demotivating.
• Shelfware: The data collected indicate that this long-standing problem in test tool projects remains, leading to waste, demotivation, and reduced trust.
This matters because, if we cannot provide ourselves with good tools, how can we build them for other people? The data collected so far suggest that only lip service is paid to the usability and UX of test tools. This may reflect a wider issue: that UX and usability are sometimes either an afterthought or disconnected from the rest of the software development process. For example, in the study by Catania et al. (2019), testers removed "usability" from the set of responsibilities they perceived themselves as having in their roles. It would be interesting to understand the extent to which methods such as DevOps include UX design in their multi-disciplinary skill set for engineers. These questions will therefore inform the focus for the next stage of my research.

FUTURE WORK
At present I am planning my next steps and writing my transfer report. I have identified a large number of potential research areas, some of which are multidisciplinary, sit outwith the ICT discipline, or are too large to scope within a PhD. Investigating the causes of, and solutions to, Testers’ Lived Experience challenges, the Illusion of Usability and Shelfware could include technical, managerial, organisational, and people-based research work. Disciplines required could include HCI, as well as management science, sociology, computer science and software engineering, project management, psychology, history, and economics, among others. To make progress in the second half of the PhD, I need to focus on a small, achievable set of tasks that lead to a contribution both to industry and to academia.
My likely next steps are to support the use of UX methods in test tool and automation design. To do this, I propose developing evidence-based taxonomies of the people, approaches, and tools used in testing. I intend to map these together as guidelines for test tool designers, which could inform their development of personas for their target users. I hope to trial the models in industry settings, and also to collect more data from practitioners and experts. At a later date, I aspire to further research into the lived experience of software testers (TX).
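One possible shape for such a mapping is a simple data structure linking taxonomy entries to approaches and tools. This is purely an illustrative sketch under my own assumptions: all role, approach and tool names below are invented placeholders, not categories derived from the research data.

```python
# Hypothetical taxonomy sketch: kinds of people doing testing, mapped
# to the approaches and tools they might use. A mapping like this could
# feed the development of personas for tool designers.
taxonomy = {
    "exploratory tester": {
        "approaches": ["session-based testing"],
        "tools": ["note-taking aids", "screen recorders"],
    },
    "automation specialist": {
        "approaches": ["scripted regression testing"],
        "tools": ["test frameworks", "CI pipelines"],
    },
}

def tools_for(role: str) -> list[str]:
    """Look up the tools associated with a taxonomy role; empty if unknown."""
    return taxonomy.get(role, {}).get("tools", [])

print(tools_for("automation specialist"))
```

Even a lightweight structure like this makes the taxonomy queryable, so that guidelines and personas can be cross-checked against it rather than maintained as free text.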

SUMMARY
Software testing is vital, and organisations and teams seek to support it with tools. However, there are challenges to successful tool implementation. In seeking to understand testers’ challenges, I uncovered evidence of both lived-experience and usability challenges, which could potentially be overcome by using UX methods more effectively. Guidelines for test tool designers could help the industry to overcome these challenges.