Using Freedom of Information Requests to Understand Usability Problems with e-government Websites

This paper investigates usability problems with 14 UK council websites as described by citizens. Freedom of Information (FOI) requests were used to collect messages about citizens' interactions with these websites. Content analysis was used to categorise 713 usability problems found in 620 messages. 49 distinct problem categories, grouped into 9 major categories, were identified. Problems related to Content completeness (40.8%) and Interactive functionality (36.0%) were the most frequently reported. The implications of these findings for the research and design of e-government websites, as well as the advantages and limitations of using FOI requests in usability research, are discussed.


INTRODUCTION
Increasingly, citizens are expected to interact with government services through a website, whether it is taxing their car or paying their taxes. However, the usability and accessibility of such websites are far from satisfactory. A recent survey of 416 local council websites across the UK, based on a bespoke expert evaluation methodology used by all UK councils, showed that 56% of those websites exhibit low levels of usability and accessibility (SOCITM, 2016).
The usability of e-government websites, as with other interactive systems, is usually evaluated by means of user and expert methods such as user testing and heuristic evaluation. However, these methods lack ecological validity: evaluations are conducted in artificial settings, and users or evaluators are given fictional tasks, personas and scenarios. It is not clear to what extent these approaches capture the diversity of problems citizens experience in their real use of e-government websites.
Another approach to investigating usability problems in e-government websites is to examine the comments citizens submit about the websites. In the UK, such comments constitute public information and can be obtained via Freedom of Information (FOI) requests. FOI laws that guarantee citizens' right of access to information held by government institutions exist in more than 100 countries (Walby and Luscombe, 2016). In the UK, access to public information is regulated by the Freedom of Information Act 2000 and the Freedom of Information (Scotland) Act 2002. Although FOI requests are increasingly being used to obtain data in other disciplines, their use in HCI research remains under-explored. As far as we are aware, this is the first study using FOI datasets as a source of information for researching usability problems in the e-government domain.
This paper aims to investigate, firstly, whether the types of usability problems found in a qualitative analysis of citizens' comments about e-government websites differ from those found using standard user and expert methods. To address this question, the distribution of usability problems found will be compared with that found by Petrie and Power (2012), who investigated usability problems in UK government websites (albeit central government, not council websites) with user testing and expert evaluation. Secondly, it aims to investigate what citizens' comments can tell us about the usability of e-government websites and users' experience of them.

METHOD
14 datasets of citizens' comments about their experience interacting with UK council websites were obtained. Requests were sent to 36 UK councils from 12 areas in the UK. The number of citizens on the electoral register (Office of National Statistics, 2015) was used to classify councils into three types: large (above 200,000 registered voters), medium (100,000 to 200,000), and small (below 100,000). In Scotland, Northern Ireland and Wales, one council per type was included. Given the larger number of registered voters and councils in England, one council per type was selected for each region: Eastern, London, North East, North West, South East, South West, East Midlands, West Midlands, and Yorkshire and the Humber. Councils were selected to include large and small urban councils, as well as a number of rural ones, in the sample. To maintain confidentiality, councils will not be named.
The FOI request asked for digital copies of all the messages received by the council from service users between June and August 2016 whose content was related to the council's website features or functions. Councils were asked to delete all personal information and provide only the content of these messages and their date of submission to the council website. 14 councils provided the information requested: 1 from Wales, 2 from Northern Ireland, 3 from Scotland and 9 from England. The main reason for refusing the request was the amount of effort required to collect the information, which is one of the exemptions allowed under the FOI Act.
A total of 2483 comments were received from the 14 councils. 6 councils provided very large datasets of comments (105 to 895 comments, mean: 387.2), and 8 councils provided small datasets (4 to 55 comments, mean: 20.0). It was decided to include all legitimate comments from the small datasets and a maximum of 100 legitimate comments from the large datasets, to ensure reasonable equality of representation of comments from the different sources in the final sample. Comments were excluded if they were not related to the website (e.g. comments about experiences with the call centre) or impossible to categorise (e.g. unintelligible or vague comments). Although, due to the anonymity of the comments, there was no way of knowing how many comments came from any one individual, all comments analysed described legitimate usability problems. The resulting corpus comprised 620 comments (545 from the large datasets and 75 from the small datasets).
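For readers wishing to replicate this sampling step, the procedure can be sketched in a few lines of Python. This is a minimal illustration; the `build_corpus` name and the use of random selection within the large datasets are our assumptions, as the paper does not state how the capped subsets were drawn.

```python
import random

def build_corpus(datasets, cap=100, seed=0):
    """Keep all comments from small datasets; draw at most `cap`
    comments from each large dataset to balance representation."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    corpus = []
    for comments in datasets:
        if len(comments) <= cap:
            corpus.extend(comments)                    # small dataset: keep all
        else:
            corpus.extend(rng.sample(comments, cap))   # large dataset: cap at 100
    return corpus
```

With one small dataset of 5 comments and one large dataset of 250, this yields a corpus of 105 comments, mirroring the cap applied in the study.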
Content analysis was used to categorise the comments. The coding scheme involved a combination of a priori and emergent categories. The 34 usability problem categories identified by Petrie and Power (2012) were used as a priori categories, and emergent categories were developed as needed. Using an iterative open coding technique, the two authors initially collaboratively summarised and grouped problems from 50 comments. As a result, 15 new categories emerged. The resulting set of 49 specific categories was grouped into 9 major categories (see Table 1). The authors then each independently coded the same sample of 70 comments (11% of the sample). Inter-coder reliability between the two coders was very high (Cohen's kappa: κ = 0.87). The first author then coded the remainder of the problems.
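Cohen's kappa corrects the coders' observed agreement for the agreement expected by chance given each coder's category frequencies. A minimal, self-contained sketch of the computation (the function name and category labels below are illustrative, not taken from the study's data):

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' category assignments of the same items."""
    n = len(coder_a)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal category frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

For example, two coders who agree on 3 of 4 items, with marginal frequencies of {x: 2, y: 2} and {x: 3, y: 1}, obtain κ = (0.75 − 0.5) / (1 − 0.5) = 0.5; perfect agreement yields κ = 1.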

RESULTS
713 usability problems were identified. Table 1 presents the number of usability problems and percentages per major category. Content format comprises 3 sub-categories related to the format in which users expect to access information. The most mentioned sub-category was Content format not as expected: maps (8/1.12%).
Information architecture comprises 5 sub-categories related to the underlying organisation of contents and interactive elements in the website. This was one of the least mentioned categories, with only 6 problems (0.84%). The most mentioned sub-category was Structure not clear enough (3/0.42%).
Interaction comprises 9 sub-categories of problems that appear within an interaction sequence. The most mentioned sub-category was Duplication/excessive effort required by user (19/2.66%), followed by Interaction not as expected (13/1.82%).
Interactive elements comprises 7 sub-categories related to problems with different interactive elements (e.g. labels, icons, buttons). 27 problems were found in this category (3.79%). The most mentioned sub-category was Options not logical/complete (15/2.10%), followed by Input and input formats unclear (6/0.84%).
Interactive functionality comprises 4 sub-categories related to usability problems resulting from website functionalities that do not behave as users expect. The most frequently mentioned sub-category was Access failure (106/14.87%), by far the largest category of problems found. The second most mentioned sub-category was Operational failure (74/10.38%).
Compatibility issues comprises 2 sub-categories of problems related to accessing the website via different browsers or mobile devices. 7 problems were found in this category (0.98%). The most mentioned sub-category was Content not compatible with mobile devices (4/0.56%).

DISCUSSION
The first aim of this study was to investigate whether the types of usability problems in citizens' comments differ from those found using standard methods such as user testing and expert evaluation. The categories of problems found in the current analysis were compared with those proposed by Petrie and Power (2012), who evaluated six UK e-government websites with a range of expert and user methods.
Table 2 shows that over half of their problems were categorised as Interactivity (59.4%), whereas in this study only 10.5% of problems were in this category.
The second most frequently used category in their study was Physical presentation (21.0%), which in this study accounted for only 2.7% of problems. In addition, Table 3 shows that nearly 80% of the problems identified in the current study were in the emergent categories, and thus were not identified in the Petrie and Power study. There therefore appear to be many problems encountered in real interaction with government websites that are not captured by a standard usability study, even one with users. Of course, there are a number of differences between the studies, apart from the method of data collection, including the specific type of e-government website (central government in the Petrie and Power study, local government in this study) and the specific participants. However, two differences which are both strengths and weaknesses of the FOI method are that citizens have to care enough to make the effort to report a problem to the council, and they have to remember the specific nature of the problem.
The second aim of our study was to identify what the problems found in these datasets could tell us about the usability of e-government websites and users' experience of them. It seems that neither the physical presentation nor the information architecture of local government websites is salient enough for citizens to complain about. What our analysis revealed is that citizens complain when these websites fail to provide the contents and interactions they expect. Given the authenticity of the problems described by citizens, it is likely that the problems that directly affect their daily lives would be reported the most. This is not to say that the usability problems found using standard evaluation methods are not important; they just may not be the ones that citizens would remember to report. Citizens may still experience other problems which may be better captured by standard methods that facilitate their identification and recall.
However, the fact that these problems emerge from authentic interactions may raise questions about the ecological validity of the usability problems that users and experts report in artificial settings. For instance, the motivations and emotional implications of trying to pay the local council tax or applying for employment benefits in real life are very different from doing these tasks in a role-playing scenario for an evaluation. The consequences of being unable to pay the council tax, or of not getting social benefits, may affect users' understanding of the interaction, the importance they attach to particular problems, and their perception of problem severity.
These findings may also have implications for designers, developers, and content providers. In the process of developing effective and efficient e-government websites that support actual citizens' needs, it may be necessary to pay attention to the contents and experience of interaction from a different perspective. Developers of e-government websites should pay special attention to content as an essential aspect of interaction; the same is true for the interactive functionalities of e-government websites. Ensuring that council websites have the contents and interactive functionalities citizens expect may not be a new lesson, but it is one that needs to be applied to provide better user experiences.
Finally, we would like to reflect on the use of FOI requests in HCI research. Although not all the councils replied with data, a considerable corpus of comments was collected. Datasets differed in size, format and quality (e.g. councils provided spreadsheets, Word documents, and scanned PDFs). However, after cleaning and standardising the datasets, their utility was undeniable and, more importantly, the process was cost-effective. Collecting this amount of data about the websites of 14 different councils from real users using standard methods would have required considerably more effort, expense and time.
Working with FOI requests in research has ethical implications that need to be considered. In our study it was evident that all councils made efforts to anonymise the data by removing names, email addresses, and other personal data. However, there was a small number of instances where personal data had not been removed, and we anonymised those comments ourselves. A margin of error is to be expected given the size of the datasets the councils had to anonymise. We would therefore recommend that researchers interested in using FOI requests take further measures to anonymise the datasets.

CONCLUSIONS
Despite the effectiveness and convenience of using FOI requests for researching usability, this approach remains under-explored, and no comparison with traditional methods was found in the literature. This study found that datasets of citizens' comments about local e-government websites can be used to identify usability problems in a cost-effective manner. The use of FOI requests to access a set of ecologically valid usability problems is the main methodological contribution of this paper. This approach not only allowed known types of usability problems in local e-government websites to be identified, but also revealed that problems related to Content completeness and Interactive functionality accounted for nearly 80% of the problems citizens complain about on these websites. These findings suggest that it might be necessary to revise the classifications of usability problems and heuristics used in the usability evaluation of websites, as well as the complementarity and ecological validity of different evaluation methods.

Table 1 :
Number and percentage of usability problems per major category.
The most mentioned sub-category was Contact information is missing (66/9.26%), closely followed by Content is not accurate/up-to-date (65/9.12%).

Table 2 :
Usability problems per major category found by Petrie and Power (2012) and the current study.

Table 3 :
Problems in emergent categories in the current study