Introduction
Amongst academics, one senses growing dissatisfaction, disillusion, even despair with life in universities (e.g. Gill, 2009; Ginsberg, 2011a; Burrows, 2012; Haack, 2013). In discussions with colleagues from other institutions, virtually all speak of increasing frustration with their university, whether that university is in my own country (the UK) or elsewhere in Europe, or in North America or Australasia. (I am less familiar with the situation in Latin America, Africa and Asia, although since preparing this paper I have received evidence to suggest the pattern is common there, too.) All have tales of the latest management idiocy, of some new bureaucratic nonsense, of patronising instruction as to how to teach, of the latest crude ‘performance target’ they must meet in their research.1
It is puzzling what might be driving all this. Why, when the management literature of the last two decades has stressed the benefits of flatter organisational structures, of decentralisation and local initiative, of flexible and ‘lean’ systems and processes, have many universities been intent on moving in precisely the opposite direction of greater centralisation with a more hierarchical, organisational structure, top-down management and decreased local autonomy for departments, and ever more cumbersome and intrusive procedures? Why, when academics are so quick to criticise other organisations for bureaucratic inefficiency, do we seem to be so keen on creating ever more exquisite forms of bureaucracy in our own institutions? Why, when the literature on pedagogy points to the dangers of intrusive micro-management, do we believe that teaching to some centrally designed template is the way forward? Why, when it is well known that the application of performance indicators encourages blatant game-playing to maximise one’s score on the designated indicators and a neglect of other activities which, however worthy, are not captured by the chosen metrics, do we assume that this approach will result in ever more ‘excellent’ research with ever greater ‘impact’? And, perhaps most surprisingly, why, when one could hardly imagine a more intelligent and articulate group, nor one better placed to make its views heard, have academics (with just a few exceptions)2 remained so quiet and so meekly acquiescent to their fate?3
This proposition paper considers four main types of problems relating respectively to top-down university management, bureaucratic administrative procedures, teaching to a prescribed formula, and research driven by assessment and performance targets. The analysis draws upon a range of illustrative examples. It should be stressed that these are real examples based on extensive discussions with numerous academic colleagues from higher education institutions round the world. They should not necessarily be interpreted as a reflection of problems within my own organisation. The reader will doubtless recognise many of the problems as ones present in some form or another in their particular institution.
After a brief review of the literature and what it reveals about the relationship between organisational structure and the performance and effectiveness of organisations, we examine examples of the four types of problems. This is followed by an analysis of the possible drivers of these growing problems, including the search for ever greater ‘efficiency’, the rise of the audit society, the continuing development of new public management (including its digital offshoot; see Dunleavy et al., 2005), the escalating international competition into which all universities are now drawn, the growth in the number of administrative staff and a range of other factors, such as the growing reliance on headhunters to help fill senior university positions. The paper concludes by asking whether academics are in imminent danger of suffering the fate of the boiled frog.
Centralised top-down management
Twenty or thirty years ago, many universities were relatively decentralised.4 University departments, schools, faculties, research centres and other units were granted considerable autonomy in their teaching programmes, student recruitment, research projects and other activities. This is not to imply that such a structure was necessarily ‘better’, merely that it was different from that encountered today in most universities. The previous structure certainly had its problems, including the emergence of local fiefdoms, lack of consistency in the treatment of students, weak or incoherent research strategies, inordinate amounts of time spent on committees trying to coordinate efforts across departments, and so on. Faced with such problems, the solution seemed obvious to many vice chancellors, rectors and presidents – more centralisation, combined with stronger and more hierarchical top-down management and more formalised procedures (often involving ‘performance management’). Ironically, universities have been moving in this direction at precisely the same time as the management and organisational literature has been increasingly emphasising the benefits of flatter organisational structures, wider spans of control (in particular, taking advantage of the opportunities offered by IT), decentralisation and local autonomy for departments or sections.
Over recent decades, there has been extensive research by management and organisational scholars on the relationship between organisational structure and performance. Much of this has focussed on centralisation, ‘the extent to which decision-making power is concentrated at the top levels of the organization’ (Caruana et al., 1998, p.18). As Zheng et al. (2010, p.765) conclude from an extensive review of the literature, ‘the majority of scholars have agreed that a decentralized organizational structure is conducive to organizational effectiveness’. Burns and Stalker (1961) were among the first to point to the advantages of a decentralised ‘organic’ structure, stressing how this facilitated effective communication horizontally as well as vertically.5 Later researchers point to the benefits of decentralisation in terms of encouraging creativity (Khandwalla, 1977) and generating imaginative solutions to problems (Deal and Kennedy, 1982). Dewar and Werbel (1979) show how a decentralised structure increases the level of motivation and satisfaction among staff, while Schminke et al. (2000) find that a decentralised structure results in increased responsiveness to changes in the external environment.
Over the last 20 or 30 years of globalisation and growing competitive pressures, there has been increasing emphasis on the ability of organisations to generate and successfully implement innovation, both technological and organisational. Kimberly and Evanisko (1981) were among the first to demonstrate that the adoption of technological and organisational innovation is more prevalent in decentralised organisations. Later, in a very influential meta-review of the determinants of organisational innovation, Damanpour (1991) confirms the significant negative influence of centralisation and of formalisation on organisational innovation. [Formalisation can be defined as ‘the degree to which decisions and working relationships are governed by formal rules, standard policies, and procedures’ (see Lee and Choi, 2003, p.192).]
Later work shows, first, that the importance of decentralisation is even greater for organisations operating in uncertain environments (e.g. Baum and Wally, 2003; Nahm et al., 2003). Secondly, as we move towards a more knowledge-intensive economy and society, the importance of knowledge management has become all the greater. Various studies demonstrate that a decentralised organisational structure is more conducive to effective knowledge management. For instance, Nahm et al. (2003) show that the benefits of decentralisation are all the greater in organisations where there is more learning, more knowledge-based work and more knowledge sharing. Likewise, Pertusa-Ortega et al. (2010) reveal how decentralisation fosters knowledge creation because more individuals become involved in decision-making, generating a greater number and variety of ideas (which may result in the creative integration of divergent perspectives) and helping to ensure the successful implementation of the chosen ideas.
While there are many studies of the relationship between organisational structure and performance in the private sector, there are far fewer on public organisations, and very few indeed focussing on universities. One exception is the study by Cameron and Tschirhart (1992), which concludes that ‘Participative decision processes are more effective than autocratic or centralized decision processes primarily because in a post-industrial environment the need for multiple sources of information and multiple perspectives is escalated’ (p.102). More recent studies have been more critical. Diefenbach (2005), in a case study of the effects of implementation of new public management on a major university, reveals fundamental internal contradictions in the approach and notes the ‘cynical use of latest management techniques by senior managers in order to gain more power and control internally’ (p.126). Nedeva and Boden (2006) analyse the impact of neo-liberalism on universities, identifying the dangers it brings in terms of loss of capacity to generate ‘understanding’ type knowledge. More recently, By et al. (2008, p.21) argue that
… the audit culture and managerialism have created an environment that encourages opportunistic behaviour such as cronyism, rent-seeking and the rise of organizational psychopaths.6 This development will arguably not only lead to a waste of resources, change for the sake of change, further centralization, formalization and bureaucratization but, also, to a disheartened and exploited workforce, and political and short-term decision-making.
Given that universities operate in uncertain environments and are centrally involved in the generation, diffusion and application of knowledge, as well as in nurturing creativity, innovation and problem-solving abilities, there is all the more reason to expect that the trend in universities over the last 20 years would have been towards a more decentralised structure. However, the reverse appears mostly to have been the case. Why might this be?
A new vice chancellor, rector or president (henceforth the term ‘vice chancellor’ is used to cover all these titles for a university head) has generally been appointed to address specific problems and to improve the university’s performance in certain respects (often financial and in terms of its position in various league tables). Almost without exception, the vice chancellor assumes that the solution to these problems and challenges involves a more centralised approach to decision-making and running the university. Invariably, plans will include ‘growth’ (vice chancellors feel it is essential to demonstrate to those who appointed them that numbers have gone up during their period in office, not least to justify the sizeable salary increases they have come to expect as a right), and they may well assume that they have no option but to centralise decision-making in order to maintain ‘control’ as their university increases in size. Moreover, faced with escalating competition (whether for students, income or league-table positions), vice chancellors again tend automatically to assume that the solution is greater centralisation of decision-making. Perhaps they have no confidence that a decentralised but well-motivated institution can survive in an era of intense competition. Yes, there may well be problems with lower-performing departments that need to be solved, but surely this does not mean the imposition of a lowest-common-denominator approach in all departments. Different faculties or departments operate in different environments or market niches, and they may therefore benefit more from local autonomy to experiment, adapt and evolve, as opposed to having a standard centralised approach imposed from above.
In addition, many new vice chancellors, particularly those appointed from outside the university, reach instinctively for the ‘lever’ of restructuring – merging departments and other units into larger agglomerations of schools or faculties. This has the benefit of resulting in a cleaner, simpler organisational chart, and of fewer individuals reporting to the vice chancellor. Yet there is no rigorous evidence that bigger operating units in higher education institutions are more efficient; the belief that scale is the solution has overtones strangely reminiscent of Soviet ideology. The belief has often resulted in more layers of hierarchy (deans of faculties, heads of schools, departmental heads, and so on). Over time, there has been a concomitant withering away of consultation. Instead, changes tend to be imposed by fiat, often introduced by documents or emails that begin with that ominous Orwellian phrase ‘The university has decided …’. Such structural changes frequently seem to defy logic. Those upon whom the change is being inflicted are left wondering, ‘If this is the solution, what is the problem that is supposedly being solved?’. There is no apparent awareness within senior management of the potential disadvantages of the new structure, let alone any balance sheet of the respective pros and cons of the old versus the new structure. All of this has resulted in deteriorating morale and a growing sense of disaffection and even alienation among staff (e.g. By et al., 2008; Gill, 2009; Burrows, 2012).
In the past, prior to a proposed restructuring or any other major management change, there was normally an extensive process of consultation with faculty; for example, with senior university officials attending departmental meetings to explain the problem and the proposed solution, to address any queries and indeed to listen to any alternative solutions. This would then be followed by intensive debate in the university senate (or whatever body was concerned with academic governance). Most of this has long since gone, replaced by email directives and summonses to attend termly meetings with the vice chancellor and the senior management team, meetings that essentially take the form of a presentation by the senior managers followed by a couple of desultory questions from the audience on some relatively trivial matter (such as car parking arrangements).7
The drift towards centralisation also takes other forms. For instance, in the past, departments and other operating units would contain quite a few support staff (secretaries, technicians, librarians, financial administrators, and so on). However, for reasons that defy logic (or at least have not been explained to university faculty), these have often been moved from departments to central offices.8 To take one example, in the past most IT staff were ‘on the ground’. Since the majority of IT problems turn out to be relatively simple, an expert could resolve most of them in just a few minutes. However, once IT staff have been centralised, the academic facing an IT problem needs to go online and submit a request for help (difficult when your computer has just crashed!) and to receive a booking number. With luck, an IT worker may schedule a visit two or three days later, by which time a crucial deadline for submitting a research proposal may have been missed.
Similarly, the removal of financial administrators from departments to central offices can increase the delay in submitting research applications (often against tight deadlines). The centralisation of other administrative staff means academics inevitably end up doing more for themselves (photocopying, making travel arrangements and so on), thus lowering their productivity with respect to their core jobs. Moreover, the centralisation of support staff charged with student support can have a negative impact on student satisfaction as central administrators lack the local knowledge required to address many of the needs of students.
In summary, despite the wealth of management literature on the benefits of a decentralised structure, particularly for organisations where knowledge, creativity and innovation are central and for organisations operating in uncertain environments, vice chancellors have assumed that further centralisation is the answer to their problems. At the same time, they have mostly failed to explain to their staff what specifically are the goals of centralisation, why centralisation offers the best means of achieving these goals, and what are the success criteria against which such changes should be judged. In any other organisation, academics would ruthlessly expose such failings.
Bureaucracy
Universities today operate in a demanding and fast-moving environment, subject to a plethora of pressures, expectations, regulations and laws (Bozeman, 2015).9 Yet, in responding to these, the tendency among many universities – and perhaps this is more pronounced in the UK than elsewhere (see Hoggett, 1996) – is to interpret all these over-literally, and to devise some gold-plated solution to even a relatively minor problem, so that they can then triumphantly claim to have adopted best practice. The result is all too often a disproportionate response in terms of greater formalisation and more burdensome bureaucracy (see Bozeman, 2015), with no consideration of the load (particularly the cost in terms of additional time) being imposed on those lower down the organisation.10
For example, in the case of the UK, and perhaps other countries as well, government concern with illegal immigrants has sometimes focussed on the problem of some students registering at colleges in order to obtain entry visas, and then not turning up at these colleges, but instead disappearing into the general population. This may well be a problem at language schools, lower-level business or administrative colleges and the like. But there is little evidence that it is a significant problem at universities. Nevertheless, many UK universities have responded with gusto. In one case, lecturers suddenly received an email instruction (all too many initiatives these days, especially those of a more difficult or controversial nature, are launched by email)11 to complete an attendance register in each and every lecture. This ignored the fact that there could be several hundred students in a lecture hall, so a roll call would take far too long. It also ignored the fact that such a procedure would be seen as demeaning by students, breeding resentment among them. The alternative was to circulate an attendance list, though lazy students who cannot be bothered to turn up for lectures simply ask a friend to sign in their place. The relevant report from the immigration body (which, according to the university, was the source of this new requirement) did indeed mention roll calls, but only for schools and colleges. Universities were not expected or required to adopt such measures, but merely to report whether, on the basis of existing mechanisms, they were aware of any student who had failed to show up.12 Most academics probably ignored this instruction. Other emails followed, demanding in ever more strident terms that the procedure be followed, but they eventually stopped, leaving a legacy of resentment and a hardening of the division between ‘them’ and ‘us’.
A year or so later, concern had evidently spread to the monitoring of PhD students at UK universities. It was never clear whether this was also fuelled by worry that some might not be bona fide students (even more unlikely at PhD level), or whether it was instead a concern that PhD supervisions resulted in little or no written record and hence were potentially vulnerable to official complaint from disaffected students that their doctoral training had been inadequate. Whatever the cause (and the fact it was never explained is symptomatic of the wider problems examined here), various universities decided to introduce procedures to provide an official record that PhD supervisions had taken place. One university came up with a bizarre solution. The scheme was again launched by a collective email: supervisors were instructed to book a room on the university computerised room-booking service prior to each supervision. This was ironic because, a year earlier, in a move towards centralisation, the power to book a room had been removed from academics (who presumably could no longer be trusted to do this responsibly) and given to a few dedicated administrators. In consequence, few academics still knew how to use the computerised room-booking service. However, the university had anticipated this problem and offered training sessions for supervisors. No coherent justification was ever given for this bureaucratically baroque procedure for recording PhD supervisions. There was no recognition that some supervisions are not booked in advance – they just happen when a doctoral student knocks on a supervisor’s door to ask for advice, or when the supervisor and student bump into each other over coffee. Another fundamental flaw was that the room-booking service could be used only to book designated teaching or committee rooms; other rooms were not part of the system, so supervisions conducted elsewhere – say, in the coffee area or over lunch in the cafeteria – were all ‘unsupervisions’ in Orwell’s terminology.
Research ethics is another area where there has often been a process of bureaucratic overkill. For certain specific types of research, it is perfectly right and proper that the principal investigator should go through an ethical review procedure – for example, research involving medical patients or animals, or studies involving vulnerable individuals, such as children or illegal immigrants. But outside these specific areas, for 90 or 95% of research projects, there are no significant ethical issues that need to be externally reviewed. (There are, after all, already well-established conventions from professional bodies to guide researchers in dealing with such matters as data confidentiality and anonymity of interviewees.) In response to pressures, primarily from medical research funding agencies, many universities have developed a thorough, but also extremely complicated, ethical review procedure, which they have then proceeded to apply indiscriminately to all research projects. For instance, any project involving interviews is often seen as requiring ethical review even though most do not involve vulnerable individuals. Most involve employees of organisations who are being interviewed by virtue of their position or professional expertise. Those responsible for enforcing such an all-embracing ethical review procedure are apparently unable to conceive of a distinction between vulnerable individuals and other interviewees. A typical university approach involves a multi-page form requiring inordinate amounts of redundant information. To oversee the ethical review process, a large central committee of senior university staff and external members must be established, along with numerous departmental sub-committees, to review the hundreds of cases caught up in this procedure. Indeed, the procedure may be applied not just to the research projects of academic staff, but also to PhD projects, MSc dissertation projects and even undergraduate dissertation projects. Undergraduates who might previously have carried out a few interviews of other students or local businessmen must now go through this complex and time-consuming ethical review procedure. Many may be disinclined to pursue this sort of research. Likewise, social work students may be discouraged from fieldwork despite the demands of their future employment.
In short, this is a typical example of a response that is totally disproportionate to the nature and extent of the problem. The result is a substantial addition to the workload of all those involved, in particular of busy academics. On the one hand, they are being exhorted by senior university management to be more entrepreneurial in bringing in research funds; on the other, they are finding themselves beset with ever more bureaucratic hurdles to leap. It is by no means clear that a cumbersome ethical review procedure results in greater attention being paid by academics to ethical issues. It may actually encourage the belief that the problem goes away with the filling in of forms and becomes someone else’s responsibility. In the past, when an ethical issue arose in a research project, it was normally picked up by the academic involved and discussed with senior colleagues. Sensible measures for dealing with the matter would be put in place, providing the academic with a sense of ownership of the measures, and of responsibility for ensuring that no ethical problems arose. The new approach, by contrast, is more likely to generate a sense of infantilisation among academics. Ethics and integrity are certainly of vital importance in research, but surely an approach that focuses on training students and early-career academics to be sensitive in dealing with ethical issues is better than one that suppresses ethical judgement.
While much of this creeping bureaucracy may be attributed to external requirements, some of it is self-imposed. For example, in days gone by, many decisions about teaching administration could be taken by the individual course convenor or programme director in consultation with relevant academic staff. On one occasion, following a switch to requiring that all courses be assessed and not just those later in the programme, uncertainty arose over the correct procedure for a student who had failed assessment in only the first term. The director of the teaching programme duly consulted with other academic staff and all agreed the student should retake the assessment by a certain date. However, when the programme director reported the plan to the university, he was informed that the matter could not be decided in this way. A formal meeting of the teaching committee would have to be convened, attended by a quorum of academic staff, and the appropriate administrative staff to take minutes. On turning up for the meeting, for which this was the sole item on the agenda, the academics were surprised to find not one but five university administrators. And instead of the meeting taking a few minutes, as they had anticipated, it went on for an hour as the various administrators thought of more and more procedural matters that supposedly needed to be addressed. Finally, towards the end, the programme director asked if the procedure which had just been laboriously agreed over the course of the previous hour could be applied if other students were to fail a course the following term. He was informed that this was not permissible: the teaching committee would have to be reconvened to consider the matter all over again.
Doubtless the reader can come up with a plethora of other examples where academic activities have, in recent decades, become ever more formalised, complicated, bureaucratic and time-consuming. Indeed, it is difficult to think of a single area of university activity that has become less bureaucratic.
Teaching
In the past, it was certainly the case that some lecturers were inadequate. They were not formally trained to teach, and they received little or no structured feedback. Improvements were needed. Yet, as with other improvements in universities, the tendency is to take things too far. The assumption is that if something is good (for example, training courses), then more is better. However, as with most things, diminishing returns quickly set in, while the costs of yet further ‘improvements’ rise, not least in terms of time demands. Another cost takes the form of increased irritation among academic staff at being treated like children rather than professionals; for example, with patronising instructions on how to use PowerPoint in lectures. Many meekly follow instructions; for example, that each lecture must begin with learning outcomes, which results in a form of teaching by template. Lecturers preparing new courses are likely to encounter particular problems as they wrestle with complex instructions about modularisation and credits. For example, courses or modules must be a certain size in terms of credits. Those deemed too big or too small must become the right size. Quite why is known only to central university administrators. Such changes are introduced by fiat, with no prior consultation and no coherent rationale. Again, the result is resentment, cynicism and sullen acquiescence.
In previous years, similar experiences were encountered with ‘semesterisation’, an ugly neologism typical of the managerialist terminology invoked for changes that lack any credible rationale. In many countries, the academic year has traditionally been divided into three terms, each of 10–12 weeks, although some US universities have long operated on the basis of two semesters. At a certain point, many UK universities decided to move to a system of two four-month semesters. (The fact that a semester is, by definition, a six-month period seems to have been overlooked.) This was claimed to be ‘better’, even though the first semester is interrupted by Christmas and New Year holidays, and the second by Easter holidays. Again, there may have been some explanation for the change that was known to senior university officials, but academics were left struggling to think what could be the problem to which two semesters was the solution.
In order to improve the quality of teaching, a process of student feedback has been in operation for several decades. Forms are completed by students at the end of each course, and the results passed back to the lecturers involved. In addition, student representatives or departmental teaching committees may pursue the issues raised. In recent years, however, new procedures have been introduced, not least because of the rapid increase in student fees and students feeling that they are customers. The National Student Survey was introduced in the UK in 2005.13 As with any new assessment procedure, this quickly affected the behaviour of the system it was monitoring, not always in quite the way intended. Individual universities realised they needed to ensure a high response rate (a low response rate was self-evidently ‘a bad thing’), as well as a high proportion of positive responses. As the survey time approaches, a sequence of increasingly panicky instructions is sent to lecturers by email (as usual), asking them to do all they can to encourage students to complete the form (for example, by setting aside time in lectures so they can make sure students do as they are told, or by offering students incentives to complete the survey). Lecturers are also expected by university administrators to explain to students that too many negative responses will result in the university looking bad, hence devaluing the standing of their eventual degree in the eyes of employers. In contrast, a very positive response will enhance their employment prospects. It is not clear what significance, if any, can be attached to the results of such a survey (see Stark and Freishtat, 2014). Does a positive evaluation mean that the quality of teaching in a university is high, or merely that the university has pursued an aggressive strategy in encouraging its students to provide positive feedback? There must also be concerns about the impact of this particular aspect of ‘education’ on the ethical values of students coerced or nudged into playing the game.
Likewise, it is not clear what significance can now be attributed to the proportion of first class or upper second class degrees each university awards. In the past, this was used as an indicator of the quality of graduates and the education provided by universities. However, just as the use of exam passes as an indicator of school performance in the UK has resulted in barely credible year-on-year ‘improvements’, so this university performance indicator has inevitably been the victim of grade inflation, with success rates rising year after year (Johnes, 2004, p.472).
In short, a succession of doubtless well-intentioned exercises to enhance the quality of university teaching has consumed growing amounts of effort and time, stimulated various forms of game-playing (some involving questionable ethics), and encouraged teaching to a template while having little or no benefit in terms of the quality of teaching offered to students.
Research
As with teaching, so there have undoubtedly been problems in the past with university research – for example, research findings that were never published; the lack of a clear and coherent research strategy whether at the level of the department or the university as a whole; and some staff choosing to do no research in order to concentrate on high-quality teaching. Beginning with the UK, many countries have introduced various forms of research assessment. In the UK, the first research assessment exercise (RAE) was conducted in 1986 and since then a further six have been carried out. In the first two or three of these, significant progress was undoubtedly made in tackling the sort of problems identified above. Then diminishing returns began to set in, the easy gains having already been made (Geuna and Martin, 2003). The costs of each successive RAE rose inexorably as universities put more and more effort into preparing their RAE submissions in order to do better than their competitors – a classic example of the Red Queen effect (Van Valen, 1973). In each university, a senior officer (or an entire team) would be designated to oversee the university’s preparations. And in each department or unit of assessment, one or more senior academics would be delegated the task of working with staff over the years before the RAE submission to ensure each had four good publications ready in time. When the cost of all the time spent on these preparations is added to the direct cost of operating all the panels to assess all the university submissions, then the total cost of the 2014 assessment exercise is estimated to be about £246 million (Technopolis, 2015, p.1). This is a rather expensive solution to the problem of distributing research funds to 100 or so UK universities. (Other government departments would be ridiculed if they spent a similar proportion of funding on resource allocation.)
Over time, the RAE has tilted the balance of effort from teaching to research, as only in the latter does improved performance bring financial rewards (although this may change if the proposed teaching excellence framework is introduced in UK universities). In addition, promotion decisions are more likely to be swayed by RAE performance than by teaching ability, sending a strong signal to academics that this is where they should concentrate their efforts. Furthermore, given that the RAE has been structured around traditional disciplines, and given that departments rated highly by the discipline-based panels are those perceived to have contributed most to the disciplinary mainstream, the RAE and its successor, the research excellence framework, have sent a strong signal that work within single disciplines is most highly valued. Younger researchers in particular may have been discouraged from attempting interdisciplinary, heterodox and non-mainstream research, as well as risky and long-term research. More generally, the trend has been for researchers to become more compliant with disciplinary authority (Martin and Whitley, 2010).
The RAE focuses on research publications to the neglect of other research outputs (such as trained people, new instrumentation or techniques, presentations at conferences and other meetings, contract research and consultancy), many of which are often as important in terms of transferring ideas or knowledge (Salter and Martin, 2001). Moreover, in the 2014 research excellence framework (REF) – another term with somewhat sinister Orwellian overtones – the focus in some fields narrowed further to just papers in leading journals, with books, book chapters, papers in lesser journals and so on counting for little, irrespective of the quality of research they contain (Martin and Whitley, 2010). Emphasis on what are called ‘4* journals’ encourages a very one-dimensional view of the work of an academic. In some fields or certain types of research, these journals may be the most important outlet for research findings, but not in others. For interdisciplinary research (or work within a specialty that does not fit within one of the established disciplines around which REF panels are organised), for user-oriented research, for long-term and large-scale research where the results can be conveyed only in a book, or for research focussing on a topic too narrow to be of interest to a mainstream journal, leading journals may not be suitable, and researchers will be disadvantaged, as they have been in all the RAEs. Assessment panels organised on the basis of traditional disciplines, while reasonably effective at evaluating research from departments operating within the mainstream, are intrinsically incapable of dealing with, and treating fairly, any department or unit operating outside the disciplinary mainstream (Rafols et al., 2012).
The introduction of national research assessment systems, almost invariably organised around disciplines, also means that in the UK and elsewhere there lurks a fundamental contradiction at the heart of science policy. For 20 years or more, governments have exhorted researchers who are supported with public funding to go forth and find ‘users’ for their research, to ascertain the long-term research needs of these users, and to factor these into their research agenda, involving users at an early stage in the research rather than just approaching them in the final stages. Many academic units have responded to such policies, recognising that they had a responsibility to offer something to society in return for public funding of their research. Those that have done so have almost invariably found that this takes them into interdisciplinary research of one form or another. Users’ problems rarely, if ever, come neatly wrapped within a single discipline. The results of such user-oriented research, where they merit academic publication (and not all do), may be suited to specialist interdisciplinary journals, or perhaps to journals in several adjacent disciplines. Then, every five years or so, along comes the research assessment exercise (now REF), and researchers in such interdisciplinary research units are forced to squeeze themselves into a single disciplinary pigeonhole, ready to be assessed by a panel drawn almost entirely from the mainstream of that discipline.14 Faced with a submission containing publications in specialist interdisciplinary journals and in journals spanning several disciplines, with at best only a small fraction seen as top journals in any single discipline, the panel will find it very hard to award a top grade, even to a unit widely regarded as a world leader in its field.
In an effort to square the circle of pursuing research ‘excellence’ while at the same time addressing economic or societal needs, the recent UK research excellence framework added the assessment of ‘impact’ to the existing assessment of research excellence. While laudable in principle, it immediately raised a new problem. Impact comes in a great variety of forms. Impact for an engineer is very different from impact for a biomedical researcher, or a sociologist or a historian. It is far from clear how one can assess impact systematically and rigorously across all fields and across all institutions in a truly comprehensive and reliable manner (Martin, 2011; see also Samuel and Derrick, 2015). There will undoubtedly be widespread criticism of the simple-minded approach adopted in the 2014 REF, and work will start to ‘improve’ the assessment of impact in the following REF. REF 2 will consequently be more elaborate, more burdensome and more time-consuming. It will also encourage more sophisticated game-playing. Already, some universities, intent on doing well in terms of impact, have gone down the route of hiring professionals with expertise in advertising, marketing or PR to help write their impact stories, creating a new industry. In order to keep up, other universities will increasingly be forced to do likewise. (In other words, the policy for assessing impact has already had an impact of its own.) And so the cycle of ever increasing elaboration and game-playing will be repeated in REF 3 and beyond in a new version of the Red Queen effect (Martin, 2011).
What of the benefits of adding impact assessment to research assessment? While there are undoubtedly some, as researchers and institutions pay more attention to increasing the impact of their research,15 there will also be costs. The response to any attempt to measure impact is that the assessed focus their efforts on those forms of impact more easily captured by the assessment methodology. Other sorts of impact that are less direct, and more diffuse, complex and long-term are neglected (Bevan and Hood, 2006; McLean et al., 2007). Yet, these other forms of impact may be at least as important. As Einstein is reputed to have observed: ‘Not everything that counts can be counted, and not everything that can be counted counts’.16 In short, while some of the early efforts to improve the efficiency of university research may have resulted in significant gains, attempts to achieve yet further gains have come at a disproportionate cost. Assessment schemes and performance indicators have over time tended to skew research towards safe, incremental, mono-disciplinary mainstream work guaranteed to produce results publishable in top academic journals, and away from interdisciplinary and more heterodox, risky and long-term research. They have also generated perverse incentives, encouraged cynical game-playing to beat the system, and resulted in various unintended consequences.
Discussion and conclusions
From analyses of other sectors (e.g. Bevan and Hood, 2006; Boddy, 2006; Hood, 2007; McLean et al., 2007; Diefenbach, 2009, 2013), it is clear that the problems discussed above are far from unique to higher education. Similar problems are being encountered in schools, hospitals, the police force and elsewhere. What might be the common driving forces behind all this? One may be the drive for ever greater economic efficiency, narrowly defined as more output per unit input with little regard for quality or anything else that cannot be measured in simple economic terms. Such an approach may suit organisations whose business model is based on mass production and standardisation rather than customised production. As Woodward (1958) showed over 50 years ago, an organisation’s choice of technology exercises a significant influence on its organisational structure, with organisations opting to pursue a trajectory based on mass production and standardisation benefitting from greater centralisation and hierarchy, while customised production demands a flatter organisational structure with greater dispersion of control. If universities have decided that they should pursue policies based on mass production and standardisation, then the pursuit of greater centralisation and hierarchy perhaps makes sense, enabling the production system to be more controllable and predictable. But is that what universities have now descended to?
Related to this are the on-going consequences of the process of new public management, with its baleful emphasis on accountability, performance targets and the like, all of which encourages changes in behaviour to maximise one’s score according to the designated metrics (see Burrows, 2012). In the case of UK schools, for example, the emphasis on league tables based on the percentage of children passing exams has resulted in strategic decisions by schools as to which exam board offers the easiest curriculum, which subjects have the highest pass rates, which students should be entered for which subjects, and so on (Hartford, 2012). Whether all this has actually resulted in better school education is unclear. Similarly, in the UK National Health Service (NHS), the political prominence given to performance targets (such as reducing waiting list times) has spawned elaborate gaming. For instance, patients are left in ambulances outside the hospital until the accident and emergency department is able to deal with them within the prescribed period; or patients diagnosed as being in need of a particular treatment are not placed on a waiting list if doing so would jeopardise hospital or treatment targets (see Bevan and Hood, 2006). Likewise, in the case of the police, only certain types of crime may be reported, while others may be re-classified to produce an improving trend (Coulson, 2009, p.277). Whatever benefits new public management is supposed to bring (and there may indeed be some) need to be set against the deleterious effects and wasted efforts associated with such game-playing (see Hood, 2006).
Another wider driving force in universities is linked to globalisation and increasing competition, whether for students, income or academic staff. One way in which this has manifested itself is in the current obsession with international league tables, whether the Shanghai rankings or one of the many other such rankings. Though the methodology involved in these rankings is highly dubious, all vice chancellors, rectors and university presidents seem intent on doing whatever it takes to improve the ranking of their own universities. Because Nobel prizes feature so prominently in the Shanghai rankings, disputes arise over which university can claim Nobel laureates. For example, the 2007 Nobel Prize for Chemistry was awarded to Gerhard Ertl, who, according to the official Nobel website, carried out his prize-winning research at the Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin. However, he had also held a number of part-time and visiting posts at the Free University, Humboldt University and the Technical University of Berlin, all of which duly claimed him as their own. Unable to sort out these competing claims, the compilers of the Shanghai ranking had no option but to drop these institutions from their rankings over the following years.
A further factor that appears to be at work is the growing use of headhunters to fill senior university posts.17 While the use of headhunters began in the private sector, by the 1990s the practice had spread to universities, not just for vice chancellors and their deputies, but subsequently for deans, heads of departments and directors of research centres, and later even for professors. Headhunters have often been used in combination with traditional academic search committees. To justify the very considerable sums of money they charge for their services, headhunters have to demonstrate their ability to bring other, less obvious candidates into consideration. Indeed, they work hard at the short-listing stage to show that their candidates are the strongest. The consequence has often been the appointment of an outside candidate who would probably not have been appointed through the traditional academic process. In an unfamiliar university environment in which they have no powerbase and no understanding of organisational systems and values, such external appointees have often reacted by making decisions that seem to defy all logic. The more arguments and evidence are used by opponents to show the flaws in their plans, the more determined they become to demonstrate their authority by sticking to them.
Another transformation in universities over the last decade or two is the dramatic rise in the number of central university administrators and support staff.18 Each new initiative launched by senior university officials seems to require more such staff. Once in post, they justify their continuing existence by coming up with new bureaucratic procedures or adding to the complexity of existing ones. Bozeman (2015) offers an explanation for the growing amount of red tape in publicly-funded research, identifying the main drivers in universities as bureaucratic overlap, response to crises (e.g. the Stanford presidential yacht scandal), and social and political ‘side payments’ [‘rules that must be attended to by university researchers and administrators but that do not affect the quality or quantity of scientific work’ (p.26)]. In the view of some, ‘institutions of higher education are [now] mainly controlled by administrators and staffers who make the rules and set more and more of the priorities of academic life’ (Ginsberg, 2011a, p.1).
All of this raises the question: why have most academics so meekly accepted these developments? Some may be too frightened to voice their concerns publicly, particularly if (like most full professors) they are on performance-related pay. Others may have bought into the ideology that all this constitutes progress, that such changes represent the only way of addressing the problems and challenges that universities face. (These are likely to include academics who go on to assume senior roles in universities, in which they then actively contribute to inflicting a more managerialist approach on their colleagues.) Most, however, remain baffled and frustrated, feeling powerless to resist, not least because teaching, research and their administrative responsibilities leave them chronically overstretched and unable to mount a coherent opposition.19
An analogy can be drawn with the boiled frog (Tichy and Ulrich, 1984, p.60). Experiments by physiologists in the 1870s suggested that if a frog is placed in a saucepan of cold water, which is then very gradually brought to the boil, the frog will not jump out but will remain until it is eventually boiled. In contrast, a frog dropped into a saucepan of very hot water immediately jumps out (Sedgwick, 1882). The empirical truth of this has since been challenged (Gibbons, 2007). Nevertheless, it offers a beguiling metaphor for what we have been witnessing in universities over the last couple of decades. If academics 20 years ago had been suddenly presented with the panoply of changes described above, to be implemented in one fell swoop, the level of opposition would have been such that the changes would have been thrown out en masse. Instead, however, the changes have been introduced incrementally and by stealth. At each step, academics may have felt: ‘Having accepted all this, why make a fuss about just a bit more?’
But if academics continue to acquiesce to yet further changes of the type examined here, they risk eroding their integrity, self-worth and dignity, becoming mere cogs in the higher education machine. It is not clear whether academics are yet at the stage familiar to Winston Smith in Nineteen Eighty-Four or still some way off, but the direction in which they are heading is clear.20 The purpose of this proposition paper is to promote debate on these matters. If others share my concerns, I hope they will join in the debate. If they do not, the fate of the boiled frog awaits.