
      Phronetic expertise in evidence use: a new perspective on how research can aid educational policy development


            Abstract

The notion of evidence-informed policy making and the arguments in its favour are well established. At the same time, however, regular evidence use is yet to become a routine way of life for educational policy makers, particularly within the UK. This paper engages with the notion of expertise in evidence use, and with Flyvbjerg’s idea of phronesis. It also details how the phronetic approach can be adopted by policy makers and the potential implications for the policy development process. Given the issues that abound with current attempts to embed and enact evidence-informed policy making, the phronetic approach presents an alternative and viable way both of perceiving how evidence is currently used and of establishing enhanced levels of evidence use. In particular, the paper spotlights a need for current thought in this area to move away from rational and linear perspectives, and instead to encourage policy makers continuously to incorporate the most recent evidence into their thinking and so make well-rounded decisions. The mechanisms required to facilitate phronetic expertise are examined, as are the cultural issues that need to be addressed.


            Introduction

The pursuit of evidence-informed policy is based on the premise that policy outcomes will be improved if decision making is aided by knowledge that is both of quality and pertinent to the issue in hand. This premise is explicated through the work of advocates such as Oakley (2000, p.3), who argues that evidence-informed approaches ensure that ‘those who intervene in other people’s lives do so with the utmost benefit and least harm’; this is also described by Alton-Lee (2012) as the ‘first do no harm’ principle. Failing to employ available evidence can also lead to situations where public money is wasted and members of society are not offered treatments or interventions at points in their lives where they might provide most benefit (e.g. Scott et al., 2001; Lee et al., 2012). Oxman et al. (2009) summarise the benefits of being evidence-informed by suggesting that evidence use increases the probability of policy being effective, equitable and value for money.

            Initiatives designed to enhance evidence use by government

Numerous initiatives have been instigated by governments and other stakeholders (both in the UK and internationally) in an attempt to improve the links between evidence and education policy. Gough et al. (2011), as part of the Evidence Informed Policy in Education in Europe (EIPEE) project, for example, identify 269 instances of such linking activity in a survey of 30 European countries. They suggest that ‘the findings from the survey [indicate] a high level of activity across Europe and demonstrate that a wide variety of approaches [have] been taken to try to improve the use of research evidence in policy settings’ (Gough et al., 2011, p.4). A catalogue of UK government initiatives, meanwhile, is set out in Brown (2013). The most recent of these include the 2012 Civil Service Reform Plan,1 which contains a commitment to the widest possible engagement with external experts and to investigating the feasibility of setting up an independent institution that might determine what works with regard to social policy. In addition, in 2013 the UK government announced the launch of the What Works Network2 – six independent ‘evidence centres’ responsible for producing and disseminating research to decision makers in such areas as crime reduction, active and independent ageing, early intervention, educational attainment and local economic growth. The aim of the centres is to support local decision makers in investing in services that deliver the best outcomes for citizens and value for money for taxpayers. This is to be achieved through the collation, assessment and synthesis of published evidence on the effectiveness of interventions, assessing these using a common currency, publishing clear synthesis reports and sharing findings in an accessible way (see Cabinet Office, 2013).

While such initiatives abound, it is also argued that their impact to date has been limited (Brown, 2013). For example, while several studies suggest that capacity to understand and consider evidence does, to an extent, exist at the level of the individual policy maker (e.g. see Campbell et al., 2007; Brown, 2009), Nutley et al. (2007) argue that the effects of these initiatives, designed to improve evidence use across the Civil Service, have generally not been fully evaluated, with the evidence of success restricted to case studies and simple anecdotes. Similarly, although the UK’s Government Office for Science (2010) observes that within the Department for Education (DfE) ‘there is strong and active leadership support for evidence-based policy making’ (2010, p.22), programmes such as Making Policy Happen, designed to shift policy makers’ behaviours towards better consideration and use of evidence as part of their decision making, are yet to be fully embedded. Making Policy Happen does not currently feature as an active initiative on the DfE’s website. It should also be noted that a number of the examples of best practice spotlighted by the Government Office for Science (and which, it was felt, served to enhance the DfE’s capability to engage with and use evidence) appear to have been discontinued. For instance, the department’s annual research conference is no longer mentioned on www.dfe.gov.uk. Gone, too, are the annual analysis and evidence strategies, the DfE’s annual statements of evidence requirements and priorities (see Department for Children, Schools and Families, 2008, 2009).

There have been few attempts in the UK to assess quantitatively the uptake of research by policy makers. Such approaches have been undertaken elsewhere, however. For example, Landry et al. (2003) surveyed 833 government officials from Canadian federal and provincial public administrations to examine the extent to which they employed academic research as part of the policy process. They used the Knott and Wildavsky (1980) scale of research use, which ranges from reception (‘I received the university research pertinent to my work’) to influence (‘university research results influenced decisions in my administrative unit’) via the stages of cognition, discussion, reference and effort (‘I made efforts to favour the use of university research results’). As Webber (1992, p.21) observes, the points along the Knott and Wildavsky scale are ‘meant not only to capture the extent to which information is processed cognitively by the policy makers but also its consequence in the policy process’. The scale can be considered cumulative in the sense that cognition builds on reception, discussion on cognition, reference on discussion, effort on reference, and influence on effort. Since research rarely provides an immediate solution to a policy problem (Weiss, 1986), policy makers were asked to consider the previous five years.
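The cumulative structure of the scale can be made concrete with a short sketch (illustrative only; the Python names below are mine, not part of the original instrument). Because the stages are ordered, reaching any stage implies that all earlier stages have been passed through:

```python
# Illustrative sketch of the Knott and Wildavsky (1980) scale of research use.
# The quoted stage descriptions come from the survey items cited in the text;
# the unquoted glosses, and the class and function names, are hypothetical.
from enum import IntEnum

class ResearchUse(IntEnum):
    RECEPTION = 1   # 'I received the university research pertinent to my work'
    COGNITION = 2   # the research was read and understood
    DISCUSSION = 3  # the research was discussed with others
    REFERENCE = 4   # the research was cited in the policy maker's own documents
    EFFORT = 5      # 'I made efforts to favour the use of university research results'
    INFLUENCE = 6   # 'university research results influenced decisions in my administrative unit'

def reached(stage: ResearchUse, highest: ResearchUse) -> bool:
    """The scale is cumulative: reaching a stage implies all earlier stages."""
    return highest >= stage

highest = ResearchUse.REFERENCE
assert reached(ResearchUse.COGNITION, highest)      # cognition builds on reception...
assert not reached(ResearchUse.INFLUENCE, highest)  # ...but influence has not yet been reached
```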

            Landry et al. (2003) find that nearly 12% of policy makers usually or always receive academic research pertinent to their work; 39%, however, rarely or never do. Moving through the six stages from reception to influence, there is an increase in university research that is rarely or never used and, conversely, a decrease in university research that is usually or always used. Ultimately, only 8% of respondents report that the academic research they have received usually influences decisions and slightly less than 1% indicate that academic research always influences decisions in their departments; 41% report that research rarely or never affects decisions (with 46% rarely or never making efforts to favour the use of academic research). The suggestion is that attempts to enhance the uptake of research have not been as successful as they might be.

            Current assumptions underpinning evidence-informed policy making

            Although it is clear that current attempts to improve the use of evidence have met with limited success, there have been few attempts to investigate why this might be. Various explanations are offered for the failure of initiatives to take hold, particularly within educational policy making. These are set out below as assumptions, arrived at after an extensive review of literature (see Brown, 2011). The aim of the review was to provide an overview of existing theory and an understanding of the type of empirical studies previously undertaken in this area. Two approaches were taken: (i) a search of four prominent databases3 using search terms synonymous with ‘evidence-informed policy making’ and ‘knowledge adoption’4; and (ii) recommendations on seminal literature from colleagues, authors identified in the search above, and experts in the fields of evidence-informed policy making and knowledge adoption. The references cited by the authors of these studies were then also reviewed. This approach to sourcing literature, combined with the screening criteria, resulted in a total of 228 papers, studies, reports and books being reviewed over 18 months.

            The assumptions arrived at in this way should not be thought of as carrying equal weight, nor as being held simultaneously by all policy makers or all researchers. Nonetheless at least some will ring true with individual readers familiar with this subject:

• Assumption 1: the use of evidence in policy making is inherently rational; policy makers’ primary concern is to develop policies that are optimal in terms of their efficacy, equity and value for money. Policy makers not only systematically seek out evidence to aid their decision making, but also pursue all pertinent evidence on a particular issue (see Trowler, 2003).

• Assumption 2: there is a process for developing policy, and this process both has broadly definable stages and tends to operate in a broadly linear or sequential way. There are specific roles for research within this process. Thus, research should be considered at specific points, with a view to it then being used to aid specific decisions. For example, research can aid in the identification of a problem, in helping to create, form or steer the public agenda, or in the development of the initiatives of policy directorates (Nutley et al., 2007; Perry et al., 2010).

• Assumption 3: the provision of knowledge can, by itself, deliver or lead to expertise in a topic, and also to expertise in terms of the ability of social actors to use evidence (Hackley, 1999; Stewart and Liabo, 2012). In a similar vein is the notion that the more simply evidence is presented, the easier it is for policy makers to make a decision and the more effective that decision will be (Hillage et al., 1998; Brown, 2009; Cherney et al., 2012).

• Assumption 4: the voices of researchers carry equal weight with those of others operating within the policy sphere of influence (such as think tanks), and with those of the policy makers responsible for developing policy (Habermas, 1999). In other words, it is the quality of the argument that matters, rather than who makes the argument. Correspondingly, research use is thought to transcend fashions and fads in topics and researchers (Brown, 2012).

            • Assumption 5: instances of evidence use have not materialised in greater numbers because the process of educational research and its underpinning epistemological/ontological assumptions serve the interests of academics, rather than those of policy makers. As a result, there is generally a mismatch between the research and policy making cycles, the quality of research is poor, and researchers are unable to express their conclusions in ways that make them usable; for example, by failing to provide detail on what works or by not providing definitive facts about the social world in its actual (unequivocal) state (Hargreaves, 1996; Hillage et al., 1998; Tooley and Darby, 1998; Davies, 2006). This assumption links to assumption 3, since the same critics feel that the communication of research has traditionally been through language and means that policy makers find inaccessible.

These assumptions are shown not to be justified in a number of empirical studies (e.g. Landry et al., 2003; Coburn et al., 2009; Brown, 2009, 2011). This raises a simple question: if policy development does not occur in the ways currently envisaged and if these blockages to evidence use truly exist, then why continue to try to facilitate evidence-informed policy making through such means? An alternative may lie in the adoption of phronetic evidence use, which has its basis in the work of Bent Flyvbjerg. His analysis examines the role of the social sciences and how they might be harnessed to deliver ‘enlightened political, economic, and cultural development’ in society (Flyvbjerg, 2001, p.3). Flyvbjerg contends that this is best achieved by employing a contemporary interpretation of Aristotle’s notion of phronesis, often translated as practical wisdom. The remainder of this paper examines in detail Flyvbjerg’s notion of phronetic expertise and how this might apply to the use of evidence by educational policy makers.

            Expertise

The concept of expertise, the myriad forms in which expertise might be considered to exist, and how expertise might be acquired or transmitted over time have all been widely discussed. Thinking in this area gained prominence in the natural sciences in the latter part of the twentieth century, especially via work which examined the sociology of science (e.g. Polanyi, 1958), and began to influence the social sciences in the early part of the twenty-first century (e.g. Flyvbjerg, 2001; Collins and Evans, 2007). Thinking has addressed knowledge-based and practical expertise, and the factors which influence their operation; for example, how either type of expertise might develop over time and what facilitates this process; the role of tacit knowledge and perspectives in shaping how individuals come to experience explicit knowledge; and tacit knowledge as a way of storing and employing knowledge or practical skills that cannot be explicated (or are generally left unexplicated). In addition, there has also been substantial thought given to the critical notion of how knowledge-based expertise might most effectively be combined with practical expertise.

Expertise is often posited as something that can be acquired over time and so is a function of experience. Flyvbjerg (2001) is one such proponent of this position and employs the Dreyfus model of learning to illustrate what he means by expertise. The Dreyfus model sets out five levels of human learning, ranging from novice to expert,5 with each level comprising recognisably different behaviours in relation to performance at a given skill. A novice, for example, is new to particular situations, and will, during instruction, learn facts pertaining to the situation and rules for action. Flyvbjerg (2001, p.11) suggests that for the novice:

            Facts, characteristics, and rules are defined so clearly and objectively … that they can be recognised without reference to the concrete situations in which they occur. On the contrary, the rules can be generalised to all similar situations, which the novice might conceivably confront. At the novice level, facts, characteristics, and rules are not dependent on context: they are context independent.

            This is nicely illustrated through the example of learning to drive, which combines theoretical rules for action (the rules of the road) with practical instructions; for example, what constitutes the biting point of gears, the process of moving through gear changes, and so on. Both can be learnt independently of any concrete situation. Over time, however, as the driver becomes more familiar with instances in which the gear is changed, this process becomes more intuitive. Flyvbjerg argues that as learners advance from novice through the levels of advanced beginner, competent performer and proficient performer, a number of things occur to facilitate this instinctual/intuitive behaviour. Firstly, instances of performing in real life situations increase, and the number of cases the learner encounters and tackles also increases. Secondly, recognition of different situations accumulates, as does recognition of the context in which these situations occur. Thirdly, dependency on specific rules for action diminishes as learners are able to interpret and judge how to perform optimally in any given situation; for example, when the noise of the engine indicates that it is time to change gear. Genuine expertise, however, occurs only as a result of a quantum leap in behaviour and perception, from being an analytical problem solver to someone who ‘[exhibits] thinking and behaviour that is rapid, intuitive, holistic, interpretive … [expertise] has no immediate similarity to the slow, analytical reasoning which characterises rational problem-solving and the first three levels of the learning process’ (Flyvbjerg, 2001, p.14). In other words, experts immediately perceive a situation – the problem that is presented, the goal that must be achieved and the actions that will address this – without the need to divide the process into distinct phases. This, Flyvbjerg (2001, p.21) argues, is ‘the level of true human expertise. Experts are characterised by a flowing, effortless performance, unhindered by analytical deliberations’.
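Flyvbjerg’s reading of the Dreyfus model can be summarised schematically. The sketch below is my illustration, not Flyvbjerg’s: it encodes the five levels listed in note 5 and the claim that only the final level involves intuitive rather than analytical reasoning.

```python
# Schematic sketch of the Dreyfus model as Flyvbjerg (2001) employs it.
# The level names come from note 5; the function is a hypothetical gloss.
DREYFUS_LEVELS = [
    "novice",
    "advanced beginner",
    "competent performer",
    "proficient performer",
    "expert",
]

def mode_of_reasoning(level: str) -> str:
    # Only the expert exhibits the 'rapid, intuitive, holistic' performance
    # described in the text; the first four levels remain analytical.
    return "intuitive/holistic" if level == "expert" else "analytical/rule-based"

for level in DREYFUS_LEVELS:
    print(f"{level:20s} -> {mode_of_reasoning(level)}")
```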

            Tacit knowledge and immersion

            What may be concluded from Flyvbjerg’s analysis is that only by building experience with numerous cases can one move from being a beginner to being an expert. Such experience facilitates the spontaneous and intuitive interpretation of a given situation, based on a recognition of past situations and an understanding of actions that have worked, or might work. At the same time, however, Stewart and Liabo (2012) suggest that the type of intuitive interpretation posited by Flyvbjerg is made possible because expertise is enacted via knowledge which is both formal and informal. This analysis provides an important differentiation between knowledge that might be considered explicit and other knowledge; that which we simply come to know. These forms of knowledge are not distinct and bounded, however. Polanyi (1958) argues, for example, that formal or explicit knowledge never simply exists, but always rests on a much larger, hidden, foundation of tacit knowledge. This tacit knowledge, while it cannot always be articulated, provides the basis for how we come to ‘know’ explicit knowledge. In other words, it provides a mechanism by which we come to view other forms of knowledge. Collins and Evans (2007) also highlight the importance of tacit knowledge, suggesting that it is the type of deep understanding required for expertise. They argue that much tacit knowledge is socially facilitated, and is gained only through immersion in social groups already possessing such knowledge. Individuals must embed themselves within the social group if they are to gain the tacit knowledge required. Similarly, maintaining tacit knowledge is often thought to require continual immersion in such groups. Thus, there can be a complex interplay between tacit knowledge, the membership of expert groups and the social attribution of the possession of knowledge.

            Expertise in policy development

In the UK, the social actors directly responsible for creating central government policy are ministers and civil servants. The role of ministers is described by Riddell et al. (2011), who suggest that, in relation to policy, ministerial responsibility exists in terms of both: (i) parliamentary duties (for example, making statements about policy decisions); and (ii) executive and policy-related responsibilities (developing policy objectives, approving decisions and providing leadership for senior officials and their departments). Ribbins and Sherratt (2012) argue that there have been few empirical studies detailing the role of civil servants in policy development. Nonetheless, it is possible to set out the theory: it is the responsibility of civil servants to serve apolitically and to implement the policies of the elected government of the day. The policies civil servants develop originate from the pre-conceived ideas, the commitments and the overall narrative of the ministers or the political party (or coalition of parties) currently in power (Brown, 2011). Policies are typically developed by teams rather than individuals, and each member of the team will possess a greater or lesser general understanding of how the policy process works (depending on time in post). While those responsible for the development of policy texts (such as green and white papers) are likely to be generalists, they will draw on expertise from other areas (such as law and economics) as and when required (Brown, 2013). In terms of the policy process, then, ultimate expertise from a Flyvbjergian perspective (the achievement of phronetic, virtuoso expertise) may be envisaged as a state in which individual civil servants can, almost without thinking, interpret and respond to a policy request in a way that meets the requirements of the politicians requesting it, while also attending to the contextual nuances that might affect the enactment of the policy.

Invariably, however, each policy request will differ in terms of the ideologies of those requesting it, their output or impact requirements, the setting involved and the resources available. This means that, unlike other acts within the education sphere (such as teaching), developing solutions to ever-changing requests cannot be viewed as an act of performance that can be practised and perfected, nor as something that can be judged against a fixed standard. Success relates to how well the needs and likely responses of those requesting the policy have been anticipated and met. Unlike the example of the 10,000 hours required to achieve certain levels of performance (see Lemov et al., 2013), possession of expertise in the policy setting is more constructivist and context specific. For example, an individual may get on well with one minister and be able to ascertain and meet that minister’s needs, but may jar with another; this friction may then affect their ability to provide solutions that meet the minister’s requirements. Alternatively, an official may move department and so have to learn about its policy history and the aims and successes of its policies. Or an exogenous event (for example, new legislation, or an economic, social or natural disaster) may mean that the policy context changes and new understanding is required. So the development of policy expertise is likely to be gradual, interrupted rather than linear, and to stem from constant immersion in both the policy process and the group running the department. Given the gradual turnover of staff in any organisation, at any one time government departments will be populated with officials at different levels of competence. Policy texts will reflect the range of proficiency and understanding that exists at the time. Nonetheless, the general trend is towards competence as policy makers engage more and more with specific policy cases (see Dowling, 2010).

            Expertise as learning

            From a Flyvbjergian perspective, expertise is derived through the learning that accrues from experience (i.e. Flyvbjerg explicitly relates expertise to the number of cases with which an individual interacts). This approach is congruent with more constructivist/socio-cultural aspects of learning which consider the mental models learners employ when responding to new information, and which reflect the notion that knowledge itself emerges from participation in cultural practices (see Paavola et al., 2004). Important, too, is the notion of distributed cognition (the idea that aspects of knowledge will be distributed among individuals), which implies that collaborative problem solving can be more productive than the efforts of individuals since it will bring together a myriad of perspectives. It would appear that, through their day-to-day actions, interactions and engagement, policy makers learn through the constructivist and socio-cultural modes and this learning leads them to develop expertise in policy development.

When it comes to evidence use as currently conceived, however, civil servants would appear to be at the level of novice. Linear perspectives posit that there will be fixed points at which evidence will or should be consulted in order to inform policy. In other words, evidence is not regarded as something to be considered continuously or holistically, but separately, as part of a defined, rationalised sequence of events. This disrupted engagement can occur either with research texts or with those who provide them: for example, where they exist, government departments often separate, rather than bring together within policy teams, specialists and more generalist civil servants. Such separation can range from teams being located on different floors of an office to teams being located in different cities (the Department for Education, for example, has much of its research and statistical activity based outside London, including in Sheffield and Darlington). Separation means that specialists are often called in at fixed points to discuss an issue rather than allowed to provide constant input to policy development. This limits the number of cases of evidence to which generalist policy makers are exposed.

This notion is reaffirmed when we examine the kinds of evidence requested and acquired by civil servants, which are often akin to the knowledge found in an instruction manual – for example, evidence that details what works. A desire for this type of knowledge is reflected in a number of speeches made by past and current Secretaries of State with responsibility for education policy. For instance, in 2000, David Blunkett (a former Secretary of State for Education and Employment), as part of an announcement of major developments in the government’s education research policy, called for ‘Social scientists to help to determine what works and why, and what types of policy initiatives are likely to be most effective’ (emphasis added). In June 2010, Michael Gove, the current Secretary of State for Education, set forth, in a speech to the National College for Leadership of Schools and Children’s Services, his desire for:

            … more data generated by the profession to show what works, clearer information about teaching techniques that get results, more rigorous, scientifically-robust research about pedagogies which succeed and proper independent evaluations of interventions which have run their course. We need more evidence-based policy making, and for that to work we need more evidence. (Emphasis added)

A what works approach has been even more explicitly adopted in the United States, where, as Biesta (2007) notes, the 2001 Elementary and Secondary Education Act – popularly known as No Child Left Behind – has led to both a preference and increased funding for randomised controlled trials (approaches in which individuals are allocated at random to receive one of a number of interventions).6 This preference, Biesta suggests, stems from a belief held by the US federal government that randomised controlled trials represent the gold standard for establishing cause and effect. Randomised controlled trials are also the subject of a recent Cabinet Office publication, Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials, a paper co-authored by Ben Goldacre, a noted proponent of this approach (Haynes et al., 2012). Specifically, it states that:

We should and could use [randomised controlled trials] much more extensively in domestic public policy to test the effectiveness of new and existing interventions and variations thereof; to learn what is working and what is not; and to adapt our policies so that they steadily improve and evolve both in terms of quality and effectiveness. (Emphasis added; Haynes et al., 2012, p.6)
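The logic of a randomised controlled trial, as defined above (individuals allocated at random to one of a number of interventions), can be sketched in a few lines. This is a minimal illustration only: the pupil numbers, the outcome distribution and the +2.0 ‘true effect’ are invented for the example, not drawn from any study cited here.

```python
# Minimal sketch of random allocation and effect estimation in an RCT.
# All data here are simulated purely for illustration.
import random
import statistics

random.seed(42)

pupils = list(range(200))
random.shuffle(pupils)                      # random allocation removes selection bias
treatment, control = pupils[:100], pupils[100:]

def outcome(treated: bool) -> float:
    """Hypothetical attainment score; the +2.0 treatment effect is assumed."""
    return random.gauss(50.0, 10.0) + (2.0 if treated else 0.0)

treated_scores = [outcome(True) for _ in treatment]
control_scores = [outcome(False) for _ in control]

# With random allocation, the difference in group means estimates the
# causal effect of the intervention.
effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated effect: {effect:+.2f}")
```

Because allocation is random, systematic differences between the groups (other than the intervention itself) are ruled out in expectation, which is why proponents treat the design as the gold standard for establishing cause and effect.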

Invariably, however, what works type evidence is easily digested and frequently requires no more than cursory interpretation to be employed. In fact, the more esculent the evidence, the more likely its use can be limited to a simple acceptance or rejection of the recommendations presented (Cherney et al., 2012). For example, should it point to a solution that is not cost effective or that is impractical to implement, this type of evidence is likely to be dismissed out of hand. A more valuable engagement, however, would see policy makers take into account the underlying principles of the message (the why of the research outcome) and intertwine these with other variables to produce a solution (Ball, 1994; Pollard and Newman, 2010). Such engagement is typically referred to as reflective (Hannay and Earl, 2012).

While novice-level evidence use might be appropriate for specified tasks, where the solution is equally well defined and understood, it is less well suited to situations in which the task is complex, where the solution needs to incorporate a multitude of factors (including the economic and ideological), and where any solution proposed will be scrutinised by the media and critiqued by those with vested interests. True expertise in evidence use, on the other hand, provides a vision of policy makers as social actors who intuitively develop responses to situations. This intuition arises from an amalgamation of their formal knowledge with an understanding of both the specific case they are dealing with and the other environmental factors that might influence the policy decision (e.g. the amount of money available, the ideological or personal perspectives of the ministers initiating the policy, how the press/public might respond, who might block its implementation, stakeholders who might need to be courted, the capacity of delivery mechanisms to ensure that the policy is implemented, and so on). This moves conceptions of how policy makers should engage with evidence away from something separate towards something which is fully integrated.

            Moving to expertise in evidence use

Moving to expertise requires that policy makers, as an essential element of their role, engage with research and researchers in a more continuous fashion. The phronetic approach represents a more realistic conception of evidence use as it currently stands. For example, it is ordinarily assumed that the mind of the policy maker must be empty of knowledge about an issue until knowledge has been provided. Patently, this cannot be true: policy makers will have opinions, are likely to have an understanding of the wider policy environment, and may have already digested research on an issue before they are specifically required to tackle the problem. A phronetic approach illustrates the fallacy of seeing evidence use as separate from policy development. We must recognise that policy makers and their decisions will already be (explicitly or implicitly) informed by whatever has shaped their perspective/reality, including the evidence and knowledge they have already adopted.

            The approach proposed here presents a more effective way of countering the assumptions that impede greater evidence use. For example, continuous engagement with research means that policy decisions will not be contingent on evidence being considered at a fixed point in time. Rather than policy makers awaiting tranches of evidence to provide direction, they will develop their own understanding of the evidence base and draw their own conclusions from it (in terms of what works). At the same time, how evidence is interpreted and whether it is adopted will be driven by the perspectives developed by policy makers over time and the realities they inhabit. This means that rather than quality and methodology being the key determinants of whether evidence will be used, these will sit alongside such matters as how well the story resonates with policy makers (see Huberman, 1990). In addition, as policy makers develop a picture of the evidence base, they will be less likely to accept or reject the findings of a single study. Instead, their decisions will be steeped in a rich bank of evidence accumulated from past policy agoras and the ideological positions of previous governments (Brown, 2011).

            What kind of engagement?

Collins and Evans (2007) argue that developing expertise requires deep immersion amongst those considered to be experts. Learning communities are a form of capacity building which embraces this approach; Stoll (2008) describes them as a means of building learning in order to support educational improvement. Learning communities comprise ‘inclusive, reflective, mutually supportive and collaborative groups of people who find ways, inside and outside their immediate community to investigate and learn more about their practice’ (Stoll, 2008, p.107). The notion of such communities thus encapsulates instances where policy makers and researchers might meet to facilitate learning about and from formalised/academic knowledge.

A key benefit of learning communities is the sort of learning that takes place within them, especially the process of knowledge creation. This is described by Stoll (2008) as producers and users of formal knowledge (who are also the users and holders of practical knowledge) coming together to create new knowledge (Stoll, however, uses the term ‘animation’). For Nonaka and Takeuchi (1995), this process of creation arises from interactions between tacit and explicit (or informal and formal) knowledge. In particular, a spiralling occurs through four sequential types of knowledge conversion: (i) the conversion of tacit knowledge to tacit knowledge (labelled ‘socialisation’); (ii) the externalisation of tacit knowledge; (iii) the conversion of explicit knowledge to explicit knowledge (the combination of explicit knowledges); and (iv) the internalisation of knowledge. The first stage, socialisation, should be regarded as representing baseline behaviour, the normal state of affairs when policy makers mingle. It may be assumed that policy makers working in the same area will share some perceptions and understandings in that policy area. In the second stage (externalisation), policy makers and researchers catalogue what is known (in terms of ‘formal’ and praxis-based knowledge). In the third stage, the totality of what is known is combined and shaped into new solutions. Such creation results in policy-ready knowledge (i.e. evidence directed at policy problems). In the final stage, policy makers possess new knowledge and intuitively draw upon it as part of the day-to-day process of developing policy solutions.
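The four-stage spiral can be set out schematically. The sketch below is my gloss on Nonaka and Takeuchi’s modes of knowledge conversion as applied in this paper to policy makers and researchers; the stage descriptions paraphrase the paragraph above.

```python
# Schematic sketch of Nonaka and Takeuchi's (1995) knowledge-conversion spiral,
# glossed for the policy-making context described in the text. Illustrative only.
from itertools import cycle, islice

SECI_STAGES = [
    ("socialisation",   "tacit -> tacit: baseline sharing of perceptions as policy makers mingle"),
    ("externalisation", "tacit -> explicit: policy makers and researchers catalogue what is known"),
    ("combination",     "explicit -> explicit: the totality of what is known is shaped into new solutions"),
    ("internalisation", "explicit -> tacit: new, policy-ready knowledge is drawn on intuitively"),
]

# Two turns of the spiral: the cycle repeats, each pass building on the last.
for step, (mode, gloss) in enumerate(islice(cycle(SECI_STAGES), 8), start=1):
    print(f"step {step}: {mode:15s} {gloss}")
```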

Creation will lead to new knowledge that is policy ready. Through knowledge creation, policy makers build their capacity to use research (by engaging with researchers who can assist with explanations of method and meaning) while simultaneously merging formal and informal/tacit knowledge to address policy concerns. In other words, policy makers construct their own policy-ready, knowledge-based understanding, rather than act as passive recipients of research knowledge. It is this creation (and the journey towards it) which develops policy makers’ phronetic expertise.

            How might such engagement be facilitated and enforced?

            The final consideration, then, must be how to facilitate and enforce continuous engagement via the development of policy learning communities. This is likely to need thought and effort both at the level of the individual and at the level of departments/organisations. At an individual level, if the responsibilities of policy makers are expanded to include active engagement with evidence, they must be able to handle these new responsibilities. The competency framework for central government officials in the UK is set out in Professional Skills for Government (PSG). The PSG provides four thematic groupings for Civil Service competencies: (i) leadership; (ii) core skills; (iii) professional skills; and (iv) broader experience.7 Within these, two statements of ability encapsulate what is currently required in terms of evidence use. In leadership, for example, there is the requirement that civil servants ‘build capacity for the organisation to address current and future challenges’, and under core skills is the requirement that civil servants should be able to engage in ‘analysis and use of evidence’.

These are neither sufficiently descriptive nor sufficiently prescriptive to encourage the development of capacity and competence in evidence use, or the participation of officials in learning communities. Hannay and Earl (2012, p.313) argue that the skills required for the twenty-first century include ‘collaboration, problem framing, critical thinking, “thinking outside of the box”, innovation and creativity’, all of which are functions or outputs associated with being able to employ evidence. Pierson et al. (2012), examining the health sector in Canada, also suggest that core competencies for policy makers should include proficiency in evidence-informed decision making. In a similar vein, Pollard and Newman (2010, p.264) argue that:

            The ability of teachers to recognise the type of knowledge required to address a particular practice issue, to find such knowledge, to appraise its quality and relevance, and to interpret it for their own practice environment are … key features of professional teaching practice. This process and the set of skills/knowledge required to apply it are what is referred to as evidence-informed practice.

The same might apply to policy makers seeking to find innovative and effective solutions to problems (see Hargreaves, 1999, 2010). It is clear that current descriptive requirements in this area need to be revised, both so that policy makers are required to engage with evidence and so that it is clear what they are expected to achieve once they have done so.

In recent years UK governments have sought to establish various ways of improving the competence and capacity of the education workforce. This approach is encapsulated in the notion of the self-improving school system and its corresponding four forces: top-down performance management; the development of capability and capacity; market incentives to improve efficiency and quality; and customers/users being able to shape services (Ball, 2008). A number of key terms have come to signify what is required from reform: ‘transformation’, ‘enterprise’, ‘modernisation’, ‘innovation’, ‘creativity’, ‘competition’ and ‘dynamism’. Policies specific to the school workforce include, for example, standards for teachers and the National Professional Qualification for Headship (NPQH) for school leaders. These set out expected behaviour against which performance can be judged, and act as frameworks for progression within the teaching profession (the NPQH, for example, is essentially a benchmark entry requirement for headship in UK schools). It is not unreasonable to expect government and its policy makers to look inwards and apply some of the same approaches to themselves, ensuring that engagement with evidence and participation in knowledge creation are an essential rather than a perfunctory part of any policy making.

            Organisational culture, too, will need to accommodate learning community activity. Talbert (2010), for instance, notes that, while learning communities rely on collaboration, access to a variety of resources, and on mutual accountability and responsibilities, bureaucracies rely on checks and balances. In other words, bureaucracies operate by limiting the power of individuals or groups of individuals to act. As a consequence, Talbert argues that bureaucratic resources must facilitate learning strategies and that collaboration, mutual trust, participation and accountability must be allowed to develop within a context of rules, checks and accountability.

            Conclusion

This paper has argued that current conceptions of evidence-informed policy making are dominated by a number of assumptions which fail either to characterise the policy process or to account for the role of evidence within it. Instead, they hinder efforts to marry evidence to decision making by grounding them falsely. In so doing, these assumptions perpetuate what has been described as the ‘evidence dilemma’: an intuitive understanding that evidence should have a substantive influence on policy, combined with resignation to the fact that it invariably will not (Brown, 2013). An alternative is to look to Flyvbjerg’s notion of expertise and to show how the learning that accrues from engagement with multiple cases will, in the long term, lead to competence.

            The paper proposes educational change, suggesting that policy makers should seek to engage with evidence in a continuous rather than sporadic way throughout the policy process. This requires the establishment of learning communities and the instigation of knowledge creation within them, as well as mechanisms for ensuring that policy makers participate within such communities. The paper also puts forward suggestions for facilitating more effective educational change in terms of the development of educational policy. It is only by unleashing the type of expertise that will accrue from such activity that we might see evidence use making policy more effective, equitable and efficient in terms of its value for money (Oxman et al., 2009).

            Notes

3. JSTOR; Academic Search Complete; Web of Knowledge; and IngentaConnect.

4. These included, for example, ‘knowledge mobilisation’, ‘knowledge transfer’ and ‘knowledge brokering’, and were taken from the definitive list provided on the University of Toronto’s Research Supporting Practice in Education website (http://www.oise.utoronto.ca/rspe/KM_Products/Terminology/index.html).

5. These levels are (i) novice; (ii) advanced beginner; (iii) competent performer; (iv) proficient performer; and (v) expert.

6. As a testament to the extent of its realist positivist underpinnings, Slavin (2002) notes that No Child Left Behind mentions the phrase ‘scientifically based research’ 110 times.

            References

1. Alton-Lee (2012) ‘The use of evidence to improve education and serve the public good’, paper prepared for the New Zealand Ministry of Education and the annual meeting of the American Educational Research Association, Vancouver, Canada, April.

2. Ball (1994) ‘Intellectuals or technicians? The urgent role of theory in educational studies’, British Journal of Educational Studies, 43, 3, pp.255–71.

3. Ball (2008) The Education Debate, The Policy Press, Bristol.

4. Biesta (2007) ‘Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research’, Educational Theory, 57, 1, pp.1–22.

5. Blunkett (2000) ‘Influence or irrelevance: can social science improve government?’, Research Intelligence, 71, pp.12–21.

6. Brown (2009) Effective Research Communication and its Role in the Development of Evidence-Based Policy Making: A Case Study of the Training and Development Agency for Schools, unpublished M.Res. dissertation, Institute of Education, University of London.

7. Brown (2011) What Factors Affect the Adoption of Research Within Educational Policy Making? How Might a Better Understanding of These Factors Improve Research Adoption and Aid the Development of Policy?, unpublished D.Phil. dissertation, University of Sussex.

8. Brown (2012) ‘The “policy-preferences model”: a new perspective on how researchers can facilitate the take-up of evidence by educational policy makers’, Evidence & Policy, 8, 4, pp.455–72.

9. Brown (2013) Making Evidence Matter: A New Perspective on Evidence-Informed Policy Making in Education, IOE Press, London.

10. Cabinet Office (2013) What Works: Evidence Centres for Social Policy, available from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/136227/What_Works_publication.pdf [accessed March 2013].

11. Campbell et al. (2007) Analysis for Policy: Evidence-Based Policy in Practice, HM Treasury, London.

12. Cherney et al. (2012) ‘What influences the utilisation of educational research by policy-makers and practitioners? The perspectives of academic educational researchers’, International Journal of Educational Research, 883, pp.1–2.

13. Coburn et al. (2009) ‘Evidence, interpretation, and persuasion: instructional decision making in the district central office’, Teachers College Record, 111, 4, pp.1115–61.

14. Collins and Evans (2007) Rethinking Expertise, University of Chicago Press, London.

15. Davies (2006) ‘Scoping the challenge: a systems approach’, presented at the National Forum on Knowledge Transfer and Exchange, Toronto, Canada, 23–24 October, available from http://www.chsrf.ca/migrated/pdf/event_reports/philip_davies.ppt.pdf [accessed August 2012].

16. Department for Children, Schools and Families (2008) Analysis and Evidence Strategy 08–09, DCSF, London.

17. Department for Children, Schools and Families (2009) Analysis and Evidence Strategy 09–10, DCSF, London.

18. Dowling (2010) The Problem of Recontextualisation, available from http://www.pauldowling.me/publications/dowling(2010).pdf [accessed 22 October 2013].

19. Flyvbjerg (2001) Making Social Science Matter: Why Social Inquiry Fails and How it Can Succeed Again, Cambridge University Press, Cambridge.

20. Gough et al. (2011) Evidence Informed Policymaking in Education in Europe: EIPPEE Final Project Report Summary, available from www.eipee.eu/LinkClick.aspx?fileticket=W6vkqDjbiI%3d&tabid=2510&language=en-GB [accessed September 2012].

21. Government Office for Science (2010) Science and Analysis Review of the Department for Children, Schools and Families, Government Office for Science, London.

22. Habermas (1999) On the Pragmatics of Communication, edited by M. Cooke, MIT Press, Cambridge, MA.

23. Hackley (1999) ‘Tacit knowledge and the epistemology of expertise in strategic marketing management’, European Journal of Marketing, 33, 7/8, pp.720–35.

24. Hannay and Earl (2012) ‘School district triggers for reconstructing professional knowledge’, Journal of Educational Change, 13, pp.311–26.

25. Hargreaves (1996) The Teacher Training Agency Annual Lecture 1996: Teaching as a Research Based Profession: Possibilities and Prospects, available from http://eppi.ioe.ac.uk/cms/Portals/0/PDF%20reviews%20and%20summaries/TTA%20Hargreaves%20lecture.pdf [accessed August 2012].

26. Hargreaves (1999) ‘The knowledge-creating school’, British Journal of Educational Studies, 47, 2, pp.122–44.

27. Hargreaves (2010) Creating a Self-Improving School System, National College for Leadership of Schools and Children’s Services, Nottingham.

28. Haynes et al. (2012) Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials, available from https://www.gov.uk/government/publications/test-learn-adapt-developing-public-policy-with-randomised-controlled-trials [accessed 22 October 2013].

29. Hillage et al. (1998) Excellence in Research on Schools, DfEE, London.

30. Huberman (1990) ‘Linkage between researchers and practitioners: a qualitative study’, American Educational Research Journal, Summer, pp.363–91.

31. Knott and Wildavsky (1980) ‘If dissemination is the solution, what is the problem?’, Knowledge: Creation, Diffusion, Utilization, 1, 4, pp.537–78.

32. Landry et al. (2003) ‘The extent and determinants of utilization of university research in government agencies’, Public Administration Review, 63, 2, pp.192–205.

33. Lee et al. (2012) Return on Investment: Evidence-Based Options to Improve Statewide Outcomes (April 2012 update), available from http://www.wsipp.wa.gov/pub.asp?docid=12-04-1201 [accessed November 2012].

34. Lemov et al. (2013) Practice Perfect: 42 Rules for Getting Better at Getting Better, Jossey-Bass, San Francisco, CA.

35. Nonaka and Takeuchi (1995) The Knowledge Creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, New York.

36. Nutley et al. (2007) Using Evidence: How Research Can Inform Public Services, The Policy Press, Bristol.

37. Oakley (2000) Experiments in Knowing: Gender and Method in the Social Sciences, Polity Press, Cambridge.

38. Oxman et al. (2009) SUPPORT Tools for Evidence-Informed Health Policymaking (STP) 1: What is Evidence-Informed Policymaking?, available from www.health-policy-systems.com/content/7/S1/S1 [accessed November 2010].

39. Paavola et al. (2004) ‘Models of innovative knowledge communities and three metaphors of learning’, Review of Educational Research, 74, pp.557–76.

40. Perry et al. (2010) Instinct or Reason: How Education Policy is Made and How We Might Make It Better, CfBT, Reading.

41. Pierson et al. (2012) ‘Building capacity for evidence-informed decision making in public health: a case study of organizational change’, BMC Public Health, 12, 137, pp.1–13.

42. Pollard and Newman (2010) ‘Educational research: a foundation for teacher professionalism?’ in The Routledge Education Studies Textbook, Routledge, London.

43. Polanyi (1958) Personal Knowledge: Towards a Post-Critical Philosophy, Routledge, London.

44. Ribbins and Sherratt (2012) ‘Permanent Secretaries, consensus and centrism in national policymaking in education – Sir David Hancock and the Reform Act 1988: a place for a humanistic research dimension’, Educational Management Administration and Leadership, 40, 5, pp.544–58.

45. Riddell et al. (2011) The Challenge of Being a Minister, available from www.instituteforgovernment.org.uk/sites/default/files/publications/The%20Challenge%20of%20Being%20a%20Minister.pdf [accessed October 2012].

46. Scott et al. (2001) ‘Financial cost of social exclusion: follow up study of antisocial children into adulthood’, British Medical Journal, 323, pp.1–5.

47. Slavin (2002) ‘Evidence-based education policies: transforming educational practice and research’, Educational Researcher, 31, 7, pp.1–13.

48. Stewart and Liabo (2012) ‘Involvement, expertise and research quality: a new model of public and patient involvement in research’, Journal of Health Services Research and Policy, 17, pp.248–51.

49. Stoll (2008) ‘Leadership and policy learning communities: promoting knowledge animation’ in Policy Learning in Action: European Training Foundation Yearbook 2008, European Training Foundation, Torino, Italy.

50. Talbert (2010) ‘Professional learning communities at the crossroads: how systems hinder or engender change’ in Second International Handbook of Educational Change, Springer, New York.

51. Tooley and Darby (1998) Educational Research: A Critique, Ofsted, London.

52. Trowler (2003) Education Policy, Routledge, London.

53. Webber (1992) ‘The distribution and use of policy knowledge in the policy process’, Knowledge and Policy, 4, 4, pp.6–35.

54. Weiss (1986) ‘Research and policy-making: a limited partnership’ in The Use and Abuse of Social Science, Sage Publications, London.

            Author and article information

Journal: Prometheus: Critical Studies in Innovation
Publisher: Pluto Journals
ISSN: 0810-9028 (print); 1470-1030 (online)
September 2013, Volume 31, Issue 3, pp.189–203

Affiliations
[a] Institute of Education, University of London, UK

Article
DOI: 10.1080/08109028.2013.850186
© 2013 Taylor & Francis

All content is freely available without charge to users or their institutions. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles in this journal without asking prior permission of the publisher or the author. Articles published in the journal are distributed under a Creative Commons Attribution 4.0 licence (http://creativecommons.org/licenses/by/4.0/).

Page count: Figures: 0, Tables: 0, Equations: 0, References: 54, Pages: 15
Categories: Article; Research Paper

