      Environmental complexity and stakeholder theory in formal research network evaluations

Research article published in Prometheus (Pluto Journals)

            Abstract

            Governments in OECD countries are turning more and more towards creating networked entities as a means of organising cross-sector and multidisciplinary research. Yet, there is little discussion of how such networks operate and how they differ in evaluation terms from other research entities (individuals and organisations). This particularly relates to the policy objectives of networks. In this paper, we use the literature on evaluation, impact and value as a lens through which to focus on the nature and benefits of formal research networks. This paper seeks to refine our concepts of research networks and, in defining the concept of formal research networks, to map the policy issues in evaluating networks. We argue that, to do this, it is important that two extant literatures (stakeholder theory and organisational environments) be introduced into the analysis of network operations. We focus particularly on the significance of environmental complexity for network evaluation.

            Main article text

            Introduction

            Across a range of Organisation for Economic Cooperation and Development (OECD) countries, since at least the early 1990s, there has been continued growth in the diversity of the structures governments have developed to support the distribution of R&D funds. The development of the collaborative network as a mechanism to organise funded research in general, and support research that addresses specific public policy objectives in particular, has been one such innovation. Such ‘knowledge’ or ‘research’ networks cover a spectrum of activities from pre-research capability development to structured research programmes, and have been given various labels in academic writings, such as ‘collaborative’ (Turpin and Fernández-Esquinas, 2011), ‘public–private’ research consortiums (Roelofsen et al., 2011) and ‘teams’ in medical research (Stokols et al., 2008). However, as Rogers et al. note:

            … the basic assumption of network approaches for any set of social phenomena is that the whole is more than the sum of the parts. In other words … the nature of the links between actors takes priority over their individual characteristics. (2001, p.167)

            While there has been increasing attention to the importance of collaboration institutionally (Gibbons et al., 1994; Howells and Edler, 2011) and to the role of connections between universities and industry for innovation (Leydesdorff and Meyer, 2006) and philanthropy, the evaluation of what we label here as ‘formally organised networks’ remains underdeveloped. Most evaluations of publicly funded R&D are conducted to assess the performance of individuals and/or specific institutions. They do not provide guidance on the value of networked R&D activities. Even larger entities, such as research centres and programmes, are treated as ‘super-individuals’, the sum total of their members, for evaluation purposes (Rogers et al., 2001). This tradition does not help in the specification of relevant boundaries for network analysis of R&D systems that may lead to evaluation based on structural properties.

            There has been a great deal of investigation of informal networks and invisible colleges (e.g. Crane, 1972; Bozeman and Rogers, 2002). There is also emerging analysis using mathematical and visualisation tools. In this paper, we make an important distinction between informal and formal research networks.1 The former consist of the small networks of collaborating individuals (colleagues and research assistants etc.) involved in most scientific projects (including where some research time is paid). Most often a project that involves some level of collaboration can be thought of as an informal network. On the other hand, the formal network is a form of organisation in its own right, typically reviewed and funded by government agencies to encourage research where there is a need; for example, in nascent fields, to achieve critical mass, to link scattered researchers, or to increase the involvement of stakeholders.

            There is a range of research on networking at a regional level (regional innovation systems) where the idea is used notionally without actual details of the networks. We are using the evaluation, impact and value literatures as a lens to highlight a specific weakness in conceptual development. The goal of this paper is to begin to reformulate how we describe and think of network impact. This involves re-thinking the language of networks. We need to acknowledge the multiple literatures on networks that think of them and describe them in different ways.

            First, we explore briefly the place of networks in science policy. However, the line of argument we develop contrasts with recent articles on network evaluation (Rogers et al., 2001; Mote et al., 2007). These two papers provide an extensive review of network-related literature and both conclude that there is little in the way of analysis relevant to the needs of evaluators. We have, therefore, taken a different route: we review the fundamentals of research organisation evaluation, summarising the key findings and practices as they relate to organisation size, governance and network structure. From this foundation, we argue that there is a need to move in new directions. Our research leads us to suggest that a missing component of the analysis is the research environment, particularly non-research stakeholders, who are fundamental to the network model. We reach into the organisational environment literature to show that the ‘environment’ of networks, an important variable for performance, has been overlooked.

            Innovation infrastructure and science policy

            Systems and policy

            Knowledge is now understood as an important input into societies to enhance their capacity for economic growth and social development. Governments seek to promote the generation of knowledge and its application to the economy. As part of their mission to increase economic well-being, social well-being, national security and administrative efficiency, governments use a variety of policy options to implement their national vision. Over the second half of the twentieth century, national and regional governments have invested in universities, government-funded laboratories and other public programmes, including defence (Freeman, 1968). How to join up these infrastructures with other stakeholder communities has become a key concern of science policy makers in recent decades (see Etzkowitz and Leydesdorff, 2000; OECD, 2006; Leydesdorff et al., 2006).

            One key approach to joining up the system has been the development of large scale (often national) formal research networks (FRNs). Examples of such research networks include:

            • Networks of Centres of Excellence (NCEs) (Canada);

            • the Major Collaborative Research Initiative (MCRI) – now re-badged as a partnership programme (SSHRC, Canada);

            • Canadian Institutes of Health Research (CIHR) team science;

            • Cooperative Research Centres (CRCs) (Australia);2

            • FRSQ strategic networks (Quebec, Canada);

            • British Columbia Health of Population Networks (Canada);

            • Economic and Social Research Council priority networks (UK); and

            • European Framework programmes (European Union).

            Research networks are part of the system of innovation in which they operate. Thus, a nationwide research network is part of the national system of innovation, while a local research network is a part of both the local system of innovation and of the mosaic of policies and structures which forms the national innovation system. But at whatever level they operate, they are part of the infrastructure of that system of innovation, just as research councils, research organisations and key laboratories can be understood as infrastructure that supports innovation. Thus, research networks need to be analysed within their respective systems of innovation and tested as to the contribution they, as infrastructure, make to these systems. In this paper, we aim both to map the problems of evaluating formal research networks, and to suggest a path for future research. We define our interest in the formal and research components of networks in Tables 1 and 2.

            Table 1. Description of formal research networks

            Formal condition 1: The network is funded for a set purpose for a set period of time. Most often such networks are created by government research grants, although they might, for example, be funded by large non-profit foundations.

            Formal condition 2: The network is required to establish a formal administrative structure.

            Formal condition 3: The network is established, in part, to meet a policy objective. Examples include the encouragement of linkages between researchers and user communities, and the encouragement of communication across a dispersed population. Examples of criteria for NCEs in Canada include:

            • increasing networking and collaboration among researchers from Canada and abroad;

            • creating nationwide, multidisciplinary and multi-sectoral research partnerships between universities and the user sector; and

            • establishing training that promotes multidisciplinary and multi-sectoral research approaches and encourages trainees to consider the economic, social, environmental, and ethical implications of their work.

            Australian CRC selection criteria include this prompt: ‘What end-users will utilise the research outputs? What strategies will be put in place to assist utilisation of research outputs by end-users, including SMEs?’

            Probable condition: The network will likely be formally evaluated at some point.

            Interpretative condition: Even if all these conditions are met, there will be a need to distinguish among research collaborations. Although the analysis presented in this paper is of relevance to large collaborations, it is most relevant to situations where there is an expectation of formal network construction that reaches beyond researchers into the stakeholder communities.
            Sources: This NCE example comes from http://www.nce-rce.gc.ca/ReportsPublications-RapportsPublications/NCE-RCE/ProgramGuide-GuideProgram_eng.asp. For Australian CRCs, see Australian Government (2011). The application impact statement requires detailed analysis of expected benefits and for whom.
            Table 2. Definition of research in our classification of networks

            Condition 1: The network will be established to generate new knowledge using the OECD Frascati Manual definition of R&D (and will likely have as a policy objective the diffusion of new knowledge). The network will develop leading-edge research findings relevant to the needs of the user sector.

            Condition 2: An element of the network’s mandate will be to train, encourage or mentor new researchers.

            It is worth noting that within the formal organisational entities of networks there are many informal networks of the kind Bozeman and others describe, but our interest here is at the level of the organisation – the network as an entity. It is also important to acknowledge and analyse the relevance of the current prevailing paradigms in research organisation evaluations. Both a science production (productivity) and an economic value perspective have come to dominate the field of research programme evaluation (see Freeman, 1968; Godin, 2007). Therefore, before focussing attention on the general field of science policy evaluation, it is valuable to consider the variety in evaluation strategies.

            Evaluation of what for whom

            A distinction must be made between strategic policy reviews and programme evaluations. The former represent analysis of the big picture: what has worked and what has not. Such analyses often encompass elements of programme evaluation with future policy development suggestions. An example is the federal review of Australia’s innovation policies and support programmes conducted in 2008 (Cutler, 2008). Such reviews address the strategic questions for a given situation: whether the organisational structure is the right one, and even whether the right issue is being addressed. In such analyses, much information is required to identify gaps in the system and thus to initiate something new.

            Programme evaluation can be defined as the ongoing regular review of programmes or organisations. In the literature on evaluation, there is a large number of taxonomies of evaluation, though the theoretical development of evaluation frameworks has lagged behind (see Demarteau, 2002). Hansen suggests that three meta issues need to be addressed in designing evaluations:

            • evaluation design should logically be based on the purpose of carrying out an evaluation;

            • evaluation needs to be based in the characteristics of the evaluand; and

            • characteristics of the problem that the program or organization under evaluation aims to resolve need to be incorporated. (Hansen, 2005, p.451)

            Such questions of evaluation purpose and design can be understood through various frameworks of evaluation systems. These include evaluations of the results (based on initial goals), process models, system models, economic models, actor models and programme theory models. Hansen’s taxonomy facilitates a clear analysis of the worldviews of evaluators. Science grants are assessed through an actor approach (peer review), while much of the impact of science and technology organisations is assessed for government economic ministries through economic models (see OECD, 2007, 2008). On occasions, these rules of evaluation seem to be ignored and organisations with different purposes and contexts are evaluated as identical entities.3

            Network outputs: the productivity paradigm

            The major focus of work on the benefits of R&D is based on what can loosely be described as the economics of science.4 Within this category of work, it is possible to distinguish three overlapping areas of research which have emerged over the last 30 years or so. The first can be summarised as studies in the economics of R&D, particularly in assessing R&D through various metrics, such as patents, bibliometrics and return on investments in the private sector (see Pavitt, 1991; Dasgupta and David, 1994; Stephan, 1996; Audretsch et al., 2002). A second tradition has been built up around the practical problems of assessing particular government programmes. Performed by consultancies (e.g. ACIL Tasman, 2006) as well as academics, this literature often draws upon research in peer reviewed journals. It has been innovative in the search for methods and data that reveal the value of particular programmes and organisations.

            The third stream of work attempts to bridge and synthesise these two worlds. It reformulates the primary question of policy makers in assessing where to continue funding by addressing what can be expected of R&D programmes in the public sector. For example, Salter and Martin (2001) have argued that public research has six principal types of impact. The focus of much of the investigation into networking has been on the necessity of the private and public sectors to collaborate in order to develop new products and services (see Thune and Gulbrandsen, 2011; Ryan, 2011). Note that though lip service is paid to government and non-profit stakeholders, what matters in the literature are industry partners, because they are seen as the drivers of innovation and economic growth. It is critical to note that Salter and Martin suggest that many of the links are informal and thus do not fit the criteria established for our study.

            Networks have been the focus of much empirical research. … This work indicates that firms and industries link with the publicly funded science base in many different ways and these links are often informal. (Salter and Martin, 2001, p.523)

            In contrast, Stein et al. (2001) reviewed the history of five knowledge networks, some of them formal research networks and others designed to enhance human capital development in North–South collaborations through international experience and training. The authors developed a simple but useful tool for thinking about network effectiveness:

            Would we know less if the network had not been created … would we know differently if network members had not had the opportunity to work together … [and] would we have known … more slowly or less widely …?

            These are good practical questions, though they do not address the current demands of governments for measuring value and impact. As measurement becomes an increasing concern, programme evaluators confront a common set of challenges (see Fahrenkrog et al., 2002):

            • attribution – is it possible to ascribe a particular output, outcome or impact to a particular research project or programme? Such benefits may be (and probably are) derived from the accumulated experience of multiple projects, while a given project may have an impact on, or contribute to, multiple outputs;

            • appropriation – the danger of finding the benefits being looked for (i.e. misappropriating good news as indicators of programme effectiveness);

            • timing – research impacts often become clear long after the evaluation process is complete;

            • inequality – a small number of research projects may account for most of the measurable effects (but it is not possible to judge the value of the majority of projects in terms of the process of knowledge accumulation); and

            • the project fallacy – it is often assumed, hoped or demanded (i.e. policy makers often expect) that everything will have an identifiable benefit, which can then be attributed equitably and in a timely fashion.

            In essence, these all emphasise the challenge of accounting for the interactions between science projects and between these projects and external knowledge sources and the wider economic context. These challenges have been addressed in various ways by different researchers, but it is important to note that the structure and scale of the project/programme/organisation being evaluated are important characteristics. Single organisations or programmes offer specific challenges, but networks, by their nature, generate a level of analytical challenge that renders these issues of second-order importance. Nevertheless, despite these technical problems, much of the programme evaluation work boils down to bibliometric analysis of output and journal quality, as well as commercialisation-oriented metrics.

            Productivity of research organisations and centres

            While it is acknowledged within the S&T indicators field (see Geisler, 2000) that organisational size and structure matters for evaluation, there are few surveys of evaluation at different scales of organisation. As a first step in this direction, to highlight both the methods of evaluation and the gaps, Table 3 is provided as a map of how particular research organisations have been evaluated and the productivity model applied.

            Table 3. Evaluation schemes applied to research organisations of different structures and size

            Micro – Form and/or function: university research centre or sub-departmental unit. Examples: US research centre faculty and non-centre faculty (1); CSIRO (2). Types of evaluation (measures and procedures): case studies; output metrics (CVs).

            Meso – Form and/or function: department of an organisation. Examples: UK RAE and REF; Australia ERA. Types of evaluation: UK Research Assessment Exercise (RAE) and REF – increasingly driven by metrics (3); ERA – metrics and peer review (4).

            Macro – Form and/or function: stand-alone research organisation in the national system of innovation. Examples: NRC Canada; CSIRO Australia (5). Types of evaluation: metrics, case studies.

            Macro – Form and/or function: granting councils (provincial, state or national). Examples: Canada – CFI (6); CIHR Canada (7); NHMRC Australia (8); NIH USA (9); RCUK (10). Types of evaluation: CIHR and NIH – mostly peer review audits, the latter unusually on goal attainment; NHMRC – mixed case studies and metrics; RCUK – emphasis on impacts.
            Sources: (1) Gaughan and Ponomariov (2008); (2) Gläser et al. (2004); (3) Barker (2007), Office of Science and Innovation (2007) and REF (2011); (4) Australian Research Council (2011); (5) ACIL Tasman (2006); (6) Hickling Arthurs Low (2002); (7) Bernstein, Hicks et al. (2006); (8) Garrett-Jones et al. (2004); (9) US Department of Health and Human Services (2007); (10) Corbyn (2008), RCUK (2008) and Matthews (2011).

            The smaller and more diffuse the organisation, the more problematic and the less relevant conventional metrics become. As Gläser et al. (2004) point out, there are ‘least evaluable units’ (LEU) where publication measures of scientific output and impact become unreliable. Their analysis of the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) discovered that these LEUs may be surprisingly large. At the other end of the spectrum, as the scale of activity increases, the specificity of the assessment must necessarily decrease. Table 3 lays out a range of organisational structures and scales, with examples of the evaluation indicators and approaches being used. We have focussed our attention on examples from Australia, Canada, the UK and the USA, where we have the most experience, but in the expectation that the findings have wider relevance.

            There is a general trust in metrics for evaluation, but we also note that a number of organisations are developing evaluation systems that are not completely dependent on metrics. Political changes play a part in evaluation structures (one Australian government initiated the Research Quality Framework while another government replaced it with Excellence in Research for Australia) as does resistance from researchers (the UK’s impact framework).

            The evaluations represented in Table 3 are examples of evaluation strategies that emphasise the unity (super-individuals) of evaluation units. What then of research networks? What frameworks and models are available for organisational structures that, by definition, should not be treated as super-individuals?

            Networks in evaluation: form and function

            An evaluation model for formal networks should address the following issues:

            • the purpose of the networks and the purpose of the evaluation;

            • the scale and form of the networks; and

            • programme outcome attributes – what is the productivity of networking?

            In the section that follows, we examine the first two points, leaving the last for the final section. First let us look at the nature of networks.

            Formal, or at least semi-formal, networks that use and develop the knowledge of their members can be roughly divided into two types: knowledge/research networks, which carry out collaborative research and information exchange and propagation, and policy-oriented networks, which can consist of communities of policy researchers who carry out research for evidence-based policy (Nutley et al., 2007) or, alternatively, advocacy and issue-based communities that try to influence government policy. Although research is common to both, we are primarily interested in those networks that have as a prime objective the creation of new knowledge and the diffusion of that new knowledge or the building of research capacity in new fields of science.

            Organised research networks, as distinct from self-organising informal networks, are politically necessary in large jurisdictions, particularly where there are widely distributed and relatively small populations. It is interesting to note that national FRNs are a Canadian invention. Although a number of network programmes preceded it, the Networks of Centres of Excellence programme established in 1988 appears to be the first significant public–private research collaboration model. Other nations (such as Australia) may have looked at Canadian networks to see how they could be adapted to their own situations (Salazar and Holbrook, 2007). It seems entirely possible that the Australian Cooperative Research Centres programme was influenced by the development of the Canadian NCEs (Slatyer, 1994; Networks of Centres of Excellence of Canada, 2004). In Australia, networks meet the needs of a small population spread along the east coast, while, in Canada, networks address the needs of a population spread thinly along the border with the USA, and suit the political environment within which most Canadian researchers operate (Salazar and Holbrook, 2007).

            The NCE Program invests in national research networks that: stimulate leading-edge research in areas critical to economic and social development, develop and retain world-class researchers in areas essential to Canada’s productivity, create nationwide multidisciplinary and multi-sectoral research partnerships, and accelerate the exchange of research results within the Networks and the use of these results by organizations who can harness them for economic and social development. (Canadian Networks of Centres of Excellence: National Centres of Excellence, 2007)

            Networks enhance knowledge transfer and policy impact, networks build or increase research capacity, networks promote collaboration and partnerships. (British Columbia’s Health of Population (research) Networks: MSFHR PIWG, 2008)

            The CRC program supports end-user driven research collaborations to address major challenges facing Australia. CRCs pursue solutions to these challenges that are innovative, of high impact and capable of being effectively deployed by the end-users. (Cooperative Research Centres, 2012)

            The ARC Research Networks scheme builds on investments in excellent research undertaken by individual investigators and small teams to: Enhance the scale and focus of their research; Encourage more inter-disciplinary approaches to research; and facilitate collaborative and innovative approaches to planning and undertaking research. (Australian Research Council Research Networks: Australian Research Council, 2010)

            Given that formal networks can be organisationally unclear, it is unsurprising that so little attention has been paid to constructing appropriate holistic evaluation frameworks (Sala et al., 2011). The simplest starting point is to return to Salter and Martin (2001) and add network specific outputs (numbers are from the original):

            1. Increasing the stock of useful knowledge. (How much was produced and of what quality? Did the networking shift the direction of research?)

            2. Training skilled graduates. [Did the network produce new graduates? Are they now doing research or in the wider labour force (see Holbrook et al., 2009)? Were they embedded into the network – co-publishing etc.?]

            3. Creating new scientific instrumentation and methodologies (as appropriate for individual networks).

            4. Increasing the capacity for scientific and technological problem-solving. [Was the network simply diffusing new knowledge (researcher outwards)? Was there knowledge exchange? Was there knowledge transformation (awareness of needs and capabilities and thus generating new problem definitions and new solutions)?]

            5. Forming networks and stimulating social interaction. (Were the right stakeholders included? Were the new members sticky? Did new networks or project grants spin off from this project?)

            6. Creating new firms (as appropriate for individual networks).

            Most of these outputs are fairly standard, although the approach itself does not seem to have been applied anywhere systematically. Nevertheless, it is evident that adopting this approach still leaves the networking itself a black box. It should be clear that standard productivity measures fail to explain why networks are established in the first place:

            … most evaluation of R&D is conducted to assess the performance of individuals and, therefore, does not provide guidance on the value of structural properties of R&D activities. (Rogers et al., 2001, p.167)

            Rogers et al. (2001, p.169) are clear that the structure of relations (informal or formal networks) ‘in many respects is superior to product or outputs focus for R&D evaluation’. Table 4 presents an overview of some of the formal evaluations of networks. It reveals mixed frameworks and a lack of clear vision of how to conceptualise and capture in analysis the value of networking.

            Table 4. Evaluation of research networks

            Micro – Network programme evaluations: none listed. Individual formal research network performance evaluations: none listed.

            Meso – Network programme evaluations: Australian Research Networks – no framework apparently developed (1); CIHR team grants programmes – evaluation system under development (2). Individual formal research network performance evaluations: British Columbia – Health of Population Networks (3) – mostly narrative with some stakeholder assessments; Australian Research Networks – no apparent framework (4).

            Macro – Network programme evaluations: Canada – Networks of Centres of Excellence – evaluations 1997 and 2002 (5); Australia – Cooperative Research Centres reviews (6); MCRI programme review (7); Austrian Research Networks (8); European Framework networks (9). Individual formal research network performance evaluations: Canada – Networks of Centres of Excellence mid-term review criteria appear to be fuzzy (10).
            Sources: (1) Australia (2010); (2) personal communication; (3) MSFHR PIWG (2008); (4) Australia (2010); (5) Networks of Centres of Excellence (2007a); (6) Insight Economics (2006) and Cutler (2008); (7) Kishchuk (2005); (8) Edler and Rigby (2004) and Rigby (2005); (9) for a survey, see Arnold (2005); and (10) Networks of Centres of Excellence (2007b).

            What we note from our research on evaluations of existing network programmes is a distinct lack of focus on the networking of these networks with the universe of possible stakeholders. Internal collaboration between researchers is an important focus, and there is some interest in collaboration with stakeholders in the network. The European Framework Programmes (FP) have consistently required that project teams be networks that span the geographic dimensions of the EU and link researchers with stakeholders. Although networking itself is apparently a key policy dimension, it remains under-examined and evidence of the benefits of networks is thin. Arnold (2005) notes that they generate new contacts and that smaller networks appear to work better than larger ones, but they are not new networks:

            A factor promoting stability among a core of frequent participators is the fact that (like other network R&D programs) the FP does not generate wholly new R&D networks, but causes network extension. Evaluations of network R&D tend to find that R&D networks evolve over time, rather than being newly constructed for each funding opportunity. (Arnold, 2005, p.14)

            Network governance

            A different model of evaluation ignores the conventional productivity model and the relational structure which we discuss below. It focuses instead on the governance of networks. Provan and Milward (2001) argue that the evaluation of the performance of (public service) networks is complex, in part because of the range of stakeholders, and that it is best to review the governance of the network. Stein et al. (2001) also give attention to governance issues in their comparisons of the successes of five knowledge networks. The productivity of an entity obviously stems, in part, from how that entity is managed. Milward and Provan (2006) state that the structure of the network in terms of administration (centralised or decentralised), leadership style and other governance arrangements matters in networks just as it does in hierarchical organisations.

            Creech and Ramji (2004) have developed the simplest form of governance evaluation of networks for the development and dissemination of information for international development. They focus on effectiveness, structure and governance, relationship governance, efficiency, resources and sustainability, and lifecycle analysis. These are valuable criteria, but applying them to research networks is not straightforward because of a lack of detailed criteria against which they might be tested. Those who have worked in research networks would agree that governance is an important aspect of network success (see Atkinson-Grosjean, 2006). It is all the more important when networks are short lived:

            There is a fundamental gap in the current practice of networking. At present, most organizations are experimenting with models of collaboration for the sharing of information and expertise. … Many researchers are beginning to investigate the value of these models as a means of changing public and private sector actions to be more supportive of sustainable development. But we continue to see organizations struggle with the problem of working together to increase their collective effectiveness, not just to achieve their immediate research objectives but to fulfill their vision of having real influence on decision-making for sustainable development. (Creech, 2001, p.1)

            We can then add to these output measures a set of tests of the performance of the network’s governance structures and procedures (adopting a modified version of the Creech list):

            • resources (financial, human) (in the first term of the network, were enough resources devoted to the networking?);

            • governance (was the research conducted with efficiency and were the graduate students incorporated effectively?);

            • effectiveness (what was produced, did the researchers connect to the stakeholders and was the research effort discernibly different for being conducted as a network?); and

            • lifecycle and sustainability (have new networks or projects spun-off?).

            But how do we know whether there has been genuine connectivity and whether real relationships have formed?

            Network relationship structures

            A small, but rapidly growing, literature evaluating research networks utilises methods developed in the field of social network analysis. This field has grown from observations on the differential benefits of an individual’s close friends versus acquaintances (weak ties) (Granovetter, 1973), to focus on the use of a particular suite of mathematical methodologies developed in the social sciences (Borgatti et al., 2009) and in physics and mathematics (complex networks) (see Strogatz, 2001). The basic assumption of both the social network and complex network approaches is that relational structure matters:

            If two networks had the same structure then there is an expectation that the outcomes would be very similar. … [Alternatively] teams with the same composition of member skills can perform very differently depending on the patterns of relationships among the members. (Borgatti et al., 2009, p.893)

            Further, the emphasis of network relationship structures is not on traditional social research explaining an individual’s outcomes or characteristics as a function of other characteristics of the same individual (e.g. income as a function of education and gender), but on the explanatory power of the social environment – the web of relationships (Borgatti et al., 2009).

            The tools for analysing and visualising network structure have proven to be attractive to many researchers conducting evaluation work (see e.g. Neurath and Katzmair, 2004). However, the most accessible data on researcher relationships are on authors of academic publications. Consequently, it is no surprise that work on network relationship structures concentrates almost exclusively on researcher–researcher relations. Ryan (2008) has shown that co-publication networks are long lived even when pre-existing and unrelated groups are forced to merge in a single formal research network. These findings mirror those of Arnold (2005) on European networks, which do not, on the whole, integrate new members successfully. In an evaluation of Austrian research networks, Rigby reports a heavy emphasis on bibliometric analysis:

            This bibliometrics review was based on a method employed by PREST for comparing scientific outputs under program funding with those outputs arising without funding. … The method involves three types of analysis: a) a review and assessment of the differential citation rates between the authors’ project and non project publications; b) a review and assessment of the difference in citation rates between those papers published by the authors and those published by non-project authors (in this case also from Austria, but from no other countries) within the same journals; and c) a review and assessment of co-publication patterns from within the project. All the analysis is subject to the availability and the reliability of the data provided by the project interim and final reports under the two programs. (Rigby, 2005, p.6)

            There has been a tendency to use this template to analyse relational patterns and to imply that particular patterns, such as small worlds, have implications for the success of the network and fostering knowledge exchange and innovation (see Protogerou et al., 2010; Van der Valk and Gijsbers, 2010). Thus, there is an expanding field of analyses of the mathematical properties of network connectedness (Klenk et al., 2009; Protogerou et al., 2010; Mattsson et al., 2010; Howells and Edler, 2011) that uses network relationship structures for mapping the collaborative connectedness of specific research networks. Its conclusions have not gone unquestioned (Kilduff and Brass, 2010; Steen et al., 2011).

            While this form of analysis provides an argument for the synergy and additionality of research networks, it is useful only for the knowledge creation aspects of networks. Knowledge mobilisation and transfer, which is a key raison d’être of so many network programmes, is largely ignored as it is not easy to obtain useful relational data. Kilduff and Brass (2010, p.327) note that ‘pure structural research tends to treat different kinds of relationships as more or less equivalent because the focus is on structure rather than the content of ties’. Such an observation can be understood on two distinct levels. The first is that not only do we not pay attention to what knowledge is flowing between actors, but that actors in networks play an active role in constructing the shape of networks, an issue we take up in a separate paper (Holbrook et al., 2012). However, just as important, there is evidence to suggest that the nature of the environment in which the network exists as an organisation may be vitally important for its success. Even if network relational structures are similar, the environments in which they operate or the type of problems on which they work may be very different. Is genomics science development equivalent to mental health in terms of the structure of environment in which networks operate?

            Environmental structure – stakeholder complexity

            As we have just shown, the literature on network relationship structures primarily concerns itself with the connections of those who are related to a particular network, particularly where there is a formally funded entity. The social environment of an individual is defined by those relationships and not, for example, by whether overall conditions are highly competitive or contractionary, or by whether there are opportunities for growth. We have nothing in this literature that focuses on the environment of a network, probably because a network is defined by a particular set of links and is yet to be understood as an organisation in the conventional sense. One area of scholarship which does engage with this larger environment is policy networks research:

            The main thrust of the policy subsystems literature is its effort to derive a conception of relevant policy actors that transcends traditional ‘positivist’ distinctions between agents and structures, and especially between institutionally defined ‘state’ and ‘societal’ actors. A major element of the conception of policy subsystems involves viewing them as being composed of two subsets of all the actors present in the policy ‘universe’. The larger set of actors is composed of those who have some knowledge of the policy issue in question and who collectively construct a policy discourse within a ‘policy community’. (Howlett and Ramesh, 1998, p.269)

            So, in this research field, authors discuss the openness of actors in networks to new ideas, their membership numbers and organisational form (whether organised or based on a loose affiliation of interests). These all affect the rate at which new policies are adopted. This finding connects very closely to the long-standing work in organisational studies on (organisational) environments. That field defines environments in primarily three broad ways: complexity (homogeneity–heterogeneity, concentration–dispersion of organisations), dynamism (stability–instability, turbulence, velocity) and munificence (resource richness of environments) (see Aldrich, 1979; Scott, 1981; Dess and Beard, 1984; McCarthy et al., 2010). We could propose ways in which all three concepts have important implications for research networks. Stein et al. (2001) make clear how important, for example, is the munificence of the environment; networks are costly in terms of administration and networking. Our analysis, however, focuses on environmental complexity. To explicate this, we need to describe the process of network construction.

            Network construction

            A number of features separate formal from informal research networks. The first is that the former are often established to improve the embedding of players and to increase the numbers of players. The second is that formal research networks are established with an explicit management structure. We can build a model of the formal knowledge network organisation, although it has loosely defined and porous boundaries. The principal investigators tend to know one another before a collaborative research bid (see Ryan, 2008) and need to cooperate in building the application for funding (Figure 1). The stakeholders will often become involved in developing the application (see Roelofsen et al., 2011).

            Figure 1

            Stylised structure of networks: the principal investigators and administration

            Once established, the formal network needs some form of organised administration (the square at the centre of Figure 1). Alongside the principal investigators are the workers in the network, graduate and post-doctoral students (Figure 2). These members of the network are supported by a penumbra of colleagues and others with interests in the network’s research.

            Figure 2

            Graduate and post-doctoral students

            Beyond perhaps a few core full-time researchers and doctoral students, the publications-based network may involve many authors who are not actively engaged in network membership, but who collaborate with co-authors who are (Figure 3). Even further beyond this layer are government and industry stakeholders, who may or may not be involved in particular projects or publications. It is important to realise just how blurry the boundaries of a network truly are. Who is in such a network and who is not? Klenk et al. (2009) go to the trouble of identifying only publications which acknowledge funding from the Networks of Centres of Excellence programme, but other authors write of the connectedness of created networks as though the small world phenomenon is an exogenous fact and the relevance of the connections uncontested:

            These findings point out that the vast majority of organizations participating in these EU-funded projects are, directly or indirectly, interconnected via collaboration … The greater the density in a network, the greater the connectedness of its members. Information in a dense network will move faster and more efficiently as a result of the many connections and potential lines of allocation. Despite the high connectivity that our networks exhibit, it seems that they are strongly dependent on a core of central actors. (Protogerou et al., 2010, p.368)

            Figure 3

            Colleagues of the principal investigators

            There are several assumptions in such an analysis. The first is that dense networks move information faster, the second that speed entails efficiency, and the third that everything is dependent on central actors.

            Environment complexity of community stakeholders

            In business and organisational studies, researchers grappling with the interactions between businesses and communities have developed stakeholder theory to explore the countervailing forces to pure stockholder value for corporate decision making, and to explain particular patterns of community engagement with business ethics (see Parmar et al., 2010). Various taxonomies of stakeholders have emerged, but perhaps the most prominent to date has been that of Mitchell et al. (1997), who identified urgency, power and legitimacy as factors determining management attention. Crane and Ruebottom (2011) suggest that there is a shift in the literature from interacting with communities to working in communities, and thus to more network-based perspectives of stakeholders. This provides an interesting perspective on how research might interact with a set of stakeholder actors. The environmental complexity facing a network – the number of stakeholders, the different clusters of interests, the visions of the problem, the power bases and the differential funding levels – will all act upon network performance (see Figure 4).

            Figure 4

            Simple and complex stakeholder environments

            We received valuable insights into how it might be possible to understand the environments of networks from reviewing the results of a workshop on evaluation for eight diverse Health of Population Networks developed and funded in British Columbia. These innovative and unusual networks were funded to promote the development of linkages between researchers and stakeholders to stimulate new research questions, projects and teams. The eight networks covered the diverse population areas of children and youth, environmental and occupational health, mental health, aging, rural and remote health, disabilities health, aboriginal health and women’s health.

            Analysis of the responses to a worksheet aimed at assisting the workshop participants to build an indicator set representing multiple possible outputs of their activities revealed some interesting differences among the networks. Some networks clearly had a strong sense of research possibilities, while others saw strong stakeholder investment in network activities. These responses suggested a possible taxonomy of network attributes which focuses not on individuals, but on the communities of actors with which they engage. Very importantly, what we learnt from this activity is that we could think of research communities in the same terms as external communities. For a range of possible networks, research may span many fields or few, be well funded or not, etc. So, environmental complexity does not refer only to the external environment, but also to the totality of activity for a given problem.

            In the simplest modelling of this approach, we have two stakeholder communities (researchers and others – industry, government researchers or community populations) and we have two starting positions for each community (diffuse and less diffuse). This provides a basic modelling structure of four combinations (see Figure 5, and the sketch after the list below). This is constructed on the grounds that the more complex the environment, the more difficult it will be to build a coherent researcher or stakeholder community of interests around particular issues and possible policy answers. Our breakdown looks like this:

            • Less complex researcher environment (less diffuse) with more complex (diffuse) stakeholder environments – researchers are relatively easy to define, but the stakeholder communities are more scattered across geography, size of population and topics, making connection more difficult (e.g. gerontology, children and youth).

            • Less complex (less diffuse) researcher community with less complex (less diffuse) stakeholder environment – circumstances where the research community is easily identified and where there might be a leading charity or other centralising body organising the environment (e.g. environmental and occupational health).

            • More complex (diffuse) researcher community with more complex (diffuse) stakeholder environment – there is both a disparate researcher community (researchers from many social sciences, natural sciences and health may be interested) while the stakeholder community is also fragmented (e.g. rural health and women’s health).

            • More complex (diffuse) researcher community with less complex (less diffuse) stakeholder environment – the research community is more disparate across geography and fields, but there is a strong emphasis on community engagement and support, with some obvious ‘go to’ organisations (e.g. First Nations health).

            Figure 5

            Network environment typology

            This first, rather simplistic, conceptualisation of the environment and the original definition of complexity (homogeneity–heterogeneity, concentration–dispersion of organisations) have led us to think of measures to differentiate the researcher/stakeholder clusters along three dimensions. Research on knowledge and innovation systems has generally emphasised the significance of the following dimensions, but the specific list has been inspired by our work on networks and the idea of distance contained in the work of Sorenson et al. (2006) and Nooteboom (2009).

            • Geographic proximity (spatial closeness of partners – too spread out and a network will be difficult to operate, too close and what is required is a centre, not a network).

            • Agenda proximity (if stakeholders and researchers are not aligned, it will be hard to generate useful research for stakeholders, but the network may have a coherent agenda as its purpose).

            • Field fragmentation (the greater the diversity of the interests among researchers or stakeholders, the less likely that a coherent research programme will emerge).

            Each of these three dimensions can be broken down into measurable qualities to distinguish different network forms and purposes. It should also be possible to develop a sliding scale that indicates whether a particular proposal lacks the diversity to warrant network funding.

            Taking the environmental complexity of a particular network into account in its evaluation will greatly improve the quality of evaluations. In environments of high complexity, the indicators of success should place less emphasis on the productivity of a network and more on whether there is a growing consensus around central problems. Key performance indicators would be as follows:

            • Did the research or stakeholder community locate and embed appropriate community members in the network?

            • Was an agenda appropriate to the external circumstances developed?

            • In less complex starting conditions, was discernible progress made in solving a particular science problem?

            One of the main benefits of such an approach is that we can differentiate networks on the basis of their level of outputs in comparison with what may be expected of them. In some situations, it is conceivable that a network in a more complex starting position could make greater progress than one that emerges from a field of science that is already highly achieving. In the latter case, network formation would make little discernible difference. This levels the playing field between networks in terms of evaluation. The environmental conditions are built into the evaluation model.

            Overall, we do not discount the use of all the above approaches for evaluation (governance, structural analysis and stakeholder analysis). We do, however, emphasise that the stakeholder/environmental complexity perspective has been largely ignored despite its importance, just as it has been with more traditional organisational forms.

            Discussion and conclusions

            The impact statement required on Australian CRC applications, as one example, appears to pitch one network proposal against another, even though their social and economic environments will be very different. Treating all actors identically displays a lack of imagination about what is possible from networks. Networks will be seen as successful only if the measures of success relate to their environment and to realistic objectives within these environments.

            Yet, while there is now a large body of literature on the evaluation of publicly funded scientific research, and a nascent one on research networks (Sala et al., 2011), evaluations of stakeholder engagement focus on those connected with the networks and not on the structure/complexity of the researcher/stakeholder universe. Evaluations conducted to date for the most part overlook important characteristics of stakeholders.

            As Hansen (2005) shows, there are multiple strategies and paradigms for evaluation; the key is fitness for purpose. If the goal were purely economic outcomes, then we would probably fund research only in each country’s most dominant industries, but programmes have multiple objectives and need relevant evaluations of each key criterion. Currently, the approach to network evaluation which is gaining most traction is social network analysis. However, as a caution:

            The emerging debate concerning the importance of indirect ties and different kinds of ties offers the prospect of a significant extension of the network research program. Does the importance of relations imply that different types of relations are of differential importance, or do they need to be aggregated to provide a complete picture of the appropriability of relations? (Kilduff and Brass, 2010, p.342)

            We suggest going even further, beyond the ties that define a particular network, to understand the complexity of the problem space it operates within. Research networks in different fields are likely to construct networks differently and so need to be assessed differently. Each broad area of science has its own capital intensity, its own stakeholder community structures, and the knowledge–problem frontier is different. In the medical sciences, the more the research engages with societal issues, the more complex will become the stakeholder environment. In the natural sciences, it could be speculated that researcher and stakeholder conditions will be less complex as the researcher community in particular fields is often quite small and potential industry partners are limited. The social sciences will often face complex environments (see Bernstein et al., 2000) at both ends of the system because choosing who is relevant to a particular social issue can be difficult.

            At this time, it is possible only to sketch out the concepts and measures needed to develop relevant indicators. If what we have mapped out can be verified in empirical testing, then one implication of the work presented here is that more emphasis should be placed on evaluating the problem-stakeholder maps of proposed research networks before they are funded. An intriguing prospect is the role of charities and foundations in simplifying environmental complexity. Our assessment model is built on the premise that it is possible to narrow down a target group of potential research, industry and community partners, and to evaluate the network on its success in reaching and including informed but unconnected partners.
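
Purely as an illustration of the kind of indicator such an assessment might generate (it is not a measure proposed or tested in this paper), the following sketch computes how far a hypothetical network has reached into an invented problem–stakeholder map, distinguishing partners who were already connected to the core researchers from those who were informed but previously unconnected. All partner names, categories and figures are invented.

            # Illustrative sketch only: a toy 'reach' indicator over a hypothetical
            # problem-stakeholder map. All partner names and categories are invented.
            from dataclasses import dataclass

            @dataclass(frozen=True)
            class Partner:
                name: str
                sector: str                  # e.g. 'research', 'industry', 'community'
                previously_connected: bool   # tied to the core researchers before funding

            # The partner universe an evaluator might map out before the network is funded.
            universe = [
                Partner("Univ A lab", "research", True),
                Partner("Univ B lab", "research", False),
                Partner("Firm X", "industry", False),
                Partner("Firm Y", "industry", False),
                Partner("Patient group Z", "community", False),
            ]

            # Partners the funded network actually engaged during the evaluation period.
            engaged = {"Univ A lab", "Firm X", "Patient group Z"}

            def reach_indicators(universe, engaged):
                """Overall coverage, plus coverage of previously unconnected partners."""
                reached = [p for p in universe if p.name in engaged]
                newly_reached = [p for p in reached if not p.previously_connected]
                unconnected = [p for p in universe if not p.previously_connected]
                return {
                    "coverage": len(reached) / len(universe),
                    "new_partner_coverage": len(newly_reached) / max(len(unconnected), 1),
                }

            print(reach_indicators(universe, engaged))
            # {'coverage': 0.6, 'new_partner_coverage': 0.5}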

            Acknowledgements

The authors acknowledge the valuable comments of two anonymous reviewers. The paper has also benefited from the contributions of the editor, Stuart Macdonald, and of Tim Kastelle. We recognise the contribution made by the many people who have heard and responded to this work at seminars and special workshops in Canada and Australia. The early thinking for this paper stems from our analysis of results from the Michael Smith Foundation for Health Research-funded Health of Population Networks (HoPNs) workshop, Navigating Network Evaluation. The workshop was coordinated by Dr. Gloria Gutman (BC Network for Aging Research) and Dr. Bonita Sawatzky (Disabilities Health Research Network).

            Appendices

            Appendix 1. Glossary

            • CIHR – Canadian Institutes of Health Research

            • CRCs – Cooperative Research Centres (Australia)

            • CSIRO – Commonwealth Scientific and Industrial Research Organisation (Australia)

• HoPNs – Health of Population Networks, funded by the Michael Smith Foundation for Health Research

            • MSFHR – Michael Smith Foundation for Health Research

            • NCEs – Networks of Centres of Excellence (Canada)

            • NHMRC – National Health and Medical Research Council (Australia)

            • NIH – National Institutes of Health (US)

            • NSERC – Natural Sciences and Engineering Research Council of Canada

            • RAE – Research Assessment Exercise (UK)

            • SSHRC – Social Sciences and Humanities Research Council (Canada)

            References

            1. ACIL Tasman (2006) Review of the Impact of Some Recent CSIRO Research Activities, report to CSIRO, Canberra, available from http://www.csiro.au/resources/pflj.html [accessed December 2011].

2. Aldrich, H. (1979) Organizations and Environments, Prentice-Hall, Englewood Cliffs, NJ.

            3. Arnold, E. (2005) What the Evaluation Record Tells Us About Framework Program Performance, Technopolis Group, Brighton, available from http://www.technopolis-group.com/site/downloads/index.htm [accessed December 2011].

4. Atkinson-Grosjean, J. (2006) Public Science, Private Interests: Cultures and Commerce in Canada’s Networks of Centres of Excellence, University of Toronto Press, Toronto.

5. Audretsch, D., Bozeman, B., Comb, K., Feldman, M., Link, A., Siegel, D., Stephan, P., Tassey, G. and Wessner, C. (2002) ‘The economics of science and technology’, Journal of Technology Transfer, 27, 2, pp.155–210.

            6. Australian Government (2011) Cooperative Research Centres Program Selection Round Application Instructions Selection Round 14 (2011), Department of Innovation, Industry, Science and Research, Canberra, available from https://www.crc.gov.au/Information/ShowInformation.aspx?Doc=14th_Selection_rounds&key=bulletin-board-selection-rounds_14&Heading=14th [accessed June 2011].

            7. Australian Research Council (2010) ARC Research Networks, available from http://www.arc.gov.au/ncgp/networks/networks_default.htm [accessed January 2012].

8. Australian Research Council (2011) Excellence in Research for Australia 2010 National Report, Australian Research Council, Canberra.

9. Barker, K. (2007) ‘The UK Research Assessment Exercise: the evolution of a national research evaluation system’, Research Evaluation, 16, 1, pp.3–12.

            10. Bernstein, A., Hicks, V., Borbey, P. and Campbell, T. (2006) ‘A framework to measure the impacts of investments in health research’, presentation to the Blue Sky II conference, What Indicators for Science, Technology and Innovation Policies in the 21st Century?, Ottawa, September, available from http://www.oecd.org/dataoecd/10/42/37450246.pdf [accessed October 2008].

            11. Bernstein, S., Lebow, R., Stein, J. and Weber, S. (2000) ‘God gave physics the easy problems: adapting social science to an unpredictable world’, European Journal of International Relations, 6, 1, pp. 43–76.

12. Borgatti, S., Mehra, A., Brass, D. and Labianca, G. (2009) ‘Network analysis in the social sciences’, Science, 323, pp.892–895.

13. Bozeman, B. and Rogers, J. (2002) ‘A churn model of knowledge value’, Research Policy, 31, pp.769–794.

            14. Cooperative Research Centres (2012) Cooperative Research Centres website, available from https://www.crc.gov.au/Information/default.aspx [accessed October 2012].

            15. Corbyn, Z. (2008) ‘RCUK abandons impact formula’, Times Higher Education, 6 March, available from http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=400973 [accessed October 2008].

16. Crane, A. and Ruebottom, T. (2011) ‘Stakeholder theory and social identity: rethinking stakeholder identification’, Journal of Business Ethics, 102, 1, pp.77–87.

17. Crane, D. (1972) Invisible Colleges, University of Chicago Press, Chicago, IL.

            18. Creech, H. (2001) Form Follows Function: Management and Governance of a Formal Knowledge Network, International Institute for Sustainable Development, Winnipeg.

            19. Creech, H. and Ramji, A. (2004) Knowledge Networks: Guidelines for Assessment, International Institute for Sustainable Development (IISD), Winnipeg, available from http://www.iisd.org/pdf/2004/networks_guidelines_for_assessment.pdf [accessed December 2011].

            20. Cutler, T. (2008) Venturous Australia: Building Strength in Innovation, Department of Industry, Innovation, Science and Research, Canberra, available from http://www.innovation.gov.au/innovationreview/Pages/home.aspx [accessed December 2011].

21. Dasgupta, P. and David, P. (1994) ‘Toward a new economics of science’, Research Policy, 23, 5, pp.487–521.

22. Demarteau, M. (2002) ‘A theoretical framework and grid for analysis of program-evaluation practices’, Evaluation, 8, 4, pp.454–473.

23. Dess, G. and Beard, D. (1984) ‘Dimensions of organizational task environments’, Administrative Science Quarterly, 29, pp.52–73.

            24. Edler, J. and Rigby, J. (2004) Research Network Programs Evaluation for the Austrian Science Fund (FWF), PREST, University of Manchester, available from http://www.fwf.ac.at/de/downloads/pdf/networks_evaluation.pdf [accessed December 2011].

25. Etzkowitz, H. and Leydesdorff, L. (2000) ‘The dynamics of innovation: from national systems and “mode 2” to a triple helix of university–industry–government relations’, Research Policy, 29, 2, pp.109–123.

            26. Fahrenkrog, G., Polt, W., Rojo, J., Tübke, A. and Zinöcker, K. (eds) (2002) RTD Evaluation Toolbox: Assessing the Socio-Economic Impact of RTD Policies, IPTS Technical Report Series, IPTS, Seville.

            27. Freeman, C. (1968) ‘Science and economy at the national level’ in OECD (ed.) Problems of Science Policy, OECD, Paris, pp. 60–65.

28. Garrett-Jones, S., Wixted, B. and Turpin, T. (2004) ‘Some international benchmarks for evaluating Australian health and medical research’, Research Evaluation, 13, 3, pp.155–166.

29. Gaughan, M. and Ponomariov, B. (2008) ‘Faculty publication productivity, collaboration, and grants velocity: using curricula vitae to compare center-affiliated and unaffiliated scientists’, Research Evaluation, 17, 2, pp.103–110.

30. Geisler, E. (2000) The Metrics of Science and Technology, Quorum Books, Westport, CT.

31. Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. and Trow, M. (1994) The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies, Sage, Thousand Oaks, CA.

32. Gläser, J., Spurling, T. and Butler, L. (2004) ‘Intraorganisational evaluation: are there “least evaluable units”?’, Research Evaluation, 13, 1, pp.19–32.

33. Godin, B. (2007) ‘Science, accounting and statistics: the input–output framework’, Research Policy, 36, 9, pp.1388–1403.

34. Granovetter, M. (1973) ‘The strength of weak ties’, American Journal of Sociology, 78, 6, pp.1360–1380.

35. Hansen, H. (2005) ‘Choosing evaluation models: a discussion on evaluation design’, Evaluation, 11, 4, pp.447–462.

            36. Hickling Arthurs Low (2002) Evaluation Framework for the Canada Foundation for Innovation, report prepared for the Canada Foundation for Innovation, available from http://www.innovation.ca/evaluation/ef_hal_e.pdf [accessed August 2007].

            37. Holbrook, A., Wixted, B., Chee, F., Klingbeil, M. and Shaw-Garlock, G. (2009) Measuring the Return on Investment in Research in Universities: The Value of the Human Capital Produced by these Programs, report to the British Columbia Ministry of Technology, Trade and Economic Development, available from http://www.sfu.ca/cprost/reports.html [accessed December 2009].

            38. Holbrook, J., Wixted, B., Lewis, B. and Cressman, D. (2012) ‘The structure and construction of formal research networks: a policy oriented understanding of stakeholder engagement’, mimeo.

39. Howells, J. and Edler, J. (2011) ‘Structural innovations: towards a unified perspective?’, Science and Public Policy, 38, 2, pp.157–167.

40. Howlett, M. and Ramesh, M. (1998) ‘Policy subsystem configurations and policy change: operationalizing the postpositivist analysis of the politics of the policy process’, Policy Studies Journal, 26, 3, pp.466–481.

41. Insight Economics (2006) Economic Impact Study of the CRC Program, report prepared for the Australian Government Department of Education, Science and Training, Canberra.

42. Kilduff, M. and Brass, D. (2010) ‘Organizational social network research: core ideas and key debates’, Academy of Management Annals, 4, pp.317–357.

            43. Kishchuk, N. (2005) Performance Report: SSHRC’s Major Collaborative Research Initiatives (MCRI) Program, SSHRC, Ottawa, March, available from http://www.sshrc-crsh.gc.ca/about-au_sujet/publications/mcri_performance_e.pdf [accessed October 2012].

44. Klenk, N., Hickey, G., Maclellan, J., Gonzales, R. and Cardille, J. (2009) ‘Social network analysis: a useful tool for visualizing and evaluating forestry research’, International Forestry Review, 11, 1, pp.134–140.

45. Leydesdorff, L., Dolfsma, W. and van der Panne, G. (2006) ‘Measuring the knowledge base of an economy in terms of triple-helix relations among technology, organization, and territory’, Research Policy, 35, pp.181–199.

46. Leydesdorff, L. and Meyer, M. (2006) ‘Triple helix indicators of knowledge-based innovation systems: introduction to the special issue’, Research Policy, 35, pp.1441–1449.

            47. Matthews, D. (2011) ‘Deep economic impact: new mission to ensure university research benefits UK business’, Times Higher Education, 10 July, available from http://www.timeshighereducation.co.uk/story.asp?storycode=416762 [accessed December 2011].

48. Mattsson, P., Laget, P., Vindefjärd, A. and Sundberg, C. (2010) ‘What do European research collaboration networks in life sciences look like?’, Research Evaluation, 19, 5, pp.373–384.

49. McCarthy, I., Lawrence, T., Wixted, B. and Gordon, B. (2010) ‘A multidimensional conceptualization of environmental velocity’, Academy of Management Review, 35, 4, pp.604–626.

            50. Milward, B. and Provan, K. (2006) A Manager’s Guide to Choosing and Using Collaborative Networks, IBM Center for the Business of Government, available from http://www.businessofgovernment.org/ [accessed June 2011].

51. Mitchell, R., Agle, B. and Wood, D. (1997) ‘Toward a theory of stakeholder identification and salience: defining the principles of who and what really counts’, Academy of Management Review, 22, pp.853–886.

52. Mote, J., Jordan, G., Hage, J. and Whitestone, Y. (2007) ‘New directions in the use of network analysis in research and product development evaluation’, Research Evaluation, 16, 3, pp.191–203.

53. MSFHR PIWG (2008) Evaluation Framework Version 1.7, Health of Population Networks Performance Indicator Working Group, Michael Smith Foundation for Health Research, Vancouver.

54. National Centres of Excellence (2007) Annual Report 2006–2007.

55. Nelson, R. (1959) ‘The simple economics of basic scientific research’, Journal of Political Economy, 67, 3, pp.297–306.

            56. Networks of Centres of Excellence (2007a) Evaluation of the Networks of Centres of Excellence Program, available from http://www.nce.gc.ca/pubs/reports/2007/evaluation/NCEEvaluationReport2007-eng.pdf [accessed December 2011].

            57. Networks of Centres of Excellence (2007b) Available from http://www.nce.gc.ca/pubs/reports/2007/selec-renewal-oct07_e.pdf [accessed December 2011].

            58. Networks of Centres of Excellence of Canada (2004) The Networks of Centres of Excellence Program: 15 Years of Innovation and Leadership, available from http://www.nce.gc.ca/pubs/history/NCE-histEN.pdf [accessed August 2007].

            59. Neurath, W. and Katzmair, H. (2004) ‘Networks of innovation – evaluation and monitoring of technology programs based on social network analysis (SNA)’, Plattform Forschungs- und Technologieevaluierung GesbR., 20, April [accessed August 2007].

            60. Nooteboom, B. (2009) A Cognitive Theory of the Firm: Learning, Governance and Dynamic Capabilities, Edward Elgar, Cheltenham.

61. Nutley, S., Walter, I. and Davies, H. (2007) Using Evidence: How Research Can Inform Public Services, Policy Press, Bristol.

            62. Office of Science and Innovation (2007) Measuring Economic Impacts of Investment in the Research Base and Innovation – A New Framework for Measurement, available from http://www.berr.gov.uk/dius/science/science-funding/framework/page9306.html [accessed April 2008].

            63. Organisation for Economic Cooperation and Development (OECD) (2006) Science, Technology and Industry Outlook, Organisation for Economic Cooperation and Development, Paris.

64. Organisation for Economic Cooperation and Development (OECD) (2007) Innovation and Growth: Rationale for an Innovation Strategy, Organisation for Economic Cooperation and Development, Paris.

65. Organisation for Economic Cooperation and Development (OECD) (2008) Assessing the Socio-Economic Impacts of Public R&D, OECD workshop, June, Organisation for Economic Cooperation and Development, Paris, available from http://www.oecd.org/document/7/0,3343,en_2649_34273_40469255_1_1_1_1,00.html [accessed August 2008].

66. Parmar, B., Freeman, R., Harrison, J., Wicks, A., Purnell, L. and de Colle, S. (2010) ‘Stakeholder theory: the state of the art’, Academy of Management Annals, 4, pp.403–445.

67. Pavitt, K. (1991) ‘What makes basic research economically useful?’, Research Policy, 20, 2, pp.109–119.

68. Protogerou, A., Caloghirou, Y. and Siokas, E. (2010) ‘Policy-driven collaborative research networks in Europe’, Economics of Innovation and New Technology, 19, 4, pp.349–372.

            69. Provan, K. and Milward, B. (2001) ‘Do networks really work? A framework for evaluating public-sector organizational networks’, Public Administration Review, 61, 4, pp.414–23.

70. RCUK (2008) Research Councils UK Response to HEFCE’s Consultation on the New Research Excellence Framework (REF), available from http://www.rcuk.ac.uk/news/070220a.htm [accessed March 2008].

            71. Research Excellence Framework (REF) (2011) Assessment Framework and Guidance on Submissions, Research Excellence Framework, available from http://www.hefce.ac.uk/research/ref/pubs/2011/02_11/ [accessed February 2012].

            72. Rigby, J. (2005) ‘Evaluating the FWF’s research networks’, Plattform Forschungs- und Technologieevaluierung GesbR, 24, September.

73. Roelofsen, A., Boon, W., Kloet, R. and Broerse, J. (2011) ‘Stakeholder interaction within research consortia on emerging technologies: learning how and what?’, Research Policy, 40, 3, pp.341–354.

74. Rogers, J., Bozeman, B. and Chompalov, I. (2001) ‘Obstacles and opportunities in the application of network analysis to the evaluation of R&D’, Research Evaluation, 10, 3, pp.161–172.

            75. Ryan, C. (2008) Evaluating Performance of Collaborative Research Networks: A Socio-economic Framework for Assessing Funded Research Projects, Books on Demand, available from http://www.bod.de/index.php?id=296&objk_id=167105 [accessed August 2008].

76. Ryan, J. (2011) ‘Irish experience of cross-sector research collaboration initiatives’, Science and Public Policy, 38, 2, pp.147–155.

77. Sala, A., Landoni, P. and Verganti, R. (2011) ‘R&D networks: an evaluation framework’, International Journal of Technology Management, 53, 1, pp.19–43.

78. Salazar, M. and Holbrook, J. (2007) ‘Canadian STI policy: the product of regional networking’, Regional Studies, 41, 8, pp.1–13.

79. Salter, A. and Martin, B. (2001) ‘The economic benefits of publicly funded basic research: a critical review’, Research Policy, 30, pp.509–532.

            80. Scott, W. (1981) Organizations: Rational, Natural, and Open Systems, Prentice-Hall, Englewood Cliffs, NJ.

81. Slatyer, R. (1994) ‘Cooperative research centres: the concept and its implementation’, Higher Education, 28, pp.147–158.

82. Sorenson, O., Rivkin, J. and Fleming, L. (2006) ‘Complexity, networks and knowledge flow’, Research Policy, 35, pp.994–1017.

83. Steen, J., Macaulay, S. and Kastelle, T. (2011) ‘Small worlds: the best network structure for innovation?’, Prometheus, 29, 1, pp.39–50.

84. Stein, G., Stren, R., Fitzgibbon, J. and Maclean, M. (2001) Networks of Knowledge: Collaborative Innovation in International Learning, University of Toronto Press, Toronto.

85. Stephan, P. (1996) ‘The economics of science’, Journal of Economic Literature, 34, pp.1199–1235.

86. Stokols, D., Hall, K., Taylor, B. and Moser, R. (2008) ‘The science of team science: overview of the field and introduction to the supplement’, American Journal of Preventive Medicine, 35, 2, pp.S77–S89.

87. Strogatz, S. (2001) ‘Exploring complex networks’, Nature, 410, pp.268–276.

88. Thune, T. and Gulbrandsen, M. (2011) ‘Institutionalization of university–industry interaction: an empirical study of the impact of formal structures on collaboration patterns’, Science and Public Policy, 38, 2, pp.99–107.

89. Turpin, T. and Fernández-Esquinas, M. (2011) ‘The policy rationale for cross-sector research collaboration and contemporary consequences’, Science and Public Policy, 38, 2, pp.82–86.

            90. US Department of Health and Human Services (2007) Fiscal Year 2008 National Institutes of Health – Volume II Overview – Performance Detail, available from http://nihperformance.nih.gov/ [accessed September 2007].

            91. Van der Valk, T. and Gijsbers, G. (2010) ‘The use of social network analysis in innovation studies: mapping actors and technologies’, Innovation: Management, Policy and Practice, 12, 1, pp.5–17.

            Footnotes

            1. The literature often focuses on social and business networks: our focus is on networks for the creation and diffusion of knowledge.

2. Although the Australian entities are called ‘centres’, they have many features in common with Canada’s NCEs: they are collaborative, multi-organisational and multidisciplinary, and have a distributed, multi-site structure.

3. See, for example, the approach taken to the evaluation of four proposed NCEs, available from http://www.nce-rce.gc.ca/_docs/reports/selec-renewal-oct07_e.pdf. The document does not even refer to the fourth NCE (the Canadian Language and Literacy Research Network), the only social science NCE under consideration; it was not recommended for renewed funding.

            4. The term ‘economics’ is used here advisedly as a catch-all for analysis that attempts to capture the inputs and outputs in a broad productivity-based framework. Much of the work has indeed been on the benefits to the economy (Nelson, 1959) or to specific firms arising from this public research (see RCUK, 2008). This framework can be pushed too far and become worthless (see Corbyn, 2008).

            Author and article information

Journal
Prometheus: Critical Studies in Innovation, Pluto Journals
ISSN 0810-9028 (print); 1470-1030 (online)
September 2012, Vol. 30, No. 3, pp.291–314

Affiliations
a Centre for Policy Research on Science and Technology (CPROST), Simon Fraser University, Vancouver, Canada

Article
DOI: 10.1080/08109028.2012.727276
Copyright Taylor & Francis Group, LLC

All content is freely available without charge to users or their institutions. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles in this journal without asking prior permission of the publisher or the author. Articles published in the journal are distributed under a Creative Commons Attribution 4.0 licence (http://creativecommons.org/licenses/by/4.0/).

Page count
Figures: 5, Tables: 4, References: 91, Pages: 24

Categories
Research Papers

