Selectivity and concentration in research funding have become unavoidable in Australia today. Some of the reasons for this are reviewed in relation to international economic, scientific/technical and conceptual developments. The resulting need to develop an evaluative culture in and for Australia is discussed. The reasons for undertaking evaluations are outlined, and a working definition of ‘research evaluation’ that may be suitable within the Australian context is developed. The parameters that may deserve consideration in designing an evaluation are detailed, and a series of conceptual and practical guidelines is put forward. Several barriers to implementing evaluations that may apply to Australia are addressed. Finally, the implications of the concept of ‘accountability’ for both the recipients of government support and government itself are briefly raised.
J. S. Dawkins, “The challenge for higher education in Australia”, a speech by the Minister for Employment, Education and Training, Canberra: Department of Employment, Education and Training, 22 September, 1987, pp. 11–12.
Commonwealth Tertiary Education Commission, Review of Efficiency and Effectiveness, Canberra: CTEC, October, 1986.
Australian Science and Technology Council, Improving the Research Performance of Australia's Universities and Other Higher Education Institutions, A report to the Prime Minister, Canberra: ASTEC, February, 1987.
Department of Employment, Education and Training, “Government moves quickly to establish Australian Research Council”, A public bulletin, Canberra: DEET, 23 September, 1987.
UGC, A Strategy for Higher Education Into the 1990s: Criteria for Rationalisation, London: HMSO, University Grants Committee, 1984.
S.S. Blume et al., Evaluation of Research: Experiences and Perspectives in the Netherlands, Report on a study commissioned by the OECD Directorate for Science Policy, Ad Hoc Group on University Research, Paris: OECD, 1985.
Office of Technology Assessment, Research Funding as an Investment: Can We Measure the Returns?, A Technical Memorandum, Washington, D.C.: Congress of the United States, April 1986.
M. Gibbons, Evaluation of Research: Evaluation of Research in Sweden, Report on a study commissioned by the OECD Directorate for Science Policy, Ad Hoc Group on University Research, Paris: OECD, 1985.
Australian Science and Technology Council, Future Directions for CSIRO, A report to the Prime Minister, Canberra: ASTEC, November, 1985.
See for example: Jane Ford, “Govt research drive disappoints,” Financial Review, July 17, 1987, p. 53. See also: “ABS survey shows industry R&D growth”, in Laboratory News, July, 1986.
Michael Gibbons and L. Georghiou, Evaluation of Research: A Selection of Current Practices, A report prepared for the Secretary-General of the OECD, Paris: OECD, 1987, p. 58.
For further discussion see: M. G. Taylor, “Evaluation of research and resource allocation”, International Journal of Institutional Management in Higher Education, 9, 1, March 1985, p. 89.
The university referred to is the University of Wollongong.
Paul Bourke, Quality Measures in Universities, A study commissioned by the Commonwealth Tertiary Education Commission, Canberra, Australia: CTEC, 1986, p. 20.
Some observations made here about changes in the nature and perception of ‘science’ apply more to the Physical and Biological sciences than to the Arts and Humanities. The Physical sciences have generally served as the model around which the theories relevant to this paper were developed. The cultural and intellectual role of the Arts and Humanities may be quite different from that of the Physical/Biological sciences and, therefore, deserves separate consideration. This was not possible in this paper.
C. Ganz Brown, “The technological relevance of basic research,” in B. Bartocha, et al. (eds), Transforming Scientific Ideas into Innovations: Science Policies in the United States and Japan, Tokyo: Japan Society for the Promotion of Science, 1985, pp. 113–134.
F. Narin and E. Noma, “Is technology becoming science?”, Scientometrics, 7, 3-6, 1985, pp. 369–381.
Gibbons et al., op. cit., note 11, p. 14.
R. Johnston, “Why scientists don't get more money,” Metascience, 3, 1985, p. 46.
Department of Science, Science and Technology Statement 1985-86, Tables 5 and 18, Canberra: DoS, November 1986, pp. 14, 88.
Department of Science, Submission to ASTEC Review of Higher Education Research Funding, Tables 1, 6 and 10, Canberra: DoS, 1986.
Australian Science and Technology Council, Improving the Research Performance of Australia's Universities and other Higher Education Institutions, Canberra: ASTEC, February 1987, p. 19.
P. S. Chen, “Evaluation in biomedical research at the National Institutes of Health”, in G. Goggio and E. Spachis-Papasois (eds), Evaluation of Research and Development, Proceedings of the European Community Seminar, Brussels, October 17-18, 1983, Dordrecht, Netherlands: D. Reidel, 1984, p. 115.
See for example: John Ziman and Peter Healey, International Selectivity in Science, A working paper from the Science Policy Support Group, London: SPSG, 1987.
Johnston, op. cit., note 19, p. 49.
The reader is referred to the literature of the sociology and history of science for more detail. The following provide an introduction to a large and varied literature: K.D. Knorr-Cetina and M. Mulkay (eds), Science Observed, London: Sage, 1983; Steven Shapin and S. Schaffer, Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life, Princeton: Princeton University Press, 1985; Bruno Latour and S. Woolgar, Laboratory Life: The Construction of Scientific Facts, Princeton: Princeton University Press, 1979.
S. E. Cozzens, “Expert review in evaluating programs”, Science and Public Policy, 14, 2, April 1987, pp. 71–81.
S. Cole, J.R. Cole and G.A. Simon, “Chance and consensus in peer review”, Science, 214, 20 November 1981, pp. 881–86.
A.L. Porter and E.A. Rossini, “Peer review of interdisciplinary research proposals”, Science, Technology & Human Values, 10, 3, Summer 1985, pp. 34–38.
B.R. Martin and J. Irvine, “Assessing basic research”, Research Policy, 12, 1983, p. 72.
Gibbons et al., op. cit., note 11, p. 27.
Gibbons et al., op. cit., note 11, p. 26.
ibid., p. 10.
ibid., p. 46.
ibid., p. 57.
Thomas E. Clarke, The Evaluation of R&D Programs and Personnel: A Literature Review, Ottawa, Ontario, Canada: Stargate Consultants Ltd., December 1986, p. 56.
Most of the writing about research evaluation is targeted more at the Physical and Biological sciences than at the Arts and Humanities. However, this does not mean that Arts and Humanities research cannot be systematically evaluated in the general way suggested here, only that the specific nature of research performance in those fields requires more consideration than it has generally been given.
Gibbons et al., op. cit., note 11, p. 16.
P. Fasella, “The evaluation of the European Community's research and development programmes”, in G. Goggio and E. Spachis-Papasois (eds), op. cit., p. 5.
J. Irvine and B. Martin, Foresight in Science: Picking the Winners, London: Frances Pinter, 1984, p. 141.
Bourke, op. cit., note 14, p. 15.
One example is Lewis Branscomb, “Industry evaluation of research quality: edited excerpts from a seminar”, Science, Technology & Human Values, 7, 39, Spring 1982, pp. 15–22.
J. A. Snow, “Research and development: programs and priorities in a United States mission agency”, in G. Goggio and E. Spachis-Papasois (eds), op. cit., p. 95.
O. T. Fundingsland, “Perspectives on evaluating federally sponsored research and development in the United States”, in G. Goggio and E. Spachis-Papasois (eds), op. cit., p. 100.
Daryl E. Chubin, “Designing research program evaluations: a science studies approach”, Science and Public Policy, 14, 2, April 1987, p. 82.
ibid., p. 88.
The definition of ‘research evaluation’ is partially derived from V. Stolte-Heiskanen, “Evaluation of scientific performance on the periphery”, Science and Public Policy, 13, 2, April 1986, p. 85.
Gibbons, et al., op. cit., note 11, p. 19.
Gibbons, et al., op. cit., note 11, p. 21.
Bourke, op. cit., note 14, p. 23.
Fasella, op. cit., note 39, p. 5.
Gibbons, et al., op. cit., note 11, p. 46.
Fundingsland, op. cit., note 44, pp. 109–11.
J. Irvine, B. Martin and G. Oldham, Research Evaluation in British Science: A SPRU Review, A paper commissioned by the Centre de Prospective et d'Evaluation, Ministère de la Recherche et de l'Industrie, Paris, France, Sussex: University of Sussex, SPRU, April, 1983, p. 5.
For an introduction to the Delphi method see H. Sackman, Delphi Assessment: Expert Opinion, Forecasting, and Group Process, US: Rand Corporation, 1974, as discussed in A.L. Porter et al., A Guidebook for Technology Assessment and Impact Analysis, New York: North-Holland, 1980, p. 126.
J. D. Roessner, “The multiple functions of formal aids to decision-making in public agencies”, IEEE Transactions on Engineering Management, 1985.
One aspect of NIH evaluation activities is exemplified by Francis Narin, Subjective vs. Bibliometric Assessment of Biomedical Research Publications, A US National Institutes of Health program evaluation report, Bethesda, MD: US Department of Health and Human Services, April, 1983.
IDEA Corporation, A Comparison of Scientific Research Excellence at Selected Universities in Ontario, Quebec and the United States, 1982, A technical background paper for The Commission on the Future Development of the Universities of Ontario, Ontario: IDEA Corporation, September, 1984.
Blume et al., op. cit., note 6, p. 10.
One of the many examples of US NSF investigations: M. P. Carpenter, Updating and Maintaining Thirteen Bibliometric Data Series Through 1982, A final report to the US National Science Foundation, Science Indicators Unit, New Jersey: Computer Horizons, 19 November, 1985.
H.R. Coward, J.J. Franklin and L. Simon, ABRC Science Policy Study: Co-Citation Bibliometric Models, Final report to the Advisory Board for the Research Councils of the United Kingdom, Philadelphia: Center for Research Planning, July, 1984.
Royal Society Policy Studies Unit, Evaluation of National Performance in Basic Research — A review of techniques for evaluating performance in basic science, with case studies in genetics and solid state physics, ABRC Science Policy Studies No. 1, performed for the Economic and Social Research Council, London: Department of Education and Science, 1986.
H.F. Moed, W.J.M. Burger, J.G. Frankfort, A.F.J. van Raan, “The use of bibliometric data for the measurement of university research performance”, Research Policy, 14, 1985, pp. 131–149.
J.J. Franklin, H.R. Coward, and L. Simon, Identifying Areas of Swedish Research Strength: A Comparison of Bibliometric Models and Peer Review Evaluations in Two Fields of Science, Final report to the National Swedish Board for Technical Development, Philadelphia: Center for Research Planning, 23 April, 1986.
Referred to in B.R. Martin and J. Irvine, Final Report on the Three-Year SPRU Programme on Research Evaluation by the Leverhulme Trust, Sussex: University of Sussex, SPRU, November 1986, p. 17.
H.F. Moed, W.J.M. Burger, J.G. Frankfort, A.F.J. van Raan, On the Measurement of Research Performance: The Use of Bibliometric Indicators, Leiden, Netherlands: The University of Leiden, 1983.
F. Narin, Measuring the Research Productivity of Higher Education Institutions Using Bibliometric Techniques, Report to the OECD Workshop on Science and Technology Indicators in the Higher Education Sector, 10-13 June, 1985, Paris: OECD, 20 May, 1985.
Raphael Gillett, “No way to assess research”, New Scientist, 30 July, 1987, pp. 59–60.
Martin et al., op. cit., note 30.
J.J. Franklin and R. Johnston, “Co-citation bibliometric modeling as a tool for S&T policy and R&D management: issues, applications, and developments”, forthcoming in A.F.J. van Raan (ed.), Handbook of the Quantitative Study of Science and Technology, Amsterdam: Elsevier, 1987-88.
M. Callon, S. Bauin, J-P. Courtial and W. Turner, “From translation to problematic networks: an introduction to co-word analysis”, Social Science Information, 22, 1983, pp. 191–235.
L. A. Myers, “Information systems in research and development: the technology gatekeeper reconsidered”, R&D Management, 14, 4, 1984, pp. 199–206.
N. Cooray, “Knowledge accumulation and technological advance”, Research Policy, 14, 1985, pp. 83–95.
S. Ghoshal and S.K. Kim, “Building effective intelligence systems for competitive advantage”, Sloan Management Review, 28, 1, Fall 1986, pp. 49–58.
Gibbons et al., op. cit., note 11, p. 10.
Johnston, op. cit., note 19, p. 52.
M. Gibbons and L. Georghiou, Evaluation of Research: Evaluation of Research and Development in the United Kingdom, Report on a study commissioned by the OECD Directorate for Science Policy, Ad Hoc Group on University Research, Paris: OECD, 1985, p. 19.
For related discussion see Ken Green, “Research funding in Australia: a view from the North”, Prometheus, 1, June 1986, p. 85.
Stephen Hill, “From dark to light: seeing development strategies through the eyes of S&T indicators”, Science and Public Policy, 13, 5, October 1986, pp. 275–84.
Gibbons et al., op. cit., note 11, p. 24.
Gibbons et al., op. cit., note 77, p. 31.
Blume et al., op. cit., note 6.
Chen, op. cit., note 23.
Fasella, op. cit., note 39, p. 5.
The only one of these subdisciplines that has perhaps not been referenced here is social evaluation research. See L. Rutman and G. Mowbray, Understanding Program Evaluation, Beverly Hills: Sage, 1983, or Marvin C. Alkin, A Guide for Evaluation Decision Makers, Beverly Hills: Sage, 1985.
Hugh Preston, “The new Australian Research Council — its objectives, structure and implications”, A speech given by the Assistant Secretary of the Research Grants Branch, DEET, University of Wollongong, 14 October, 1987.
Terry Hillsberg, an untitled speech given at the conference “Innovation Outlook ‘87” by the First Assistant Secretary of the Technology and Business Efficiency Division, DITAC, Sydney, 17-18 September, 1987.
J. Ronayne, The Allocation of Resources to Research and Development: A Review of Policies and Procedures, A report to the Australian Science and Technology Council, Canberra: ASTEC, 1980, p. iv.
For discussion see Bourke, op. cit., note 14, pp. 4–5.