Governance of research - the role of evaluative information

Contribution to the CRIS 2004 Conference
Kuhlmann, S.

Nase, A. (Ed.):
Putting the Sparkle in the Knowledge Society : 7th International Conference on Current Research Information Systems
Leuven: Leuven Univ. Press, 2004
ISBN: 90-5867-383-9
pp. 165-175
International Conference on Current Research Information Systems (CRIS) <7, 2004, Antwerpen, Belgium>
English
Conference paper
Fraunhofer ISI

Abstract
Institutions of scientific research can be described as a self-referential system; part of the evaluation procedures used today, above all peer review, is rooted in this self-reference. But self-reference has its limits: almost half of all science and research conducted is paid for with public funds. Politics, the economy and society have been demanding, increasingly so since the 1990s, evidence of the performance, quality and benefits of government-funded science and research.
Evaluation processes can be positioned between two functional poles: evaluation can serve in the first instance to measure performance and thus provide retrospective legitimisation for funding measures (summative function), or it can be used as a "learning medium", in which findings about the cause-and-effect relationships of ongoing or completed measures serve as intelligent information for current or future initiatives (formative function). We suggest conceptualising evaluation as "strategic intelligence": a set of information sources and explorative as well as analytical (theoretical, heuristic, methodological) tools employed to produce useful insight into the actual or potential costs and effects of public policy and management.
Many methods are available for determining the quality and the achieved or achievable impacts of research. The most important are before/after comparisons, the control or comparison group approach, and qualitative analyses (among others, plausibility checks and estimated judgements). These approaches can be applied individually or in combination with different indicators (financial expenditure; patents; economic, social and technical indicators; publications; citations), data collection methods (existing statistics, questionnaires, interviews, case studies, panels) and data analysis methods (econometrics, cost/benefit analyses, technometrics, bibliometrics, peer reviews).
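A purely hypothetical sketch in Python (the figures and the difference-in-differences framing are illustrative assumptions, not taken from the paper) of how the first two approaches, a before/after comparison and a comparison group, can be combined:

    # Invented mean publication counts per institute, before and after
    # a funding programme; the numbers are assumptions for illustration.
    funded = {"before": 48, "after": 67}      # institutes in the programme
    comparison = {"before": 45, "after": 52}  # institutes outside it

    # Before/after comparison within each group.
    delta_funded = funded["after"] - funded["before"]              # 19
    delta_comparison = comparison["after"] - comparison["before"]  # 7

    # Difference-in-differences: the funded group's change beyond the
    # comparison group's, assuming both would otherwise move in parallel.
    effect = delta_funded - delta_comparison                       # 12
    print(f"Estimated programme effect: {effect} publications per institute")

The comparison group is what separates a programme effect from a general trend; without it, the funded group's before/after change of 19 would be misread as the whole effect.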
One must, however, warn against regarding quantitative indicators alone as adequate for evaluating research. Rather, evaluation processes should be used as a mediation instrument: not denying the diverging perspectives of the participating actors from science, industry and policymaking, but on the contrary deliberately making their different interests the subject of competing success criteria.

URL: http://publica.fraunhofer.de/dokumente/N-22273.html