Using the above definition of evaluation, program evaluation approaches were classified into four categories. Here is a sampling of the definitions you will see; of these, only one author attempts to define evaluation. The Merriam-Webster dictionary defines assessment as "the action or an instance of assessing; appraisal". Muffat says: "Evaluation is a continuous process and is concerned with more than the formal academic achievement of pupils." It is concerned with both the evaluation of achievement and its enhancement.

Teresa Penfield, Matthew J. Baker, Rosa Scoble, and Michael C. Wykes, "Assessment, evaluations, and definitions of research impact: A review", Research Evaluation, Volume 23, Issue 1, January 2014, Pages 21-32, https://doi.org/10.1093/reseval/rvt021

Gathering evidence of the links between research and impact is a challenge, particularly where that evidence is lacking. This is recognized as being especially problematic within the social sciences, where informing policy is a likely impact of research. In the UK, there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example, MICE (Measuring Impacts Under CERIF), the UK Research Information Shared Service, and the Integrated Research Input and Output System, all based on the CERIF standard. To evaluate impact, case studies were interrogated and verifiable indicators assessed to determine whether research had led to reciprocal engagement, adoption of research findings, or public value. In the UK, evaluation of academic and broader socio-economic impact takes place separately. Collation of academic impact and outputs is already being addressed, for example, by the Research Portfolio Online Reporting Tools, which use PubMed and text mining to cluster research projects, and by STAR Metrics in the US, which uses administrative records and research outputs and is also being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). The Payback Framework enables health and medical research and impact to be linked and the process by which impact occurs to be traced.

This article aims to explore what is understood by the term "research impact" and to provide a comprehensive assimilation of available literature and information, drawing on global experiences to understand the potential for methods and frameworks of impact assessment to be implemented for UK impact assessment. One reason for assessing impact is to understand the socio-economic value of research and subsequently inform funding decisions. One of the advantages of this method is that less input is required compared with capturing the full route from research to impact. The RQF was developed to demonstrate and justify public expenditure on research, and as part of this framework, a pilot assessment was undertaken by the Australian Technology Network. It is acknowledged that one of the outcomes of developing new knowledge through research can be "knowledge creep", where new data or information becomes accepted and gets absorbed over time. One might consider that by funding excellent research, impacts (including those that are unforeseen) will follow, and traditionally, assessment of university research has focused on academic quality and productivity. By asking academics to consider the impact of the research they undertake and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge. Impact can be temporary or long-lasting.
It is important to emphasize that not everyone within the higher education sector itself is convinced that evaluation of higher education activity is a worthwhile task (Kelly and McNicoll 2011). Why should this be the case? Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts will not ultimately have a negative influence on the impact of our research. A discussion of the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006). CERIF (Common European Research Information Format) was developed for this purpose and first released in 1991; a number of projects and systems across Europe, such as the ERC Research Information System (Mugabushaka and Papazoglou 2012), are being developed as CERIF-compatible. "This atmosphere of excitement, arising from imaginative consideration, transforms knowledge." The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. Many theorists, authors, research scholars, and practitioners have defined performance appraisal in a wide variety of ways, reflecting the evaluation of these different kinds of evaluands.
Definitions of evaluation by different authors abound. Tuckman writes: "Evaluation is a process wherein the parts, processes, or outcomes of a programme are examined to see whether they are satisfactory, particularly with reference to the stated objectives of the programme, our own expectations, or our own standards of excellence." The verb "evaluate" means to form an idea of something or to give a judgment about something. Evaluative research is a type of research used to evaluate a product or concept and to collect data to help improve a solution. Many practical decisions may be made with the aid of evaluation.

The Goldsmith report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation, re-use and influence, collaboration and boundary work, and innovation and invention. This is a metric that has been used within the charitable sector (Berg and Månsson 2011) and also features as evidence in the REF guidance for panel D (REF2014 2012). One purpose of assessment is to enable research organizations, including HEIs, to monitor and manage their performance and to understand and disseminate the contribution that they are making to local, national, and international communities. Where narratives are used in conjunction with metrics, a complete picture of impact can be developed, again from a particular perspective but with the evidence available to corroborate the claims made. As part of this review, we aim to explore the following questions: What are the reasons behind trying to understand and evaluate research impact? There is a distinction between academic impact, understood as the intellectual contribution to one's field of study within academia, and external socio-economic impact beyond academia. This is particularly recognized in the development of new government policy, where findings can influence policy debate and policy change without recognition of the contributing research (Davies et al. 2007). The ability to write a persuasive, well-evidenced case study may influence the assessment of impact. The Social Return on Investment (SROI) guide (The SROI Network 2012) suggests that "The language varies: 'impact', 'returns', 'benefits', 'value'. But the questions around what sort of difference and how much of a difference we are making are the same." Impact assessments raise concerns over the steer of research towards disciplines and topics in which impact is more easily evidenced and that provide economic impacts, a steer that could subsequently lead to a devaluation of "blue skies" research.
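To make the SROI guide's question concrete, social return on investment is usually condensed into a single ratio. The formulation below is a common one offered purely as illustration; the notation is ours rather than the guide's, so treat the symbols as assumptions:

\[ \mathrm{SROI} = \frac{\sum_{t=0}^{T} SV_t / (1+r)^t}{\text{value of inputs}} \]

where \(SV_t\) is the social value created in year \(t\), \(r\) is the discount rate, and \(T\) the evaluation horizon. An SROI of 3 would then read, roughly, as three units of social value generated per unit invested.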
Ideally, systems within universities internationally would be able to share data allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. Although it can be envisaged that the range of impacts derived from research of different disciplines are likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example, from business development to cultural changes or saving lives? It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersing in a background of intellectual thinking that one is able to see further by standing on the shoulders of giants. In developing the UK REF, HEFCE commissioned a report, in 2009, from RAND to review international practice for assessing research impact and provide recommendations to inform the development of the REF. Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al. x[s)TyjwI
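As an illustration of the kind of data sharing CERIF is intended to enable, the sketch below builds a minimal research-information record in Python and serializes it to XML. The element names echo CERIF's cf-prefixed conventions but are simplifications for illustration, not the actual CERIF schema, and the helper function is hypothetical:

    # Minimal sketch of an interoperable research-information record,
    # loosely modelled on CERIF naming conventions (illustrative only;
    # the real CERIF XML schema is far richer than this).
    import xml.etree.ElementTree as ET

    def make_project_record(project_id: str, title: str, org: str) -> ET.Element:
        """Build a CERIF-flavoured XML element describing one research project."""
        proj = ET.Element("cfProj")
        ET.SubElement(proj, "cfProjId").text = project_id
        ET.SubElement(proj, "cfTitle").text = title
        ET.SubElement(proj, "cfOrgUnitName").text = org
        return proj

    # A shared format like this would let two institutions exchange records
    # and compare like with like as researchers move between them.
    record = make_project_record("UK-0001", "Assessing research impact", "Example University")
    print(ET.tostring(record, encoding="unicode"))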
Recommendations from the REF pilot were that the panel should be able to extend the time frame where appropriate; this, however, poses a difficult decision when submitting a case study to the REF: what will the panel's view be, and, if the time frame is deemed inappropriate, will the case study be rendered unclassified? The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. Although some might find the distinction somewhat marginal or even confusing, this differentiation between outputs, outcomes, and impacts is important and has been highlighted not only for the impacts derived from university research (Kelly and McNicoll 2011) but also for work done in the charitable sector (Ebrahim and Rangan 2010; Berg and Månsson 2011; Kelly and McNicoll 2011). It is possible to incorporate both metrics and narratives within systems, for example, within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded; although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities), for whom the purpose of analysis may vary (Davies et al. 2008). The Payback Framework was developed during the mid-1990s by Buxton and Hanney, working at Brunel University. The criteria for assessment were also supported by a model, developed by Brunel for the measurement of impact, that used similar measures defined as "depth" and "spread".
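To illustrate the metrics-versus-narrative trade-off just described, the following sketch pairs structured indicators with a free-text narrative in a single record. The field names are hypothetical and are not drawn from the Research Outcomes System or Researchfish:

    # Hypothetical impact record combining structured metrics (easy to
    # aggregate and reuse across stakeholder groups) with a free-text
    # narrative (adds context but is harder to repurpose).
    from dataclasses import dataclass, field

    @dataclass
    class ImpactRecord:
        case_id: str
        metrics: dict = field(default_factory=dict)  # e.g. {"citations": 120}
        narrative: str = ""                          # contextual story behind the numbers

        def headline(self) -> str:
            """Summarize the structured part; the narrative still needs human reading."""
            parts = ", ".join(f"{k}={v}" for k, v in self.metrics.items())
            return f"{self.case_id}: {parts or 'no metrics recorded'}"

    rec = ImpactRecord("CS-42", {"citations": 120, "policy_mentions": 3},
                       "Findings informed a 2012 policy consultation.")
    print(rec.headline())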