Evaluating Healthy Universities
Introduction
What is evaluation?
Evaluation is a judgement or appraisal about the worth of an initiative or programme. This judgement can be:
- about the outcome – what short-term impacts and/or longer-term outcomes you achieved and whether you were effective in achieving your aims and objectives
- about the efficiency – linked to outcome evaluation, whether the approach taken and methods used were the most cost-effective and/or whether the benefits justified the costs
- about the process – whether the way that you implemented the initiative or programme was appropriate for the particular circumstances (gaining a fuller understanding of why something worked or didn’t work in a particular context at a particular time).
Why Evaluate?
Whilst it is commonly agreed that evaluation should be an integral part of all planned activities and programmes, there are many different perspectives on why evaluation is important.
Planning an Evaluation
It is important to plan your evaluation to ensure that it is as productive and useful as possible. The framework that follows sets out a flexible eight-step process:
Step 1. Agree who will carry out your evaluation (will it be internal or external?) and how you will engage different stakeholders in the evaluation process.
Step 2. Clarify aims and objectives, making sure the objectives are SMART – Specific, Measurable, Achievable, Realistic and Time-bound.
Step 3. Identify expected impacts and outcomes that ‘match’ the objectives, making sure that they relate to both public health and the core business of the university.
Step 4. Choose milestone and outcome indicators that can help to track progress and measure overall success (one way of linking objectives, indicators and data collection is sketched after this list).
Step 5. Decide what quantitative and/or qualitative information needs to be collected to enable the evaluation to take place – paying attention not only to outcome (and if appropriate, efficiency) data but also to the information required in order to carry out process evaluation.
Step 6. Agree what methods you will use to collect this information, being pragmatic about what is possible within resource constraints and where possible combining a range of methods in order to strengthen evidence generation.
Step 7. Set up data collection systems, considering what data is routinely available, identifying who will collect what data and agreeing a timetable (e.g. baseline, ongoing, mid-point, end).
Step 8. Collate, analyse and interpret the data in order to identify key findings and generate learning and evidence.
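To make the framework more concrete, the following Python sketch shows one hypothetical way of recording an evaluation plan that ties Steps 2–7 together. The class and field names, and the example objective and indicator, are illustrative assumptions rather than part of any Healthy Universities tool:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure for a written evaluation plan; names are
# illustrative assumptions, not drawn from any Network tool.

@dataclass
class Indicator:
    name: str                      # what will be measured (Step 4)
    kind: str                      # "milestone" or "outcome" (Step 4)
    data_source: str               # where the data will come from (Step 6)
    collection_points: List[str]   # timetable, e.g. baseline/mid-point/end (Step 7)

@dataclass
class Objective:
    statement: str                 # a SMART objective (Step 2)
    expected_outcome: str          # the matching impact/outcome (Step 3)
    indicators: List[Indicator] = field(default_factory=list)

plan = [
    Objective(
        statement="Increase staff cycling to campus by 10% within 12 months",
        expected_outcome="More active travel among staff",
        indicators=[
            Indicator(
                name="Share of staff commuting by bicycle",
                kind="outcome",
                data_source="Annual staff travel survey",
                collection_points=["baseline", "mid-point", "end"],
            )
        ],
    )
]

# A simple completeness check: every objective needs at least one
# indicator before data collection systems are set up (Steps 4 and 7).
for obj in plan:
    assert obj.indicators, f"No indicators defined for: {obj.statement}"
```

Writing the plan down in a structured form like this makes gaps visible early – for example, an objective with no indicator, or an indicator with no agreed data source.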
Evaluating a Healthy University Initiative
Levels of Evaluation
Within the context of a Healthy University, evaluation is likely to be focused at two levels:
- Component Activities and Projects: A Healthy University initiative usually comprises a range of different activities, interventions and projects (e.g. a green/active travel plan; a peer education project on drugs and sexual health; campaigns on mental health and stigma; healthy and sustainable food procurement procedures). It is important to evaluate these individual elements, assessing whether they achieved their stated objectives and exploring the implementation process to understand what worked well, what didn’t and why.
- Overall ‘Whole System’ Approach: Whilst evaluation of these individual components is a crucial part of assessing the worth of a Healthy University initiative, it does not by itself provide feedback on the effectiveness of the overall whole system Healthy University approach. For evaluation to capture the possible ‘added value’ of whole system working and help generate and build evidence of effectiveness, it must adopt non-linear approaches, looking at the whole and mapping and understanding the interrelationships, interactions and synergies – with regard to different population sub-groups, different components of the system and different health issues. This is clearly a much more challenging endeavour than evaluating component activities and projects.
The UK Healthy Universities Network Self-Review Tool is a useful resource here. It provides a mechanism for universities to review and reflect on their progress in embedding a whole system approach to health and wellbeing into their core business and culture. The Self-Review Tool is an online questionnaire structured under five headings: Leadership and Governance; Service Provision; Facilities and Environment; Communication, Information and Marketing; and Academic, Personal, Social and Professional Development. Once a university has completed the questionnaire, a graphic ‘traffic light’ representation (red, amber, green) of progress is generated, highlighting areas where the university is achieving and those where additional input is needed.
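To illustrate the kind of traffic-light summary such a tool produces – the Self-Review Tool’s actual scoring rules are not reproduced here, so the cut-offs and section scores below are purely hypothetical – a minimal sketch might look like this:

```python
# Hypothetical traffic-light (RAG) summary. The real Self-Review Tool's
# scoring rules are not published here; thresholds and scores are assumed.

SECTIONS = [
    "Leadership and Governance",
    "Service Provision",
    "Facilities and Environment",
    "Communication, Information and Marketing",
    "Academic, Personal, Social and Professional Development",
]

def rag_rating(score: float) -> str:
    """Map a 0-100 section score to red/amber/green (assumed cut-offs)."""
    if score >= 67:
        return "green"   # area where the university is achieving
    if score >= 34:
        return "amber"   # partial progress
    return "red"         # area where additional input is needed

# Example (invented) section scores from a completed questionnaire.
scores = {
    "Leadership and Governance": 72.0,
    "Service Provision": 55.0,
    "Facilities and Environment": 80.0,
    "Communication, Information and Marketing": 30.0,
    "Academic, Personal, Social and Professional Development": 48.0,
}

for section in SECTIONS:
    print(f"{section}: {rag_rating(scores[section])}")
```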
Evaluating Complex Whole System Initiatives
At this higher level of evaluation, there is no simple ‘how to’ guide to evaluating complex whole system initiatives – although it becomes even more important to utilise multi-method approaches (in order to deal with complexity more effectively) and to integrate outcomes and indicators that relate to both ‘health’ and the core business of the university (e.g. student experience, retention and achievement; staff performance and productivity; corporate social responsibility).
The literature also suggests a growing appreciation of the value of theory-based approaches, two examples being realist evaluation and the Theory of Change approach:
- Realist evaluation (Pawson and Tilley, 1997) does not seek to factor out context in the way that experimentation or randomised controlled trials tend to. Instead, it seeks to understand how causal mechanisms work within specific contexts, thereby leading to particular outcomes. It thus looks inside what is often referred to as the ‘black box’ of evaluation, addressing process and outcome evaluation at the same time, and moving from the traditional question ‘does this work?’ to ask ‘what works for whom in what circumstances, and why?’.
- The Theory of Change approach (Connell and Kubisch, 1998) serves as both a planning/development and an evaluation framework. In exploring links between activities, outcomes and contexts, it argues that it is necessary to make explicit the chain of assumptions and hypotheses on which an initiative is based. The approach draws on and combines insights from both realist evaluation and logic modelling, and has been developed as a means of evaluating complex community initiatives. It involves a number of stages: identifying long-term goals and the assumptions behind them; backward mapping to reveal the preconditions necessary to achieve these goals; identifying the interventions that will be undertaken to bring about the required changes; developing outcome indicators to enable the initiative to be assessed; and writing a narrative to explain the logic of the initiative (a rough sketch of this chain follows this list). In this way, it explores both process and outcomes – tracking the stages that make up overall programmes, mapping the links between the programmes that comprise a larger initiative, and enabling a more sophisticated and utility-focused understanding not only of whether something works, but also of why and how it works or does not work in particular situations.
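As a rough illustration of the staged process described above – the class names and example content below are hypothetical assumptions, not drawn from Connell and Kubisch’s own materials – the backward-mapped chain from goal to preconditions, interventions and indicators can be sketched as a simple data structure:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of a theory of change chain; names and example
# content are illustrative assumptions only.

@dataclass
class Precondition:
    description: str          # revealed by backward mapping from the goal
    interventions: List[str]  # activities intended to bring it about
    indicators: List[str]     # how progress towards it will be assessed

@dataclass
class TheoryOfChange:
    long_term_goal: str
    assumptions: List[str]
    preconditions: List[Precondition] = field(default_factory=list)

    def narrative(self) -> str:
        """Write out the logic of the initiative as a short narrative."""
        lines = [f"Goal: {self.long_term_goal}"]
        lines += [f"Assumption: {a}" for a in self.assumptions]
        for pc in self.preconditions:
            lines.append(f"Precondition: {pc.description}")
            lines += [f"  Intervention: {i}" for i in pc.interventions]
            lines += [f"  Indicator: {ind}" for ind in pc.indicators]
        return "\n".join(lines)

toc = TheoryOfChange(
    long_term_goal="Improved student mental wellbeing",
    assumptions=["Stigma discourages help-seeking"],
    preconditions=[
        Precondition(
            description="Students feel able to talk about mental health",
            interventions=["Campus-wide anti-stigma campaign"],
            indicators=["% of students reporting they would seek support"],
        )
    ],
)

print(toc.narrative())
```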
References and Links
Links
EU Theory-Based Evaluation Guide
HM Treasury Magenta Book Guidance on Evaluation Part B
Reflect and Improve Toolkit for Engaging Youth and Adults in Program Evaluation
Learning for Sustainability – Theory of Change Resources
WHO – Evaluation in Health Promotion: Principles and Perspectives
References
Connell, J. P. & Kubisch, A. C. (1998). Applying a theory of change approach to the evaluation of comprehensive community initiatives: Progress, prospects, and problems. In K. Fulbright-Anderson, A. C. Kubisch, & J. P. Connell (Eds.), New approaches to evaluating community initiatives. Volume 2: Theory, measurement, and analysis, (pp. 15-44). Washington, DC: The Aspen Institute.
Douglas, J., Sidell, M., Lloyd, C. & Earle, S. (2007). Evaluating public health interventions. In S. Earle, C. Lloyd, M. Sidell & S. Spurr (Eds.), Theory and research in promoting public health. London: Sage.
Green, J. & South, J. (2006). Evaluation. Maidenhead: Open University Press.
Naidoo, J. & Wills, J. (2000). Health promotion: Foundations for practice (3rd ed.). London: Baillière Tindall.
Pawson, R. & Tilley, N. (1997). Realistic evaluation. London: Sage.