Transcript of Introduction to Public Health Program Evaluation.

Adrian Bauman: Hello, my name's Adrian Bauman from Sydney University School of Public Health and the Charles Perkins Centre. This is an introductory lecture to public health program evaluation. It's designed for a range of policymakers and practitioners, with a focus on disease prevention and health promotion programs.

My focus will be to take a pragmatic perspective. This means dealing with evaluation realistically, basing evaluation methods and precepts on practical rather than theoretical considerations. In essence, evaluation should solve problems and provide answers under real-world conditions, rather than only obeying fixed theories or rules of evaluation, of which there are many. This talk introduces some of the concepts around program evaluation in real-world public health settings, where policymakers or practitioners are going to implement a comprehensive program of work around a particular issue, or community health or health promotion staff are going to start a large program and want to know whether it works and for whom it works, which are among the central evaluation questions. This lecture gives a 15-minute introduction to public health program evaluation. Each slide stands for a whole area of work behind it, so this is very much introductory material.

But what is evaluation? It's a process to determine systematically and objectively the relevance, effectiveness and impact of program activities in the light of their objectives. Another way of saying that is: Where do you want to end up? How are you going to get there? How will you prove you got there? How will you avoid making the same mistakes next time? And how will you let other people know what you did? The purposes of public health program evaluation are: to improve programs and do better next time; to refine the objectives and learn from doing; for accountability to stakeholders, to government and to key funders; to describe the activities, the successes and the costs of running your program; for scientific curiosity, to generate evidence on the efficacy, effectiveness or efficiency of the program in achieving its objectives and improving health; and finally, as a model of good practice for the field, to describe the program context and strategies so that the program can be adapted and replicated for use elsewhere.

First, the concept of evaluability assessment. This is seldom done, but we really need to be sure that the program is ready for evaluation. Is it of sufficient quality and importance, and is it likely to be delivered as intended? Are stakeholders engaged? Will the evaluation provide information to those who make decisions about the program's worth?

There are many versions of the evaluation cycle, and this is one that I developed a few years ago. Starting in the top left-hand corner: What's the issue or the problem? What's known in the literature about the problem? How can we quantify the issue? How can we develop program or policy options? Then we design a program. If we deliver it, is it likely that the program will fix the problem? If so, we need to work up an evaluation strategy; if we don't think it will really fix the problem, we need to start again.

The elements of our intervention or our program, also called 'inputs' in a program logic model, include the goals and objectives, the groups who will benefit from the program, the materials, resources, funding, support, technologies, personnel and staffing that you will need to run it, the strategies, activities and actions that are planned in the program, and any other resources that are required.

Larry Green, 25 years or more ago, described three worlds of evaluation planning: what the community wants, shown in blue; what the actual needs are, defined by epidemiological and other community data; and your resources, feasibility, and public health priorities and policies. Where these intersect is where our programs are most likely to be developed and delivered. The Centers for Disease Control and Prevention in Atlanta, Georgia developed a framework for public health program evaluation, presented as another wheel: engage the stakeholders and communities; describe what your program will be in step 2; then focus your evaluation design and the elements of your evaluation; then move on to implement the program and gather your evidence; reach a conclusion and justify that conclusion; and share the lessons that you've learned.

One concept that is important but seldom used well is program logic. Before you start your program, you need to describe how you think it's going to work. You should even draw this as a diagram, in which you describe all the activities in the program leading in a logical sequence to achieving the program's objectives. The evaluation is linked to each of the stages in building a logic model: starting with defining the inputs, the resources that you need; describing the activities; the participation by the target group; and the short, medium and long-term outcomes. Short and medium-term outcomes are sometimes called impacts, and long-term outcomes are effects on health states. Each of these needs to be linked to the evaluation, so you are designing the evaluation around your logic model.
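To make the idea concrete, here is a minimal sketch, in Python, of how a logic model might be written down with each stage paired to an evaluation measure. The structure, the field names and the walking-program example are my own illustrative assumptions, not part of the lecture; the point is simply that every stage carries its own evaluation question.

    # A hypothetical sketch of a program logic model in which each stage
    # is paired with the evaluation measure linked to it.
    # All names and example values are invented for illustration.
    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Stage:
        description: str         # what happens at this stage of the program
        evaluation_measure: str  # how this stage will be evaluated

    @dataclass
    class ProgramLogicModel:
        inputs: list[Stage] = field(default_factory=list)         # resources needed
        activities: list[Stage] = field(default_factory=list)     # strategies and actions
        participation: list[Stage] = field(default_factory=list)  # reach into the target group
        impacts: list[Stage] = field(default_factory=list)        # short and medium-term outcomes
        outcomes: list[Stage] = field(default_factory=list)       # long-term effects on health states

    # Example: a hypothetical community walking program.
    walking_program = ProgramLogicModel(
        inputs=[Stage("Funding, trained walk leaders, venues",
                      "Audit of resources secured before launch")],
        activities=[Stage("Weekly led walks plus a local promotion campaign",
                          "Process evaluation: were sessions delivered as planned?")],
        participation=[Stage("Adults in the target community attend the walks",
                             "Attendance records and reach into the target group")],
        impacts=[Stage("Increased weekly walking time among participants",
                       "Before-and-after physical activity survey")],
        outcomes=[Stage("Improved cardiovascular health in the community",
                        "Longer-term health outcome data, if resourced")],
    )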

There are different stages in the evaluation process. Before the evaluation begins, you start with formative evaluation or formative research, which identifies the problem, its magnitude, the key target groups involved and best practice in the area, and synthesises the evidence from the literature. Once the program has started, you measure its implementation; this is called 'process evaluation'. You then move to measuring impact and long-term outcomes.

There is a vast research literature around evaluation design and evaluation methods, and there is also a lot of controversy and conflict amongst different schools of evaluators in the field. But an evaluation design, in essence, determines how the study is set up or planned; what scientific methods you'll use in the evaluation and why you chose them; how you will measure the effects of the program; whether you will use qualitative or quantitative data or both; what kinds of process evaluation you will use; and, if you decide to do any impact evaluation, how you will resource it. You also need to place your evaluation in the evidence cycle. Are you just starting out and identifying the need for a program? Have you done efficacy testing, which is a controlled trial or optimal testing of an intervention under ideal or controlled conditions? Have you tested it in the real world? That's effectiveness testing: does the intervention work in diverse settings? And finally, how will you scale up the intervention? Scaling up is critical to making a difference at the population level. No amount of efficacy research will ever improve population health without the principles of scaling up programs so that they reach, are delivered to, and have an impact on the whole population or a large proportion of it.
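As a purely illustrative aid, and not part of the lecture, the evidence cycle just described can be thought of as an ordered set of stages. The small Python sketch below (the stage names and helper function are my own assumptions) simply shows the idea of asking which stage a program has reached and what normally comes next.

    # Hypothetical sketch of the evidence cycle as an ordered set of stages.
    # Stage names and the helper function are illustrative assumptions only.
    from __future__ import annotations
    from enum import Enum

    class EvidenceStage(Enum):
        IDENTIFY_NEED = 1   # formative work: identifying the need for a program
        EFFICACY = 2        # controlled testing under ideal or controlled conditions
        EFFECTIVENESS = 3   # testing in real-world, diverse settings
        SCALE_UP = 4        # reaching the whole population or a large proportion of it

    def next_stage(current: EvidenceStage) -> EvidenceStage | None:
        """Return the stage that normally follows the current one, or None after scale-up."""
        stages = list(EvidenceStage)
        position = stages.index(current)
        return stages[position + 1] if position + 1 < len(stages) else None

    # A program shown to be efficacious still needs effectiveness testing
    # and scale-up before it can improve health at the population level.
    print(next_stage(EvidenceStage.EFFICACY))  # EvidenceStage.EFFECTIVENESS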

One other important idea is the complexity of your evaluation. Simple evaluation is what we do quite often: we test one single, well-defined intervention, there may be a defined, small target group, and there is a clear causal relationship between the intervention and the outcomes, usually established through a controlled trial. In prevention and health promotion we often have more complicated evaluations, which involve a single intervention with multiple components; there may still be a clear intervention focus, but we are doing several things to achieve it and we need to evaluate each of those components. And finally, and quite rare, are complex public health program evaluations, which involve large-scale work across a large population, with multiple components across diverse settings, often with multiple outcomes, and often needing a series of evaluation methods that intersect and combine to tell the whole story.

That's a quick summary of the broad principles of public health program evaluation. You need to understand what your evaluation is for; you need to describe, in advance, how the program might work, using logic models; you need to describe your evaluation design and methods; and you need to identify where your evaluation sits in the evaluation cycle and whether it is a simple, complicated or complex program evaluation. Ultimately you need to balance your evaluation effort, and the rigour you apply, against resources and accountability needs.

I hope this helps with your evaluation work, certainly in starting your thinking about evaluation. Each of these areas can be expanded into entire disciplines around methods, logic models, scale-up and so on. Good luck with your evaluation work. It's critical to making a public health difference.
