Transcript of Introduction to Critical Appraisal of Evidence for Policy.

Mary Haines: Hello, I'm Mary Haines and we're talking to you about the critical appraisal of evidence for policy and program professionals.

Carmen Huckle Schneider: My name is Carmen Huckle Schneider. I work with Mary Haines, and together we deliver a workshop that helps busy policy and program professionals critically appraise research when designing their policies and programs. The workshop has a practical focus on how to use tools to critically appraise research and judge its quality and relevance to real-world policy and program design. The workshop will be valuable for policymakers, program managers and implementers with some experience working with research who want to gain skills in critical appraisal.

Mary Haines: By attending the full workshop, participants will gain two things: firstly, a greater understanding of the different types of evidence from research; and secondly, experience using practical tools to critically appraise research and assess whether it is both high quality and relevant to the policy or program being designed. Today we're running through an abridged version that introduces the key concepts explored in more detail in the full workshop through lectures and group work. The critical appraisal challenges that policymakers and practitioners tell us about are: firstly, how to access the most relevant evidence to support policy and program priorities; secondly, how to know whether the evidence is high quality; and thirdly, how to know whether the evidence is actually relevant to the context of your program or policy.

Carmen Huckle Schneider: We also have to be realistic about the role of research in policymaking, as it is only one input into a complex process. We believe that gaining critical appraisal skills will help you use research more wisely in your policymaking. Many other factors are also taken into account in the policymaking process: everything from public opinion, the role of the media, expert advice and the resources available, to different stakeholder interests and the economic climate. We want you to be able to use research well, and in doing so increase the weighting given to research in policy and program decision-making. Mary, perhaps you can tell us a little more about why it is important to know what question you are seeking to answer.

Mary Haines: Yes, that's right Carmen. It all starts with a question: knowing what question you need to ask of the research is the first crucial step. This is illustrated by a quote attributed to Albert Einstein: "If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question I could solve the problem in less than five minutes." In health research, in some instances it is appropriate to frame your question using what is known as the PICO framework: P for population, I for intervention, C for comparison and O for outcome.

Let me give you an example. The population might be children aged between five and twelve years old attending primary school. The intervention might be obesity prevention programs delivered in the primary school setting. The comparison might be primary schools where the obesity prevention programs are not delivered. And the outcomes of interest could be increased physical activity, increased rates of healthy eating and, ultimately, an increase in the proportion of children at a healthy weight. Put together, the PICO question becomes: in children aged five to twelve attending primary school, do school-based obesity prevention programs, compared with no program, increase physical activity, healthy eating and the proportion of children at a healthy weight?

So once you know your question and have found research to help you answer it, how do you know whether the results of the studies you found are valid? What are the results, and what did they actually find? And will the results help you locally? Those are the key questions that we'll keep coming back to as part of critical appraisal.

To help you answer these questions and critically appraise the research you need to use as part of your work, we recommend the tools developed by the Critical Appraisal Skills Programme (CASP) in Oxford, England. Carmen will now tell you a bit more about these tools and how they can be used.

Carmen Huckle Schneider: The tools of the Critical Appraisal Skills Programme are built around the three overarching questions of critical appraisal: are the results of the study valid? What are the results? And will the results help locally?

In terms of determining whether the results of the study are valid, the first step is to decide whether the study was unbiased by evaluating its methodological quality. We look at what was compared and how it was compared: what study design did they use? Was it a randomised controlled trial, another design with quasi-randomisation, an observational study, or a before-and-after comparison? Depending on the study design, the CASP tools give you questions to help determine whether the study might be at risk of bias or error. Did they perform a true comparison? Was a placebo used or, if that wasn't possible, did they perform a reasonable comparison of two different intervention options?

In terms of what the results are: if we decide that the study is valid, we can go on to look at the results. At this step we consider whether the study's results are important and statistically significant. For example, did the experimental group show a significantly better outcome than the control group? We also consider how much uncertainty there is about the results, as expressed in the form of p-values or confidence intervals or, where available, a sensitivity analysis.
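As a generic illustration of how that uncertainty is expressed (this is not from the workshop materials, and the numbers are hypothetical): an approximate 95% confidence interval for an estimated effect $\hat{\theta}$ with standard error $\mathrm{SE}(\hat{\theta})$, assuming a normal approximation, is

$$\hat{\theta} \pm 1.96 \times \mathrm{SE}(\hat{\theta})$$

So if a program increased the proportion of physically active children by an estimated 10 percentage points with a standard error of 4, the 95% confidence interval would be roughly 2 to 18 percentage points: the effect is plausibly positive, but its size is quite uncertain.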

Finally, we ask: will the results help locally? Once you've decided that your evidence is valid and it is important, you need to think about how it applies to your question or your particular policy problem. Is it likely, for example, that your particular group of interest may have different characteristics to those that were in the study you're appraising? Critical appraisal skills provide a framework within which to consider these issues in an explicit and systematised way. Was the study conducted in an environment that makes it relevant to the real world? Was it conducted in an environment that is comparable to your policy context?

We can also use the CASP tools to appraise other types of research, including qualitative research. When we appraise qualitative research we are still asking the same overarching questions about validity, results and relevance, but we are looking for different features to answer them. For example, qualitative research must give a clear statement of its aims. We also want to see that the justification for using qualitative methods is clear. Do you agree with that justification? We also need details about what was studied and how. Importantly, good qualitative research is open, transparent and reflective about the researcher/participant relationship. In terms of results, we want to make sure that the study is clear about its findings. There should be no ambiguity, and it should not claim to have found results that it actually did not find. Are any generalisations that are made logical? Do the conclusions match the data? And finally, as the examination of context is a key element of qualitative research, we also need to determine whether the study is relevant to our context.

We would like to tell the story of Frances Kelsey to remind us to bring all of these aspects together: think about your own context, ask smart questions, appraise the evidence. Frances Oldham Kelsey was a pharmacologist and physician born in 1914. She completed her Master's degree at McGill University. She got a rare opportunity to work at the FDA, the Food and Drug Administration of the US, starting in 1960. She was smart and well trained, but a relatively junior member of her team, doing a job that women rarely got to do. You can understand that she wanted to retain that job and be a good worker. One of her first tasks was to review a medication for approval for sale in the US. This particular medication was already approved for sale in Europe, Canada, Australia and a few other countries. It was even available over the counter in Germany. It was considered safe. The process should have been straightforward, and her supervisors expected her to work through the paperwork quickly to allow the medication onto the market. There was high demand for it, as it helped relieve the symptoms of morning sickness, which is bothersome and even debilitating for many women in pregnancy. There was lots more work to get on with, so she was under time pressure. Frances Kelsey, however, didn't let the pressures and expectations of her work situation stop her from peering into the evidence with a critical eye. She refused to approve the drug, thalidomide, which was sold as Contergan in Germany, because she found there wasn't adequate evidence of safety and some of the earlier studies had faults. Due to her diligence, the thalidomide disaster was largely averted in the US.

Mary Haines: Thank you Carmen, what a fascinating story, and I think it illustrates why critical appraisal skills are so important in the work we do. We hope this brief presentation has given you a sense of why critical appraisal is important for making wise policy decisions, and of the concepts that underpin critical appraisal skills: starting with a good question, such as a PICO question, and then appraising evidence by asking: are the results of the study valid? What are the results? And will the results help locally? Goodbye and thank you for listening to this presentation. We hope to see you at a critical appraisal workshop in the future. The slide that follows contains some useful links for you as well (Cochrane Library, The Campbell Collaboration, Sax Institute - Evidence Check Library, National Institute for Health and Care Excellence (NICE), CASP). So goodbye from me, Mary Haines...

Carmen Huckle Schneider: ...and goodbye from me, Carmen Huckle Schneider.
