MSW513NOTESModule5EvaluatingSocialPolicies.docx

Module 5: Evaluating Social Policies

Overview of Policy Evaluation

The CDC (2016) defines policy as "a law, regulation, procedure, administrative action, incentive, or voluntary practice of governments and other institutions" (slide 2). The CDC uses policy as a tool to advance public awareness of significant health issues and to enforce laws and regulations that promote health and well-being for all persons living in the U.S. The CDC (2016) defines policy evaluation as the systematic collection and analysis of information to make judgments about the contexts, activities, characteristics, or outcomes of one or more domains of the policy process. The CDC presents a five-phase approach to demonstrate how policy is conceptualized, developed, adopted, implemented, and evaluated.

The five critical steps of the policy making process are:

· Problem Identification

· Policy Analysis

· Strategy and Policy Development

· Policy Enactment

· Policy Implementation

Ideally, the steps will be sequential. However, depending on the policy, the steps may overlap. Evaluating the data, products, and outcomes of each phase is a necessary step before moving on to the next phase. The quantity and quality of the research evidence collected to substantiate the problem, the support for or against existing policy, and the strategy for communicating the policy must all be thoroughly examined and assessed for viability. Regardless of the order of the steps, what remains constant across all phases is the essential need for stakeholder engagement and education.

The CDC Evaluation Framework

The CDC (2016a) Framework for Evaluation in Public Health provides six practical steps and four sets of standards for designing and implementing any evaluation.

This framework is to be used as a tool by practitioners in public health and closely related fields, such as social work, to conduct a comprehensive evaluation of existing and emerging policy. Like the policy making framework, these six steps are non-linear and iterative and may overlap, but what remains consistent is the need for stakeholder engagement and education.

CDC Framework Standards:

The CDC framework provides four standards, which should be applied as "lenses to help isolate the best approaches at each step".

These four standards are:

· Utility

· Feasibility

· Propriety

· Accuracy

The six steps of the CDC Evaluation framework:

· Engage stakeholders

· Describe the program

· Focus evaluation design

· Gather credible evidence

· Justify conclusions

· Ensure use of findings and sharing lessons learned

Considerations for policy making include:

· Identifying Impact

· Collecting data

· Working with stakeholders

· Complying with Laws and Regulations

· Dealing with Uncertainty

· Uses of Policy

Policy evaluation can have many uses including:

· Document and inform the policy development, adoption, and implementation processes.

· Determine policy effectiveness at improving targeted health outcomes.

· Gauge support for proposed policies.

· Assess compliance within existing policies.

· Contribute to the evidence base.

· Inform future policies and policy efforts.

· Help identify results of policy efforts, including health outcomes.

Lessons 2 through 5 provide a comprehensive overview of each section mentioned above. Please be sure to carefully review each slide, as well as the supplemental readings and resources, to ensure full comprehension of the policy evaluation process, which is essential for successful completion of the Week 5 paper assignment, Week 7 PowerPoint, and Week 8 peer evaluation assignment.

Advancing Public Health Through Policy Evaluation

Lesson 2 of the CDC (2016c) describes how policy evaluation can be used to advance public health goals and social policies. This section also discusses how practitioners can identify opportunities for policy evaluation throughout the policy making process. The benefits of policy evaluation are thoroughly discussed, and several examples of public health policies that advance social policies for vulnerable populations are presented. Information is presented on several public health and social issues, including vaccine-preventable diseases, tobacco control, maternal and infant health, motor vehicle safety, occupational safety, and childhood lead poisoning prevention (CDC, 2016c, p. 3). The research evidence produced by these studies and many others demonstrates immense contributions to the existing and emerging literature on various public health and social issues.

This section also provides a brief recap of the policy analysis process and introduces the evaluation process for determining which policies may be most effective (CDC, 2016c, p. 4). Several other concepts from the policy analysis process are revisited and aligned to the policy evaluation process. The concept of triaging to prioritize policy options based on effectiveness, feasibility, and other factors is also discussed, as is taking advantage of windows of opportunity (CDC, 2016c, p. 5).

This lesson concludes with brief paragraphs relating policy analysis to policy evaluation in documenting and informing policy implementation, identifying contextual factors affecting policy enactment, and identifying gaps in the implementation and enforcement of policy. There are resources to elevate the student's understanding of the content discussed in Lesson 2. Students should set aside time to review these readings to ensure full comprehension of the content in this section.

Policy Evaluation Using the CDC Evaluation Framework

Engaging Stakeholders

Lesson 3 is the most detailed and robust of all the lessons in the Introduction to Policy Evaluation in Public Health course. This lesson focuses on how to apply the CDC policy evaluation framework to the domains of the policy process. The lesson also focuses on the unique challenges associated with each of the policy domains. One of the most critical aspects of the policy evaluation process is stakeholder engagement.

The CDC (2016c) presents three types of stakeholders: policy experts, evaluation experts, and subject matter experts.

Policy experts have demonstrated skill in the area of policy development and implementation.

Evaluation experts have demonstrated skill in the area of evaluation design and methodology. They also have expertise with multiple data sources and with the analysis and interpretation of statistical data.

Subject matter experts have specialized contextual knowledge of a specific topic or field. Awareness of the type of stakeholder is essential to leveraging the different skills and expertise that each stakeholder may offer to the policy evaluation process.

Describing the Policy Effort

The considerations for policy evaluation in the describing-the-policy-effort domain include a critical analysis of several components:

· Goals and Objectives of the Policy

· Content of the Policy

· Context Surrounding the Policy

· Underlying Logic and Causal Pathways Supporting the Policy

Scrutinizing the goals and objectives of the policy is critical to the evaluation process: the intended purpose, design, and function of the policy must be clearly defined and articulated. Does the policy do what it is supposed to do and deliver on its intended mission? Impact data, such as statistics demonstrating the effects of the policy as an intervention, are an important part of the evaluation process for this domain.

Logic models are useful visualization tools that succinctly display the goals, inputs, activities, outputs, context, and outcomes (short-term, intermediate, and long-term) of policy implementation, which is also important in this phase of policy evaluation. Logic models can help policy makers and evaluators present relevant information and data to show alignment or discord with the intentions of the policy.
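As a minimal sketch of what a logic model captures, the structure below lays out the components for a hypothetical smoke-free public housing policy. The policy and every entry are illustrative assumptions, not taken from the CDC materials.

```python
# A minimal sketch of a policy logic model, organized by component.
# The policy, entries, and outcome horizons here are hypothetical examples.
logic_model = {
    "goal": "Reduce secondhand smoke exposure in public housing",
    "inputs": ["funding", "staff", "partner agencies"],
    "activities": ["adopt smoke-free policy", "train property managers"],
    "outputs": ["units covered by the policy", "managers trained"],
    "context": ["resident attitudes", "state tobacco laws"],
    "outcomes": {
        "short_term": ["increased awareness of the policy"],
        "intermediate": ["reduced indoor smoking"],
        "long_term": ["lower secondhand smoke exposure"],
    },
}

# Walking the components in order mirrors how a logic model is read left to right.
for component in ["goal", "inputs", "activities", "outputs", "context", "outcomes"]:
    print(component, "->", logic_model[component])
```

In practice a logic model is drawn as a diagram; the point of the sketch is only that each component, and the ordering from inputs to long-term outcomes, is made explicit.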

Focusing the Evaluation Design

Considerations for policy evaluation in the focusing-the-evaluation-design domain start with reviewing the:

· Purpose of the policy evaluation

· The user of the information

· The use of the information

Next is identifying the type of evaluation best suited to the policy being evaluated.

The CDC (2016c) describes three types of evaluation models: formative evaluation, process evaluation, and outcome/impact evaluation.

· Formative evaluation looks at the larger context and environment to determine the main problem and identify solutions that are feasible, appropriate, and meaningful for the target population. For a policy evaluation, this step would happen before a policy is adopted and implemented. It would also encompass questions related to the content of the policy.

· Process evaluation examines the implementation of policy-related activities. For a policy effort, process evaluation could examine the implementation of a policy, focusing on the degree to which the inputs, activities, and outputs were implemented as planned, barriers to its implementation, and factors that support its implementation.

· Outcome/impact evaluation examines whether the intended outcomes and impacts occurred and may also examine whether outcomes and impacts can be attributed to the policy.

Identifying appropriate evaluation questions to elicit information about the policy characteristics and intended use is another crucial aspect of the policy evaluation framework. Examples of policy-characteristics questions from the CDC (2016c) training include:

· What type of policy is being evaluated (legislative, regulatory or organizational)?

· What level of policy is being evaluated (local, state, national)?

· What type of evidence base exists for this policy?

Examples of intended-use questions from the CDC (2016c) training include:

· What is to be determined and accomplished with the evaluation?

· How will the answers to this evaluation help move the field or policy forward?

· How will the evaluation be used and who is the potential audience?

Once the questions are formulated, an evaluation design must be selected and administered. The CDC (2016c) presents four types of evaluation designs:

· Experimental (or randomized) designs compare a treatment group to a control group, assigning participants to groups at random so that every participant has an equal chance of selection into either group.

· Quasi-experimental designs use comparison groups to draw causal inferences but do not use randomization to create the treatment and control groups. The treatment group is usually predetermined. The control group is selected to match the treatment group as closely as possible so that inferences on the impacts of the policy can be made.

· Non-experimental/observational designs include, but are not limited to, time-series analysis, cross-sectional surveys, and case studies. Non-experimental designs can provide valuable information but do not include a comparison group and are not able to provide evidence of a causal link between a policy and any outcomes.

· Mixed methods evaluation is a design for collecting, analyzing, and mixing both quantitative and qualitative data in a single study or series of studies to understand an evaluation problem.
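To make the comparison-group logic of a quasi-experimental design concrete, the sketch below computes a simple difference-in-differences estimate. The jurisdictions, outcome measure, and numbers are hypothetical, and a real evaluation would use statistical models fit to many observations rather than two summary rates.

```python
# Hypothetical pre/post outcome rates (e.g., injuries per 10,000 residents)
# for a jurisdiction that adopted a policy and a matched comparison jurisdiction.
treatment = {"pre": 52.0, "post": 41.0}   # adopted the policy
comparison = {"pre": 50.0, "post": 47.0}  # did not adopt the policy

# Change within each group over the same period.
treatment_change = treatment["post"] - treatment["pre"]     # -11.0
comparison_change = comparison["post"] - comparison["pre"]  # -3.0

# Difference-in-differences: the change in the treatment group beyond
# what the comparison group suggests would have happened anyway.
did_estimate = treatment_change - comparison_change  # -8.0

print(f"Estimated policy effect: {did_estimate:.1f} per 10,000")
```

The comparison group's change (-3.0) stands in for what would have happened without the policy, so only the remaining -8.0 is attributed to the policy. That attribution rests on the assumption that the two groups would otherwise have trended alike, which is why matching the comparison group closely matters.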

Gathering Credible Evidence

Gathering credible evidence is important to the policy evaluation framework. Selecting the right types of measures to accurately capture outcome data is paramount. The CDC evaluation framework presents two types of measures: process measures, which assess whether activities/outputs have been developed correctly, and outcome measures, which assess the extent to which objectives are achieved. There are two primary data collection methods, qualitative and quantitative; when the two are combined, the approach is called mixed methods.

Qualitative data collection methods include structured interviews, semi-structured interviews, focus groups, case studies and narratives. 

Quantitative data collection methods include surveys, questionnaires, and tracking tools that gather numerical data. Qualitative data can be quantified, but it is initially collected as textual data. The types of available data sources can inform which data collection strategy is used in the policy evaluation process.
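As an illustration of how qualitative data can be quantified, the sketch below counts coded themes from hypothetical stakeholder interviews. The themes and responses are invented; in practice, coding follows a systematic codebook applied by trained analysts.

```python
from collections import Counter

# Hypothetical themes coded from stakeholder interview transcripts.
coded_responses = [
    "access_barriers", "cost", "access_barriers",
    "awareness", "cost", "access_barriers",
]

# Quantifying qualitative data: count how often each coded theme appears,
# turning textual responses into numerical frequencies.
theme_counts = Counter(coded_responses)
print(theme_counts.most_common())  # most frequent theme first
```

The frequencies can then be reported alongside quantitative measures, which is one simple way qualitative and quantitative strands meet in a mixed-methods evaluation.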

The CDC presents four types of data sources: surveillance data, administrative data, legislative or policy databases, and interviews with stakeholders and focus groups. Sources of data fall into two categories: primary and secondary data.

Primary data are collected firsthand by the researcher through qualitative and quantitative approaches.

Secondary data are received secondhand, such as literature reviews, clinical notes, and administrative records.

Justifying Conclusions

Justifying conclusions during the evaluation process includes, but is not limited to, the following actions:

· Assess external and internal contextual factors related to policy changes.

· Explain results in relation to the evaluation questions, policy goals, and logic models.

· Analyze and compare results to resolve inconsistencies among multiple data sources.

· Present data results to stakeholders in a way that is meaningful and understandable.

 

Ensuring Use of Findings and Sharing Lessons Learned

Once the rigors of the policy evaluation process are complete, the next step is to report and present the evaluation results to stakeholders, policy makers, colleagues, partners and the public.

To ensure that findings are reported and presented well, the CDC (2016c) recommends that the presenter adhere to the following four points:

· Know your audience.

· Identify objectives of communications.

· Consider the best frame for your message to meet the communication objectives.

· Consider the methods you will use to deliver your message.

Results must be presented in language that is free of technical jargon and easily comprehended by various stakeholder groups. The information presented must be unbiased and factual. In short, the information presented must not be influenced or manipulated to coincide with the policy maker's viewpoint. Social media can be leveraged to communicate the objectives and messaging related to the social policy, including evaluation data.

Evaluation Within and Across the Policy Process Domains

Lesson 4 of the CDC (2016d) uses the Step by Step — Evaluating Violence and Injury Prevention Policies briefs produced by CDC's National Center for Injury Prevention and Control to apply the evaluation framework to each of the policy process domains. This part of the CDC training has several real-world scenarios that demonstrate the application of the evaluation framework in each of the policy process domains. The Lessons 1 through 3 course content on defining purpose, identifying stakeholders, describing use, applying evaluation questions, and assessing methodology considerations is revisited. Please carefully review the Lessons 1 through 3 slides, readings, and resources to complete the exercises.

Common Challenges to Policy Evaluation

1. Fear of Evaluation and Lack of Familiarity with Policy Evaluation Methods

Conducting a policy evaluation for the first time is a daunting task that can heighten anxiety for the individual administering the evaluation methods. To alleviate these fears and the lack of familiarity, apply the CDC program evaluation phases and standards to ensure that your foundation for a quality evaluation is solid. That foundation includes accurately identifying the problem and policy goals, as well as a robust data collection strategy that includes quantitative and qualitative methods. Additionally, ensure that the stakeholders who are engaged are as knowledgeable as they are passionate about the subject matter.

2. Lack of Control over Policy Implementation

The policy evaluation process has many moving parts and people. Consequently, feeling that you have lost control of the evaluation process may be inevitable. According to the CDC (2016e), control of the evaluation implementation process is critical. The CDC purports that if the evaluator is not confident that he or she can maintain control of the evaluation implementation process, the evaluation is compromised and should not be administered. If done correctly, the evaluation implementation process will yield valuable information about the potential impact of the policy on the social issue and target population. Interpreting and predicting issues with the policy design allows the opportunity to make the necessary modifications to improve the policy in a timely manner.

3. External and Contextual Factors

Consideration of internal and external factors (especially those outside of our control) poses yet another challenge to the evaluation process. To mitigate this challenge, the CDC (2016e) advises that contextual factors be thoroughly researched and that an evaluation plan be developed to measure and address them. This evaluation plan should measure and link the effects of short-term and intermediate outcomes to long-term outcomes (CDC, 2016e).

4. Lack of Resources or Clear Responsibility for Evaluation

Having a clear understanding of what resources you have available to conduct the evaluation is very important. Per the CDC (2016e), if you do not have the required resources to conduct a "high-quality" evaluation, you should not move forward with it. Instead, you should partner with individuals or entities who have the resources, including the personnel, to share the responsibilities of conducting a comprehensive evaluation.

5. Conflicting Results

Sometimes the results of your evaluation will be contradictory, neither clearly supporting nor refuting the premise of the policy. This predicament is frustrating, and sometimes not easily resolved, because we must accept the data as they are. The data should not be manipulated to present a biased viewpoint in support of our theories. We must go where the data take us and present the data with honesty and integrity to stakeholders, politicians, legislators, advocates, and community members.

6. Occasional Rapid Pace of Policy; Desire for Quick Production of Results

Depending on the popularity, urgency, or scrutiny of a specific policy, there may be a demand for rapid production and execution of the policy. While a fast-paced approach may be tempting, and even ego-satisfying, the CDC cautions against administering a policy before everything is in place for the policy evaluation process. Evaluators must take the time to thoroughly review and apply each step in the policy evaluation process. Time must be taken to review the short-term outcomes to ensure that long-term outcomes are accomplished.

7. Lack of Strong Evidence Base, Access to Appropriate Data, and Appropriate Measures

There are some occasions where the available data are a limitation. The data may not be available for a variety of reasons, including the topic not being prominent in a specific field. For example, perhaps there is a lag in the available data for the field of social work. However, there may be similar or alternative fields that have an abundance of data. While the available data may not be exactly what the policy maker is looking for, they may provide some support for the fundamental concepts of the policy. Additionally, the alternative evidence may provide insight that was not known, anticipated, or expected.

8. Lag in Availability of Data

In some cases, your policy will be novel, and there will be no research evidence to support your policy design. In the absence of available data, you will need to prepare an evaluation plan rich with data collection strategies, which should include partnering with stakeholders and agencies that can help you collect data to evaluate the parameters of the policy under review.

9. Challenges in Finding an Equivalent Comparison Group

Any good research design, including a policy evaluation design, must include comparable data. After all, having policy that benefits the greatest number of people is the goal, whether in public health or social services. So, having comparable groups and communities that apply the same program design to test the effectiveness of a policy is essential. The CDC (2016e) recommends using a quasi-experimental design to show significant differences between two or more groups.