Monday, January 24, 2011

Assignment #1

Program Overview:
4 For Lunch is a four-week, curriculum-matched program culminating in a challenge week in which students bring healthy lunches containing all four food groups. As part of the program, teachers received lessons meeting the outcomes of the Ontario Health and Physical Education Curriculum, and parents were provided with nutrition resources aligned with the program. The main goal was to encourage children and their families to pack healthy lunches. The program's theory is that by enhancing students' self-efficacy through the classroom-based intervention, that confidence will transfer to other nutrition behaviors, leaving students with the skills, knowledge, and self-efficacy needed to make lasting behavior changes.
Evaluation Process:
The first task of the evaluators was to understand the purpose of the program. This knowledge was then used to generate four research questions that guided the rest of the evaluation. The evaluators also conducted a literature review to identify the traits of successful public health interventions in nutrition. The evaluation was summative in nature, as it analyzed only the current program's results in an isolated one-month snapshot. It could also be perceived as formative, because the recommendations will likely play a key role in 'tweaking' the program for future implementations. The evaluation was experimental in design, comparing data from implementing and control groups to determine whether the program was meeting its goals. Participants for the study were recruited through flyers and letters sent to all elementary schools in the region. Proper methodological and ethical considerations and permissions were undertaken and approved by the school boards prior to commencing the study. To determine the success of the 4 For Lunch program, numeric data were collected via previously validated pencil-and-paper pre- and post-test measurement tools: teaching self-efficacy, student nutrition knowledge, and 24-hour student food recall questionnaires. The findings were shared in a formal report that included a thorough discussion of results while maintaining a focus on the four guiding questions. It also identified limitations of the evaluation and provided key recommendations for future implementations of the program. The evaluation had a similar feel to Stake's Countenance model, as it clearly used descriptive data to search for congruence between what was intended and what was actually observed.
Strength / Weaknesses of Evaluation:
I thought the evaluation did a thorough job of uncovering and highlighting the theory behind the program's implementation. I also felt the literature review provided an academic component that gave the reader immediate confidence in the findings. Reader confidence was further bolstered by the formal, well-organized appearance of the final evaluation report. The evaluation effectively used tables and charts to convey the findings without getting lost in the technical aspects of the statistical analysis.
The discussion worked very hard to justify the lack of positive numeric results shown by the collected data. It quickly became clear that proponents of the program had commissioned the evaluation. Realizing that this was likely the reason for completing the evaluation, my criticism lies with the overt lack of neutrality, not the fact that bias entered the discussion portion of the evaluation (the hidden curriculum of program evaluation was clearly evident). I believe the evaluation should have spent more time pointing out that the major breakdowns occurred in the flawed implementation (lack of training and consistency). The limitations section of the report highlighted the majority of the concerns I was going to raise, which was excellent, but I felt the evaluators should have taken this one step further and identified how those limitations may have affected the results.
Overall, I would say that this program evaluation was well orchestrated; the difficulties lay not in the evaluation strategy but in the fact that it was trying to cover a flawed program.

2 comments:

  1. Do you think it was the program itself that was flawed or the design of the study that measured the effectiveness of the program? I admit I haven't read through it enough to make such a judgement but wanted to get your sense. I agree with you that the evaluation is trying to portray the modest and null results in the best possible light.

    Good job - I particularly enjoyed the study because it falls in my Nutrition area. Again, it shows how changes in behavior relative to health decisions are hard to bring about, sustain, and measure! A challenge for program evaluation for sure!

  2. Dean

    Good overview of the program you chose. A great deal of effort went into the preparation for the evaluation. What is missing is any recommendation, which is the purpose of conducting a PE. As you point out, this is likely due to the reason the evaluation was commissioned in the first place. The report can be both summative and formative, but without any clear call for change it is unclear. 'Tweaking' is hardly considered a recommendation. Did one of the theorists or models serve as a guide or foundation for this evaluation?

    Jay
