Monday, April 25, 2011

Assignment #5

This assignment consisted of completing a pilot survey, analyzing the results, and then incorporating the feedback to create an updated, more useful version of that survey. I have chosen to complete this process for the student survey of my program evaluation.


Below is a link to the pilot survey that was originally released:

Trial #1 - http://www.surveymonkey.com/s/BSPQ5SH



Below is an analysis of the released pilot survey:

Pilot Analysis of Student Survey

Sample Size:

The initial survey (mini pilot) was conducted with six students.

o   These students were chosen by their teachers to provide an approximate cross-section of general ability and grade level.

o   Two students each were chosen from grades one, three, and five.

Changes:

-          The most notable finding from the trial was that grades 1-6 represented too broad an ability range for a single standardized survey.

o   I was aware that the grade one students would have a reading/writing deficit, so I asked an EA to sit with the two grade 1 students and assist them (in as neutral a way as possible). I observed from the back of the class as they worked at a computer pod in the classroom. I could quickly see that they were at a loss for the majority of the survey. I also debriefed with the EA afterwards, and she highlighted this as well.

o   Trial #1 had already received many edits before delivery in an attempt to simplify the language, but this was not enough.

§  Ex. “Are you a boy or a girl” vs. “What is your gender”

§  Ex. “out-loud reader” vs. “oral reader”

o   I did not want to over-simplify trial #2 and miss valuable information that the older grades could provide through deeper and more open-ended questions.

o   The final solution to this issue was to create two surveys: a simplified version for grades 1-3 and an age-appropriate survey for grades 4-6.



-          I went a little further at this point and discussed the trial #1 surveys with a few of the teachers and an EA, who were able to provide me with additional feedback.

o   None of the grade one or three students were able to answer the question about what level their books were at. I initially took this to mean that only these students didn’t know, and I was going to leave the question in. After discussion with their teacher (grade 1/2 homeroom) and an EA, I found out that none of the students would be able to answer that question, as it is not the practice in that grade to inform students of their levels, and the students never ask.

o   I therefore removed the question on reading levels from the grades 1-3 survey as well as the two logic questions that followed.



-          Trial #1 appeared to be very time consuming for the younger grades, both for the EA, who was bombarded with questions, and for the frustrated students.

o   It was clear from the pilot surveys that the length of student responses got shorter and shorter as the survey went on.

o   A few of the non-critical questions (those not directly aligned with the program evaluation purpose) were removed from the trial #2, grades 1-3 survey.

o   Many questions also arose from students about vocabulary and even about how to answer questions (“how do I tell my answer”).

o   More instructions about “how” to answer questions were included in trial #2 (for both the 1-3 and 4-6 surveys)

o   Pictures were added to the 1-3 survey in order to assist with the reading difficulties

§  Ex. Used a range of ‘smiley faces’ instead of only language like ‘strongly disagree’



-          Technology appeared to be a problem for the younger grades, but the on-line survey appeared to lead to further engagement in the older students. I originally tried to keep both surveys on-line in order to simplify data collection and analysis, but during survey completion I discovered this was simply not worth the trouble.

o   Paper copies were printed for the grade 1-3 surveys, while Survey Monkey will still be utilized for the grade 4-6 group

o   This will also allow for more appropriate spacing of questions so as not to overwhelm the younger students



-          During the original construction of the rating scale question, only ‘positive’ or what I felt might be ‘leading’ language was used. This was one of the edits completed prior to the release of trial #1: the original draft also used ‘negative’ wording, but this was abandoned because I thought it would become confusing for the younger students.

§  Ex. “I like reading groups” vs. “I would rather not attend reading groups”

o   Fewer leading statements will be utilized in the rating scale question for the grade 4-6 survey; a balance of positive and negative language is now possible.



-          A few of the short answer/follow-up questions were left blank, and I feel that valuable information is being lost.

o   When students answer the multiple choice questions, some of the written follow-up questions are now ‘must answer’ questions.

o   It is this reasoning from the students that will allow us to answer the program evaluation questions

o   The open-ended questions at the end will remain as they are



-          Trial #1 did not include an introduction with the electronic survey. During the design process, I believe I was simply assuming that I would be there to give an oral introduction and some of the directions. I now realize that when mass distribution is required, I cannot be sitting beside each student in order to explain.

o   An introduction was added to the electronic version of the survey (grades 4-6).



-          After spending some more time working on the revisions, I discovered a language issue. This was not brought to my attention by the student pilot surveys or staff discussions, but I felt that it was important to address. One of the drawbacks of a leveled reading program is the potential for students to develop a negative image of their abilities.

o   The language was changed to ask about the ‘letter’ of books the student is reading and not what ‘level’ they read at.



-          The younger students did not have the writing skills to fully express their thinking on the final two questions of the survey. They had more information to provide but no appropriate avenue to express it, so their comments were very brief and surface-level. I thought that with a little probing we could uncover more meaningful data.

o   The final two survey questions will instead be employed in more of an interview style: an EA will scribe answers and has been given some guidance in how to probe for further/deeper thinking from the students. I think this is a viable solution considering the small class sizes.


Below is a link to the updated/new grade 4-6 survey:

Trial #2 - http://www.surveymonkey.com/s/BSN9TMR


Below is the updated/new grade 1-3 survey:
(Note - The images had to be scanned in as pictures and are slightly distorted...if you would like to see the originals please let me know and I'll e-mail them to you)

I found the pilot survey to be invaluable. I feel confident that trial #2 will provide me with much stronger data. I would definitely employ this process in preparing for future program evaluations. I only wish I had the time to employ this process with each of the surveys I plan on conducting. I am positive that all of the surveys will continue to evolve if the evaluation turns into a yearly process.

Thursday, March 17, 2011

Assignment #4 - Logic Model

Program:  Guided Reading Groups                                                Logic Model

Situation:  Borden School



Inputs:

·         Teachers
·         EA’s
·         Volunteers
·         Administration
·         Time
·         Money
·         Materials
·         Training

Outputs -- Activities:

·         Teachers attend professional development opportunities on structured reading groups
·         Training of parent volunteers by teachers
·         Utilize Fountas and Pinnell pre-test data to determine levels / groupings
·         Implementation consists of 30 minutes every second day for guided reading groups
·         Reading materials from: Reading A-Z.com
·         Group leaders employ the 7 comprehension strategies
·         Re-reads are employed to work on fluency (most sections read 3 times)
·         Students are assessed and re-grouped 3 times / year
·         Utilize Fountas and Pinnell post-test data to determine concluding level

Outputs -- Participation:

·         Grades 1 – 6
·         ~ 40 students
·         Considering the addition of Kindergarten
·         Considering the addition of local home schooled students

Outcomes -- Short term:

·         ↑ reading comprehension
·         ↑ reading fluency
·         ↑ content knowledge
·         ↑ vocabulary

Outcomes -- Medium term:

·         ↓ achievement gap
·         ↓ classroom management issues
·         ↑ confidence
·         ↑ academic success in all content areas
·         ↑ contributions to the school learning community
·         ↑ student engagement
·         ↑ number of independent readers

Outcomes -- Long term:

·         ↑ partnerships with the school and community
·         ↑ joy of reading and a willingness to pass it on
·         ↑ career opportunities
·         ↑ post-secondary opportunities
·         ↑ societal contributions
·         ↑ volunteerism within the community




Assumptions:
-       Leaders are making similar instructional choices
-       Additional resource materials are being accessed equally

External Factors:
-       Increases / decreases due to home support and additional parental interventions
-       Readiness for school when entering Kindergarten


Logic Model Explanation
Inputs:
Borden School invests a great deal of time and energy into its grades 1-6 Guided Reading Program. Our Special Education Teacher oversees the program and spends a large percentage of her allotment facilitating it. All of our K-6 teachers contribute to the program as group leaders, as do all three of the Educational Associates (EA’s) in the school. The school also has four parent volunteers who lead groups. This allows us to offer smaller groups with increased variety in the scaffolded levels. Classroom teachers also contribute 30 curricular minutes out of their daily schedules every second day. Budget money is allocated towards purchasing the license to download and print materials from Reading A-Z.com, which is how the majority of the reading resources are procured. Money has also been provided for teachers and EA’s to attend workshops and purchase additional professional development materials related to reading groups and the use of comprehension strategies.

Outputs:
a)    Activities
The program delivery consists of 75 minutes/week of like-ability groupings for reading instruction. At this time, the program focuses on both reading fluency and comprehension. It begins with an initial testing of student ability using the Fountas-Pinnell (F-P) standardized assessment tool early in September, administered by a trained Special Education Teacher. The students are then placed in like-ability groupings and assigned a leader, who could be a teacher, EA, parent, older student, or an administrator. Each leader is provided with a basket containing 3-5 books (students are given choices) and a leader’s logbook for recording progress. The program works on fluency by having students re-read passages multiple times; the majority of groups will go over the same material three times. The basket also contains a set of comprehension questions and possible extension activities that can be used to work on deeper understanding. Leaders are encouraged to employ at least one of the seven comprehension strategies per session. Leader training is an ongoing process throughout the program. Students are re-tested about three times a year to re-assign levels and shuffle leaders. A final assessment using the F-P tool occurs in June.
b)    Participants
The participants currently include all grade 1-6 students in our school (~40).

Outcomes / Impacts:

The outcomes have been subdivided into three areas that are consistent with the child’s development and progression through school and into adult life. The short-term goals all focus on targeted and measurable academic components; we would expect to see some immediate growth in each of these categories within the first weeks if the program is successful. The medium-range goals will take some time to achieve and are not as easily measured. These are often the result of continually meeting short-term goals, but it should be clear that a true cause-effect relationship cannot be assumed for the medium- or long-term goals. The long-term goals represent the wider impact of student successes within their community.

Gantt Chart


Sorry, I was having trouble posting this. I was going to try to post it as a pdf but could not figure that out, so I tried to save it as a picture in order to post it, but the quality dropped substantially. If you would like to have a closer look at it, please let me know and I can e-mail out the pdf version.

Assignment #3

Program Assessment:
1-6 Guided Reading Groups
As I am part of this organization, it will be important for me to remain as impartial as possible and work hard not to make any assumptions during the planning process. In order to achieve this, I will engage in several informal conversations with different stakeholder groups during the initial planning phase. This program is well suited to an evaluation: our staff place a great deal of time and energy into the guided reading groups but feel unsure about the impact on student achievement. This is the third year of the reading program’s implementation, and it has not been evaluated previously. Another reason for performing this evaluation is that recent research does not back leveled guided reading groups as strongly as it did a few years ago.
The following is a list of key questions (identified by red font) used to reflect and gather information on the guided reading program as the evaluation plan unfolds. It is broken down into several sub-topics as denoted by the bold font.
Engage Stakeholders:
Who should be involved?
-          Administration, Teachers, EA’s, Students, Parents, School Community Council (SCC)
How might they be engaged? What are their possible areas of contribution?
-          Interviews, surveys/questionnaires, group dialogue sessions, journals, sharing of anecdotal data
o   Administration – internal reflection on program goals, resource allocation (personnel and budget), level of implementation, etc.
o   Teachers – perspectives on program strengths and weaknesses, suggestions for changes, instructional strategies used within groups, long-term (>2 years) achievement results, thoughts on reading groups vs. daily ELA class time, level of implementation, other testing data, etc.
o   Students – personal achievement perspective, time commitment compared to other daily tasks, their engagement in the learning opportunity, etc.
o   Parents – level of knowledge of the program, perspective on students’ growth and achievement, etc.
o   SCC – perspectives and level of knowledge on the program, etc.
Focus the Evaluation:
What are you going to evaluate? (See attached logic model for clarification of program delivery)
-           Is the guided reading program an effective instructional method for increasing students’ reading fluency and comprehension?
-          Program Goal:
o   Students will continually increase their reading fluency and comprehension levels at an acceptable rate. There are three sub-goals depending on their starting level:
§  Those reading above grade level must maintain (at minimum) their current above grade level
·         Ex) A grade two student reading at a grade 4 level must still be at a grade 4 level or higher by the end of the year.
§  Those reading at grade level must maintain (at minimum) their age appropriate increase
·         Ex) A grade two student reading at grade 2 level needs to be at grade 3 level by the end of the year.
§  Those reading below grade level must minimize the achievement gap from the start of the year to the end
·         Ex) A grade two student reading at a grade 1.3 level (0.7 behind) at the beginning needs to be at a 2.4 level (0.6 behind) by the end of the year, when they will enter grade 3, keeping in mind that any growth is a success!
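Taken together, the three sub-goals amount to a simple arithmetic check on each student's start- and end-of-year reading levels. A minimal sketch of that check (the function name and the decimal grade-equivalent representation are my own assumptions, not part of the program's actual tooling):

```python
def goal_met(grade, start_level, end_level):
    """Check the three-tiered program goal for one student.

    Levels are grade-equivalent decimals (e.g. 1.3), with a full
    school year assumed between the two measurements.
    """
    if start_level > grade:
        # Above grade level: must at least maintain the current standing.
        return end_level >= start_level
    elif start_level == grade:
        # At grade level: must keep pace with one full year of growth.
        return end_level >= grade + 1
    else:
        # Below grade level: the gap (measured against the upcoming
        # grade) must be smaller at year end than it was at the start.
        return (grade + 1) - end_level < grade - start_level

# The grade-two examples from the list above:
goal_met(2, 4.0, 4.0)   # above grade level, maintained
goal_met(2, 2.0, 3.0)   # at grade level, one year's growth
goal_met(2, 1.3, 2.4)   # gap shrinks from 0.7 to 0.6
```

Under this reading of the goals, all three example students meet their tier's target; a student whose check fails would still fall under the "any growth is a success" caveat noted above.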
What is the purpose of the evaluation?
-          Address the accountability of our school goals around fluency and comprehension
-          Improve the teaching and learning occurring within the program (and the regular classroom setting)
-          Clarify the program goals and progress for all stakeholders
-          Inform future decisions about the program:
o   What grade levels should be included (i.e. drop the 5/6 class out and add the K’s)?
o   Is the time allotment appropriate?
o   Do leaders require more training in order to be effective?
o   Should additional money be invested into the materials (readers) used?
o   Is the primary assessment tool for student achievement accurate enough?
What questions will the evaluation seek to answer?
-          Does student engagement correlate with achievement?
-          Should the program be continued, altered, or disbanded for the 2011/2012 school year?
-          Have students been progressing at an acceptable rate (as noted by the three-tiered program goals)?
Who will use the evaluation? How will they use it?
-          Administration
o   To share results and future plans with superintendent at end of year goals conversation
o   To dialogue with parents and the SCC about school/program goals
o   To inform decisions about staffing/time-tabling/programming and budget allocations for next year
o   To facilitate group leaders in reflective dialogue about the program (Professional Development)
-          Teachers / EA’s
o   To engage with administration and fellow staff about the program (Professional Development)
o   To inform future instructional practice
o   To increase confidence in the program or magnify the need/necessity for changes
What information do you need to answer the questions?
-          Student Achievement data (pre and post)
-          Student and Staff perceptions on levels of learning engagement
-          Knowledge of group leader practices (instructional decisions) made during their guided reading group time
o   What comprehension / fluency strategies are being consistently employed
When is the evaluation needed by?
-          June 2011 in order to allow for post evaluation dialogue and decision making time prior to the beginning of the 2011/2012 school year.
What evaluation design will you use?
-          The evaluation will be summative in nature
o   If we decide to implement the program in the 2011/2012 school year the current evaluation will become formative as we would conduct another evaluation in June of 2012.
-          The design will include a combination of the Provus (Discrepancy) and Stake (Countenance) models
Collect the Information:
What sources of information will you use?
-          Existing information:
o   Pre and Post-test data from 2008-2009 and 2009-2010 school years
o   Pre-test data from current 2010-2011 school year
o   Potentially may use data from Prairie View Elementary that is running a similar program
-          People:
o   Administration, teachers/group leaders, students, parents and SCC
§  Same as above but potentially from Prairie View Elementary
-          Pictorial records and observations:
o   Sample videos of reading groups
o   Administrative observations (with focus on instructional decisions)
o   Program manager observations (with focus on student responses to learning situation)
What data collection methods will you use?
-          Surveys
-          Interviews
-          Observations
-          Pre/post achievement tests
-          Videos
-          Journals
§  Due to time constraints (the program was nearing completion when the choice was made to evaluate), these collection methods will only provide a snapshot view. It is recommended that next year the collection methods be enacted throughout the program to provide a longitudinal perspective.

What is needed to record the information?
-          Instrumentation:
o    Surveys
§  Parent
§  Student
§  Group Leader
o   Rubrics for both students and group leaders to record levels of engagement
§  Include spot for anecdotal evidence as well
o   Interview / group dialogue questions and data response collection template
o   Data collection sheet for group observations
§  Instructional focus
§  Student focus
o   Video camera / computer for playback
o   Journal (includes possible reflective questions)
§  This could be utilized by both students and teachers
·         Focus would be on the type of strategy utilized
o   Reading Assessment for pre/post (Fountas and Pinnell)
When will you collect data for each method chosen?
o   Surveys
§  Near end  (April-May) / After (June)
o   Interview questions and response template
§  Near end  (April-May) / After (June)
o   Data collection sheet for group observations
§  Near end (April-May)
o   Video camera
§  Near end (April-May)
o   Journal
§  Near end  (April-May) / After (June)
o   Reading Assessment
§  Immediately after (June)
Will a sample be used?
-          The population is quite small, so the majority of collection instruments will be administered to the entire population
o   Exceptions being: interviews (only an ability cross-section per grade) and video recordings, due to time constraints; however, each group will either be observed or recorded.
Pilot Testing: when, where, how?
-          No pilot will be conducted at this time; however, this evaluation and its components could become the pilot for evaluations done in other schools that are running a similar program

Analyze and Interpret:
How will the data be analyzed?
-          Data analysis methods:
o   Surveys
§  Compiled and assessed for trends in particular fields
o   Interviews
§  Compiled and assessed for trends in particular fields
o   Video samples
§  Several groups will be filmed and reviewed by evaluators looking for similarities and differences in implementation (instructional decisions). The students in these groups will then be cross-referenced to their assessment scores (note: leader anonymity will be maintained in the report)
·         Responsible: Information processing class to film and administration and special education teacher to review
o   Group observations
§  This will be utilized in the same manner as the video reviews
·         Responsible: Administration / Program manager
o   Journals
§  Read by evaluators looking for commonalities and memorable quotes/thoughts (seeking common themes)
·         Responsible: Administration
o   Reading Assessment
§  Calculate level of increase from the Fountas and Pinnell pre/post assessments
·         a) look at average for whole group (1-6)
·         b) look at average for individual grades
·         c) look at individual students
o   Responsible: Special Education teacher / Program manager
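The three levels of averaging described above (whole group, individual grades, individual students) reduce to one grouping pass over the pre/post scores. A sketch with hypothetical records (the names, grades, and levels below are invented for illustration, not real assessment data):

```python
# Hypothetical records: (student, grade, F-P pre level, F-P post level),
# with levels expressed as grade-equivalent decimals.
records = [
    ("A", 1, 1.0, 1.9),
    ("B", 1, 0.8, 1.6),
    ("C", 3, 2.5, 3.2),
    ("D", 3, 3.0, 4.1),
]

# c) individual students: level of increase per student
increases = {s: round(post - pre, 2) for s, g, pre, post in records}

# b) average increase for each individual grade
by_grade = {}
for s, g, pre, post in records:
    by_grade.setdefault(g, []).append(post - pre)
grade_avgs = {g: round(sum(v) / len(v), 2) for g, v in by_grade.items()}

# a) average increase for the whole group
overall = sum(post - pre for s, g, pre, post in records) / len(records)
```

With these invented numbers, student A gains 0.9 of a level, grade 1 averages 0.85, and the whole group averages roughly 0.875 levels of growth.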
How will the information be interpreted – by whom?
-            Principal and Project manager will be responsible for making inferences or interpretations of the data. However, staff will be given time to engage in dialogue and identify their own perceptions during the report debrief (teachers will have access to raw data).
What are the limitations?
-          Could be a large amount of information to process for only two people (others may need to be brought in)
-          There is not time for a pilot test of the collection instruments
-          Potentially small sample size
-          May have trouble identifying cause-effect relationships due to the large amount of variables present (which can be difficult to isolate)
-          Due to the late start of this evaluation, all data is being recorded right at the end of the program, and therefore will only represent a snapshot as opposed to a continuous view of the program
Use the Information:
How will the evaluation be communicated and shared?
-          Administration and program manager (special education teacher) will present the findings at a staff meeting
o   Raw data and full report will be shared with staff (teachers and EA’s will have full transparency)
o   Summary will be presented
§  Open dialogue time with entire staff
o   Evaluators’ recommendations will then be shared with staff
o   When: First Monday in June (afterschool meeting)
-          Partial information will be released through the principal’s monthly message in the newsletter
o   When: Distributed first Friday in June
-          Partial information will be released through presentation to SCC
o   When: First Monday in June (evening SCC meeting)
-          Partial information will be released to students through classroom visits by the administration to let them know about plans for next year.
Next Steps?
-          Seek informal feedback from stakeholders after release of findings
-          Make final decisions on implementation of program for the 2011/2012 school year
Manage the Evaluation:
-          Human subjects protection
o   Students have already signed media release forms to allow for video recording in the school
o   Teachers will have option to have their group personally observed by evaluator, fellow teacher or recorded
o   Individual student achievement results will not be reported outside of the administration and classroom teachers
-          Gantt Chart
o   See appended document
-          Responsibilities
o   Principal – Lead Evaluator
§  Design Evaluation (with support from various stakeholders)
§  Create collection instruments
§  Analyze data
o   Special Ed. Teacher – Project manager
§  Pre/Post assessments using Fountas and Pinnell
§  Analyze data
o   Teachers/Volunteer Group Leaders
§  Implement the program
§  Participate in surveys, interviews and journaling
o   Students
§  Participate to their best ability in program
§  Participate in surveys, interviews and journaling
o   Other stakeholders
§  Participate in surveys, interviews and journaling

-          Budget
o   Use of 4 substitute days
o   Cost of photocopying
Standards:
-          Utility
o   The evaluation will provide valuable information for the administration and program manager when planning for the 2011-2012 school year. This program is an important component to the school culture and learning environment.
-          Feasibility
o   Due to the small number of active participants and group leaders, the evaluation process should not prove to be too cumbersome to complete. The lead evaluator has the ability to access substitute teacher days should the analysis begin to fall behind.
-          Propriety
o   This school holds relationships between all stakeholders with high regard. The design should allow for accurate collection and analysis of data within a safe environment. As the lead evaluator is familiar with this learning community, they have made choices to limit any ill feelings from the participants or outside stakeholders. The majority of stakeholders were engaged during the planning portion (participatory or empowerment approach to the evaluation).
-          Accuracy
o   Project manager (Special Ed. Teacher) will personally perform all pre/post tests to ensure consistent and accurate usage of the assessment tool
§  This individual has formal training in this area