Below is a link to the pilot survey that was originally released:
Trial #1 - http://www.surveymonkey.com/s/BSPQ5SH
Below is an analysis of the released pilot survey:
Pilot Analysis of Student Survey
Sample Size:
The initial survey (mini pilot) was conducted with six students.
o These students were chosen by their teachers to provide an approximate cross-section of general ability and grade level.
o Two students each were chosen from grades one, three, and five.
Changes:
- The most notable finding from the trial was that grades 1-6 covered too broad an ability range for a single standardized survey.
o I was aware that the grade one students would have a reading/writing deficit, so I asked an EA to sit with and assist the two grade 1 students (in as neutral a way as possible). I observed from the back of the class as they worked at a computer pod in the classroom. I could quickly see that they were at a loss for the majority of the survey. I debriefed with the EA afterwards and she highlighted this as well.
o The trial #1 survey had received many edits before delivery in an attempt to simplify the language, but this was just not enough.
§ Ex. “Are you a boy or a girl?” vs. “What is your gender?”
§ Ex. “out-loud reader” vs. “oral reader”
o I did not want to over-simplify trial #2 and miss the valuable information that the older grades could provide through deeper, more open-ended questions.
o The final solution was to create two surveys: a simplified version for grades 1-3 and a separate, age-appropriate survey for grades 4-6.
- I went a little further at this point and discussed the trial #1 surveys with a few of the teachers and an EA. They were able to provide me with feedback as well.
o None of the grade one or three students were able to answer the question about what level their books were at. I initially took this to mean that only these students didn’t know, and was going to leave the question in. After discussion with their teacher (grade 1/2 homeroom) and an EA, I learned that none of the students would be able to answer that question, as it is not the practice in that grade to tell students their levels, and the students never ask.
o I therefore removed the question on reading levels from the grades 1-3 survey as well as the two logic questions that followed.
- Trial #1 proved very time consuming in the younger grades, both for the EA, who was bombarded with questions, and for the frustrated students.
o It was clear from the pilot surveys that the length of student responses got shorter and shorter as the survey went on.
o A few of the non-critical questions (those not directly aligned with the program evaluation purpose) were removed from the trial #2 grades 1-3 survey.
o Many student questions also concerned vocabulary and even how to answer (“How do I tell my answer?”).
o More instructions on “how” to answer questions were included in trial #2 (for both the 1-3 and 4-6 surveys).
o Pictures were added to the 1-3 survey to assist with the reading difficulties.
§ Ex. Used a range of ‘smiley faces’ instead of language like ‘strongly disagree’
- Technology appeared to be a problem for the younger grades, but the on-line survey appeared to lead to greater engagement among the older students. I originally tried to keep both surveys on-line in order to simplify data collection and analysis, but discovered during survey completion that this was simply not worth the trouble.
o Paper copies were printed for the grade 1-3 surveys, while Survey Monkey will still be utilized for the grade 4-6 group.
o Paper will also allow for more appropriate spacing of questions, so as not to overwhelm the younger students.
- During the original construction of the rating scale question, only ‘positive’ or what I felt might be ‘leading’ language was used. This was one of the edits completed prior to the release of trial #1: the original draft also used ‘negative’ wording, but this was abandoned because I thought it would become confusing for the younger students.
o Ex. “I like reading groups” vs. “I would rather not attend reading groups”
o Fewer leading items will be used in the rating scale question for the grade 4-6 survey; a balance of positive and negative wording is now possible.
- A few of the short answer/follow-up questions were left blank, and I feel that valuable information is being lost as a result.
o When students answer the multiple choice questions, some of the written follow-up questions are now required (‘must answer’) questions.
o It is this reasoning from the students that will allow us to answer the program evaluation questions.
o The open-ended questions at the end will remain as they are
- Trial #1 did not include an introduction with the electronic survey. During the design process I assumed I would be there to give an oral introduction and some of the directions. I now realize that when mass distribution is required, I cannot sit beside each student to explain.
o An introduction was added to the electronic version of the survey (grades 4-6).
- After spending more time on the revisions, I discovered a language issue. It was not raised by the student pilot surveys or staff discussions, but I felt it was important to address: one of the drawbacks of a leveled reading program is the potential for students to develop a negative image of their abilities.
o The language was changed to ask about the ‘letter’ of books the student is reading rather than what ‘level’ they read at.
- The younger students did not have the writing skills to fully express their thinking on the final two questions of the survey. They had more information to provide but no appropriate avenue to express it, and their comments were very brief (surface-level thinking). I thought that with a little probing we could uncover more meaningful data.
o The final two survey questions will instead be administered in more of an interview style. An EA will scribe the answers and has been given guidance on how to probe for further/deeper thinking from the students. I think this is a viable solution given the small class sizes.
Trial #2 - http://www.surveymonkey.com/s/BSN9TMR
Below is the updated/new grade 1-3 survey:
(Note - The images had to be scanned in as pictures and are slightly distorted...if you would like to see the originals please let me know and I'll e-mail them to you)
I found the pilot survey to be invaluable and feel confident that trial #2 will provide much stronger data. I would definitely employ this process in preparing for future program evaluations; I only wish I had the time to employ it with each of the surveys I plan to conduct. I am positive that all of the surveys will continue to evolve if the evaluation becomes a yearly process.