Study of State Assessment Results and Instructional Approach

Whether or not you like the PARCC assessment, it is used to evaluate students, teachers, schools, and school districts. It also provides a consistent, independent measure for examining changes in student performance. New Mexico uses the PARCC assessment, and a couple of months ago I analyzed the 2017 state assessment results for third-grade students from four districts: two small districts, one medium-sized district, and one large district.

Spoiler Alert! (In case you don’t want to read to the results section below.)

In classrooms where teachers had training and practice with the instructional approach found in Roadmap for Reading Instruction, students did much better on their state assessments than students in other classrooms.

What I Wanted to Know / The Research Question

The Three Rivers Education Foundation hires classroom teachers to serve as reading tutors through an after-school tutoring program (which I designed and direct). The teachers receive some training and monitoring in the instructional approach described in Roadmap for Reading Instruction, and they get the chance to practice the approach in a small-group environment.

We don’t actually have anything to do with their classroom instruction, but I figured, “Hey, if they are learning these skills, is it having any effect in their classrooms, too? And if so, can we measure that impact?”

With these research questions in mind, I reached out to administrators in partnering districts to access their 2017 state assessment results.

Research Method

From the assessment data, I established two populations for study.

Control population: Non-tutored students in classrooms in which the teacher did not serve as a tutor during that school year and did not receive training and support in the instructional approach.

Intervention population: Non-tutored students in classrooms in which the teacher served as a tutor in both the fall and spring semesters during that school year and received training and support in the instructional approach. (The “intervention” was having a teacher who had received training and support in the instructional approach.)

The PARCC assessment provides an overall English Language Arts placement on a scale of 1–5. The difference in average placement between the two study populations was calculated for each district and overall.

Data for students who received tutoring during the 2015–2016 school year were removed from the data set so that results would reflect only the teachers’ effect and not the effect of tutoring. Once scores for tutored students were removed, I used the districts’ classroom rosters and our lists of tutors to separate students into the two populations described above. 

I also removed scores for classrooms whose teachers tutored for only half the school year. However, I did not remove any classroom's scores based on the teacher's performance as a tutor: I used those scores whether or not the teacher had good results in tutoring and whether or not the teacher fully implemented the approach when tutoring. (For example, suppose a teacher never addressed phonics or oral language and the students in the tutoring group made little or no progress. Maybe that teacher tried to use flash cards for fluency and relied on Accelerated Reader to address comprehension, demonstrating no real understanding of the instructional approach. Classroom assessment results for that teacher were still included in the intervention population.)

Each district had a different range of scores for tutors’ and non-tutors’ classrooms, so as the last step, I weighted each district’s difference by its study population size to determine the overall difference between the two populations.
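
For readers who want to see the mechanics, here is a minimal sketch, in Python with pandas, of the kind of filtering and population split described above. The file names and column names (student_id, district, teacher, level) are hypothetical; the actual analysis used district rosters, our tutoring records, and the PARCC results provided by the districts.

```python
import pandas as pd

# Hypothetical inputs: one row per third-grade student with a PARCC ELA
# performance level (1-5), plus the student's district and teacher.
scores = pd.read_csv("parcc_2017_grade3.csv")   # student_id, district, teacher, level
tutored = set(pd.read_csv("tutored_students.csv")["student_id"])
full_year_tutors = set(pd.read_csv("full_year_tutors.csv")["teacher"])
half_year_tutors = set(pd.read_csv("half_year_tutors.csv")["teacher"])

# Remove students who themselves received tutoring, so the comparison
# reflects the teacher effect rather than the tutoring effect.
scores = scores[~scores["student_id"].isin(tutored)]

# Remove classrooms whose teachers tutored for only half the year.
scores = scores[~scores["teacher"].isin(half_year_tutors)]

# Label the two study populations: intervention = teacher tutored all year.
scores["population"] = scores["teacher"].isin(full_year_tutors).map(
    {True: "intervention", False: "control"}
)

# Average performance level per district and population, and the difference.
means = scores.groupby(["district", "population"])["level"].mean().unstack()
means["difference"] = means["intervention"] - means["control"]
print(means.round(2))
```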

Results

In each district, the difference in average results for the two populations indicates that students in tutors’ classrooms outperformed their peers in non-tutors’ classrooms. This means that teachers who had some training and experience with the Roadmap for Reading Instruction approach were better teachers: their students did better on the state assessment!

| District | Students (Intervention / Control) | Average Level, Intervention | Average Level, Control | Performance Level Difference |
|---|---|---|---|---|
| A | 37 / 37 | 2.57 | 1.92 | +0.65 |
| B | 38 / 51 | 3.39 | 2.73 | +0.67 |
| C | 113 / 81 | 2.34 | 1.83 | +0.51 |
| D | 269 / 881 | 2.88 | 2.54 | +0.34 |

The following graph shows the average PARCC levels for the intervention and control populations in each district.

[Graph: Third-grade results per district]

To account for the varying performance ranges among districts and determine an overall difference in performance levels, each district’s difference between the two populations was weighted by the size of its study population.

| District | Students | Difference | Weighted Difference |
|---|---|---|---|
| A | 74 | 0.65 | 48.1 |
| B | 89 | 0.67 | 59.6 |
| C | 194 | 0.51 | 98.9 |
| D | 1,150 | 0.34 | 391 |
| Total | 1,507 | | 597.6 |

The total weighted difference divided by the total number of students gives the overall difference between the two populations: 597.6 / 1,507 ≈ 0.40. Overall, the results indicate that students in the classrooms of full-year tutors outperformed their peers in non-tutors’ classrooms by about 0.4 performance levels.
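
As a quick sanity check on the arithmetic, the weighted overall difference can be reproduced directly from the two tables above; this short Python sketch simply repeats the weighting by study population size.

```python
# Study population sizes and performance-level differences per district,
# taken from the tables above.
districts = {
    "A": (74, 0.65),
    "B": (89, 0.67),
    "C": (194, 0.51),
    "D": (1150, 0.34),
}

total_students = sum(n for n, _ in districts.values())

# Weight each district's difference by its study population size,
# rounding to one decimal place as in the table.
total_weighted = sum(round(n * diff, 1) for n, diff in districts.values())

overall = total_weighted / total_students
print(f"{total_weighted:.1f} / {total_students} = {overall:.2f}")  # 597.6 / 1507 = 0.40
```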

There is only one way this result is possible: students in the intervention population scored at a higher level than they would have if their teachers had not followed the Roadmap approach.

Conclusion

This approach has been used for more than three years with more than 17,000 students. Those students achieved remarkable results, generally three to four times the growth they had experienced previously. This research study shows that the approach works in classrooms, too.

The bottom line: When teachers use this approach to reading instruction, their students do better on the state assessment.