As nearly half of all states now use growth percentiles as a central element in their accountability systems (Data Quality Campaign, 2019), it is important for practitioners to understand how the technical choices made when estimating these percentiles can affect their interpretation and application. In particular, there are two major methods for calculating and reporting student growth data: baseline-referenced SGPs and cohort-referenced SGPs.
Baseline-referenced SGPs are norm-referenced statistics that compare a student’s current-year test score with the scores of students in a fixed baseline group who had similar prior scale scores; this process is often referred to as “norming.” Because of this, an individual student’s SGP can vary depending on the norming sample used to construct it. In the case of MCAS, the baseline distribution is drawn from three to five years of data to smooth out irregularities that might be present in a single year’s test results.
Cohort-referenced SGPs, on the other hand, are normed against the current year’s cohort: a student is compared with other students statewide who earned similar prior-year scores on the same assessment. Because the norms are recomputed each year, the statewide distribution of SGPs is stable by construction, but an individual student’s SGP can be influenced by the composition of the cohort that is tested each year.
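The distinction between the two norming choices can be sketched in code. This is a deliberately simplified illustration, not the operational SGP methodology (which fits quantile regression models across the full score range): it matches academic peers on an exact prior score and computes a percentile rank, and all score values are hypothetical.

```python
from bisect import bisect_left

def growth_percentile(prior, current, norm_group):
    """Percentile rank of `current` among norm-group students with the
    same prior score. Simplified: exact match on the prior score; real
    SGP estimation uses quantile regression over all prior scores."""
    peers = sorted(c for p, c in norm_group if p == prior)
    if not peers:
        raise ValueError("no academic peers with that prior score")
    # Proportion of peers whose current score falls below the student's.
    return round(100 * bisect_left(peers, current) / len(peers))

# Hypothetical (prior, current) scale-score pairs.
cohort_2024 = [(240, 238), (240, 245), (240, 251), (240, 255), (240, 260)]
baseline_pool = cohort_2024 + [(240, 230), (240, 233), (240, 236),
                               (240, 242), (240, 258)]  # pooled multi-year data

# The same student can earn different SGPs under the two norms:
print(growth_percentile(240, 245, cohort_2024))    # cohort-referenced: 20
print(growth_percentile(240, 245, baseline_pool))  # baseline-referenced: 50
```

Here the student's growth looks weak against an unusually high-growth current cohort but average against the pooled multi-year baseline, which is exactly why the choice of norming sample matters for interpretation.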
Both approaches involve trade-offs. In general, districts should choose the method most consistent with their existing systems and procedures, and keep in mind the varying educational contexts that shape the meaning and interpretation of their SGPs.
For example, an accelerated program may enroll some students who struggle to keep pace with its demanding curriculum and who therefore earn lower SGPs than their peers in other schools. Because SGPs are typically aggregated using the median, however, such a program is not penalized so long as the majority of its students grow faster than the state median (an SGP above 50).
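The robustness of the median to a few low scores can be shown directly. The SGP values below are hypothetical, chosen only to illustrate the aggregation behavior described above.

```python
from statistics import median

# Hypothetical SGPs for a small accelerated program: two students
# struggle (SGPs of 12 and 18) while the rest grow faster than the
# state midpoint of 50.
program_sgps = [12, 18, 55, 62, 68, 71, 80]

# Because the school-level summary is the median SGP, the low outliers
# do not pull the aggregate below 50.
print(median(program_sgps))  # 62
```

A mean-based aggregate of the same values would sit noticeably lower, which is one reason the median is the conventional summary statistic for SGPs.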
Similarly, a gifted and talented program may see its median SGP fall below the state midpoint if a sufficient share of its students are not progressing at a rapid enough pace. In this situation, it is advisable to report growth for these students in a separate category such as “underperforming” or “needs improvement,” which allows educators and administrators to focus their efforts on helping these students catch up with their peers. This approach also helps protect the credibility of SGPs for all students in a given school or district.