
This article, which originally appeared on LinkedIn, is part 2 of a four-part series on the quantitative approach to IEP evaluation. In this part, we will look at some illustrative sample data, discuss what that data may reveal, and consider the sorts of changes it could motivate administrators and teachers to make. Part 3 of this series will address questions surrounding choosing a test, and part 4 will look at how standard deviation and other statistical analyses of scores can be used to evaluate the effectiveness of student achievement systems.

First, thank you to those of you who responded to the first article in this series. I was pleasantly surprised by the amount of feedback. In particular, many of you responded to the section about the important role that trust plays in an IEP when considering quantitative data. Many leaders in education have written about how trust is a common denominator in successful schools, and IEPs are no different. Indeed, no productive dialogue about the analysis of quantitative data occurs without first establishing a common understanding about how the data will be used.

When considering quantitative data, it is often simple descriptive statistics that tell the most compelling story. In this article, we will look at example results that an IEP might get by administering a standardized test such as iTEP to students across all levels of the program.
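If the test results are available as a spreadsheet or CSV export, those descriptive statistics can be produced with a few lines of code. The sketch below is purely illustrative: the file name "scores.csv" and the column names ("level", "overall", "reading", "listening", "speaking", "writing") are assumptions for this example, not an actual iTEP export format.

```python
# Minimal sketch: mean score per skill at each program level,
# assuming a CSV export with illustrative column names.
import csv
from collections import defaultdict
from statistics import mean

SKILLS = ["overall", "reading", "listening", "speaking", "writing"]

# Collect every student's scores under their program level.
scores_by_level = defaultdict(lambda: defaultdict(list))
with open("scores.csv", newline="") as f:
    for row in csv.DictReader(f):
        level = int(row["level"])
        for skill in SKILLS:
            scores_by_level[level][skill].append(float(row[skill]))

# Report the mean score for each skill, lowest level first.
for level in sorted(scores_by_level):
    means = {skill: round(mean(vals), 1)
             for skill, vals in scores_by_level[level].items()}
    print(f"Level {level}: {means}")
```

A table of per-level means like this is usually all that is needed to start the kind of conversation described below.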

The three examples below represent three possible outcomes of a proficiency test given to all students in an IEP with six proficiency levels. Example 1 represents a perfect, but nonexistent, world. Examples 2 and 3 represent real-world IEPs where the data might elicit some interesting conversations.

While Example 1 represents an idealized progression of proficiency, it is useful to consider its characteristics for points of later comparison. What do we like about this example? First, overall scores and specific skill area scores increase between each level. Also, proficiency, as measured by the test, increases uniformly from one level of test-takers to the next level.
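Those two characteristics can also be checked programmatically by computing the level-to-level change for each skill and flagging any step where the mean does not rise. The helper below is a minimal sketch under the assumption that level_means maps each level number to its mean skill scores, such as the dictionary built in the earlier example.

```python
# Minimal sketch: flag any skill whose mean score fails to rise between
# consecutive levels. "level_means" maps level -> {skill: mean score} and
# is assumed to report the same skills at every level.
def non_increasing_steps(level_means):
    """Return (skill, lower_level, upper_level) triples where the mean
    score does not increase between consecutive levels."""
    flags = []
    levels = sorted(level_means)
    for lower, upper in zip(levels, levels[1:]):
        for skill, score in level_means[lower].items():
            # A step is flagged when the next level's mean is not higher.
            if level_means[upper][skill] <= score:
                flags.append((skill, lower, upper))
    return flags

# In the idealized Example 1 this would return an empty list; any entries
# point to the level transitions worth discussing.
```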

Example 2 is much more characteristic of what a typical IEP might find as a result of giving all students a standard proficiency test.

Of course, the most interesting characteristic of Example 2 is the significant increase in proficiency between levels 3 and 4. In addition, there seems to be relatively little increase in the levels after level 4. Still, this chart likely represents a successful IEP. In fact, many IEPs seem to have a particular point in the level progression where proficiency increases substantially, and this increase could have a variety of pedagogical and/or practical explanations.

In Example 3, the IEP administrator might notice the decline in speaking and writing scores at level 6 and decide to watch subsequent test administrations to see whether this is a trend. Another pattern to keep an eye on is the lagging reading and listening scores. Those in the IEP might ask whether this confirms a sense that achievement lags in the receptive language skills.

Of course, these charts represent just one administration of a standardized proficiency test at one point in time. There is much more to learn through multiple administrations. In some cases, IEP administrators recognize trends tied to the term or season of the year. The spring quarter might show a fall-off in scores at the higher levels as students prepare to exit the program. The fall and winter sessions might be characterized by more solid and consistent increases, as students are highly motivated to advance within the program and achieve on other standardized tests. IEP administrators might also view these data in the context of the most common level at which students exit the program, determining whether that affects the results.
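For readers who want to track this across administrations, the sketch below extends the earlier one by grouping on a term field. As before, the file name and the field names ("term", "level", "overall") are assumptions for illustration only, not a prescribed format.

```python
# Minimal sketch: mean overall score per term and level across multiple
# administrations, assuming each record carries "term", "level", and
# "overall" fields (illustrative names only).
import csv
from collections import defaultdict
from statistics import mean

by_term_level = defaultdict(list)
with open("scores.csv", newline="") as f:
    for row in csv.DictReader(f):
        key = (row["term"], int(row["level"]))
        by_term_level[key].append(float(row["overall"]))

# One line per term/level pair, with the sample size for context.
for (term, level), vals in sorted(by_term_level.items()):
    print(f"{term:<7} level {level}: "
          f"mean overall = {mean(vals):.1f} (n = {len(vals)})")
```

Laying the terms side by side in this way makes seasonal patterns, such as a spring fall-off at the higher levels, easy to spot.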

Whatever the conclusion, data and simple analyses such as these three example results can help guide the conversation at an IEP when it comes to student achievement and the length and structure of the program. It is important to reiterate the limitations of quantitative data: it is good for uncovering what is happening, but not necessarily why it is happening. Again, the data might best serve as the jumping-off point for a more in-depth conversation.

Dan Lesho is Executive Vice President of iTEP International. Prior to joining iTEP (International Test of English Proficiency), he was director of the Cal Poly Pomona English Language Institute and a professor at Pitzer College.

