What Participatory Assessment is NOT
Obviously a blog devoted to participatory assessment should explain what that means, and try to do so in simple, everyday terms. This is the first in a series of posts that attempts to do so. Quite specifically, participatory assessment is first about assessing and improving a community's participation in knowledgeable social activity, with the added bonus of fostering the understanding and achievement of the individuals in that community. To explain what this means, this post introduces some basic ideas and terms from educational assessment to help explain what participatory assessment does not mean. It was authored by Dan Hickey and Jenna McWilliams.
What is Assessment?
It makes sense to start by saying what we mean by assessment. For us, assessment is about documenting outcomes. Most of the time we are interested in assessing what someone knows or can do. Traditionally, educational assessment has been linked to documenting what students know and what they have learned. Often it has been associated with what teachers do in classrooms at the end of a unit or a term. In the language of modern cognitive science, this view is well represented in the title of the 2001 National Research Council report Knowing What Students Know: The Science and Design of Educational Assessment. This report (whose editors include Dan's doctoral advisor Jim Pellegrino) nicely describes the mainstream view of educational assessment in terms of conceptual structures called "schemas." We here at re-mediating assessment often use the phrase "assessing individual understanding" to represent this commonly held view.
Assessment vs. Measurement
It is helpful to distinguish assessment from measurement. Most people use the term measurement to emphasize the use of sophisticated psychometric techniques. While these techniques are also useful for student assessment, they are typically associated with large-scale achievement testing. These techniques make it possible to figure out how hard one test item is compared to other test items. When you know this, you can estimate a person’s chance of getting a particular item correct after taking a few other items. This is how standardized tests come up with such precise scores that don’t change from test to test (this means the scores are reliable). As long as you think the items get at the thing you want to measure, you can efficiently and accurately compare how much of that thing different people have. This is most often used to measure achievement of educational standards, by creating large numbers of items that target those standards. Hence we here at re-mediating assessment often use the phrase “measurement of aggregated achievement” to describe how achievement tests are designed and used. By combining psychometrics, large pools of items of known difficulty, and computer-based testing, we get a big industry devoted to assessing aggregated achievement of lots of students. Because these tests are developed by institutions (usually companies) rather than teachers or schools, they are often called external tests.
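To make the item-difficulty idea concrete, here is a minimal sketch in Python of the simplest psychometric model of this kind, the one-parameter (Rasch) item response model. The numbers are made up for illustration; operational testing programs use more elaborate models and calibration procedures, but the basic logic is the same: once an item's difficulty has been estimated from many test-takers, a person's estimated ability implies their chance of answering that item correctly.

    import math

    def p_correct(ability, difficulty):
        """Rasch (one-parameter logistic) model: probability that a person
        with the given ability answers an item of the given difficulty correctly."""
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    # A person slightly above average (ability = 0.5) facing three items
    # whose difficulties were estimated from earlier test-takers.
    for difficulty in (-1.0, 0.0, 2.0):  # an easy, an average, and a hard item
        print(f"difficulty {difficulty:+.1f}: P(correct) = {p_correct(0.5, difficulty):.2f}")

With a large pool of items calibrated this way, two people who took different items can still be placed on the same scale, which is what makes large-scale measurement and computer-based testing so efficient.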
Assessment, Measurement, and Educational Reform
Most (but certainly not all) efforts to improve education are concerned with raising scores on achievement tests. Because achievement tests target educational standards, they can be used to compare different textbooks or techniques that teach to those standards. Among those who care about achievement tests, there is a lot of debate about how directly education should target them. With the No Child Left Behind Act, many educational reforms focused very directly on achievement tests. Sometimes people call this "drill and practice," but many teachers give students drill and practice on things that are not directly included on achievement tests. The best example is what we call "test-prep": computer-based programs that essentially train students in the very specific associations between words that will help them do well on a particular test. Under No Child Left Behind, schools that were not making "adequate yearly progress" were required to take money out of the classrooms and use it to provide tutoring. This often involved (and sometimes required) schools paying private companies to deliver that tutoring, on the assumption that the teachers had failed. In practice, this meant that companies got paid to use school computer labs to deliver computer-based test preparation. While the impact of these efforts on targeted achievement tests is a subject of fierce debate, one thing is for sure: it remains a very lucrative industry. (If you are interested, see the column on Supplemental Educational Services by Andrew Rotherham, aka Eduwonk.)
As you might have guessed, we here at re-mediating assessment feel pretty strongly about test-prep practices. We also feel pretty strongly about what tests mean, and have lots of debates about it. Dan believes that achievement tests of literacy, numeracy, and domains like science and history measure something useful; therefore, as long as the tests are not directly taught to, higher scores are better. Jenna, like many, thinks that tests and the testing industry are so bad for education that they should be abandoned. But we both agree (1) that any viable current educational reform must impact achievement tests, and (2) that the transformation of knowledge into achievement tests makes that new representation of the knowledge practically useless for anything else. This is where the name of our blog comes from. When we transform knowledge of school subjects to make achievement tests, we re-mediate that knowledge. For example, consider the simple case of determining which brand of laundry detergent is the cheapest per ounce. Figuring this out while standing in the cleaning supplies aisle at the supermarket is very different from sitting at a table with a story problem, a pencil, and a blank sheet of paper. Voila: re-mediation. It is worth noting that we also have some disagreements about the use of this term.
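For what it is worth, the supermarket version of that task boils down to a unit-price comparison. Here is a tiny sketch (with made-up prices and package sizes) of the computation a shopper is actually doing in the aisle; the story-problem version re-mediates the very same arithmetic into words, a pencil, and a blank page.

    # Hypothetical prices and package sizes, just to show the arithmetic.
    detergents = {
        "Brand A": (4.99, 50),    # (price in dollars, ounces)
        "Brand B": (7.49, 100),
        "Brand C": (3.29, 32),
    }

    for brand, (price, ounces) in detergents.items():
        print(f"{brand}: ${price / ounces:.3f} per ounce")

    cheapest = min(detergents, key=lambda b: detergents[b][0] / detergents[b][1])
    print("Cheapest per ounce:", cheapest)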
Re-mediation of knowledge for achievement tests usually means making multiple-choice items, lots of them. And this is where the trouble begins for many of us. A multiple-choice item usually consists of a question (called a stem) and four or five answers (called responses). Most assume that test items have one "correct" answer while the rest are wrong, but it is usually more a matter of one response being "more" correct. When we take multiple-choice tests we are as much ruling out wrong responses as we are recognizing the most correct one. Because the stem and each response make up an association, we characterize achievement testing as the process of "guessing which of four or five loosely related associations is least wrong." The more a student knows about the stuff being tested, the better they are at doing so.
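To make the anatomy concrete, here is a hypothetical sketch in Python (with an invented science item) of what a multiple-choice item looks like as data: one stem, a handful of responses, one of them keyed as "most correct," and one stem-response association per option for the test-taker to judge.

    from dataclasses import dataclass

    @dataclass
    class MultipleChoiceItem:
        """One achievement-test item: a stem plus responses, one keyed correct."""
        stem: str
        responses: list          # the four or five answer options
        keyed: int               # index of the "most correct" response

        def associations(self):
            # Each stem-response pairing is one association the test-taker
            # must judge as "more correct" or "less wrong."
            return [(self.stem, r) for r in self.responses]

    item = MultipleChoiceItem(
        stem="Which process moves water from plants into the atmosphere?",
        responses=["condensation", "transpiration", "precipitation", "infiltration"],
        keyed=1,
    )
    for stem, response in item.associations():
        print(f"{stem} -> {response}")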
What is the Problem with Test Prep?
Test prep programs raise achievement scores by training students to recognize hundreds or even thousands of specific associations that might appear on tests. Because of the way our brains work, we don't need to "understand" an association to recognize it. All test prep programs have to do is help students recognize a few more associations as being "less wrong" or "more correct" to raise scores. Because of the way tests are designed, getting even a handful of the more difficult items correct can raise scores. A lot. And this is the root of the problem this blog is dedicated to solving. We believe that the way knowledge is re-mediated for tests makes that knowledge entirely worthless for teaching, and mostly worthless for classroom assessment. Specifically, we believe that training kids to recognize a bunch of isolated associations is mostly worthless for anything other than raising scores on the targeted tests. Test preparation practices and the politically motivated lowering of passing scores ("criteria") on state achievement tests are why scores on state tests have gone up dramatically under No Child Left Behind, while scores on non-targeted tests (like the National Assessment of Educational Progress) and lots of other educational outcomes (like college readiness) have declined. Here is an article referencing some of the earlier studies. We are particularly distressed that so many schools find their computer laboratories locked up and their technology budgets locked down by computer-based test preparation and interim "formative" testing. Despite a decade of E-Rate funding, many students in many schools still don't have access to networked computers to engage in networked technology practices that are actually useful.
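To see why a handful of items can move a score so much, here is a toy illustration that reuses the Rasch model from the earlier sketch, with made-up item difficulties; real scoring procedures are more elaborate, but the pattern is similar. Under this simple model the ability estimate depends on how many items are answered correctly, and the three extra correct answers below happen to be the hardest items, exactly the kind of associations a test-prep drill targets.

    import math

    def p_correct(ability, difficulty):
        """Rasch model: probability of a correct answer."""
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    def estimated_ability(responses, difficulties):
        """Grid-search maximum-likelihood ability estimate for one response pattern."""
        def log_likelihood(theta):
            return sum(
                math.log(p_correct(theta, b)) if answered_correctly
                else math.log(1.0 - p_correct(theta, b))
                for answered_correctly, b in zip(responses, difficulties)
            )
        grid = [i / 100.0 for i in range(-400, 401)]   # abilities from -4.0 to +4.0
        return max(grid, key=log_likelihood)

    # Thirteen items, from easy to very hard (difficulties are illustrative).
    difficulties = [-1.5, -1.0, -1.0, -0.5, -0.5, 0.0, 0.0, 0.5, 0.5, 1.0, 2.0, 2.5, 3.0]

    before = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # misses the five hardest items
    after  = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1]   # now gets the three hardest right

    print("estimated ability before test prep:", estimated_ability(before, difficulties))
    print("estimated ability after test prep: ", estimated_ability(after, difficulties))

In this made-up example, three more correct answers move the estimate from roughly 1.0 to roughly 2.7 logits, which is the sense in which a few well-drilled associations can raise a scale score a lot.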
There is a lot of debate about the consequences of test preparation for achievement and its impact on other outcomes. We think that any program that directly trains students in the specific associations on targeted tests is educational malpractice, because that knowledge is useless for any other purpose. This is because we think that knowledge is more about successful participation in social practices than about isolated associations, and those practices have very little to do with test scores. So, in summary, test preparation is the epitome of what participatory assessment is not. Our next post will try to explain what it is.