How’s opinion polling like classroom testing? Reflections on election day
It’s election day in Brazil, and polls have featured extensively in both traditional and social media. Some voters seem to decide on their candidates based on the polling results; others doubt them. Either way, I don’t see much questioning of the importance of polling.
And that kind of reminds me of classroom tests. (I know, it sounds like a crazy association to draw, but bear with me.)
Firstly, there is the seeming unavoidability of election polls and educational tests. Death, taxes and tests, one could say. Indeed, tests may be inescapable depending on the school or system we work in. However, we must not forget that a test is but an instrument. Assessment, which is what we should be doing in class, can be carried out in several other ways, such as observation, portfolios and self-assessment questionnaires. In fact, the more varied our instruments, the better our chances of capturing a complete snapshot of our students’ achievements.
Secondly, poll results are often treated as matters of fact. Here in Brazil I frequently hear (and have actually engaged in) the discourse of not voting for the candidate of choice because he or she stands no chance. “What if the polls are wrong?” the skeptics are right to remind us. Polls are developed and carried out by humans, so there is the chance of human error or hidden agendas. Also, polls sample from the population rather than asking everyone. Finally, polls draw inferences and generalize from that sample: even a well-run poll of 2,000 respondents carries a sampling margin of error of roughly two percentage points, enough to flip a tight race.
Exactly like tests.
But as teachers we don’t readily concede that, now do we? We tend to act as if test results were crystal-clear portraits of our students’ proficiency level or learning stage [grade = proficiency/achievement]. We forget the many bridging inferences we have to draw to arrive at a test score.
To start, the grade depends on our rating/marking criteria and our ability to be consistent when applying them. That brings in one or two middlemen: [grade – criteria – rater – proficiency/achievement].
Plus, students’ performances on the test depend on their interaction with the tasks. Just as most of us have never been asked who we are going to vote for, many aspects of our students’ competence might never be tapped by the tests we have been giving them. So [grade – criteria – rater – tasks – performance – proficiency/achievement].
The tasks we choose and the items we write also depend on our concepts of language and learning. Because we cannot possibly test everything there is to test, we consciously or unconsciously make judgment calls, and in doing so we reveal our views on which learning objectives matter most. We’re now up to [grade – criteria – rater – teacher’s views on language learning – tasks – test – performance – proficiency/achievement]. And we could go on and on, adding factors that make those inferential jumps quite clear and show that testing is not as easy as we make it look.
And that brings us to the final similarity: the power of polls and tests to influence decisions. Of course, that is why we do classroom assessment in the first place: to go over what our students have and have not learned, pinpoint problems in the learning process and try to tackle them. In theory, at least. And when “theory” becomes the operative word, all alarms should go off. It is all too easy to forget the purpose of testing and use grades to label students: “This is the front-runner or high achiever. That is the hopeless underdog.” Or worse still, regardless of how careful we are not to judge, learners themselves take their scores at face value and resign themselves to the role of “winner” or “loser” in the learning process. And hey, the elections are only over once the ballots are cast and counted.
To my fellow countrypeople, happy and responsible voting.
And to teachers, especially at the end of our school year, happy and responsible testing!
This blog post has been influenced by Dr Matilde Scaramucci’s classes at Unicamp, as well as the following texts:
Bachman, L. (2005). Building and supporting a case for test use. Language Assessment Quarterly, 2(1), 1–34.
McNamara, T. F. (1996). Measuring second language performance. London: Longman.
McNamara, T. F. (2000). Language testing. Oxford: Oxford University Press.
Any faults and inaccuracies are my own, of course.