When I first began my high school teaching career back in 1973, my credentials consisted of a Bachelor’s, a Master’s, and a PhD (all in mathematics), a few education courses from college, and five years of teaching as a graduate student at UNC in Chapel Hill. I felt I had a pretty good handle on the mathematics I was teaching, and I really did care deeply about my students, but I realized that I had a lot to learn about the actual craft of teaching, especially as it applied to my adolescent clients. I was fortunate to have some terrific mentors at my school, and I certainly learned a lot from them, but I was also open to learning from other sources, so I decided to seek wisdom from the pages of education journals. It did not take me long to conclude that the authors of the papers in these journals, if they did have any wisdom to impart, were apparently not interested in communicating it to teachers like me. My mentors, who had come to the same conclusion long before I had, chuckled at my naiveté, and would ask me with mock concern such questions as, “Aren’t you interested in effective strategies for impacting learning in the cognitive domain?” Bear in mind that these were the seventies, when educational researchers and mathematicians moved mostly in non-intersecting domains. They certainly did not speak the same language.
Fortunately for the world of mathematics education, the walls that kept the math people and the ed people apart began to crumble in the eighties, when professionals on both sides of the divide came together to design the NCTM Standards. The mathematicians had essentially come to the conclusion that they had not been teaching their subject very effectively, and the education researchers were able to confirm those suspicions with data. Indeed, some very distinguished university mathematicians became so intrigued by the data that they wanted to know more, and this eventually led both communities to develop a common language with less arcane jargon. Working together, mathematicians, mathematics teachers, and education researchers produced the NCTM Standards and forged a working partnership that endures to this day. That is probably why research-based education reforms are usually welcomed in the mathematics classes of most schools before they gain much traction in the other disciplines.
It seems that once the education specialists had attracted our attention, we mathematics teachers found that they had some compelling things to tell us about how students learn. In particular, lecturing on mathematics to adolescents (which I had been doing with some success) was not a good way to develop true understanding, nor was it an effective way to turn students into competent, creative problem solvers. I never would have gotten this message if it had involved “impacting the cognitive domain,” but there was a new wave of education articles that used metaphors like “the sage on the stage” versus “the guide on the side” to grab me by the lapels and make me read them. Having read them, I could not help adopting a new paradigm for what went on in my classroom. Eventually, I was even able to absorb some of the technical terms that I had so readily dismissed in the seventies. That finally brings me around to formative assessment.
My emperor’s-new-clothes awakening came when I realized that I was doing all the mathematics in my own classroom – except for on the day of a test or a quiz, when my students and I would finally discover what they could do. That paradigm obviously made the tests and quizzes stressful occasions for all of us, but the malignant effects went deeper than that. My students and I were both led to believe that the purpose of my classes was to prepare them for the next quiz or test, when success for both of us would be defined by their score. This made me work all the harder on those other class days to make everything easier for them on the next test or quiz, and the cycle continued.
What was lacking in my classes in those days was (jargon alert) formative assessment. I was assessing my students with tests, quizzes, and homework, but it was all after the fact as far as my teaching was concerned. I thought I was running an interactive class, but I had it backwards: my students were questioning me about the mathematics I was showing them. I should have been watching them do the mathematics so I could question them about how and what they were doing. My role as teacher could then be to react to their triumphs or their misunderstandings and keep pushing them forward. This kind of “assessment” is not all that different from a test or a quiz in its intent (students are doing mathematics and I am giving them feedback), but consider the advantages: (a) the feedback is immediate and ongoing, (b) students can work with partners or in groups if we prefer, (c) there is no need to involve grades, so students can be wrong without dreaded consequences, and (d) I can observe my students doing mathematics every day, making test day a less stressful extension of business as usual.
It is unfortunate that many mathematics teachers have yet to embrace formative assessment, especially since the message of its importance has been around for a couple of decades. Part of the problem, I am sure, is that the communication gap between teachers and researchers still endures. Anyone with an ounce of marketing savvy would point out that “formative assessment” is a lousy name for a classroom strategy that we expect teachers to buy. Perhaps we need to dust off that classic metaphor from the early days of reform and use it to convince teachers of a much simpler truth: If you are the “sage on the stage,” you will have no use for formative assessment; if you are the “guide on the side,” you are probably already using it every day.