In an earlier post, I wrote about my midterm exams and the problems I noticed. One of those problems was too many multiple choice questions: each objective was measured by one or two multiple choice questions, regardless of how simple or complex that objective really was. Soon after midterms, I decided to measure simple objectives with multiple choice questions, intermediate objectives with a mix of diagrams, labeling tasks, and math problems, and advanced objectives with essay questions. I could still match one or two task items to each objective, but the skills involved were much more indicative of the learning for which I was gathering evidence. In the end, the packet had basic multiple choice questions in the front with a score sheet matching each question to its objective, followed by diagrams and math problems with another score sheet, and finally the essays with their score sheet and objectives.
I spent a lot of time before the final exam thinking about how it should be factored into the rest of the students' semester work. In general, our school's policy is that each semester grade is 80% semester coursework and 20% exam for ninth grade students (75/25 for the upper grades). I got permission to deviate from this because I was piloting a new grading system in my class: the midterm was treated as just another assessment, and the first semester grade was calculated from each student's best ever demonstration, not the most recent one.
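To make the difference between the two policies concrete, here is a minimal sketch. The scores and scale are hypothetical, not from my classes; it only assumes the 80/20 weighting described above and a "best ever" policy that treats the exam as one more attempt.

```python
# Hypothetical scores for one objective across the semester (0-100 scale).
semester_scores = [72, 85, 91]   # attempts during the semester
exam_score = 78                  # final exam attempt

# School policy: 80% semester coursework, 20% exam (ninth grade).
weighted_grade = round(0.80 * max(semester_scores) + 0.20 * exam_score, 2)

# Best-ever policy: the exam is just another assessment,
# so the highest demonstration anywhere counts.
best_ever_grade = max(semester_scores + [exam_score])

print(weighted_grade)   # 88.4
print(best_ever_grade)  # 91
```

Under the school's weighting, a weaker exam day pulls the grade below the student's best demonstrated work; under the best-ever policy, it simply fails to improve it.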
Still, I wanted to bring the final more in line with the school's policy by weighting the student's most recent demonstration (the exam) against their semester work (their best demonstration, not all attempts). I wasn't quite sure how to go about this, so I decided to talk to my students about it. I went to each table in my classes and asked what the purpose of a final exam is. Once we had discussed some purposes, I brought up scores, grades, and weights; I wanted to make sure our discussion of the grade stayed closely tied to the purpose of the exam. I even hosted after-school discussion sessions for those who felt they hadn't had enough time to give their opinion in class. Students also met with me on their own, and some sent very eloquent emails.
In the end, I gathered from our talks that there are four common purposes for final exams: synthesis of material, final demonstration of learning, retention, and college preparation.
The college preparation reason goes like this: we take exams because you'll take them in college and you should be prepared.
The retention reason goes like this: I want to make sure you remember everything we learned.
The final demo reason goes like this: before I calculate a grade, you have a last opportunity to show what you know.
The synthesis reason goes like this: I want to see how well you connect the various things we've studied.
The college preparation reason for exams? OK. I had some exams in college where I could write anything I wanted in a blue book because the professor was required to give an exam but didn't care to. I had some that were terrible, and some that were brilliantly differentiated. There's no guarantee our exams will prepare students for exams in college, so why try?
I'm really uncomfortable with the retention purpose of an exam. In a traditional system, it feels like I'm grading you once when you take the first test on the material, then again when I test the same material at the end of the semester. You get two grades: one for learning the material, and another for remembering it. Or possibly, one for cramming it once, and another for cramming it twice. Some students felt it was really unfair to be tested on the same material twice like that, and I hated the idea of having the earlier, lower grade remain a factor if they showed significant improvement by the end of the semester. Plus, factoring in the exam as the most recent demonstration alongside their best raised the stakes for the final and contributed to the stress of exam week. I usually feel like such a sadist during exam week, and students never feel like it's their best work on display at that time. This was my chance to change that.
Only two of these fit with the philosophy and purposes of my course: synthesis and final demonstration. Because my objectives are tiered by difficulty, I want to see if you can use the basic and intermediate objectives to enrich your understanding of the advanced objectives. You also have a last opportunity to demonstrate your knowledge of the simple and intermediate objectives. For these purposes, the best ever grading policy makes more sense than the most recent grading policy. So I decided to use the same system for the final exam as I did for the midterm: it's just another assessment, and your best demonstration of an objective is the one that contributes to your final grade. If your best was early in the semester, great. If your best was on the exam, that was great too.
Here's what students liked best about this: they knew which objectives they had struggled with, and they knew to focus their study efforts on those. They also knew that if I saw a big drop in objectives they had already mastered, I would mention that in my final comment on their report card even if it wasn't factored into their final grade. So they could do "maintenance" studying on already-mastered objectives, and focus their efforts where they could make the biggest difference in their learning and their grade.
Yes, there were some who blew off the exam and still earned a good grade for the course. Does that mean they never learned the material? No. I still mentioned their poor exam performance in my final comment. Overall, I'm pretty happy with the way it worked out and I would do it again.