Making the Move to Standards-Based Grading
Posted by Marta R. Stoeckel in Uncategorized on June 9, 2015
This spring, I’ve spent a lot of time analyzing how the year went and trying to identify my biggest frustrations. My goal isn’t to wallow in negativity; I’m much more interested in figuring out what I can do differently next year to reduce or eliminate those frustrations. As I reflected on the year, I identified my two biggest frustrations:
- My students, at least near the start of the year, are very focused on points, which makes it difficult for them to take risks or try anything they don’t have step-by-step directions for. This isn’t a surprise, since most of my students are 12th graders who’ve done very well in school by focusing on points. While most students got comfortable not always knowing the answer immediately by the end of the year, I’d like to make that transition faster and less painful.
- Many of my students did a brain dump after each test and at the end of each term. I quickly found that when I wanted students to build on concepts from a previous unit or see connections to a topic from last trimester, I had to build in time to review the earlier concept. Like the focus on points, this serves students very well in most classes, including ones I’ve taught, and the majority of students eventually made the necessary shifts, but I’d like to help them make the jump much sooner.
Both of these frustrations are reinforced by my grading system. I’ve used a fairly traditional gradebook where I record scores for selected labs and problem sets in one category and scores for large unit tests in a separate, more heavily weighted category. Of course when my grading system is built on accruing points, students will focus on points! Of course when we have a unit test on some arbitrary date, then move on to a brand new topic with its own big test, students will mentally move on as well! Clearly, it is time for me to take a new approach to grading.
I decided to dive into standards-based grading (SBG). The key idea is that instead of receiving scores on specific assignments (such as a unit 1 test or chapter 12 test), students receive scores on specific objectives. This steers the focus away from points in the traditional sense and towards what students truly need to know. Another key feature is that students have the opportunity to reassess standards, usually with the new score replacing the old one. This dramatically lowers the stakes for students. They can take a risk, trying a new approach to a problem or a lab, knowing that if they fail, they can always try again. In addition, students can’t get away with forgetting what they learned, since every standard will be assessed multiple times. In many cases, teachers record only the most recent score for a standard, even if it goes down, with the goal of making a student’s final grade represent their knowledge and skills at the end of the course.
This is also good timing for a shift in my grading practices. My building has had a group of teachers studying the issue of grading for a few years, and they have arrived at several grading practices that every teacher in the building will need to follow next year. That means I’ll be making some changes no matter what, so I may as well make some big ones. The first task, however, is to figure out how SBG can fit within next year’s requirements.
Requirement 1: Grades will have three weighted categories: summative (75%), formative (15%), and cumulative final (10%).
A major tenet of SBG is that students should have the opportunity to practice and master content without being penalized for mistakes, so the formative category isn’t in line with SBG, but I think I know how I’d like to approach this requirement. The summative category is where I’ll place the course objectives. To keep things simple, I’ll update scores on each objective every time it is assessed, so that only the most recent score affects a student’s grade. The formative category is where I’ll record scores for the formal lab reports I have students write (usually two per trimester). I see the lab reports as addressing overarching skills, such as scientific practices and communication, that I would like to include in grades but that are much broader than the typical content objective. At this point, I’m comfortable placing the lab reports in the formative category in order to give those skills more weight than a single objective.
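To make the required weighting concrete, here is a quick sketch of the arithmetic the gradebook will do to combine the three categories. The category percentages below are made-up example numbers, not real student data:

```python
# Weighted final grade from the three required categories (75/15/10).
# The per-category percentages here are hypothetical examples.
weights = {"summative": 0.75, "formative": 0.15, "final": 0.10}
scores = {"summative": 88.0, "formative": 92.0, "final": 80.0}  # percents

final_grade = sum(weights[c] * scores[c] for c in weights)
print(round(final_grade, 2))  # 87.8
```

Because the summative category carries 75% of the weight, scores on the course objectives will dominate a student's grade, which is exactly where I want the emphasis.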
Requirement 2: In-progress and final grades will be reported as a percentage and mapped to a traditional letter grade.
Our gradebook software reports student percentages to two decimal places, a level of precision I don’t think I’m capable of as a grader. But it’s what we have, and percentages aren’t going away in my district any time soon, so I need to figure out how I’m going to work within those confines. For now, my plan is to simply make each objective an assignment worth whatever maximum I set my scale to. The summative category will then be worth points equal to the number of objectives × the maximum possible score on each objective, and the software will take an average that it uses as a student’s grade in the summative category.
The main issue I have with this approach is that a student could conceivably get a respectable grade with no progress towards mastery on some objectives (this can happen just as easily in a traditional grading system; it’s just easier to hide). For this year, I want to keep things simple, so I’m planning to just keep this in the back of my mind; I doubt I’ll see a significant number of students who do well overall but ignore a few key standards. Down the line, I may try conjunctive SBG, where certain standards are required to earn a passing grade. I may also consider giving certain standards more weight in the gradebook, either because they are more complex or because they are more crucial to future learning than the other standards.
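A small sketch of the averaging shows both how the summative percentage gets computed and why a student can hide a couple of ignored standards. The scale and scores below are hypothetical examples:

```python
# Summative percentage = total earned / (number of objectives * scale max).
# Hypothetical 4-point scale and example scores for one student.
MAX_SCORE = 4
scores = [4, 4, 4, 4, 4, 4, 4, 4, 2, 2]  # two objectives with no real progress

percent = 100 * sum(scores) / (len(scores) * MAX_SCORE)
print(percent)  # 90.0 -- a respectable grade despite the weak spots
```

Eight mastered objectives easily outweigh two untouched ones, which is the scenario I'll be watching for in the back of my mind.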
Requirement 3: Every class will have a culminating activity during the final exam period.
While I haven’t found much on final exams in the SBG materials I’ve read so far, giving significant weight to what a student does during a certain 90 minute period seems to go against much of the thinking behind SBG. Ideally, what I’d like to do is move away from a traditional written exam, where students do an assortment of problems from throughout the trimester, and toward a more authentic assessment. One option would be an open-ended project, such as Casey Rutherford’s final project where students must come up with a physics question, then collect data to answer it. Another option would be to follow my district’s STEM integration efforts and develop an engineering design challenge where students must apply physics to solve some kind of real-world problem. The trick here would be to come up with something where students would truly have to apply their physics knowledge in a meaningful way.
Requirement 4: Scores no lower than 50% will be recorded for any summative assessment students attempt.
I absolutely agree with this requirement; it makes no sense that most grades cover a range of 10 percentage points (less if you count grades with a + or -) while an F covers 60 percentage points. The main trick is working out what following this guideline will look like under SBG. My plan is to give students a numerical score for each standard and set the floor at half the points. For example, many teachers who use SBG give their students a 1, 2, or 3 on each standard they attempt. I will probably give a 2, 3, or 4 instead.
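The arithmetic behind shifting the rubric is simple: on a 4-point scale, the lowest recordable score of 2 already works out to 50%. A quick sketch, assuming a 2-4 rubric on a 4-point maximum:

```python
# Shifting a 1-3 rubric up to 2-4 so the lowest recorded score is 50%,
# one way to honor the 50%-floor requirement on a 4-point scale.
MAX_SCORE = 4

def to_percent(score):
    """Convert a rubric score (2, 3, or 4) to a gradebook percentage."""
    assert score in (2, 3, 4)
    return 100 * score / MAX_SCORE

print([to_percent(s) for s in (2, 3, 4)])  # [50.0, 75.0, 100.0]
```

With this scale, any standard a student attempts contributes at least 50% to the average, so a rough start on one objective can't sink a grade the way a zero would.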
Requirement 5: Students will have at least one reassessment opportunity on all summative assessments.
This requirement is very much in line with SBG; the only question is how I want to manage reassessments. In a good physics class, there is some spiraling of content that happens naturally, and I plan to treat that as one option for reassessment. For example, I had some students this year who did poorly on linear constant acceleration, but, by the time we finished projectile motion, were nailing complex problems that used the same skills. In those cases, I would have no problem updating a student’s scores for the constant acceleration objectives.
I also want to offer more explicit reassessment opportunities. I am a fan of Sam Shah’s reassessment application and am planning to modify it for out of class reassessment. I really like that he forces students to reflect on what got in their way and to articulate what they’ve done to improve, rather than allowing students to take the all-too-familiar approach of just trying again on the assumption that it will go better.
I’m also considering Kelly O’Shea’s “test menus” for in-class assessments. They sound fairly easy to manage for a large number of students (which is important, since my average class size will be somewhere above 30 next year) while still providing students significant choice in their assessments.
I feel like I’ve got the broad strokes in place for next year, but there are still a lot of details to work out. My next big task will be to revise my objectives. My district has been using learning targets (a certain flavor of objective) for a few years, but we didn’t have much dedicated time to work on objectives, so mine are, at best, mediocre. If they are going to become the basis of my gradebook, I need to put in the time to write clearer, more precise learning targets.
My other big summer task will be to finalize (at least for now) some of the details for how I want to grade and revise my syllabus accordingly. While I fully expect to revise my syllabus and details of my grading system as the year progresses, I need to have some of the structure worked out before the fall to help students feel some sense of security in this new adventure.
Free Fall and Assessment
Posted by Marta R. Stoeckel in Physics on September 28, 2014
The bulk of this week was spent wrapping up acceleration by doing some problems with free fall. It took some time, but my students are getting comfortable with graphical solutions instead of more traditional approaches. Students continue to talk about what’s happening in the problem, rather than the formulas, which is great to see. A few kids are trying to memorize formulas, but watching their peers who use the graphs apply what they know to new situations with relative ease has helped convert the memorizers.
This week was also the first test in physics, and a lot of kids “took the bet and lost.” Based on the reading and thinking I’ve been doing about assessment and grades, I’m grading a lot less than in previous years. The trick is, students used to the way most teachers grade translate “not graded” as “not worth doing.” Not surprisingly, these students were not prepared for the test. That said, even when I’ve graded almost everything, I’ve had students find ways to copy or otherwise get out of doing the daily work, then have the first test hit them like a truck.
To try and address this problem, I stole an idea from Frank Noschese and have been giving my students weekly, self-graded quizzes. In addition to all the other benefits of frequent, low-stakes assessments, I hoped my students would figure out the benefits of engaging in the daily work early on. It worked for most of my students, and I saw more students digging into their work after the first quiz, but the stakes were too low for others to catch on.
I’m doing a two-stage collaborative exam for this test, so students will have a chance to recover come Monday. I’m looking forward to seeing how that goes. In the future, however, I’d like to work on strategies for breaking students of the idea that “not graded” means “not worth doing” much earlier in the year. I may make those early quizzes worth more points (at least on paper), or I may split 1D motion into separate tests over constant velocity and uniform acceleration so that students take the first test a lot sooner.