At Brookfield Central High School, we have just passed the three-week grading period and are approaching the first parent-teacher conferences of the year. My thoughts are turning to how I will clarify my grading practices to students and parents as more scores are entered into the gradebook. I have completely restructured the grade reporting in my online gradebook this year, due to struggles I had last year in trying to implement what I believe to be best grading practices. Much of my grading philosophy has been informed by Robert Marzano and Marzano Research, specifically the wonderful book Classroom Assessment & Grading that Works.
Traditionally, as I prepare for teacher conferences, I use a student summary report I print from our online gradebook to guide the discussion with parents. Our grading program in my district is Infinite Campus (IC). I really like the software and find it extremely easy to use. Below you’ll see a sample student grade report from last year. I’ll use it to highlight some of the issues I’ve had and the need to restructure my use of IC for this year.
When talking with parents, I tended to jump from the initial grade at the top of the page directly to the assignment list in the middle. Basically, the summary section was just an area to see what the grade for the course was. The Assignment Detail section contains a list of assignments separated into only two categories: course objectives and practice work. The course objective assignments are summative and named based on the objective they cover. The other major category is practice work, which entails all formative assessment work. It's pretty clear by looking at this report that the course objectives are worth more points than the practice work, which is the case. Also, looking at the scores gives a clear idea of how the assignment percentages work.
In my opinion, this report seems kind of backwards: the summary should provide more information about the student before having to jump to the detail. That is one problem I was looking to remedy.
Another issue, not apparent in the report, is that I had to monkey around with entering grades so that they could reflect my scoring rubrics. I grade using 0–4 point rubrics, but that system wasn't working with the grading scale my IC gradebook was using.
One other major issue was making sure that the progress to course objective mastery was clear in the document and objective weights were balanced.
With these ideas in mind, I began exploring and experimenting with what could be done in IC.
Grading Scales
I use a 0–4 scale on all of my rubrics. Some rubrics might only use 0, 2, and 4, but all have a minimum of 0 and a maximum of 4. Translating those scores into the standard percentage curve in our gradebook was difficult because scoring a 3 on the rubric amounts to 75%, which translates to a D in the gradebook rather than the B it should be on my rubric scale. I had previously heard that my district offered teachers the option of a grading scale other than the "Elmbrook Standard Curve." This scale is called the integer scale. Below is a table that compares the two.
| Grade | Elmbrook Standard Curve Percent | Integer Scale 0–4 Percent |
|-------|--------------------------------|---------------------------|
| A     | 100–93                         | 100–88                    |
| B     | 92–84                          | 87–63                     |
| C     | 83–76                          | 62–38                     |
| D     | 75–70                          | 37–13                     |
| F     | < 70                           | < 13                      |
The integer scale makes full use of all 100% to allow teachers to design rubrics that have a distinct point value, 0 - 4, for each grade received. If grades are being translated to percentages, why is the standard scale stopping at 70%? If the standard scale shows failure as not scoring a 70%, are teachers truly designing assessments that measure minimal mastery at exactly 70%?
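The difference between the two scales is easy to see in a few lines of Python. This is just a sketch of the translation in the table above (the function and scale names are mine, not anything from IC):

```python
# Each scale is a list of (letter, minimum percent) bands, highest first,
# taken from the comparison table above.
ELMBROOK_STANDARD = [("A", 93), ("B", 84), ("C", 76), ("D", 70), ("F", 0)]
INTEGER_SCALE = [("A", 88), ("B", 63), ("C", 38), ("D", 13), ("F", 0)]

def letter(rubric_score, scale, max_points=4):
    """Convert a rubric score to a percent, then find its letter band."""
    percent = rubric_score / max_points * 100
    for grade, cutoff in scale:
        if percent >= cutoff:
            return grade
    return "F"

# A 3 out of 4 is 75%: a D on the standard curve, but the B it should be
# on the integer scale.
print(letter(3, ELMBROOK_STANDARD))  # D
print(letter(3, INTEGER_SCALE))      # B
```
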
For the past three years, my assessments have been aligned to objectives on this 0–4 scale, but putting the grades into the standard curve has been frustrating. I had to set up an equivalent percentage scale to retrofit into the standard curve so that a 3 out of 4 would be a "B" in the gradebook rather than a "D." The integer scale has erased this frustration. To be clear, this doesn't mean that I need to make all of my assignments in IC out of 4 points, but I choose to do so for all formal assessments for consistency. I want this translation from an integer to a grade to be clear and easy to remember. It's the percentage side that took a bit of explaining. When students see that they are getting a 75% in my course, they (and their parents) have shown concern. But once the scale was clarified, all parties got it immediately. So, as always, communication of grading practices needs to be upfront and open.
Objectives
As an upfront disclaimer, Infinite Campus has support for standards based grading that looks quite impressive. My district has not opened this option up at the high school level yet. So, this is my current solution to this issue. It is not my preferred solution, but it’s what I’ve done with what I have to work with. That being said, I completely understand the rationale of my district to not open this option without having clear models for standards based grading in place at the secondary level in our district.
Over the past years, I have been working hard to tie all of my formal assessments to learning outcomes. Writing and refining a reasonable number of course objectives is very time consuming; I am still working through it and will save it for a future post. Each year, I start with a concrete list of objectives that I may refine but will not change in number. These include about 2-4 objectives specific to each unit of instruction, 4 overarching science practice objectives, and a success skill category. The unit objectives are assessed several times within a unit of instruction, but only in that unit. The science practices are assessed and tracked over the entire course and cover experimental design, data analysis, applying models and theories, and mathematics. Finally, success skills are tied specifically to 21st Century Skills such as communication, collaboration, creativity, and critical thinking. In the future, I hope to design outcomes for literacy, such as argumentation, that can be tracked over the course.
In the past, all of my objectives were placed into one category called "Course Objectives," which contributed 90% of the student grade. Formal summative assignments that measured the objectives were placed in this bin. This created one big bin with lots of stuff to be sorted out. It got more difficult when a single assignment addressed multiple objectives. Also, with all of these assignments in one bin, it was difficult to ensure each objective was equally weighted.
My current remedy to these issues is making each objective a specific category. Each unit objective is a category that will have 3–4 individual assessment grades. The overarching objectives (science practices and success skills) will have many more assessment grades. The weight of each category/objective was set up to be the same. This way, it doesn't matter how many times an objective is assessed; they are all equally important. This did require me to know exactly how many objectives I had so I could weight them appropriately to total 100%. Just as importantly, I had to have a reasonable number of objectives. Creating 100 different weighted groups would defeat the purpose of the redesign. In the end, I've designed 25 total for an entire two-semester course.
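A nice side effect of equal weighting is that the course percent reduces to the mean objective rating scaled to 100 (with 25 categories, each carries 100/25 = 4%). Here's a minimal sketch; the objective names are made up for illustration:

```python
def course_percent(ratings, max_points=4):
    """Equal-weight course percent from per-objective 0-4 ratings.

    Because every category carries the same weight, the course percent
    is simply the mean rating divided by the maximum rubric score.
    """
    mean = sum(ratings.values()) / len(ratings)
    return mean / max_points * 100

# Hypothetical objective ratings for one student:
sample = {
    "Unit 1 Objective A": 4,
    "Unit 1 Objective B": 3,
    "Science Practice: Data Analysis": 3.5,
}
print(course_percent(sample))  # 87.5
```
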
Individual Assignments
Now that categories were set, I had to determine how to handle entering my individual assignments.
The most common assessments my students complete are formative. These include problem sets, most of which are completed on our district's learning management system, Canvas. These problem sets are broken up based on the specific learning objective. In fact, students may have 3 different problem sets focused on the same unit objective, just at different difficulty levels or focused on a different aspect of the objective. Ultimately, these are practice. The problems are a chance for students to see where they are in terms of mastery of the specific outcome. The assignments are graded on completion, not proficiency. For me as a teacher, they are a measure of the student's self-direction and accountability. For this reason, the formative work falls under the success skills category, and that is where it is placed in the gradebook. I still use these results to guide my instruction (see my next blog post for how I use the rich analytics provided by Canvas to inform my instruction).
As for formal/summative assessments, these are recorded in the unit objective(s) and science practice(s) they assess. As I stated earlier, most summative assessments measure multiple objectives. A single lab report can cover every objective in a unit in addition to several science practices. For this reason, I don't give a single all-encompassing grade for a summative assessment. The rubric below gives an example of what I mean.
When entering the grade into IC, the name of this assignment will appear under multiple categories/objectives. What is recorded in each is the score from that part of the rubric. So it may be a 4 out of 4 under one objective/category but a 3 under another.
Within a single category/objective, I try to offer my students multiple attempts to demonstrate mastery. If this is the case, managing these multiple attempts is important. For example, look at these three students below. Imagine these are 3 measures of the same objective.
|           | First Attempt | Second Attempt | Third Attempt |
|-----------|---------------|----------------|---------------|
| Student 1 | 1 out of 4    | 2 out of 4     | 4 out of 4    |
| Student 2 | 2 out of 4    | 3 out of 4     | 2 out of 4    |
| Student 3 | 1 out of 4    | 4 out of 4     | 2 out of 4    |
Now, all of these students achieved the same number of "points" and the same average, but it is the trend that matters if we want to measure objective mastery. Student 1 shows growth toward a 4. Student 2 shows slight growth, then regression. Student 3 shows growth, then regression as well. How can we give them a grade that accurately reflects their ability?
Marzano Research has devised a calculation called the Power Law that provides a reasonable calculation of this growth factor. It is available in IC, but only if standards based grading is enabled. So, I’m currently experimenting with my own model which is far from perfect but is manageable for me and rewards growth and retention.
In my model, a student who shows growth in an objective will be allowed to drop previous lower assessments on that objective. Applying it to the above example, it would look like this:
|           | First Attempt        | Second Attempt | Third Attempt |
|-----------|----------------------|----------------|---------------|
| Student 1 | 1 out of 4 (dropped) | 2 out of 4 (dropped) | 4 out of 4 |
| Student 2 | 2 out of 4 (dropped) | 3 out of 4     | 2 out of 4    |
| Student 3 | 1 out of 4 (dropped) | 4 out of 4     | 2 out of 4    |
Once a level of proficiency has been demonstrated, any regression will not be dropped. So, students need to continue to demonstrate proficiency. Based on this model, here’s how the students would rate on the integer scale:
|           | First Attempt | Second Attempt | Third Attempt | Final Rating |
|-----------|---------------|----------------|---------------|--------------|
| Student 1 | 1 out of 4    | 2 out of 4     | 4 out of 4    | 4            |
| Student 2 | 2 out of 4    | 3 out of 4     | 2 out of 4    | 2.5          |
| Student 3 | 1 out of 4    | 4 out of 4     | 2 out of 4    | 3            |
This is far from a perfect system, and I fully expect to run into bumps and exceptions. But the fact that I allow students to be reassessed on any objective (if they have kept up with the practice work) will hopefully alleviate some issues with regression.
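The drop rule above can be sketched in a few lines of Python (the function name is mine): keep everything from a student's peak score onward and average it, so growth erases earlier low scores, but regression after the peak still counts.

```python
def final_rating(attempts):
    """Average a student's attempts from the peak score onward.

    Scores earned before the first occurrence of the best score are
    dropped (rewarding growth); scores after the peak are kept, so
    regression still counts against the rating.
    """
    peak = attempts.index(max(attempts))  # first occurrence of the best score
    kept = attempts[peak:]
    return sum(kept) / len(kept)

# The three students from the tables above:
print(final_rating([1, 2, 4]))  # 4.0
print(final_rating([2, 3, 2]))  # 2.5
print(final_rating([1, 4, 2]))  # 3.0
```
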
Looking forward
As I look forward to the conferences coming up this week, I’m pulling up progress sheets to see if they will provide a better guide to discussions of student performance. Here’s one below:
I do like that the grading summary is now a place where I can begin the discussion. One remaining issue is that I know what the objectives are, but parents might not, so I'll be sure to rename them to reflect what they entail before conferences. Still, I feel that running down the list in the grading summary will highlight strengths and weaknesses before going to the assignment detail. It is a much more accurate summary of student performance than my previous version. If we see an objective area that is low, we can go down to the assignment detail for the evidence, and student, parent, and teacher can narrow the focus to what needs to be addressed. It is a document that guides the reader toward the areas of weakness rather than toward a single assignment that was poorly done. I also like that scores and percentages are limited to only a handful of values per assignment, which are, in my opinion, easier to decode and fairly universal in my course.
Still one major issue
The issue I'm still struggling with is how to go from a collection of equally important objectives, each with its own score, to a single grade for the course. In the end, everything is still averaged into a single number and a single grade. What's the solution to this? What is best practice?
I would love to hear your opinions and your ideas. If my students need feedback to improve, I can only benefit from the same.