Article Insight: A Closer Look at SAGrader



Recently, The Chronicle of Higher Education featured SAGrader in an article titled "Professors Cede Grading Power to Outsiders—Even Computers." The article covered some of the strategies college instructors are using to tackle their grading load and combat grade inflation. We're happy for the mention and have been following the ensuing discussion closely as educators and students post their comments.
A good number of questions, assumptions, and critiques have surfaced in these discussions that weren't addressed in the article, and we'd like to help fill in the gaps.

With this in mind, I've compiled responses to the questions and assumptions about SAGrader raised most often in reader comments on the Chronicle article and on Slashdot.

Outsourcing assessment
The Chronicle article presented SAGrader as one method of outsourcing assessment, but SAGrader is really more of an instructor-driven learning tool. SAGrader gives students feedback and assessment based on an assignment-specific topic outline provided by the instructor; we do not rely on generic algorithms or statistical modeling. In this sense, the instructor has a great deal of direct control over how SAGrader scores each submission. Our instructors don't see themselves as "outsourcing" their grading work; rather, they are using SAGrader to deliver personal feedback to their students more efficiently, 24/7.

Where’s the feedback?
A common concern readers expressed about automated grading was the role of feedback in improving the student learning experience. This wasn't covered in the Chronicle article, but it's probably the most important benefit SAGrader provides. Along with an overall score, SAGrader gives students detailed, actionable feedback on their submission. Students are encouraged to examine that feedback, revise their answers, and submit again.

Immediate feedback, paired with the opportunity to revise assignments, supports an iterative writing cycle that promotes learning. Many of our instructors allow unlimited submissions so that students can practice as much as they need to before the due date. We have evidence that this process increases student comprehension and can be especially empowering for students with learning challenges.

Promoting student/instructor interactions
Another concern expressed by readers of the article is that using such an automated tool would widen the communication divide between instructors and students. The thought is that introducing another layer between students and instructors would limit interaction, and thus hinder an instructor’s ability to nurture the student/instructor relationship.

However, this is a misunderstanding of SAGrader's intended use. SAGrader is primarily intended for large classes (100 to 1,000+ students), where personal contact is already constrained by the sheer number of students and the volume of work they generate. The program gives these instructors an alternative to the limits so often imposed by classes of this size: instead of relying on multiple-choice questions, true/false questions, and a participation grade, instructors can test and reinforce more substantive knowledge through short-answer and essay questions, shifting students toward higher levels of learning. As one of our instructors told us, SAGrader offers "the ability to have large classes do the same types of work as small classes."

In addition, SAGrader includes a number of features that help students and instructors connect. Students can challenge any feedback they feel is unfair, giving instructors a chance to offer additional explanation or improve the grading rubric. Student performance reports let instructors find areas where students are having trouble and identify students who may be at risk of failing.

Lloyd, an SAGrader instructor, has this to say about his experience:

“I also appreciate being able to track which students are not doing well, or who are consistently late submitting assignments. It has given me the opportunity to contact those students, express concern about their performance, and in a few cases to eventually help them get back on track with their grades.”

Above all, SAGrader makes it possible to offer more writing assignments in class, giving instructors a better view into their students' heads. Instructors tell us that having their students write more gives them much deeper insight into what their students know and don't know, and helps them identify points where students may be struggling.

Can computers really do that?
As expected, the most common question readers asked was whether computer software can actually grade student responses in any meaningful way. It's a question we at Idea Works answer every day. Artificial intelligence, natural language processing, and expert systems have come a long way, and today's computers are capable of things that just ten years ago would have been considered science fiction. As Chronicle commenter becauseisaidso put it:

“Why not prepare an article on the (obviously not the proprietary, secret) ways an algorithm can grade an essay test?  Clearly, those of us not in the AI biz can’t judge whether or not this makes sense unless we have some clue how it is done…”

OK, here goes: SAGrader account representatives work with course instructors to create assignments, starting with a conceptual model of the knowledge the instructor wants students to be able to convey. That model is then encoded in SAGrader and tested to ensure reliability and the general cohesiveness of the knowledge domain, and we make sure the program accounts for the hundreds or even thousands of ways a student could express each concept.

When a student submits an answer, SAGrader compares the response against the instructor's conceptual model to identify which concepts were expressed correctly and which were not. If the student fully expresses the concepts and accurately relates them to the other ideas the instructor was looking for, they get the points. SAGrader then tells the student which concepts they got right and which need further revision.
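
To make that concrete, here is a deliberately simplified sketch in Python. To be clear, this is not SAGrader's engine (which, as the commenter notes, is proprietary), and every name and rule in it is hypothetical; a real system leans on much richer natural language processing than the naive phrase matching below. It only illustrates the shape of the approach: a model of concepts and relations, a matcher, and feedback tied to each rubric item.

    # An illustrative toy, NOT SAGrader's proprietary engine.
    # The "conceptual model": each concept lists alternative phrasings a
    # student might use, and each relation pairs concepts the instructor
    # expects the student to connect, with the points that connection earns.
    CONCEPTS = {
        "supply": {"supply", "quantity supplied", "producers offer"},
        "demand": {"demand", "quantity demanded", "consumers want"},
        "price": {"price", "cost", "market price"},
    }
    RELATIONS = [
        ("supply", "price", 5),
        ("demand", "price", 5),
    ]

    def find_concepts(response):
        """Return the names of concepts whose phrasings appear in the text."""
        text = response.lower()
        return {name for name, phrasings in CONCEPTS.items()
                if any(p in text for p in phrasings)}

    def grade(response):
        """Compare a response to the model; return a score and feedback."""
        found = find_concepts(response)
        score, feedback = 0, []
        for a, b, points in RELATIONS:
            if a in found and b in found:
                score += points
                feedback.append(f"Good: you related {a} and {b}.")
            else:
                missing = " and ".join(c for c in (a, b) if c not in found)
                feedback.append(f"Revise: discuss {missing}, then relate {a} to {b}.")
        return score, feedback

    score, notes = grade("When the market price rises, producers offer more, "
                         "so the quantity supplied goes up.")
    print(score)   # 5: the supply/price relation earns points; demand is missing
    for note in notes:
        print(note)

Notice how the feedback falls out of the process almost for free: every relation the program checks is also a concrete thing it can tell the student to revise, which is what makes the submit-revise-resubmit cycle described above workable at scale.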

So there you go: automated grading of student submissions with assistive feedback, in less time than it takes to brush your teeth. We'd love to hear what you have to say, so if you're excited, apprehensive, or just plain curious about the technology, sound off in the comments below or send me an email at Luis@ideaworks.com.

A link to the original Chronicle article can be found here.