Should online quizzes be auto-graded?

I have recently been working on helping an instructor get quiz questions into her online courses in UR Courses, the University of Regina’s learning management system (LMS). Whether or not you like using an LMS, it does offer a secure place to give and submit assessments.

Working on online quizzes has me thinking about the urge to set things up so quizzes grade themselves. The questions I am dealing with are all auto-graded: multiple choice, or fill-in-the-blank questions with a clear right or wrong answer. You set which answer is correct and determine what grade students get for it. Anyone who has ever graded piles of exams or quizzes will completely understand the temptation of auto-grading. It would save so much time and be so easy. Do the work once, and students get automatic and immediate feedback! No more mind-numbingly long hours spent with a pen, going over the same thing again and again.

Then again, why do we have assignments? In this case, the quizzes are intended as self-check assessments for students. They aren't for a grade, and they allow students to test their knowledge multiple times. Even when quizzes are for marks, the point is usually to test fact-absorption or memorization; we use other assignments, or longer exams, to get more in-depth.

Auto-grading is problematic, though. The most basic problem is that the answer is either right or wrong. You have to design a question that is completely clear cut, has only one possible answer, and will lead students to choose the correct answer if they know the material or a wrong answer if they don't. Easy, right?

Sure. It’s easy to write questions like that, but usually those questions do not actually assess comprehension, just very basic memorization. Derek Bruff has actually written some great material about using clickers, and the principle is similar. You want students to be able to press a button to respond to a question. To do that well, actually getting at comprehension instead of memorization (the lowest level of Bloom’s Taxonomy), is hard. It takes a lot of work. Writing a good multiple choice question can be nearly as much work as grading a short essay. You need to know the likely wrong answers, and craft potential answers that test whether students truly comprehend or just appear to. Again, Derek Bruff has written fantastic material on how to do this well, so it is worth a read if you are interested in using technology in this way. The same applies to tools like PollEverywhere.

Then there is the issue of fill-in-the-blanks. When creating an auto-graded question, you have to come up with every possible correct answer. If you were grading it yourself, you would know that an extra “s” on the end does not matter, or that an alternate wording is okay. Computers are dumb, and they don’t know that unless you tell them. It takes a ton of work to ensure that you have included every possible answer you would accept. Alternately, you put in one answer and anything else shows as wrong. Then either your students are confused and frustrated that their answer should have been correct, or you spend time grading the quiz anyway. Again, it is hard to come up with a question where you either have one answer and only one answer, or you have included every possible answer.
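To make the point concrete, here is a minimal sketch (in Python, with made-up question data, not anything from an actual LMS) of the kind of normalization a human grader does without thinking, and that an auto-grader has to be told to do explicitly:

```python
def normalize(answer: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace."""
    cleaned = "".join(ch for ch in answer.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

def is_correct(student_answer: str, accepted_answers: set[str]) -> bool:
    """Compare a student's answer against every accepted variant,
    forgiving a trailing "s" on either side."""
    norm = normalize(student_answer)
    variants = {normalize(a) for a in accepted_answers}
    if norm in variants:
        return True
    return norm.rstrip("s") in {v.rstrip("s") for v in variants}

accepted = {"Louis Riel", "Riel"}
print(is_correct("  louis riel ", accepted))      # True
print(is_correct("Riels", accepted))              # True
print(is_correct("John A. Macdonald", accepted))  # False
```

Even this toy version only handles capitalization, stray punctuation, and a plural "s" — it still rejects synonyms, misspellings, and rephrasings a human would accept, which is exactly why the list of accepted answers keeps growing.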

This does not include the time spent formulating the questions and entering them. Yes, there are plenty of ways to automate that, but then something fails, and you spend time fixing the questions anyway.

Not nearly as simple as it appears. Audrey Watters actually wrote a great piece on Automation and Artificial Intelligence that discussed the trend toward auto-grading and the issues tied in with it. This has been seen in quite a few MOOCs that include graded assignments because nobody can afford to grade hundreds, let alone thousands, of assignments from non-paying students.

Bottom line, though, is the fact that testing for base level memorization is only nominally useful. Yes, there are things that need to be memorized. But not nearly as many as there used to be. A quick Google search can answer a whole lot of questions very quickly. Facts are easy to find. It’s the comprehension of those facts and what they mean, application and analysis, that can’t be solved by LMGTFY.

Tech has made it easy to be lazy. So I guess it’s time to ask ourselves whether being lazy is going to cut it when it comes to teaching.
