Blended Assessment

Week 3 of #BlendKit2014 is looking at assessment – how to know that our students are learning something from the course (hopefully linked to the learning outcomes). Kelvin Thompson and his colleagues began with the reasonable claim that ‘it is imperative that assessment is provided to check the depth of students’ learning’. They also stressed the importance of making the learning applicable, or else students adopting a strategic approach may not engage with it. The question is, who is checking the depth of a student’s learning, and why?

We were provided with some thought-provoking reading and asked to reflect on these four questions:

  1. How much of the final course grade do you typically allot to testing? How many tests/exams do you usually require? How can you avoid creating a “high stakes” environment that may inadvertently set students up for failure/cheating?
  2. What expectations do you have for online assessments? How do these expectations compare to those you have for face-to-face assessments? Are you harbouring any biases?
  3. What trade-offs do you see between the affordances of auto-scored online quizzes and project-based assessments? How will you strike the right balance in your blended learning course?
  4. How will you implement formal and informal assessments of learning into your blended learning course? Will these all take place face-to-face, online, or in a combination?

Each of these is addressed in turn below:

How much testing to do?

I’m not sure this is the right question! I think the question should be: when and why are tests needed in your course? I like diagnostic tests at the start of a course (ideally tied to a Just in Time Teaching model of delivery, tailoring the rest of the course to the knowledge and experience of the students). Students should be free to take these as often as they want. As an online learner, the need for some sort of progress report – a confirmation that you are on track – is possibly even greater when you have less (or possibly no) face-to-face time with teaching staff. Short tests throughout the course can meet this need. My only real concern is the final assessment – how best can this be done online?

Quite a few of the participants in the live webinar expressed concern over the potential for cheating. Perhaps this is why there is now a MOOC on Canvas looking at online cheating – which I discovered via this article in the Chronicle of Higher Education. This saddens me a bit. I’m not a fan of camera-based remote proctoring solutions, particularly if students have to purchase them. If I have to choose between spending time devising ways to stop students cheating and trying to make my courses better, I’d rather do the latter. In the end, cheats are only cheating themselves.

My expectations of online testing

The question ‘Are you harbouring any biases?’ was unexpected, but on reflection I think it is a fair one. I certainly have changed my stance. When I started, I worked with staff on a medical course and noted to my horror that although many students were starting online assessments, only a few finished the tests. Were they too hard? The fact that these tests were delivered online meant we could ask this question, but to get to the answer I had to talk to the students. It turns out we had come across an example of impromptu group work. Students went to a computer lab (that dates this anecdote) to start the tests on their own. Part way through, a friend came in (or they spotted a friend among the banks of monitors). Rather than work through the questions alone, they found it more effective to discuss the questions as a group, and try to justify their answers to each other, before one person submitted the result on behalf of the group. That explained the high drop-off rate and taught me to take nothing for granted!

The trade-offs

The trade-offs seem pretty clear. Anything that can be automatically marked, providing students with rapid feedback, is constrained by those marking tools. If they use some form of pattern-based scoring, then poorly designed questions or distractors (e.g. offering students the choice between two words that are similarly spelled but have very different meanings – conservative and conservation) may seriously misrepresent some students’ learning. More creative, personal assessment options offer the chance to encourage deeper learning, but require more skilled interpretation. David Nicol and his colleagues (2013) have shown how peer feedback (N.B. not grading) can help everyone learn from the process, and perhaps that offers one way out.

I was also struck, at a recent learning and teaching conference, by how engaged students were in a project where they were asked to create a short (2 minute) video to explain a key concept in the course. Here, the challenge was knowing what to leave out. That’s not something you can mark automatically, but it could be a great online submission task.

Implementation

In a true blended course, you have the luxury of both face-to-face and online. I think I prefer online diagnostic and formative assessments, but would keep the summative work offline. That also reduces the stress for both staff and students (no-one really wins when a big online exam goes ‘castors up’, as they say in the world of TV repairs). That’s probably why I don’t think it’s worth spending money on anti-cheating hardware. Spend it on e-books instead 🙂

Featured image by Jared Stein, shared under a CC BY-NC-SA licence on https://www.flickr.com/photos/5tein/2348649408/