Tag Archives: assessment

Not one for sitting on the fence

Today I was in Newcastle, lucky enough to attend the #audreytalk event in person – thanks to Suzanne Hardy and Mike Cameron for the invitation. Audrey Watters – think Hack Education, Educating Modern Learners and most importantly her own domain [read on] – had travelled north after her ALT-C keynote to challenge the audience assembled at Newcastle University to think Beyond the VLE. I've long been a follower of her blogs and her challenging opinions (thus the post title) and it was a great chance to meet her in person. Her slides and notes are available here.

She began with an apology – about the US-centric nature of her writing. She then talked about ed-tech as a route for a new US cultural imperialism. (She didn't use these words, but I think this process could offer an alternative, darker (re)definition of Euan Semple's catchily named trojan mouse concept.) She then had a kick at Blackboard, and another, and another, and also at all the ed-tech startups/wannabes who think "Blackboard sucks" but then essentially want to create another one, just skinned differently. To over-mix my metaphors, a Blackboard in Facebook/Coursera/any old sheep's clothing still sucks, even if the potential market and the spending record of schools and universities on such systems have investors drooling. She was exceptionally critical of the lock-in of data and ideas that a VLE facilitates, fenced off from the outside world (nicely illustrated with slides of cows looking at you across barbed wire).

Yet this walled garden did not suddenly come about when institutions signed up in droves to buy VLEs. There have been fences around schools since at least Victorian times. The reasons for these remain complex – is it protection of identity, income, reputation? Is it thought to promote the rarefied atmosphere 'required' for learning – i.e. to keep others out? This self-imposed fencing was explored further by questions from the audience – the internet is still being portrayed as something to be protected from: witness the four-page acceptable-internet-use agreement my 9-year-old son and I had to sign at the start of his year 5* class at school earlier this month. It could be argued that we get the solutions we pay for, and these learning-management-focussed systems dovetailed neatly with the needs of institutional managers. Her point, though, is that the learners and teachers had little say in this.

She talked about the danger of storing things in the cloud – witness the recent iCloud password hack 'exposing' [pun intended] celebrity photos – and stressed the importance of owning your own data. She then talked of a different approach taken by the University of Mary Washington – their Domain of One's Own initiative – where students and staff are bought their own domain (whose name they can negotiate) and helped to set up LAMP tools such as a blog. An interesting idea and a very brave marketing strategy – note the equal numbers of likes and dislikes on their introductory video. I'm guessing from the abandoned Bagman blog that this approach (be it the marketing or the DIY infrastructure) wasn't to everyone's taste. This hands-on, take-control-of-your-data approach is one that resonates with Audrey.

It has a sense of coming full circle. It was reminiscent of the early web publishing activities of staff and students in the time before VLEs – Audrey uses her own graduate teaching at the University of Oregon in the late nineties as a case study. This was interesting and resonated with my own early teaching experience. I was also a member of what we could term 'Generation tilde' – those who had public web space on their University's servers, accessible by simply appending ~ and your username to the institution's domain. We were certainly much freer to publish content than we are now, but I think we suffered from the lack of data about what people were doing on our pages, and few of us had the skills to code online tests or discussion boards, let alone provide tools where students could begin to construct and challenge their understanding online together. The web was freer, but it was also a lonelier place then.

Reflecting on her talk as I travelled home, I couldn't help feeling that she is on to something – that somehow we need to improve the base level of digital literacy in the population and heighten awareness of where our data is held and how valuable it is. I loved her quote "data is the new oil". I also loved another version that Doug Belshaw had heard (apologies, I remembered the quote but not the source) – "data is the new soil" – which neatly captures the fact that the data are the beginning, not the end-point.

I am still wrestling with the inherent tension between the desire to be open and the need for private spaces to learn. I think it might be easier for professionals (e.g. teachers) to share materials and, if possible, the journey (including any wrong turns), all subject to continual refinement and reflection. Martin Weller and Gráinne Conole are good examples of this from HE. Yet I think we also owe it to our students to provide them with a 'safe place to fail' – somewhere to experiment, to try different approaches and angles, without worrying that these actions will haunt them online for the rest of their lives. If we ask/require students to make their learning public, can we predict the effects? I am worried that such an approach may have a negative effect on learners with low self-esteem, the slower thinkers, those still struggling with the subject, or those trying to consider things from an alternative perspective. Would it promote an attitude of playing safe, favouring the students who are first with the most obvious answer, reinforcing the actions of the loudest, or playing to the audience? Yet how many people would really read students' assignments? Shouldn't I also draw hope from the fact that people seem to find the courage to post the most remarkable things on Facebook (or perhaps that is exactly what I should be worrying about)? The web does allow you to go back and update your content [if you own the data], so am I just being overly paranoid?

Audrey pointed out that at least some of these issues can be avoided through the use of better assessments and of pseudo-anonymity – e.g. choosing the name of your blog and domain with care. Doug Belshaw provided a great counter-example of a UK student who was working on a history project blog about Native Americans. His initial postings were not too great, but his enthusiasm for the topic was fired up when, out of the blue, a comment was posted by the son of an Amerindian chief (I hope I got that right, Doug!). That's the way I want things to work. Perhaps it's time to rethink that fence…


* That’s the equivalent of primary six for anyone in a sensibly numbered education system – is it any wonder many children find maths confusing if we can’t even apply the most basic principles of arithmetic to the year numbering?

CC BY Image: A photo of Alyson Shotz's 'Mirror Fence', taken by Erik Anestad and shared on Flickr using a CC BY 2.0 license.


Canvassing Opinion

I spent today in Edinburgh at a presentation about Canvas – a VLE by the strangely named company Instructure (it doesn’t exactly roll off the tongue). To declare a potential interest, I currently work for an institution that has a long history with a competitor – Blackboard. I leave it to the reader to decide whether/how that colours my comments.

A large part of the day was spent listening to staff rather than sales people – Darren Marsh and Margaret Donnison from Birmingham University, which has recently been through the periodic VLE review process that every educational institution undergoes. In Birmingham's case, their current VLE (WebCT) had gone 'end of life' and so they were facing a migration regardless of which product they chose. In that sense it is an excellent example of an unbiased evaluation, as there was no status quo option. On the downside, WebCT is pretty long in the tooth and would fare poorly in a comparison with almost any VLE on the market today.

As well as the need to find a new VLE system, the University felt that distance and blended learning were becoming more important and that the market was undergoing a period of disruption due to factors such as increased student fees, MOOCs and alternative modes of delivery. Their needs were clearly expressed in a few points:

  • a high quality product
  • fit for purpose
  • distinctive

That last point is interesting – in a market dominated by a small number of vendors, is there a risk that all institutional offerings look the same? This is an intriguing proposition that I have some issues with – is the online learning experience only skin deep? Equally, does just changing the appearance of content (akin to applying a different template to a WordPress site) alter the learner's experience in any meaningful way? That doesn't fit with my experience. The MOOC I have learnt most from so far was the #OCL4Ed course hosted on WikiEducator/MediaWiki. It looked awful and was hard to navigate, but the learning activities were well designed and stimulated meaningful collaboration amongst the participants (e.g. commenting on each other's blogs – see http://apperleyrd.wordpress.com/).

A question I didn't think to ask at the time was where this notion of distinctiveness had come from. Was it requested by academics, tired of working in the same old online courses? Did it come from students, or perhaps from marketing? I have seen a lot of student feedback describing institutional VLEs as 'clunky' and 'tired looking', but I've never seen any students asking for them to be more distinctive!

The findings of Birmingham's detailed tender process were echoed in the subsequent demonstration of the Canvas product – there is a large feature set common across all the major VLE platforms. We saw demonstrations of online marking using the cloud-based Crocodoc/Box View service, adaptive release of content based on dates and test scores, and integration with third-party services such as Kaltura, Panopto and YouTube. Whilst slick, these features should have been familiar to the audience, and many required the purchase of third-party services (e.g. Kaltura and Panopto). Assignment workflow was a little disappointing, lagging behind that in Moodle or Blackboard – no support for moderated marking, anonymity and other factors held dear (even if perhaps in some cases misguidedly) by many UK HEIs.
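
As an aside, the adaptive release demo boils down to evaluating simple rules per student. Here is a minimal sketch in Python of that kind of rule – the field names and the rule itself are my own invention for illustration, not Canvas's implementation:

    from datetime import datetime

    def can_view(item, student, now=None):
        """Return True if `item` is released to `student`.

        A hypothetical rule combining the two conditions demonstrated:
        a release date and a minimum score on a prerequisite test.
        """
        now = now or datetime.now()
        if item.get("release_date") and now < item["release_date"]:
            return False  # not yet released by date
        prereq = item.get("prereq_test")
        if prereq:
            score = student.get("scores", {}).get(prereq)
            if score is None or score < item.get("min_score", 0):
                return False  # prerequisite test not passed (or not taken)
        return True

    # Example: unit 2 opens on 1 Oct 2014 and needs >= 60% on the unit 1 quiz.
    unit2 = {"release_date": datetime(2014, 10, 1),
             "prereq_test": "unit1_quiz", "min_score": 60}
    student = {"scores": {"unit1_quiz": 72}}
    print(can_view(unit2, student, now=datetime(2014, 10, 2)))  # True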

Great play was made of the ability to use the IMS LTI standard to connect to third-party systems, and they publish an impressive catalogue of possible integrations at http://www.edu-apps.org/index.html. A closer inspection shows that very few of these services have been certified as compliant by IMS (see http://developers.imsglobal.org/catalog.html), which makes me wonder whether they take advantage of the full range of LTI features (e.g. populating the grade centre) or are just simple launch points that may or may not actually implement LTI. Later I browsed through a few entries on edu-apps, and some of the comments about tools no longer working (including the YouTube integration) were a bit worrying – although in this case they might just have referred to integration via Moodle.
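
For the curious, the difference between a 'simple launch point' and a fuller integration is visible in the launch itself. Below is a minimal sketch of an LTI 1.1 basic launch in Python using the oauthlib library – the URLs, key and secret are hypothetical, and I'm assuming LTI 1.1 since that's what was current. Grade passback is only possible when the consumer includes the lis_outcome_service_url and lis_result_sourcedid parameters; a bare launch point omits them:

    from urllib.parse import urlencode
    from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

    # Parameter names come from the LTI 1.1 spec; the values are made up.
    launch_params = {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course42-item7",   # required on every launch
        "user_id": "student-123",
        "roles": "Learner",
        # Present only if the tool is allowed to send grades back:
        "lis_outcome_service_url": "https://vle.example.ac.uk/grade-service",
        "lis_result_sourcedid": "opaque-grade-handle-for-this-student",
    }

    # The VLE signs the form POST with OAuth 1.0a using a shared secret.
    client = Client("consumer-key", client_secret="shared-secret",
                    signature_type=SIGNATURE_TYPE_BODY)
    uri, headers, body = client.sign(
        "https://tool.example.com/launch", http_method="POST",
        body=urlencode(launch_params),
        headers={"Content-Type": "application/x-www-form-urlencoded"})
    # POSTing `body` to `uri` with `headers` launches the tool; the tool
    # verifies the signature and, if the lis_* fields are present, may
    # later POST a score back to the outcome service URL.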

Also, although IMS are working on a standard for analytics data – Caliper – this is not yet ready to implement, so integrations that rely on LTI will not provide any tracking/usage data to the parent VLE. This is a missed opportunity, both for staff interested in their learners' actions in a given course and for those trying to aggregate data across courses or attempting to measure the return on investment in a particular tool.

It is interesting too that, like many other VLEs, integrating with third-party systems using LTI first requires action by a user with appropriate privileges (see http://www.edu-apps.org/tutorials.html). Whilst the documentation suggests this can be done at a course level, in practice I think this may be restricted to system administrators – if only to keep the lawyers happy and to safeguard the privacy of our users – creating a potential bottleneck to innovation.

Canvas offered a distinctive hierarchy of accounts and sub-accounts (with permissions inherited) that allows you to model the University, breaking it down into faculties/colleges, then further into schools/departments, and to assign branding, permissions, even custom JavaScript at each level. This is interesting and something I plan to explore further. As ever the devil is in the detail, and Universities seem to excel at complicating situations. For example, should you divide it up by faculty, or by level of study (e.g. separating undergraduate from postgraduate courses)? Should the mode of delivery matter – differentiating between face-to-face, blended and distance courses? I wonder whether this account model can cope with several different, overlapping hierarchies. And should these structures change in the future, how easy will it be to reorganise?
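
To make the inheritance idea concrete, here is a minimal sketch (class and setting names are mine, not the Canvas API) of a tree where each node inherits any setting it doesn't override – and of why a single tree struggles with overlapping hierarchies:

    class Account:
        """One node in an account/sub-account tree. A setting defined on
        a node applies to all descendants unless a descendant overrides it."""
        def __init__(self, name, parent=None, **settings):
            self.name, self.parent, self.settings = name, parent, settings

        def setting(self, key):
            node = self
            while node is not None:
                if key in node.settings:
                    return node.settings[key]
                node = node.parent
            return None

    university = Account("University", branding="corporate")
    medicine = Account("Medicine", parent=university, branding="med-school")
    anatomy = Account("Anatomy", parent=medicine)

    print(anatomy.setting("branding"))  # "med-school", inherited from Medicine
    # The catch: each course hangs off exactly one node, so a tree cut by
    # faculty cannot simultaneously be cut by level of study or mode of
    # delivery; you have to pick one dimension and bolt the others on.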

Although only just coming to the end of their first year of using Canvas, Birmingham had found the time to solicit student feedback via questionnaires. The usual caveats – small sample sizes, the risk of only capturing the extremes of opinion and the questionable use of some leading questions – all apply. Still, 84% of students agreed with the statement that they found Canvas easy to use, and an encouraging 88% found it useful for their studies. Perhaps more worrying is why 12% did not, given that it contains the links to their course materials, online tests and e-submission!

Common themes that the students praised were ease of use and a clean layout. Looking at Birmingham's implementation (which provides a pretty standard Canvas course) you can understand the ease of use – the interface is relatively uncluttered and the content is restricted to materials relevant to the courses they are taking. There was no evidence of any portal functionality being delivered through Canvas – a later perusal of their website identified [my.bham], a student portal based on SunGard's Luminis product.

The clean layout is an interesting comment. I'm not sure if this means 'it looks like Facebook/Wordpress' and just reflects the widespread adoption of this user interface model, or whether it was simply very like the old WebCT course structure they already knew – screenshots showed templates with similarly labelled folders in Canvas, some even going to the trouble of replicating the icons representing folders in WebCT. On a more positive note, it might be the result of carefully planned and structured courses on the new system.

One advantage of switching learning environments is that it offers the institution a chance to start again. It is all too easy for systems to become bloated over the years (like an old laptop) with content that is no longer used and courses based on copies of copies of copies, all of which can have a negative impact on performance. It also provides staff with the chance to review the content and online components of their courses. Doing this across a whole institution, against a real fixed deadline, where just using the same stuff as last year is not an option, has benefits that can't be achieved through an isolated course review (though I'm not arguing you should stop doing those either – there's just an extra benefit/multiplier effect when everyone is thinking, talking and sharing about this at the same time). It's also a good time to check that all the links work, that content is copyright cleared, and so on.

It is also a good motivator to get staff to attend training. Birmingham use a mix of face-to-face workshops and online materials, with separate courses for staff and students.

As Canvas is a relative newcomer to the market, built as a hosted, scalable solution from day one, I was interested to see how it performs on mobile and tablet devices. Sadly there was no demonstration of responsive design comparing the experience in a standard browser at different screen sizes and on laptops and tablets 😦
Like many other vendors, they have released mobile apps for iOS and Android. I thought that the mobile UI they showed actually looked nicer than the standard one, with clear icons next to course menu buttons giving an extra clue to the functionality of the links. Dedicated apps exist for specific tasks, e.g. the SpeedGrader app for online grading – which on a cursory inspection seems a bit like Turnitin's GradeAnywhere app, though without support for offline marking of downloaded scripts.

This video shows Canvas deployed on a range of devices and footage of the custom SpeedGrader app:

A few eyebrows were raised around the room when they mentioned their approach to versioning/software releases: there is only one version. They operate an agile approach with new releases every three weeks. When probed, they conceded there is a degree of control: it is possible to turn off or delay the introduction of new features on your build. This is good news if you want to avoid any changes during key times (e.g. online exams), but it seems to contradict the one-version policy, and I am not sure how it works with their online help documentation – does it respect all these local settings?
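
Here is a minimal sketch of how I imagine the 'one version, deferred features' trick works – a single codebase consulting per-institution feature flags at runtime. The flag and tenant names are entirely hypothetical, not Instructure's actual mechanism:

    # Every institution runs the same release; what differs is a small
    # table of feature flags consulted at runtime.
    TENANT_FLAGS = {
        "bham.example.ac.uk": {"new_gradebook": False},  # deferred for exams
        "default": {"new_gradebook": True},
    }

    def enabled(flag, tenant):
        flags = TENANT_FLAGS.get(tenant, TENANT_FLAGS["default"])
        return flags.get(flag, TENANT_FLAGS["default"].get(flag, False))

    def gradebook_view(tenant):
        if enabled("new_gradebook", tenant):
            return "render new gradebook"  # same code everywhere...
        return "render old gradebook"      # ...different features visible

    print(gradebook_view("bham.example.ac.uk"))  # -> render old gradebook

If the help documentation were generated against the same flags it could respect local settings; if not, you get exactly the mismatch I was worried about.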

The product is only available as a hosted service, sitting on the Amazon AWS cloud, providing a scalable solution with a promise from Instructure (UK) of 99.9% uptime over a year – assuming it doesn't fall foul of a denial-of-service attack by those angry about its approach to in-country taxation. They use the Dublin-based AWS European data centre for EU clients to keep everyone happy. It is unclear whether all the bundled extras – e.g. the Big Blue Button conferencing app – also offer an EU-based or Safe Harbor-compliant solution.

Although Canvas' origin lies with American computer science students dissatisfied with their online experience (sound familiar?), the staff present in Edinburgh were keen to play the international card. It was good to hear them supporting localisation for different languages (no support for Gaelic yet) and offering research and development teams in-country – in the UK's case, in London. As one of the smaller fish in a pond still dominated by the US, it is always nice to know that someone is listening and able to act locally.

Although we ran out of time, there are also analytics options, and Instructure staff were keen to hear from UK institutions wanting to use their Canvas Network product to facilitate MOOCs (like #BlendKit2014).

More information about Birmingham's experience can be found on the UK Canvas site (though tread carefully – the comparison table Canvas publish doesn't give me much confidence in their QA; I found errors in the third row, Development Technology). They also link to this video; note that it was uploaded to YouTube by Canvas, not Birmingham:

Some final thoughts:

Q. Did the day leave me feeling that our current platform (Blackboard) was pedestrian or had been eclipsed?
A. No – some features in Canvas look slicker/more mature/better than Blackboard, but equally some features in Blackboard look slicker/more mature/better than Canvas.

Q. If I was looking to implement a VLE from scratch or undergo a review of provision would Canvas be on my short list?
A. Yes.


CC BY Featured Image: iVincent by JD Hancock, shared on http://photos.jdhancock.com/photo/2014-02-22-200113-ivincent.html

Blended Assessment

Week 3 of #BlendKit2014 is looking at assessment – how to know that our students are learning something from the course (hopefully linked to the learning outcomes). Kelvin Thompson and his colleagues began with the reasonable claim that ‘it is imperative that assessment is provided to check the depth of students’ learning’. They also stressed the importance of making the learning applicable, or else students adopting a strategic approach may not engage with it. The question is, who is checking the depth of a student’s learning, and why?

We were provided with some thought provoking reading and asked to reflect on these four questions:

  1. How much of the final course grade do you typically allot to testing? How many tests/exams do you usually require? How can you avoid creating a “high stakes” environment that may inadvertently set students up for failure/cheating?
  2. What expectations do you have for online assessments? How do these expectations compare to those you have for face-to-face assessments? Are you harbouring any biases?
  3. What trade-offs do you see between the affordances of auto-scored online quizzes and project-based assessments? How will you strike the right balance in your blended learning course?
  4. How will you implement formal and informal assessments of learning into your blended learning course? Will these all take place face-to-face, online, or in a combination?

Each of these is addressed in turn below:

How much testing to do?

I'm not sure this is the right question! I think the question should be: when/why are tests needed in your course? I like diagnostic tests at the start of a course (ideally tied to a Just in Time Teaching model of delivery, tailoring the rest of the course to the knowledge and experience of the students), and students should be free to take these as often as they want. For an online learner, the need for some sort of progress report – a confirmation that you are on track – is possibly even greater when you have less (or possibly no) face-to-face time with teaching staff; short tests throughout the course can meet this need. My only real concern is with the final assessment – how best can this be done online?

Quite a few of the participants in the live webinar expressed concern over the potential for cheating. Perhaps this is why there is now a MOOC on Canvas looking at online cheating – which I discovered via this article in the Chronicle of Higher Education. This saddens me a bit. I'm not a fan of camera-based remote proctoring solutions, particularly if the student has to purchase them. If I have to choose between spending time devising ways to stop students cheating or trying to make my courses better, I'd rather do the latter. In the end, cheats are only cheating themselves.

My expectations of online testing

The question 'Are you harbouring any biases?' was unexpected, but on reflection I think it is a fair one. I have certainly changed my stance. When I started, I worked with staff on a medical course and noted to my horror that although many students were starting the online assessments, only a few finished them. Were they too hard? The fact that these tests were delivered online meant we could ask this question, but to get to the answer I had to talk to the students. It turned out that we had come across an example of impromptu group work. Students went to a computer lab (that dates this anecdote) to start the tests on their own. Part way through, a friend came in (or else they spotted one another amongst the banks of monitors). Rather than work through the questions alone, they discovered it was more effective to discuss the questions as a group and try to justify their answers to each other, before one person submitted the result on behalf of the group. That explained the high drop-off rate and taught me to take nothing for granted!

The trade-offs

The trade-offs seem pretty clear. Anything that can be automatically marked, providing students with rapid feedback, is constrained by those marking tools. If they use some form of pattern-based scoring, then poorly designed questions or distractors (e.g. offering students the choice between two words that are similarly spelled but have very different meanings – conservative and conservation) may seriously misrepresent some students' learning, as sketched below. More creative, personal assessment options offer the chance to encourage deeper learning, but require more skilled interpretation. David Nicol and his colleagues (2013) have shown how peer feedback (N.B. not grading) can help everyone learn from the process, and perhaps that offers one way out.
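
As a toy illustration of that pattern-scoring pitfall, here is a hypothetical fill-in-the-blank item whose answer pattern was written too loosely (the pattern and responses are my own invention):

    import re

    # The question wants "conservative"; the author wrote a lazy pattern
    # that only checks the start of the word.
    answer_pattern = re.compile(r"^conserv", re.IGNORECASE)

    for response in ["conservative", "conservation", "Conservatoire"]:
        mark = "correct" if answer_pattern.match(response) else "wrong"
        print(f"{response!r} -> {mark}")

    # All three responses are marked correct, so the score no longer tells
    # you whether the student can distinguish the concepts at all.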

I was also struck, at a recent learning and teaching conference, by how engaged students were in a project where they were asked to create a short (two-minute) video to explain a key concept in the course. Here the challenge was knowing what to leave out. That's not something you can mark automatically, but it could be a great online submission task.

Implementation

In a true blended course, you have the luxury of both face-to-face and online. I think I prefer online diagnostic and formative assessments, while keeping the summative work offline. I think that also reduces the stress for both staff and students (no-one really wins when a big online exam goes 'castors up', as they say in the world of TV repairs). That's probably why I don't think it's worth spending money on anti-cheating hardware. Spend it on e-books instead 🙂

CC BY-NC-SA Featured Image by Jared Stein, shared on https://www.flickr.com/photos/5tein/2348649408/