When is a cat not just a cat?

When it is being used as an example of digital literacies.

Digital Literacy is a term that is increasingly being bandied about the web. Whilst not (yet?) as misused as ‘digital natives’, it is already beating that term three times over in the Google matching game:

Over 17 million matches on Google, cf. only 5,620,000 for ‘digital natives’.

So what does it mean? On the 27th of June, Doug Belshaw will be launching a book which should help you approach/hone your definition, drawing on his EdD thesis. The abstract of that thesis is refreshingly short:

Digital literacy has been an increasingly-debated and discussed topic since the publication of Paul Gilster’s seminal Digital Literacy in 1997. It is, however, a complex term predicated on previous work in new literacies such as information literacy and computer literacy. To make sense of this complexity and uncertainty I come up with a ‘continuum of ambiguity’ and employ a Pragmatic methodology. This thesis makes three main contributions to the research area. First, I argue that considering a plurality of digital literacies helps avoid some of the problems of endlessly-redefining ‘digital literacy’. Second, I abstract eight essential elements of digital literacies from the research literature which can lead to positive action. Finally, I argue that co-constructing a definition of digital literacies (using the eight essential elements as a guide) is at least as important as the outcome.

Belshaw, D. (2011) What is digital literacy? A Pragmatic investigation. EdD thesis, shared under a CC0 (public domain) license.

Doug’s book builds on this, but is more than just a distillation of these ideas. It has been several years in the making and benefits from the ongoing reflection this has allowed. Its genesis reflects his commitment to open, shared scholarship. Draft chapters of the book have been available in advance of the final release, with comments encouraged. Doug used a tapering cost model – the earlier you got involved, the lower the purchase cost. The final edition is DRM-free and sharing is permitted: he includes the line ‘You’re welcome to share it with your friends, but please do encourage them to purchase a copy if they find it useful.’
We all have to eat…

The book comprises nine chapters. The first is an introduction, which explains how the remainder of the book is structured and suggests paths through it. Chapter 2 attempts to define the problem the book addresses: Doug explores the ideas of ‘digital’ and ‘literacy’ (in reverse order) and the reader learns to replace the concept of literacy with literacies. Chapter 3 stresses the ambiguous nature of such ideas, arguing that this ambiguity should be actively embraced rather than avoided. Models of digital literacies are critiqued in chapter 4, with an alternative – Doug’s eight Essential Elements of Digital Literacies – offered in chapter 5. The rest of the book applies this framework. Chapter 6 uses memes as a way to understand digital texts. The next chapter looks at remixes (with a brief nod to copyright) and chapter 8 (perhaps unsurprisingly, given Doug’s current work at Mozilla) looks at coding the web. The final chapter provides a conclusion and encourages the reader to rip and remix the book.

Doug manages to draw a lot of ideas together in his book – we travel from the invention of the printing press to World of Warcraft. He blends ideas from academic disciplines – education, linguistics, history, computing, philosophy – with everyday life – gaming, cooking, even furniture. The result could be a terrible pastiche, but it is not: Doug weaves the thread of digital literacies through the book, demonstrating the value this lens can provide. Some chapters are flagged as ‘skip this if you like’, but I think they are accessible enough and worth reading. Some sections (such as the challenge to the requirement for linear progress in education) leave you wanting more (note this is not necessarily a criticism).

A real strength of the book is the well-chosen examples used throughout – no technical knowledge is assumed. To illustrate the potential confusions around copyright, Doug uses the concept of recipes (yes, as in cooking), and he derives an enormous amount of meaning from an ugly-looking baby* and cheezy cats when analyzing internet memes.

*Apologies to his mother, Flickr user Laney G. Checking her photostream shows said picture to be an uncharacteristic shot. He’s better looking than me; I think I should stop there…

In summary, The Essential Elements of Digital Literacies is a short, informative book written in a clear, often amusing style. If I were being really picky, it could probably benefit from one less font, but that is a minor criticism and does not detract from the thoughtfulness of the debate. I think it is one of those books that cannot be read widely enough, and I recommend it to anyone. Reading it will not instantly make you digitally literate, but it will give you an understanding of why this is important and offer a framework to help you reflect on your own practice and that of others.


Content & Assignments

Week 4 of #BlendKit2014 is looking at ways to facilitate student learning, through student engagement with content items in general and assignments in particular. The reading began with a brief nod towards Didaktik design, which, at least at one level, can be taken to address the what, why and how of learning by focussing on the design of learning activities/environments to achieve a particular pedagogical outcome. The authors also draw our attention to the opportunity and dilemma posed by the ever-increasing number of online tools available. They stressed that relevant and integrated activities were the keys to success – hard to argue against that! They were also advocates of consistency, possibly going one step further, citing this quote from Kaminski & Currie (2008, p. 205):

Within each Learning activity, uniformity also helps to guide students through the content.

We were asked to reflect on these four questions:

  1. In what experiences (direct or vicarious) will you have students participate during your blended learning course? In what ways do you see these experiences as part of the assessment process? Which experiences will result in student work that you score?
  2. How will you present content to students in the blended learning course you are designing? Will students encounter content only in one modality (e.g., face-to-face only), or will you devise an approach in which content is introduced in one modality and elaborated upon in the other? What will this look like?
  3. Will there be a consistent pattern to the presentation of content, introduction of learning activities, student submission of assignments, and instructor feedback (formal and informal) in your blended learning course? How can you ensure that students experience your course as one consistent whole rather than as two loosely connected learning environments?
  4. How can specific technologies help you present content, provide meaningful experiences, and pitch integration to students in your blended course? With your planned technology use, are you stretching yourself, biting off more than you can chew, or just maintaining the status quo?

Each of these is addressed in turn below:

Student Experiences

I am answering this question thinking about a blended course we are planning to run to support school students making the transition to HE. I think the key will be to design activities where students are willing to express their own opinions, test their knowledge and possibly get things wrong the first time. I think that automated testing systems (e.g. online formative/diagnostic quizzes) might be a good way of encouraging people to engage openly and honestly, supplementing this with online and face to face discussion once individuals have a bit of confidence.

Presenting Content

The course will include both online and face to face components. In part this is to give students a degree of flexibility regarding where and when they will take it. Some of the material can really only be delivered online – e.g. using the Stanford Teaching Privacy tools to establish how big your digital footprint is, or watching short ‘vox pop’ videos from former students. Equally, it would be useful to have some face to face sessions, to help establish relationships and a sense of community amongst the learners and to get early feedback if things aren’t going as planned.

Consistency

This is a really important area, too easily overlooked. A lack of consistency in layout/structure is one of the most common complaints we get in student feedback about the online component of blended courses. There is a fine line to be walked between a common structure and effectively dictating the structure of a course. When I started at university I was firmly in the ‘let staff structure their course as they see fit’ camp (railing against any institutional template – think PowerPoint). As time has gone by and I have discussed these issues with students (see a recent student-led project I was involved in), I have changed position and am now in the ‘minimum thresholds’ camp that seems to be gathering momentum in UK HEIs (e.g. this fine example from Newcastle University) – though not everyone agrees (see David Jones’ blog).

Use of Technology

As a practising learning technologist, I hope I can get this bit right! I’d like to offer students and staff the option to embed video directly from their webcam as a form of comment. I think that might be a bit more immediate and engaging than a purely text-based form of discussion.

Canvassing Opinion

I spent today in Edinburgh at a presentation about Canvas – a VLE by the strangely named company Instructure (it doesn’t exactly roll off the tongue). To declare a potential interest, I currently work for an institution that has a long history with a competitor – Blackboard. I leave it to the reader to decide whether/how that colours my comments.

A large part of the day was spent listening to staff, not salespeople – Darren Marsh and Margaret Donnison from the University of Birmingham, which has recently been through the periodic VLE review process that every educational institution undergoes from time to time. In Birmingham’s case, their current VLE (WebCT) had gone ‘end of life’, so they were facing a migration regardless of which product they chose. In that sense it is an excellent example of an unbiased evaluation, as there was no status quo option. On the downside, WebCT is pretty long in the tooth and would fare poorly in a comparison with almost any VLE on the market today.

As well as needing a new VLE, the University felt that distance and blended learning were becoming more important and that the market was undergoing a period of disruption due to factors such as increased student fees, MOOCs and alternative modes of delivery. Their needs were clearly expressed in a few points:

  • a high quality product
  • fit for purpose
  • distinctive

That last point is interesting – in a market dominated by a small number of vendors, is there a risk that all institutional offerings look the same? This is an intriguing proposition that I have some issues with – is the online learning experience only skin deep? Equally, does just changing the appearance of content (akin to applying a different template to a WordPress site) significantly alter the learner’s experience in any meaningful way? That doesn’t fit with my experience. The MOOC I have learnt most from so far was the #OCL4Ed course hosted on WikiEducator/MediaWiki. It looked awful and was hard to navigate, but the learning activities were well designed and stimulated meaningful collaboration amongst the participants (e.g. commenting on each other’s blogs – see http://apperleyrd.wordpress.com/).

A question I didn’t think to ask at the time was where this notion of distinctiveness had come from. Was it requested by academics tired of working in the same old online courses, by students, or perhaps by marketing? I have seen a lot of student feedback describing institutional VLEs as ‘clunky’ and ‘tired looking’, but I’ve never seen any students asking for them to be more distinctive!

The findings of Birmingham’s detailed tender process were echoed in the subsequent demonstration of the Canvas product – there is a large feature set common across all the major VLE platforms. We saw demonstrations of online marking using the cloud Crocodoc/Box View service, adaptive release of content based on dates and test scores, and integration with third-party services such as Kaltura, Panopto and YouTube. Whilst slick, these features should have been familiar to the audience, and many required the purchase of third-party services (e.g. Kaltura and Panopto). Assignment workflow was a little disappointing, lagging behind that in Moodle or Blackboard – no support for moderated marking, anonymity and other features held dear (even if perhaps in some cases misguidedly) by many UK HEIs.

Great play was made of the ability to use the IMS LTI standard to connect to third-party systems. They publish an impressive catalogue of possible integrations at http://www.edu-apps.org/index.html. A closer inspection shows that very few of these services have been certified as compliant by IMS (see http://developers.imsglobal.org/catalog.html), which makes me wonder whether they take advantage of the full range of LTI features (e.g. populating the grade centre) or are just a simple launch point that may or may not actually implement LTI. For context, an LTI 1.x ‘basic launch’ is little more than an OAuth-signed form POST from the VLE to the tool, with richer services such as grade return layered on top – see the sketch below. Later I browsed through a few entries on edu-apps, and some of the comments about tools no longer working (including the YouTube integration) were a bit worrying – although in this case they might have just referred to integration via Moodle.
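
A minimal sketch of signing such a launch in Python, assuming the oauthlib package; the tool URL, consumer key/secret and launch parameters below are all placeholders, not any real integration:

```python
from urllib.parse import urlencode

from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

# Hypothetical launch parameters - real launches carry many more fields.
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "unit-3-quiz",
    "user_id": "student-42",
    "roles": "Learner",
}

# The key and secret are agreed between the VLE and the tool out of band.
client = Client("consumer-key", client_secret="shared-secret",
                signature_type=SIGNATURE_TYPE_BODY)

uri, headers, body = client.sign(
    "https://tool.example.com/launch",
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

# 'body' now carries the original parameters plus the oauth_* signature
# fields; POSTing it to the tool completes the launch. Grade passback (the
# Outcomes service) is a separate, similarly signed exchange - which is
# exactly what a bare launch point may never implement.
print(body)
```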

Also, although IMS are working on a standard for analytics data – Caliper – this is not yet ready to implement, so integrations that rely on LTI will not provide any tracking/usage data to the parent VLE. This is a missed opportunity both for staff interested in their learners’ actions in a given course and for those trying to aggregate data across courses, or attempting to measure the return on investment in a particular tool.

Interestingly, too, as with many other VLEs, the ability to integrate with third-party systems using LTI first requires action by a user with appropriate privileges (see http://www.edu-apps.org/tutorials.html). Whilst the documentation suggests this can be done at course level, in practice I think it may be restricted to system administrators – if only to keep the lawyers happy and to safeguard the privacy of our users – creating a potential bottleneck to innovation.

Canvas offered a distinctive hierarchy of accounts and sub-accounts (with permissions inherited) that allows you to model the university, breaking it down into faculties/colleges, then further into schools/departments, and assign branding, permissions, even custom JavaScript. This is interesting and something I plan to explore further. As ever, the devil is in the detail, and universities seem to excel at complicating situations. For example, should you divide things up by faculty, or by level of study (e.g. separating undergraduate from postgraduate courses)? Should the mode of delivery matter, differentiating between face to face, blended and distance courses? I wonder whether this account model can cope with several different overlapping hierarchies, and should these change in the future, how easy will it be to adapt?
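
To make the idea concrete, here is a toy sketch of such a tree – my own illustration of the inheritance pattern, not Canvas’ actual data model – where a setting cascades from parent to sub-account unless overridden:

```python
# Toy model: each account may override settings; unset values are inherited
# by walking up the tree towards the root account.
class Account:
    def __init__(self, name, parent=None, **overrides):
        self.name = name
        self.parent = parent
        self.overrides = overrides  # e.g. branding, permissions, custom JS

    def setting(self, key, default=None):
        node = self
        while node is not None:
            if key in node.overrides:
                return node.overrides[key]
            node = node.parent
        return default

university = Account("University", branding="default-theme", staff_can_edit=False)
faculty = Account("Medicine", parent=university, branding="med-theme")
school = Account("Dentistry", parent=faculty)

print(school.setting("branding"))        # "med-theme", from the faculty
print(school.setting("staff_can_edit")) # False, from the university root
```

A single-parent tree like this also illustrates the limitation: a course that is simultaneously ‘postgraduate’ and ‘distance’ has to pick one ancestor, which is exactly the overlapping-hierarchy problem raised above.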

Although only just coming to the end of their first year of using Canvas, Birmingham had found the time to solicit student feedback via questionnaires. The usual caveats apply: small sample sizes, the risk of only capturing the extremes of opinion, and the questionable use of some leading questions. Still, 84% of students agreed with the statement that they found Canvas easy to use, and an encouraging 88% found it useful for their studies. Perhaps more worrying is why the other 12% did not, assuming it contains links to the course materials, online tests and e-submission!

Common themes that the students praised were ease of use and a clean layout. Looking at Birmingham’s implementation (which provides a pretty standard Canvas course), you can understand the ease of use – the interface is relatively uncluttered and the content is restricted to materials relevant to the courses students are taking. There was no evidence of any portal functionality being delivered through Canvas – a later perusal of their website identified [my.bham], a student portal based on SunGard’s Luminis product.

The clean layout is an interesting comment. I’m not sure whether this means ‘it looks like Facebook/WordPress’ and just reflects the widespread adoption of this user interface model, or whether it reflects similarity to the old WebCT course structure students already knew. Screenshots showed Canvas templates with similarly labelled folders, some even going to the trouble of replicating the icons representing folders in WebCT. On a more positive note, it might be the result of carefully planned and structured courses on the new system.

One advantage of switching learning environments is that it offers the institution a chance to start again. It is all too easy for systems to become bloated over the years (like an old laptop) with content that is no longer used and courses based on copies of copies of copies, all of which can have a negative impact on performance. It also provides staff with the chance to review the content and online components of their course. Doing this across a whole institution, with a real fixed deadline where just using the same stuff as last year is not an option, has benefits that can’t be achieved through an isolated course review (though I’m not arguing you should stop doing those either; there’s just an extra benefit/multiplier effect when everyone is thinking, talking and sharing about this at the same time). It’s also a good time to check that all the links work, content is copyright cleared, etc.

It is also a good motivator to get staff to attend training. Birmingham use a mix of face to face workshops and online materials, with separate courses for staff and students.

As Canvas is a relative newcomer to the market, built as a hosted, scalable solution from day one, I was interested to see how it performs on mobile and tablet devices. Sadly there was no demonstration of responsive design comparing the experience in a standard browser at different screen sizes or on laptops and tablets 😦
Like many other vendors, they have released mobile apps for iOS and Android. I thought that the mobile UI they showed actually looked nicer than the standard one, with clear icons next to course menu buttons giving an extra clue to the functionality of the links. Special apps exist for dedicated tasks, e.g. the SpeedGrader app for online grading – which on a cursory inspection seems a bit like Turnitin’s GradeAnywhere app, though without support for offline marking of downloaded scripts.

This video shows Canvas deployed on a range of devices and footage of the custom SpeedGrader app:

A few eyebrows were raised around the room when they mentioned their approach to versioning/software release: there is only one version. They operate an agile approach, with new releases every three weeks. When probed, they conceded there is a degree of control: it is possible to turn off or delay the implementation of new features on your build. This is good news if you want to avoid any changes during key times (e.g. online exams), but it seems to contradict the one-version policy, and I am not sure how it works with their online help documentation – does it respect all these local settings?
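
One common way to square that circle is per-tenant feature flags: everyone runs the same codebase, but each institution’s configuration decides which features are active. A generic sketch of the pattern (my illustration, not Instructure’s actual mechanism):

```python
# One codebase, many tenants: each feature has a default rollout state,
# which an individual institution can override (e.g. to delay a change
# during an exam period).
FEATURE_FLAGS = {
    "new_gradebook": {"default": True, "overrides": {"our-university": False}},
    "quiz_redesign": {"default": False, "overrides": {}},
}

def enabled(feature: str, tenant: str) -> bool:
    flag = FEATURE_FLAGS[feature]
    return flag["overrides"].get(tenant, flag["default"])

print(enabled("new_gradebook", "our-university"))   # False: delayed locally
print(enabled("new_gradebook", "another-school"))   # True: default rollout
```

The help-documentation question still stands, though: unless the documentation is also flag-aware, users at a ‘delayed’ institution will read about features they cannot see.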

The product is only available as a hosted service, sitting on the Amazon AWS cloud, providing a scalable solution with a promise from Instructure (UK) of 99.9% uptime over a year – assuming it doesn’t fall foul of a denial of service attack by those angry about its approach to in-country taxation. They use the Dublin-based AWS European data centre for EU clients to keep everyone happy. It is unclear whether all the bundled extras – e.g. the Big Blue Button conferencing app – also offer an EU- or Safe Harbor-compliant solution.

Although Canvas’ origin lies with American computer science students dissatisfied with their online experience at the time (sound familiar?), the staff present in Edinburgh were keen to play the international card. It was good to hear them supporting localisation for different languages (no support for Gaelic yet) and offering research and development teams in-country – in the case of the UK, in London. As a small fish in a pond still dominated by the US, it is always nice to know that someone is listening and able to act locally.

Although we ran out of time, there are also analytics options, and Instructure staff were keen to hear from UK institutions wanting to use their Canvas Network product to facilitate MOOCs (like #BlendKit2014).

More information about Birmingham’s experience can be found on the UK Canvas site (though tread carefully, as the comparison table Canvas publish doesn’t give me much confidence in their QA – I found errors in the third row, Development Technology). They also link to this video; note it was uploaded to YouTube by Canvas, not Birmingham:

Some final thoughts:

Q. Did the day leave me feeling that our current platform (Blackboard) was pedestrian or had been eclipsed?
A. No – some features in Canvas look slicker/more mature/better than Blackboard, but equally some features in Blackboard look slicker/more mature/better than Canvas.

Q. If I were looking to implement a VLE from scratch, or undergoing a review of provision, would Canvas be on my shortlist?
A. Yes.

 

Featured Image: iVincent by JD Hancock, shared on http://photos.jdhancock.com/photo/2014-02-22-200113-ivincent.html

Blended Assessment

Week 3 of #BlendKit2014 is looking at assessment – how we know that our students are learning something from the course (hopefully linked to the learning outcomes). Kelvin Thompson and his colleagues began with the reasonable claim that ‘it is imperative that assessment is provided to check the depth of students’ learning’. They also stressed the importance of making the learning applicable, or else students adopting a strategic approach may not engage with it. The question is, who is checking the depth of a student’s learning, and why?

We were provided with some thought provoking reading and asked to reflect on these four questions:

  1. How much of the final course grade do you typically allot to testing? How many tests/exams do you usually require? How can you avoid creating a “high stakes” environment that may inadvertently set students up for failure/cheating?
  2. What expectations do you have for online assessments? How do these expectations compare to those you have for face-to-face assessments? Are you harbouring any biases?
  3. What trade-offs do you see between the affordances of auto-scored online quizzes and project-based assessments? How will you strike the right balance in your blended learning course?
  4. How will you implement formal and informal assessments of learning into your blended learning course? Will these all take place face-to-face, online, or in a combination?

Each of these is addressed in turn below:

How much testing to do?

I’m not sure this is the right question! I think the question should be when/why are tests needed in your course? I like diagnostic tests at the start of a course (ideally tied to a Just in Time Teaching model of delivery, tailoring the rest of the course to the knowledge and experience of the students); students should be free to take these as often as they want. For an online learner, the need for some sort of progress report – confirmation that you are on track – is possibly even greater when you have less (or possibly no) face to face time with teaching staff. Short tests throughout the course can meet this need. My only real concern is with the final assessment – how best can this be done online?

Quite a few of the participants in the live webinar expressed concern over the potential for cheating. Perhaps this is why there is now a MOOC on Canvas looking at online cheating – which I discovered via this article in the Chronicle of Higher Education. This saddens me a bit. I’m not a fan of camera-based remote proctoring solutions, particularly if the student has to purchase them. If I have to choose between spending time devising ways to stop students cheating or trying to make my courses better, I’d rather do the latter. In the end, cheats are only cheating themselves.

My expectations of online testing

The question ‘Are you harbouring any biases?’ was unexpected, but on reflection I think it is a fair one. I have certainly changed my stance. When I started, I worked with staff on a medical course and noted to my horror that although many students were starting online assessments, only a few finished the tests. Were they too hard? The fact that these tests were delivered online meant we could ask this question, but to get to the answer I had to talk to the students. It turned out that we had come across an example of impromptu group work. Students went to a computer lab (that dates this anecdote) to start the tests on their own. Partway through, a friend came in (or spotted them amongst the banks of monitors). Rather than work through the questions alone, they discovered it was more effective to discuss the questions as a group, trying to justify their answers to each other, before one person submitted the result on behalf of the group. That explained the high drop-off rate and taught me to take nothing for granted!

The trade-offs

The trade-offs seem pretty clear. Anything that can be automatically marked, providing students with rapid feedback, is constrained by those marking tools. If they use some form of pattern-based scoring, then poorly designed questions or distractors (e.g. offering students the choice between two words that are similarly spelled but have very different meanings – conservative and conservation) may seriously misrepresent some students’ learning. More creative, personal assessment options offer the chance to encourage deeper learning, but require more skilled interpretation. David Nicol and his colleagues (2013) have shown how peer feedback (N.B. not grading) can help everyone learn from the process, and perhaps that offers one way out.
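
As a toy illustration of the risk – not any real quiz engine – a fuzzy match intended to forgive typos will happily accept a similarly spelled distractor with a very different meaning:

```python
import difflib

def lenient_mark(answer: str, expected: str, threshold: float = 0.8) -> bool:
    """Accept an answer if it is 'close enough' to the expected string."""
    ratio = difflib.SequenceMatcher(None, answer.lower(), expected.lower()).ratio()
    return ratio >= threshold

# The typo-forgiveness works as intended...
print(lenient_mark("conservatve", "conservative"))   # True
# ...but a different word with a different meaning also passes.
print(lenient_mark("conservation", "conservative"))  # True: marked correct!
```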

I was also struck, at a recent learning and teaching conference, by how engaged students were in a project where they were asked to create a short (2 minute) video to explain a key concept in the course. Here, the challenge was knowing what to leave out. That’s not something you can mark automatically, but it could be a great online submission task.

Implementation

In a true blended course, you have the luxury of both face to face and online. I think I prefer online diagnostic and formative assessments, but would keep the summative work offline. I think that also reduces the stress for both staff and students (no-one really wins when a big online exam goes ‘castors up’, as they say in the world of TV repairs). That’s probably why I don’t think it’s worth spending money on anti-cheating hardware. Spend it on e-books instead 🙂

Featured Image by Jared Stein, shared under a CC BY-NC-SA license on https://www.flickr.com/photos/5tein/2348649408/

Blended Interactions

This week’s #BlendKit2014 session explored how much support and guidance students should get in an online course and posited four models of educators and learners:

  1. Atelier Learning – akin to an art studio, where students can learn from the work of each other as well as the teacher – John Seely Brown (2013)
  2. Educators as network administrators – Clarence Fisher – where learners as well as educators can help construct and plug gaps in our learning/knowledge networks
  3. Educators as concierge – Curtis Bonk (2007) – where the educator provides ‘soft guidance’, directing learners towards resources and ideas that they may not yet be aware of
  4. The educator as curator – George Siemens (2007) – where learners are free to explore, but the expert curator helps them to engage with the key concepts of a discipline.

For more information see the full text from which the above references were obtained.

John Seely Brown’s conceptualisation of teachers as artists and architects reminded me of the artisan representations used by Hokanson, Miller and Hooper (2007) in their discrediting of ADDIE. I didn’t find Fisher’s model very useful, as I find it very hard to visualise what a learning network would/should look like – even in these days of social network analysis! I was a student on one of Curtis Bonk’s MOOCs and so can claim first-hand experience of his concierge approach. At times it felt like a relentless barrage of concepts, where the learner has little time to get to grips with one idea before being presented with the next. That may have just been me getting the balance of online and offline wrong, and it was certainly very stimulating. Siemens’ view of the curatorial educator who ‘balances the freedom of individual learners with the thoughtful interpretation of the subject being explored’ is very seductive – who wouldn’t want to be taught in that way, or indeed to be able to teach in it? I’ve also been on one of his cMOOCs, and it had a very different style.

We were asked to reflect on these four questions:

  1. Is there value in student-to-student and student-to-instructor interaction in all courses regardless of discipline?
  2. What role does interaction play in courses in which the emphasis is on declarative knowledge (e.g., introductory “survey” courses at the lower-division undergraduate level) or, similarly, in courses that cultivate procedural knowledge (e.g., technical courses requiring the working of problem sets)?
  3. As you consider designing a blended learning course, what kinds of interactions can you envision occurring face-to-face, and how might you use the online environment for interactions? What opportunities are there for you to explore different instructional strategies in the blended course than you have in the past?
  4. What factors might limit the feasibility of robust interaction face-to-face or online?

Each of these is addressed in turn below:

The value of interaction

I think interaction should be valued in any discipline – in essence, it is a universal. Student-student interaction can be very different from student-teacher interaction. The former may at times be more likely to achieve learning (particularly of threshold concepts), as it may use a common language – that of the novice – rather than suffering the mismatch between the vocabulary of the novice and that of the practitioner.

The role of interaction

I don’t think interaction should be omitted from declarative or procedural courses. Omitting it would imply that there is nothing more to learn, or no better way in which the subject can be taught. Even if the interaction is limited to explaining concepts to your peer group, I think this has the potential to advance the understanding both of those involved and of those listening/reading along.

Planning interactions

Deciding which activities are best online and which face to face is tricky, and is one of the areas that I hope will become clearer through my participation in this course. It would seem sensible for the face to face activities either to be designed to help socialise the group, or to relate to tasks which students might find difficult – e.g. where it is unclear how to begin, where they would benefit from scaffolding, and where they are likely to seek early confirmation from teaching staff (or their peers). Online activities may provide learners with a greater opportunity to reflect and prepare their argument – e.g. finely honing a video presentation before sharing it with the group.

Limits to interaction

Obvious limits to interaction are a lack of time and a lack of engagement/motivation. The ‘atmosphere’ of the course is also important – is it acceptable to try and fail, and indeed have these opportunities been designed into the course? If students are presented with a ‘course and a half’, they may react by adopting a strategic approach and only participating in the activities which result in grades.

It is also unrealistic to expect that all the interaction will occur within the chosen environment and be visible to the educator. There will be face to face discussions in the pub on blended courses, or exchanges on non-institutional systems (such as Facebook) for online courses – of value to the learners precisely because they are out of the gaze of their teachers (see this article in the Independent).

Featured Image by Cobalt123, shared under a CC BY-NC-SA license on https://www.flickr.com/photos/cobalt/2626780211

Understanding Blended Learning

This post is my first as part of the UCF BlendKit 2014 course on Canvas – see https://www.canvas.net/courses/becoming-a-blended-learning-designer for more details. For this assignment I have been asked to review materials in the first chapter of the toolkit accompanying the course:

Blended Learning Toolkit

Materials in the toolkit have been shared under a CC BY-NC-SA license.

 

Participants have been asked to reflect on the nature of blended learning and consider four questions:

  1. Is it most helpful to think of blended learning as an online enhancement to a face-to-face learning environment, a face-to-face enhancement to an online learning environment, or as something else entirely?
  2. In what ways can blended learning courses be considered the “best of both worlds” (i.e., face-to-face and online)? What could make blended learning the “worst of both worlds?”
  3. As you consider designing a blended learning course, what course components are you open to implementing differently than you have in the past? How will you decide which components will occur online and which will take place face-to-face? How will you manage the relationship between these two modalities?
  4. How often will you meet with students face-to-face? How many hours per week will students be engaged online, and how many hours per week will students meet face-to-face? Is the amount of student time commitment consistent with the total time commitment of comparable courses taught in other modalities (e.g., face-to-face)?

Each of these is addressed in turn below:

What is Blended Learning?

My first experience of blended learning came from courses that were originally taught face to face and which have gradually been “adapted” for blended delivery. I think many others may be the same (even in a MOOC, much of the material may have come from a f2f course). The key, I think, is the degree of adaptation. Simply putting files online (the ‘document dump’ – sensu Horrigan and Clark) isn’t really blended learning in my book. Whilst I am no fan of trying to set a required threshold of online vs. face to face activities, for me, to be truly blended, there must be at least some activities that need to be carried out online. That implies some conscious design of these activities, and so aligns with the thinking of McGee & Reis (2012) cited in the paper.

Is it really the best of both worlds?

Face to face is surely the best, but only if the timing works for both parties. That is not to say it is the most cost-effective, the most scalable or the most flexible. Done well, blended learning should help address some of these restrictions, particularly as it may allow learners to repeat sections until they achieve that ‘Eureka moment’. That was certainly something that hit home in a presentation I saw by Sal Khan, where it took one man over 50 plays of a video before he finally grasped a particular mathematical concept. His point was that blended learning allowed the lesson to be replayed 50 times with equal patience and in the absence of judgement. That may be true, but I couldn’t help wondering whether a real teacher could have changed the instruction and got him there more quickly.

The risk though is that what is delivered in a blend is a confusing pastiche, lacking the consistency of a fully online or face to face course.

Managing the Modalities

One phrase that worried me in the kit was “Content is king”. If that were true, then surely MIT’s OpenCourseWare project would have been a case of online suicide. It also didn’t fit well with the rest of the discussion, which reassuringly focussed on the activity of the learner, examining course planning approaches on a spectrum from teacher-centred to learner (or learning) centred.

Getting the Timing Right

I think translating learning activities from the classroom to online is one of the hardest things to do (and I admit to occasionally still getting the timings of my lectures wrong). I’m not sure there is a magic formula for getting it right first time, but I think online components should give teaching staff a better idea of just how long students spend on a task – e.g. the number of edits on a blog post.
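
As a rough sketch of that idea, assuming a WordPress blog exposing the standard REST API and credentials permitted to read revisions (the site URL, post ID and login below are all placeholders):

```python
import requests

SITE = "https://example-blog.example.com"  # placeholder blog URL
POST_ID = 123                              # placeholder post ID

# The core WordPress REST API exposes a post's revision history at
# /wp-json/wp/v2/posts/<id>/revisions (authentication required).
resp = requests.get(
    f"{SITE}/wp-json/wp/v2/posts/{POST_ID}/revisions",
    auth=("tutor", "application-password"),  # placeholder credentials
    timeout=10,
)
resp.raise_for_status()
revisions = resp.json()

# A crude proxy for effort: how many times was the post revised, and over
# what period? ISO-8601 date strings sort correctly as plain text.
print(f"Post {POST_ID}: {len(revisions)} revisions")
dates = [r["date"] for r in revisions]
if dates:
    print("first edit:", min(dates), "| latest edit:", max(dates))
```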

Featured Image by Rob Sutton, shared under a CC BY-NC license on https://www.flickr.com/photos/rsutton1223/4196233702/

No more writing on the wall?

For the last ten years, electronic whiteboards – such as those produced by Promethean and SMART Technologies – have been standard items in any classroom refit, and usually enthusiastically received by staff and students (see Smith, Higgins et al., 2005). Every day when I drop my children off at primary school, they walk into rooms where the electronic whiteboard is up and running. Any parent who arrives late and has to take their child into the classroom is likely to see them all performing a five-minute exercise routine following instructions on the board! For examples see this Pinterest site or the Activityworks website; the latter includes some explanations of why some people believe this approach is effective.

My observation of whiteboard use in schools is very different from that in universities, where the boards now usually sit switched off (in some cases hidden behind larger projection screens)! Even in areas designed for group work – e.g. booths where students sit around a table with a laptop and an electronic whiteboard at one end – more often than not the whiteboard is unused. The question is why?

This is a technology that showed a lot of early promise. A study by two staff at the University of Pittsburgh, published in 2012, has caught the attention of SMART (in that they add a link of dubious legality to it on their website). Jang and Schunn watched the way groups of engineering students interacted with, and were constrained by, the technology. The authors contrast what they term “individual tools”, such as a computer or a person’s notes, with “shareable/collaborative tools”, such as an electronic whiteboard or a physical prototype. Their results suggest that students who used collaborative tools from the start, and continued to use them throughout the project, were more likely to deliver. There’s a bit of circularity here, and I don’t feel you can unpick whether the availability of the boards increased communication, or whether it just shows that people who were already experienced at group communication made good use of the available collaborative tools. SMART certainly hope you take the former view, as you can see in this infographic they published summarising the findings. Opinion remains divided – e.g. this 2010 study by Torff & Tirotta suggests that some of the motivation-enhancing effects often associated with electronic whiteboards are overstated.

From my personal experience of using the boards, I have come across a number of problems (ignoring the high cost of these devices):

  1. They are usually poorly placed in rooms (particularly “meeting rooms”), meaning that many people sit with their back to the board and have to turn away from the rest of the group to see it. This inhibits conversation, or favours their use by small groups. The worst example is putting them at one end of a table in a booth, making them essentially off-limits for everyone but the two people nearest the board.
  2. They are often too small, making the content hard to read and annotations blocky. Too often they are not as good as a plain whiteboard, failing even at the first substitution stage. If you are trying to project a high-resolution image, the projectors often aren’t up to the job, with neither the resolution nor the contrast required. Solutions that can make use of an LED or plasma TV give much better results – digital versions of microscope slides can finally look as good as old Fujichrome slides!
  3. Pens and erasers can go missing, and the on-screen tools you can drive with your fingers are always a bit clunky.
  4. Most boards only support one “touch” at a time. This means two people can’t really draw at the same time – which is something I’d hope would be a key part of collaboration. Users accustomed to navigating with multi-touch gestures on their phones and tablets will find the electronic whiteboard a frustrating experience. Suppliers are catching up, but I still think the model is wrong.
  5. The boards can do strange things with other USB devices connected to the same computer (e.g. blocking voting system dongles or some slide remotes). Collaboration tools should play nicely together in my opinion and not restrict you to the tools built into the board.
  6. The required software can be a bit flaky, and some versions are not as backwards-compatible as they should be. Furthermore, the developers seem to assume the tools will be deployed via helper apps that launch on start-up – do they really expect staff to keep their laptop permanently shackled to a whiteboard?
  7. Finally – and in my opinion the worst failing – they only work when you stand in front of them. If you fix them at a height where most people can reach both the top and bottom of the screen, then you probably won’t see much if someone is sitting between you and the board, and it is difficult to use them without turning your back on the rest of the room. Yes, I have seen setups linked to an interactive tablet, but this still needs tethering via USB and so is rarely passed around the table. Perhaps in response to this failing, some boards are now available as “tables”. Whilst the videos of people flicking through and rotating photos look slick, I’m not sure this is really the action most conducive to learning in a tutorial or seminar setting. And if the table becomes the screen, then you can’t put things on it – what good is a meeting if there is nowhere to put your coffee cup? How will you take notes?

Despite this list, I have seen people use them, and use them well. With a carefully designed “board-centric” activity whose output is captured, the focus of participants can shift from trying to record what is happening to actually making things happen. That has to be a good thing. The boards also allow annotation of figures on the fly, which can help address any issues that were not anticipated when you prepared the materials.

Whilst I was initially attracted by the large number of page templates, I think there is a danger of over-preparing the session. Students need to be free to contribute and take the session in a direction that, whilst still meeting the learning outcomes, may not be exactly what you had planned. I find that if I have spent a lot of time preparing particular slides, I am more resistant to deviating from them. That is wrong.

I think the answer is probably to stop trying to write directly on the board. There have been a lot of advances in educational technology since the first electronic whiteboards were designed; the two key ones for me are the rapid growth and availability of wireless networks and tablets. Technologies such as Apple’s AirPlay (sharing content from an iPad, or a Mac newer than mine, via an Apple TV) are very slick and free you from the constraints of a single app/program (no matter how good it is). Connection is literally child’s play, which should encourage staff and students to have a go. Rather than ask a student to come up to the board, or try to pass them a tethered tablet, surely it is better to let them take control of the screen from their own device. If someone wants to suggest a minor change, pass the tablet. That said, I think this model may be more suited to “serial collaboration”, unless the app you are sharing supports live collaboration (e.g. a wiki).

The cloud is also changing things – if someone takes a picture with a phone or tablet (perhaps the result of a particular experiment, or something that illustrates a point they want to make), how easy is it to get it displayed on-screen? Do you need to swap devices, or is there some common repository (be it Flickr, Dropbox, OneDrive or iCloud) that you can use to facilitate instant sharing?

If we do decide to replace electronic whiteboards, it might mean we can finally get rid of switches like this one that are just asking someone to see what happens if you do!

Temptation
