Canvassing Opinion

I spent today in Edinburgh at a presentation about Canvas – a VLE by the strangely named company Instructure (it doesn’t exactly roll off the tongue). To declare a potential interest, I currently work for an institution that has a long history with a competitor – Blackboard. I leave it to the reader to decide whether/how that colours my comments.

A large part of the day was spent listening to staff, not sales people – Darren Marsh and Margaret Donnison from the University of Birmingham, which has recently been through the periodic VLE review process that every educational institution undergoes. In Birmingham’s case, their current VLE (WebCT) had gone ‘end of life’ and so they were facing a migration regardless of which product they chose. In that sense it is an excellent example of an unbiased evaluation, as there was no status quo option. On the downside, WebCT is pretty long in the tooth and would fare poorly in a comparison with almost any VLE on the market today.

As well as the need to find a new VLE system, the University felt that distance and blended learning were becoming more important and that the market was undergoing a period of disruption due to factors such as increased student fees, MOOCs and alternative modes of delivery. Their needs were clearly expressed in a few points:

  • a high quality product
  • fit for purpose
  • distinctive

That last point is interesting – in a market dominated by a small number of vendors, is there a risk that all institutional offerings look the same? This is an intriguing proposition that I have some issues with – is the online learning experience only skin deep? Equally, does just changing the appearance of content (akin to applying a different template to a WordPress site) significantly alter the learner’s experience in any meaningful way? That doesn’t fit with my experience. I think the MOOC I have learnt most from so far was the #OCL4Ed course hosted on WikiEducator/MediaWiki. It looked awful and was hard to navigate, but the learning activities were well-designed and stimulated meaningful collaboration amongst the participants (e.g. commenting on each other’s blogs – see http://apperleyrd.wordpress.com/).

A question I didn’t think to ask at the time was where this notion of distinctiveness had come from. Was it requested by academics, tired of working in the same old online courses? Did it come from students, or perhaps from marketing? I have seen a lot of student feedback describing institutional VLEs as ‘clunky’ and ‘tired looking’, but I’ve never seen any students asking for them to be more distinctive!

The findings of Birmingham’s detailed tender process were echoed in the subsequent demonstration of the Canvas product – there is a large feature set common across all the major VLE platforms. We saw demonstrations of online marking using the cloud Crocodoc/Box View service, adaptive release of content based on dates and test scores, and integration with third party services such as Kaltura, Panopto and YouTube. Whilst slick, these features should have been familiar to the audience and many required the purchase of third party services (e.g. Kaltura and Panopto). Assignment workflow was a little disappointing, lagging behind that in Moodle or Blackboard – no support for moderated marking, anonymity and other features held dear (even if perhaps in some cases misguidedly) by many UK HEIs.

Great play was made of the ability to use the IMS LTI standard to connect to third party systems. Instructure publish an impressive catalogue of possible integrations at http://www.edu-apps.org/index.html. A closer inspection shows that very few of these services have been certified as compliant by IMS (see http://developers.imsglobal.org/catalog.html), which makes me wonder whether they take advantage of the full range of LTI features (e.g. populating the grade centre) or are just a simple launch point that may or may not actually implement LTI. Later I browsed through a few entries on edu-apps and some of the comments about tools no longer working (including the YouTube integration) were a bit worrying – although in this case they might have just referred to integration via Moodle.
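For the curious, there is no magic in a basic LTI 1.1 launch – it is just an ordinary form POST signed with OAuth 1.0a, which is partly why so many ‘integrations’ are little more than a launch point. A minimal sketch of the signing step (the tool URL, key and secret below are invented for illustration):

```python
import base64, hashlib, hmac, time, uuid
from urllib.parse import quote

def _enc(s):
    # RFC 3986 percent-encoding, as required by OAuth 1.0a
    return quote(str(s), safe="~")

def sign_lti_launch(url, params, consumer_key, shared_secret):
    """Return form parameters for a basic LTI 1.1 launch,
    signed with OAuth 1.0a HMAC-SHA1."""
    p = dict(params)
    p.update({
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_timestamp": str(int(time.time())),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_version": "1.0",
    })
    # Signature base string: encoded, sorted parameters joined to method and URL
    pairs = sorted((_enc(k), _enc(v)) for k, v in p.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base = "&".join(["POST", _enc(url), _enc(param_str)])
    key = _enc(shared_secret) + "&"  # no token secret in an LTI launch
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    p["oauth_signature"] = base64.b64encode(digest).decode()
    return p

# Hypothetical tool and credentials, purely to show the shape of a launch
launch = sign_lti_launch(
    "https://tool.example.com/launch",
    {"resource_link_id": "course-42-item-7", "user_id": "s123", "roles": "Learner"},
    consumer_key="my-key", shared_secret="my-secret")
```

The fuller LTI features – grade passback via the outcomes service, for instance – require the tool to make signed calls in the other direction, which is exactly the part a bare launch point can skip.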

Also, although IMS are working on a standard for analytics data – Caliper – it is not yet ready to implement, so integrations that rely on LTI will not provide any tracking/usage data to the parent VLE. This is a missed opportunity, both for staff interested in their learners’ actions in a given course and for those trying to aggregate data across courses, or attempting to measure the return on investment in a particular tool.

Interesting too that, as with many other VLEs, the ability to integrate with 3rd party systems using LTI first requires action by a user with appropriate privileges (see http://www.edu-apps.org/tutorials.html). Whilst the document suggests this can be done at a course level, in practice I think this may be restricted to system administrators – if only to keep the lawyers happy and to safeguard the privacy of our users – creating a potential bottleneck to innovation.

Canvas offered a distinctive hierarchy of user accounts and sub-accounts (with permissions inherited) that allows you to model the University, breaking it down into faculties/colleges, then further into schools/departments, and assign branding, permissions, even custom JavaScript. This is interesting and something I plan to explore further. As ever the devil is in the detail, and Universities seem to excel at complicating situations. For example, should you divide it up by faculty, or by level of study (e.g. separating undergraduate from postgraduate courses)? Should the mode of delivery matter – differentiating between face to face, blended and distance courses? I wonder if this user account model can cope with several different overlapping hierarchies? And should these structures change in the future, how easy will it be to reorganise?
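The inheritance model itself is simple enough to sketch. The structure below is my guess at the general idea, not Canvas’s actual implementation: each sub-account has a single parent, and a setting lookup walks up the tree until some ancestor defines it – which also shows why overlapping hierarchies are awkward, since a node can only have one parent.

```python
class Account:
    """One node in a VLE-style account tree; settings inherit from ancestors."""
    def __init__(self, name, parent=None, **settings):
        self.name, self.parent, self.settings = name, parent, settings

    def lookup(self, key, default=None):
        # Walk up the tree until some ancestor defines the setting
        node = self
        while node is not None:
            if key in node.settings:
                return node.settings[key]
            node = node.parent
        return default

# A hypothetical institutional structure
university = Account("University", branding="crest.css")
medicine = Account("College of Medicine", parent=university, branding="med.css")
dentistry = Account("School of Dentistry", parent=medicine)

print(dentistry.lookup("branding"))           # "med.css" – nearest ancestor wins
print(dentistry.lookup("language", "en-GB"))  # nothing set anywhere, so the default
```

To model both faculty structure and mode of delivery you would need either two separate trees or an extra attribute on each node – neither of which a single-parent hierarchy gives you for free.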

Although only just coming to the end of their first year of using Canvas, Birmingham had found the time to solicit student feedback via questionnaires. The usual caveats apply: small sample sizes, the risk of only capturing the extremes of opinion, and the questionable use of some leading questions. Still, 84% of students agreed with the statement that they found Canvas easy to use, and an encouraging 88% found it useful for their studies. Perhaps more worrying is why 12% did not, assuming that it contains links to the course materials, online tests and e-submission!

Common themes that the students praised were ease of use and a clean layout. Looking at Birmingham’s implementation (which provides a pretty standard Canvas course) you can understand the ease of use – the interface is relatively uncluttered and the content is restricted to materials relevant to the courses they are taking. There was no evidence of any portal functionality being delivered through Canvas – a later perusal of their website identified [my.bham], a student portal based on SunGard’s Luminis product.

The clean layout is an interesting comment. I’m not sure if this means ‘it looks like Facebook/WordPress’ and just reflects the widespread adoption of this user interface model, or whether it was very like the old WebCT course structure they already knew. Screenshots showed templates with similarly labelled folders on Canvas, some even going to the trouble of replicating the icons representing folders in WebCT. On a more positive note, it might be the result of carefully planned and structured courses on the new system.

One advantage of switching learning environments is that it offers the institution a chance to start again. It is all too easy for systems to become bloated over the years (like an old laptop) with content that is no longer used, and courses based on copies of copies of copies, all of which can have a negative impact on performance. It also provides staff with the chance to review the content and online components of their course. Doing this across a whole institution and with a real fixed deadline, where just using the same stuff as last year is not an option, has benefits that can’t be achieved through an isolated course review (though I’m not arguing you should stop doing that either; there’s just an extra benefit/multiplier effect when everyone is thinking, talking and sharing about this at the same time). It’s also a good time to check all the links work, content is copyright cleared, etc.

It is also a good motivator to get staff to attend training. Birmingham use a mix of face to face workshops with online materials – with separate courses for staff and students.

As Canvas is a relative newcomer to the market, built as a hosted, scalable solution from day one, I was interested to see how it performs on mobile and tablet devices. Sadly there was no demonstration of responsive design comparing the experience in a standard browser at different screen sizes and on laptops and tablets 😦
Like many other vendors, they have released mobile apps for iOS and Android. I thought that the mobile UI they showed actually looked nicer than the standard one, with clear icons next to course menu buttons giving an extra clue to the functionality of the links. Special apps exist for dedicated tasks, e.g. the SpeedGrader app for online grading – which on a cursory inspection seems a bit like Turnitin’s GradeAnywhere app, though without support for offline marking of downloaded scripts.

This video shows Canvas deployed on a range of devices and footage of the custom SpeedGrader app:

A few eyebrows were raised around the room when they mentioned their approach to versioning/software release: there is only one version. They operate an agile approach with new releases every three weeks. When probed, they conceded there is a degree of control: it is possible to turn off or delay the implementation of new features on your build. This is good news if you want to avoid any changes during key times (e.g. online exams), but it seems to contradict the one version policy and I am not sure how it works with their online help documentation – does it respect all these local settings?
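The ‘one version, local switches’ model presumably boils down to per-tenant feature flags: everyone runs the same release, but each institution keeps its own table of features to disable or delay. A rough sketch of the idea (feature names and dates are invented):

```python
from datetime import date

class FeatureFlags:
    """One codebase, per-institution switches: every tenant runs the same
    release, but can disable or delay individual features locally."""
    def __init__(self, overrides=None):
        # feature -> False (off), or a date to delay enabling until
        self.overrides = overrides or {}

    def enabled(self, feature, today=None):
        today = today or date.today()
        setting = self.overrides.get(feature, True)  # new features default on
        if setting is False:
            return False
        if isinstance(setting, date):
            return today >= setting  # held back until e.g. after the exam period
        return True

# A hypothetical institution delays one feature past its exam window
# and opts out of another entirely
flags = FeatureFlags({"new_gradebook": date(2014, 7, 1), "quiz_beta": False})

print(flags.enabled("new_gradebook", today=date(2014, 6, 1)))  # False – delayed
print(flags.enabled("new_gradebook", today=date(2014, 8, 1)))  # True – now live
print(flags.enabled("discussions_redesign"))                   # True – ships by default
```

This is exactly why the help documentation question matters: static documentation describes the single current release, not the particular combination of switches any one institution is running.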

The product is only available as a hosted service, sitting on the Amazon AWS cloud, providing a scalable solution, with a promise from Instructure (UK) of 99.9% uptime over a year – assuming it doesn’t fall foul of a denial of service attack by those angry about its approach to in-country taxation. They use the Dublin-based AWS European Data Centre for EU clients to keep everyone happy. It is unclear whether all the bundled extras – e.g. the Big Blue Button conferencing app – also offer an EU- or Safe Harbor-compliant solution.

Although Canvas’ origin lies with American computer science students dissatisfied with their current online experience (sound familiar?), the staff present in Edinburgh were keen to play the international card. It was good to hear them supporting localisation for different languages (no support for Gaelic yet) and with research and development teams available in-country – in the case of the UK, in London. As one of the small fishes in a pond still dominated by the US, it is always nice to know that someone is listening and able to act locally.

Although we ran out of time, there are also analytics options, and Instructure staff were keen to hear from UK institutions wanting to use their Canvas Network product to facilitate MOOCs (like #BlendKit2014).

More information about Birmingham’s experience can be found on the UK Canvas site (though tread carefully, as the comparison table Canvas publish doesn’t give me much confidence in their QA – I found errors in the third row: Development Technology). They also link to this video; note it was uploaded to YouTube by Canvas, not Birmingham:

Some final thoughts:

Q. Did the day leave me feeling that our current platform (Blackboard) was pedestrian or had been eclipsed?
A. No – some features in Canvas look slicker/more mature/better than Blackboard, but equally some features in Blackboard look slicker/more mature/better than Canvas.

Q. If I was looking to implement a VLE from scratch or undergo a review of provision would Canvas be on my short list?
A. Yes.

 

by Featured Image iVincent by JD Hancock shared on http://photos.jdhancock.com/photo/2014-02-22-200113-ivincent.html

Blended Assessment

Week 3 of #BlendKit2014 is looking at assessment – how to know that our students are learning something from the course (hopefully linked to the learning outcomes). Kelvin Thompson and his colleagues began with the reasonable claim that ‘it is imperative that assessment is provided to check the depth of students’ learning’. They also stressed the importance of making the learning applicable, or else students adopting a strategic approach may not engage with it. The question is, who is checking the depth of a student’s learning, and why?

We were provided with some thought provoking reading and asked to reflect on these four questions:

  1. How much of the final course grade do you typically allot to testing? How many tests/exams do you usually require? How can you avoid creating a “high stakes” environment that may inadvertently set students up for failure/cheating?
  2. What expectations do you have for online assessments? How do these expectations compare to those you have for face-to-face assessments? Are you harbouring any biases?
  3. What trade-offs do you see between the affordances of auto-scored online quizzes and project-based assessments? How will you strike the right balance in your blended learning course?
  4. How will you implement formal and informal assessments of learning into your blended learning course? Will these all take place face-to-face, online, or in a combination?

Each of these is addressed in turn below:

How much testing to do?

I’m not sure this is the right question! I think the question should be when/why are tests needed in your course? I like diagnostic tests at the start of a course (ideally tied to a Just in Time Teaching model of delivery, tailoring the rest of the course to the knowledge and experience of the students). Students should be free to take these as often as they want. As an online learner, the need for some sort of progress report, a confirmation that you are on track, is possibly even greater when you have less (or possibly no) face to face time with teaching staff. Short tests throughout the course can meet this need. My only real concern is with the final assessment – how best can this be done online?

Quite a few of the participants in the live webinar expressed concern over the potential for cheating. Perhaps this is why there is now a MOOC on Canvas looking at online cheating – which I discovered via this article in the Chronicle of Higher Education. This saddens me a bit. I’m not a fan of the camera-based remote proctoring solutions, particularly if the student has to purchase them. If I have to choose between spending time devising ways to stop students cheating, or trying to make my courses better, I’d rather the latter. In the end, cheats are only cheating themselves.

My expectations of online testing

The question ‘Are you harbouring any biases?’ was unexpected, but on reflection I think it is a fair one. I certainly have changed my stance. When I started, I worked with staff on a medical course and noted to my horror that although many students were starting online assessments, only a few finished the tests. Were they too hard? The fact that these tests were delivered online meant we could ask this question, but to get to the answer I had to talk to the students. It turns out that we had come across an example of impromptu group work. Students went to a computer lab (that dates this anecdote) to start the tests on their own. Part way through, a friend came in (or else they spotted them amongst the banks of monitors). Rather than work through the questions alone, they discovered it was more effective to discuss the questions as a group, and try to justify their answers to each other, before one person submitted the result on behalf of the group. That explained the high drop-off rate and taught me to take nothing for granted!

The trade-offs

The trade-offs seem pretty clear. Anything that can be automatically marked, providing students with rapid feedback, is constrained by those marking tools. If they use some form of pattern-based scoring then poorly designed questions or distractors (e.g. offering students the choice between two words that are similarly spelled but have very different meanings – conservative and conservation) may seriously misrepresent some students’ learning. More creative, personal assessment options offer the chance to encourage deeper learning, but require more skilled interpretation. David Nicol and his colleagues (2013) have shown how peer feedback (N.B. not grading) can help everyone learn from the process, and perhaps that offers one way out.

I was also struck at a recent learning and teaching conference how engaged students were in a project where they were asked to create a short (2 minute) video to explain a key concept in the course. In this, the challenge was to know what to leave out. That’s not something that you can mark automatically, but it could be a great online submission task.

Implementation

In a true blended course, you have the luxury of both face to face and online. I think I prefer online diagnostic and formative assessments, but keeping the summative work offline. I think that also reduces the stress for both staff and students (no-one really wins when a big online exam goes ‘castors up’ as they say in the world of TV repairs).  That’s probably why I don’t think it’s worth spending money on anti-cheating hardware. Spend it on e-books instead 🙂

by-nc-sa Featured Image by Jared Stein shared on https://www.flickr.com/photos/5tein/2348649408/

Blended Interactions

This week’s #BlendKit2014 session explored how much support and guidance students should get in an online course and posited four models of educators and learners:

  1. Atelier Learning – akin to an art studio, where students can learn from the work of each other as well as the teacher – John Seely Brown (2013)
  2. Educators as network administrators – Clarence Fisher – where learners as well as educators can help construct and plug gaps in our learning/knowledge networks
  3. Educators as concierge – Curtis Bonk (2007) – where the educator provides ‘soft guidance’, directing learners towards resources and ideas that they may not yet be aware of
  4. The educator as curator – George Siemens (2007) – learners are free to explore, but the expert curator helps them to engage with the key concepts of a discipline.

For more information see the full text from which the above references were obtained.

John Seely Brown’s conceptualisation of teachers as artists and architects reminded me of the artisan representations used by Hokanson, Miller and Hooper (2007) in their discrediting of ADDIE. I didn’t find Fisher’s model very useful, as I find it very hard to really visualise what a learning network would/should look like – even in these days of social network analysis! I was a student on one of Curtis Bonk’s MOOCs and so can claim first hand experience of his concierge approach. At times it felt like a relentless barrage of concepts, where the learner has little time to get to grips with one idea before they are presented with the next. That may have just been me getting the balance of online and offline wrong, and it was certainly very stimulating. Siemens’ view of the curatorial educator who ‘balances the freedom of individual learners with the thoughtful interpretation of the subject being explored‘ is very seductive – who wouldn’t want to be taught in that way, or indeed to be able to teach in it? I’ve also been on one of his cMOOCs and it had a very different style.

We were asked to reflect on these four questions:

  1. Is there value in student-to-student and student-to-instructor interaction in all courses regardless of discipline?
  2. What role does interaction play in courses in which the emphasis is on declarative knowledge (e.g., introductory “survey” courses at the lower-division undergraduate level) or, similarly, in courses that cultivate procedural knowledge (e.g., technical courses requiring the working of problem sets)
  3. As you consider designing a blended learning course, what kinds of interactions can you envision occurring face-to-face, and how might you use the online environment for interactions? What opportunities are there for you to explore different instructional strategies in the blended course than you have in the past?
  4. What factors might limit the feasibility of robust interaction face-to-face or online?

Each of these is addressed in turn below:

The value of interaction

I think interaction should be valued in any discipline – in essence it is a universal. Student-student interaction can be very different to student-teacher interaction. The former may at times be more likely to achieve learning (particularly of threshold concepts), as it may use a common language – that of the novice – rather than the mismatch between the vocabulary of the novice and that of the practitioner.

The role of interaction

I don’t think interaction should be omitted from declarative or procedural courses. This would imply that there is nothing more to learn, or no better way in which the subject can be taught. Even if the interaction is limited to explaining concepts to your peer group, I think this has the potential to advance the understanding of those involved and of anyone listening/reading along.

Planning interactions

Deciding which activities are best online and which face-to-face is tricky, and is one of the areas that I hope will become clearer through my participation in this course. It would seem sensible if the face to face activities were either designed to help socialise the group, or related to tasks which students might find difficult – e.g. where it is unclear how to begin, where they would benefit from scaffolding, and where they are likely to seek early confirmation from teaching staff (or their peers). Online activities may provide learners with a greater opportunity to reflect and prepare their argument – e.g. finely hone a video presentation before sharing it with the group.

Limits to interaction

Obvious limits to interaction are a lack of time and of engagement/motivation. The ‘atmosphere’ of the course is also important – is it acceptable to try and fail, and indeed have these opportunities been designed as part of the course? If students are presented with a ‘course and a half’ then they may react by adopting a strategic approach and only participating in the activities which result in grades.

Also it is unrealistic to expect that all the interaction will occur within the chosen environment and be visible to the educator. There will be face to face discussions in the pub on blended courses, or on non-institutional systems (such as Facebook) for online courses – of value to the learners because they are out of the gaze of their teachers (see this article in the Independent).

by-nc-sa Featured Image by Cobalt123 shared on https://www.flickr.com/photos/cobalt/2626780211

Understanding Blended Learning

This post is my first as part of the UCF BlendKit 2014 Course on Canvas – see https://www.canvas.net/courses/becoming-a-blended-learning-designer for more details. For this assignment I have been asked to review materials in the first chapter of the toolkit accompanying this course:

Blended Learning Toolkit

by-nc-sa Materials in the toolkit have been shared under a CC BY-NC-SA license

 

Participants have been asked to reflect on the nature of blended learning and consider four questions:

  1. Is it most helpful to think of blended learning as an online enhancement to a face-to-face learning environment, a face-to-face enhancement to an online learning environment, or as something else entirely?
  2. In what ways can blended learning courses be considered the “best of both worlds” (i.e., face-to-face and online)? What could make blended learning the “worst of both worlds?”
  3. As you consider designing a blended learning course, what course components are you open to implementing differently than you have in the past? How will you decide which components will occur online and which will take place face-to-face? How will you manage the relationship between these two modalities?
  4. How often will you meet with students face-to-face? How many hours per week will students be engaged online, and how many hours per week will students meet face-to-face? Is the amount of student time commitment consistent with the total time commitment of comparable courses taught in other modalities (e.g., face-to-face)?

Each of these is addressed in turn below:

What is Blended Learning?

My first experience of blended learning was with courses that were originally taught face to face and which have gradually been “adapted” for blended delivery. I think many others may be the same (even in a MOOC, much of the material may have come from a f2f course). The key, I think, is the degree of adaptation. Simply putting files online (the ‘document dump’ – sensu Horrigan and Clark) isn’t really blended learning in my book. Whilst I am no fan of trying to set a required threshold of online vs. face to face activities, for me, to be truly blended, there must be at least some activities that need to be carried out online. That implies some conscious design of these activities, and so aligns with the thinking of McGee & Reis (2012) cited in the paper.

Is it really the best of both worlds?

Face to face is surely the best, but only if the timing works for both parties. That’s not to say it is the most cost-effective, the most scalable or the most flexible. Done well, blended learning should help address some of these restrictions, particularly as it may allow learners to repeat sections until they achieve that ‘Eureka moment’. That was certainly something that hit home in a presentation I saw by Sal Khan, where it took one man over 50 plays of a video before he finally grasped a particular mathematical concept. His point was that blended learning allowed the lesson to be replayed 50 times with equal patience and in the absence of judgement. That may be true, but I couldn’t help wondering whether a real teacher could have changed the instruction and got him there more quickly.

The risk though is that what is delivered in a blend is a confusing pastiche, lacking the consistency of a fully online or face to face course.

Managing the Modalities

One phrase that worried me in the kit was “Context is king”. If that were true then surely MIT’s OpenCourseWare project would have been a case of online suicide. It also didn’t fit well with the rest of the discussion, which reassuringly focussed on the activity of the learner, examining course planning approaches on a spectrum from teacher-centred to learner- (or learning-) centred.

Getting the Timing Right

I think translating learning activities from the classroom to online is one of the hardest things to do (and I admit to occasionally still getting the timings of my lectures wrong). I’m not sure there is a magic formula for getting it right first time, but I think online components should provide teaching staff with a better idea of just how long students spend on a task – e.g. the number of edits on a blog post.

by-nc Featured Image by  Rob Sutton shared on https://www.flickr.com/photos/rsutton1223/4196233702/

No more writing on the wall?

For the last ten years, electronic whiteboards – such as those produced by Promethean and SMART Technologies – have been standard items on any classroom refit and usually enthusiastically received by staff and students (see Smith, Higgins, et al. 2005). Every day when I drop my children off at primary school, they walk into rooms where the electronic whiteboard is up and running. Any parent who arrives late and has to take their child into the classroom is likely to see them all performing a 5 minute exercise routine following instructions on the board! For examples see this Pinterest site or the Activityworks website, the latter includes some explanations of why some people believe this approach is effective.

My observations of whiteboard use in schools are very different to those in universities, where the boards now usually sit switched off (in some cases hidden behind larger projection screens)! Even in areas for group work – e.g. booths where students sit around a table with a laptop and an electronic whiteboard at one end – more often than not the whiteboard is unused. The question is why?

This is a technology that showed a lot of early promise. A study by two staff at the University of Pittsburgh published in 2012 has caught the attention of SMART (in that they add a link of dubious legality to it on their website).  Jang and Schunn watched the way groups of engineering students interacted with/were constrained by the technology. The authors contrast what they term “individual tools” such as a computer or a person’s notes, with “shareable/collaborative tools” such as an electronic whiteboard or a physical prototype. Their results suggest that students who used collaborative tools from the start, and continued to use them throughout the project, were more likely to deliver. There’s a bit of circularity here, and I don’t feel you can unpick whether the availability of the boards increased communication, or if it just shows that people who were already experienced at group communication made good use of the available collaborative tools. SMART certainly hope you take the former view as you can see in this infographic they published summarising the findings. Opinion remains divided – e.g. this 2010 study by Torff & Tirotta suggests that some of the motivation-enhancing effects often associated with electronic whiteboards are overstated.

From my personal experience of using the boards, I have come across a couple of problems (ignoring the high cost of these devices):

  1. They are usually poorly placed in rooms (particularly “meeting rooms”), meaning that many people sit with their back to the board and have to turn away from the rest of the group to see it. This has the effect of inhibiting conversation, or favouring their use in small groups. The worst example of this is putting them at one end of a table in a booth, making them essentially off-limits for everyone but the two people nearest the board.
  2. They are often too small, making the content hard to read and annotations blocky. Too often they are not as good as a plain whiteboard, failing even at the first substitution stage. If you are trying to project a high resolution image, often projectors aren’t up to the job – with neither the resolution nor the contrast. Solutions that can make use of an LED or plasma TV can give much better results – digital versions of microscope slides can finally look as good as old Fujichrome slides!
  3. Pens and the erasers can go missing, and the on-screen tools you can drive with your fingers are always a bit clunky.
  4. Most boards only support one “touch” at a time. This means two people can’t really draw at the same time – which is something I’d hope would be a key part of collaboration. Users accustomed to navigating with multi-touch gestures on their phones and tablets will find the electronic whiteboard a frustrating experience. Suppliers are catching up, but I still think the model is wrong.
  5. The boards can do strange things with other USB devices connected to the same computer (e.g. blocking voting system dongles or some slide remotes). Collaboration tools should play nicely together in my opinion and not restrict you to the tools built into the board.
  6. The required software can be a bit flaky, and some versions are not as backwards-compatible as they should be. Furthermore, the developers seem to assume the tools will be deployed via helper apps that launch on start-up – do they really expect staff to always shackle their laptop to a whiteboard?
  7. Finally, and in my opinion the worst feature, is that they only work when you stand in front of them. If you fix them at a height where most people can reach both the top and bottom of the screen, then you probably won’t see much if there is someone sitting between you and the board. It is difficult to use them without turning your back on the rest of the room. Yes, I have seen setups linked to an interactive tablet, but this still needs tethering via USB and so is rarely passed around the table. Perhaps in response to this failing, some boards are now available as “tables”. Whilst the videos of people flicking through and rotating photos look slick, I’m not sure this is really the action most conducive to learning in a tutorial or seminar setting. If the table becomes the screen then you can’t put things on it – what good is a meeting if there is nowhere to put your coffee cup? How will you take notes?

Despite this list, I have seen people use them and use them well. By capturing a carefully designed “board-centric” activity, the focus of participants can shift from trying to record what is happening, to actually making things happen. That has to be a good thing. They also allow annotation of figures on the fly, which can help address any issues that were not anticipated when you prepared the materials.

Whilst I was initially attracted by the large number of page templates, I think there is a danger of over-preparing the session. Students need to be free to contribute to the session and take it in a direction that, whilst still meeting the learning outcomes, may not be exactly what you had planned. I find that if I have spent a lot of time preparing particular slides, I am more resistant to deviating. That is wrong.

I think the answer is probably to stop trying to write directly on the board. There have been a lot of advances in educational technology since the first electronic whiteboards were designed. The two key ones for me are the rapid growth and availability of wireless networks and tablets. Technologies such as Apple’s AirPlay (sharing content from an iPad or Mac newer than mine via an Apple TV) are very slick and free you from the constraints of a single app/program (no matter how good it is). Connection is literally child’s play, which should encourage staff and students to have a go. Rather than ask a student to come up to the board, or try to pass them a tethered tablet, surely it is better to let them take control of the screen from their own device. If someone wants to suggest a minor change, pass the tablet. That said, I think this model may be more suited to “serial collaboration” unless the app you are sharing supports live collaboration (e.g. a wiki).

The cloud is also changing things – if someone takes a picture with a phone or tablet (perhaps the result of a particular experiment, or something that illustrates a point they want to make), how easy is it to get it displayed on-screen? Do you need to swap devices, or is there some common repository (be it Flickr, Dropbox, OneDrive or iCloud) that you can use to facilitate instant sharing?

If we do decide to replace electronic whiteboards, it might mean we can finally get rid of switches like this one that are just asking someone to see what happens if you do!

Temptation

One Step at a Time

This video is released under the standard YouTube License

A colleague just sent me a link to this video, created locally. I hope it helps dispel the myth that multimedia is only for the arts and humanities. It shows people working in the sciences flexing their creative muscles. It is also great to see them willing to share this resource with others. I wish my lecturers had done things like this. Makes you wonder what the students are doing too…

Learning from Schools

I spent most of today at an iPads in Education event organised by Jigsaw24 (an IT solutions company specialising in education).

It opened with Andy Nagle from Apple. Unsurprisingly he stressed the importance of design and illustrated this with the final sequence of Pablo Picasso’s Essence of Bull. Some die-hard anti-Apple types might find that title strangely fitting 🙂 He then talked about toasters – an object he claims lies unused in your house for 98% of its life. Thus you don’t just buy them for their function – design matters. [I wonder what he’d make of my “toaster” – an AGA?] Whilst good design is a very nice thing to have, I don’t think he really made it clear why design matters in education. Over the course of the day, several presenters gave reasons why it might, with phrases such as “it just works” and “it has to work first time and every time” frequently uttered by presenters and delegates alike. Whatever 21st-century learners may be like, it seems their teachers are not tolerant of technical compatibility issues 🙂

Andy introduced a couple of models that seem to have been very important in shaping Apple’s thinking: Ruben Puentedura‘s SAMR model – Substitution, Augmentation, Modification, Redefinition – and Matthew Koehler’s TPACK (Technological Pedagogical Content Knowledge), where the key idea is that each of these three aspects is of equal importance. Some people I know may take issue with that, but I can see the sense in it if we are thinking about learning as opposed to teaching.

TPACK

Although we didn’t see this video today, this “horse’s mouth” video explains these theories well. In the US, many in education see the challenge as moving education “above the line” – from enhancement activities (SA) to transformational activities (MR).

It ends with an interesting set of nine components he thinks should be part of any 21st Century Learning. That is something I should give more thought to in a future post.

Later surfing showed that some people have even tried to apply this framework to iPad apps: http://edudemic.com/wp-content/uploads/2013/05/padagogy-version2.png – I thought that this was interesting but I don’t think any blanket categorisation like this can ever hold up. What is transformative in one situation may be only augmenting existing practice in another.

Andy then left the building Elvis style and we were left in the capable hands of Abdul Chohan from the ESSA Academy.

ESSA Academy

Abdul

He spoke about the way they had transformed a failing school. It was an inspiring talk, and you can get a feel for the impact of this wholesale rebuild of the school, its processes, attitudes and beliefs in this Apple video case study: https://www.apple.com/uk/education/real-stories/essa/

He had amazing clarity of vision. His business, he said, was learning. What he was trying to do was reduce the time people spent on “busy-ness” (processes/admin) rather than “the business” – learning. As well as the technological change, the school had looked hard at the accompanying processes. By moving a lot of activities to the iPad, he felt many became more transparent and more visible. Parents could see content on their child’s iTunes U site being updated at night or over the weekend, helping to dispel the myth that teachers stop work when the bell rings at 3.15. Staff could take registers, and the data was instantly uploaded into the school’s MIS.

There was a range of other good presentations, most of which just managed to avoid being an out-and-out sales pitch. One looked at the networking needs – all this mobile technology will put considerable strains on your wireless network. Networks, it seems, are not all the same. Lots of iPads need management. There are tools for this, and some include a series of content filters and access locks. These can be used so that staff, students and parents can all see what the devices were used for outside school (so no downloading porn on your kid’s iPad, folks, or you’ll get a strongly worded letter from the Head!) I also learned about some of the built-in accessibility features in iOS. The Invert Colours feature was clever (though not all apps seem to support it). As an aside, I wonder if switching from a largely white to a largely black screen would have any impact on battery life?

The last major presentation looked at another iPad roll-out from a local school which sounds a bit like a crematorium – Stephenson Memorial Primary School. Emma Overton spoke about their iPad@myPad project. One of the most interesting things to me was the way they had involved the children, recognising and fostering their skills. They recruited a series of “geniuses” – borrowing from the Apple lingo – (who later appointed their own assistants), each with declared areas of expertise, e.g. Twitter, blogging, iBooks. This was seen to significantly change the way staff and students engaged with each other, with these geniuses supporting other students, or being “bought in” to help staff plan or develop particular aspects of their teaching. I think this is a model that could (indeed should) make the transition to HE. There were also some great stories of how a technical intervention can help stimulate the children to improve through greater engagement. My favourite was using the Aurasma augmented reality app to bring the children’s drawings of dragons to life, demanding better stories. That might be harder to transpose to the HE setting…

All in all, a very thought-provoking day that really threw down the gauntlet to higher education. If this is how these children are learning now, how can we continue this process and challenge them (in a positive, productive way) should they choose to come to a university?
Featured Image: http://www.flickr.com/photos/aperturismo/4488250788/
CC BY-SA
