
Blog posts from my participation in Introduction to Digital Environments for Learning, amongst other stuff

Month: November 2016

Being at, and in, Edinburgh

This week, I was delighted to have a reason to re-read Being ‘at’ university: the social topologies of distance students, the 2013 paper by Sian Bayne, Michael Sean Gallagher and James Lamb.

The paper draws on Mol and Law’s four kinds of social space:

  • regional (stable boundaries)
  • networked (stable relations)
  • fluid (shifting boundaries and relations)
  • fire (complex intersections of presence and absence).

and organises the research data into three broad themes:

  • homing and the sentimental campus
  • the metaphysics of presence (campus envy)
  • the imagined campus.

Interviewing students (current and recently graduated) from the MSc Digital Education programme, the authors found that “the material campus continues to be a symbolically and materially significant ‘mooring’ for a group of students who may never physically attend that campus” (p.581).

This concept of ‘mooring’ is echoed in other parts of the paper. When students talked of travelling to Edinburgh for the graduation ceremony the campus “becomes talismanic, the ‘single present centre'” (p.578). Similarly, there was a “tendency for students to view the campus not so much as a sentimental ‘home’ … but rather as a kind of touchstone—a logos—which functioned as a guarantor of the authenticity of academic experience which was not always easy to articulate” (p.577).

This echoed my own experience with the programme (albeit as a citizen of Edinburgh). A few years ago, working as an instructional designer, I investigated the part-time postgraduate opportunities in my field. Two opportunities presented themselves: an MSc in e-learning (as it was then known) at the University of Edinburgh and an MA in Online and Distance Education with the Open University. The Open University were the originators of distance education, so why did I choose the University of Edinburgh? A (misplaced) sense of prestige? Perhaps. But I think it was something more than that. I had studied as an undergraduate at Edinburgh, so I was familiar with the campus. Even though I would not be attending campus for class, and all I would need was this:

 

wifi icon

and this:

power point

 

I would be picturing this:

Old College

and this:

New College

and of course, this:

The Prime of Miss Jean Brodie opening credits

 

In fact, as I am writing this I am sitting in the ECA Library at Evolution House. Look at me, being all studenty, with my week 10 reading printouts and my laptop:

laptop and print-outs

And if I turn my head and ignore the brutalist Argyle House, I can just about make out Edinburgh Castle and imagine Jean Brodie giving me a history of the Old Town.

View from ECA Library, Evolution House


Images of Old and New Colleges taken from The University of Edinburgh Image Collections

Week 9: Learning Analytics (reflections on the Clow paper)

Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, 18(6), pp.683–695.

I was particularly interested to read of the example of faculty at Texas A&M University being measured against their net contribution (or not) to the University’s financial position. I am aware of similar concerns amongst academic colleagues at the University of Edinburgh. When the University recently announced that they were investing millions of pounds in a new lecture capture service, academic staff raised concerns that their performance metrics would be used to inform their annual review. It perhaps doesn’t help that the current lecture capture service employed by the University is called Panopto.

It’s always nice to see a diagram in an academic paper. Clow’s Learning Analytics Cycle draws on Campbell and Oblinger’s (2007) five steps in the learning analytics process: capture, report, predict, act and refine.
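To get the cycle straight in my own head, I sketched it as a toy pipeline. Nothing below comes from Clow or Campbell and Oblinger beyond the names of the five steps; the event types, the activity threshold and the ‘intervention’ are all invented for illustration.

```python
# A toy skeleton of the capture -> report -> predict -> act -> refine cycle.
# The event types and the threshold are invented for illustration only.
from collections import defaultdict

def capture(vle_events):
    """Group raw VLE events (logins, clicks, forum posts) by student."""
    per_student = defaultdict(list)
    for event in vle_events:
        per_student[event["student"]].append(event)
    return per_student

def report(per_student):
    """Reduce each student's events to a simple activity count."""
    return {student: len(events) for student, events in per_student.items()}

def predict(activity, threshold=5):
    """Flag students whose activity falls below a (made-up) threshold."""
    return [student for student, count in activity.items() if count < threshold]

def act(at_risk):
    """Stand-in for an intervention, e.g. prompting a tutor to get in touch."""
    for student in at_risk:
        print(f"nudge tutor to contact {student}")

def refine(at_risk, outcomes):
    """Compare flags with eventual outcomes; a real system would re-fit here."""
    return sum(1 for s in at_risk if outcomes.get(s) == "withdrew")

# One pass around the cycle with invented data:
events = [{"student": "A", "type": "login"},
          {"student": "A", "type": "click"},
          {"student": "B", "type": "login"}]
flagged = predict(report(capture(events)))
act(flagged)
print(refine(flagged, {"A": "completed", "B": "withdrew"}))
```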

Predictive modelling

Clow outlines the practical differences between predictive modelling and the ‘human’ equivalent of a teacher giving extra help to students they notice may be struggling as follows (a toy sketch of the probabilistic output follows the list):

  1. “the output of predictive modelling is a set of estimate probabilities and … many people struggle to correctly understand probabilities” (p.687)
  2. the student data is made available to others (not just the teacher)
  3. the data can “trigger actions and interventions without involving a teacher at all”.
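As flagged above, here is a toy sketch of what ‘a set of estimated probabilities’ can mean in practice. This is emphatically not Course Signals or any model Clow describes: the engagement features, the pass/withdraw labels and the use of scikit-learn’s logistic regression are my own invented example.

```python
# A toy illustration of point 1: the model outputs probabilities, not verdicts.
# Features and labels are invented; this is not the Course Signals algorithm.
from sklearn.linear_model import LogisticRegression

# [weekly logins, forum posts] for past students; 1 = completed, 0 = withdrew
X_train = [[1, 0], [2, 1], [5, 3], [7, 4], [0, 0], [6, 5], [3, 1], [8, 6]]
y_train = [0, 0, 1, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# For current students the output is an estimated probability of completion,
# which still has to be interpreted (and, as Clow notes, is easily misread).
current = {"student_a": [1, 0], "student_b": [6, 2]}
for name, features in current.items():
    p_complete = model.predict_proba([features])[0][1]
    print(f"{name}: estimated probability of completion = {p_complete:.2f}")
```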

This last point feels significant as it corresponds with many teachers’ fears that the locus of power is shifting away from the teacher and towards the faceless administrator. It was therefore interesting to read of the Course Signals project at Purdue University. Perhaps integral to the success of the project is the fact that “the teacher is central to the process and uses their judgement to direct students to appropriate existing resources within the university” (p.688).

This discussion also prompted some other questions:

  1. as predictive modelling ensures efforts are aimed at ‘marginal students’, is this at the expense of other students? (the experience of Signals at Purdue suggests this doesn’t have to be the case)
  2. could an unintended consequence of predictive modelling be a trend towards more conservative choices regarding courses? In other words, would institutions end up prioritising existing courses (because we have data for these) against new courses?

 

Social Network Analysis

It’s hard to think of Social Networks without thinking of The Social Network. Nevertheless, it was interesting to read of the SNA projects Clow describes.

What wasn’t discussed here, but is of interest, is if/how students behave differently on ‘professional’ social networks (eg forums where they are being assessed, forums which their tutor can access) and ‘personal’ social networks (eg Facebook). Does ‘editing’ oneself in the former encourage similar behaviour in the latter?

The possibility of a richer (computational) analysis of textual data is an interesting field of study and Clow refers to the Point of Originality tool, which uses the WordNet database to identify originality in key concepts. Clow notes “a strong correlation between originality scores in the Point of Originality tool and the grades achieved for the final assessment and also between the originality of their writing and the quantity of their contributions online” (p.690). However, it is important to remember that correlation does not equal causation.
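I don’t know how the Point of Originality tool actually works under the hood, but out of curiosity I tried to imagine the kind of WordNet lookup it might rest on. The sketch below is my own guess, not the tool’s algorithm: it simply scores how semantically distant a student’s key terms sit from a seed list of course concepts, using NLTK’s WordNet interface.

```python
# A guess at the kind of WordNet comparison such a tool might build on.
# This is NOT the Point of Originality algorithm, just an illustration.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # one-off download of the WordNet data

def max_similarity(word_a, word_b):
    """Best path similarity between any noun senses of the two words."""
    scores = [s1.path_similarity(s2)
              for s1 in wn.synsets(word_a, pos=wn.NOUN)
              for s2 in wn.synsets(word_b, pos=wn.NOUN)]
    scores = [s for s in scores if s is not None]
    return max(scores, default=0.0)

def originality_score(student_terms, course_concepts):
    """Higher = the student's terms sit further from the taught concepts."""
    distances = [1 - max(max_similarity(term, concept) for concept in course_concepts)
                 for term in student_terms]
    return sum(distances) / len(distances)

print(originality_score(["surveillance", "trust"], ["data", "analytics"]))
```

Even a toy like this makes Clow’s caveat concrete: a high ‘originality’ number here is just lexical distance, and says nothing about whether that distance was productive.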

When Clow suggests that “perhaps the greatest potential benefit [of recommendation engines] lies in more open-ended and less formal learning contexts” (p.691) it’s hard to disagree. However, warnings about the dangers of Filter Bubbles should be heeded here too.

Finally, I was most struck by the following point made by Clow in the Discussion section:

“the opportunity to learn by making mistakes in a safe context can be a powerful learning experience, and not many learners are happy to have their mistakes kept on record for all time” (p.692).

How can we ensure that the student data we track, measure, and present to administrators, teachers, and (at times) the students themselves benefits their learning if, by the very nature of the task we are performing, we are creating a relationship of mistrust which compromises that learning at the outset? In other words, the Panopticon (below) doesn’t look to me like the optimal space for learning.

The Panopticon

Your attendance has in general been poor

The GIF above is a pretty accurate metaphor for how I’ve felt for the last two weeks. One week of no study was all it took for me to fall behind. And so, week 9 proved an opportune time to participate in the weekly ‘Larcing’ about with your data activity.

Tasked with generating at least three reports (and given the freedom to choose those ‘themes’ we wished to analyse), I did what I suspect most did: ran a report for every week of the course so far, selecting all themes. If Moodle has collected this much data on my interaction with the VLE, why would I limit myself to viewing only some of the data? Was this a belated attempt to leverage some control over the process?

So, what did the LARC data tell me and how was it presented?

Your attendance has in general been poor

Oh. What about the following week?

Your attendance has in general been poor and this week you logged on less often than usual

Oh.

Hmm. OK, there were some other words but these were the first and this is what I remember.

Interestingly, it is only possible to run weekly reports in LARC. As such, in order to gain a sense of how I was performing (according to the metrics) over the duration of the course, I had to input the quantitative data into a spreadsheet myself. What did this tell me?

  • My ‘attendance’ never reached above 32% of class average.
  • My ‘interaction’ performed much better, nearly always recording as above 100% of class average.
  • My ‘social engagement’ also appeared to be pretty poor (only twice recording as above class average).
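For what it’s worth, the spreadsheet I cobbled together could just as easily have been a few lines of pandas. The numbers below are placeholders rather than my actual LARC figures; the point is only that stitching the weekly reports together yourself is trivial once the data is in one place.

```python
# Stitching hypothetical weekly LARC figures (as % of class average) together.
import pandas as pd

weekly = pd.DataFrame(
    {
        "attendance": [20, 25, 32, 18, 10],
        "interaction": [110, 140, 95, 120, 105],
        "social engagement": [60, 105, 80, 40, 30],
    },
    index=[f"week {n}" for n in range(1, 6)],
)

print(weekly)
print(weekly.mean().round(1))   # average over the course so far, per metric
print((weekly > 100).sum())     # number of weeks above class average, per metric
```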

So how did this data, presented in this way, make me feel?

The first thing to mention is that the assumptions made by the data and presented back to me don’t quite feel right. Not because they show me as performing poorly (I expected as much) but because they don’t resemble my recollection of my involvement with the course. For example, I haven’t posted in the forums in weeks (as evidenced by the cumulative tally of positive / negative / neutral posts) so why does my ‘social engagement’ score not reflect this? How is LARC calculating social engagement?

I am assuming that ‘interaction’ is calculated by clicks on core and recommended reading. This seems to equate to what I remember.

Is ‘attendance’ calculated simply by the number of times I have logged into Moodle? Surely it’s more complex than that? Does it include the number of clicks performed once logged in? Does it include interaction with the forums? Is there an overlap between the metrics used for measuring attendance, and those used for measuring ‘social engagement’?
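Since LARC doesn’t say, all I can do is guess. The sketch below encodes my guesses (attendance = logins, interaction = reading clicks, social engagement = forum activity) purely to make the questions concrete; none of these formulas is documented anywhere I can find, which is rather the point.

```python
# Pure guesswork at how LARC *might* turn Moodle logs into its three metrics.
from collections import Counter

def weekly_metrics(events, me="me"):
    """events: list of dicts like {"student": "me", "type": "login"}."""
    counts = Counter((e["student"], e["type"]) for e in events)
    students = {e["student"] for e in events}

    def total(student, event_types):
        return sum(counts[(student, t)] for t in event_types)

    def pct_of_class_average(event_types):
        mine = total(me, event_types)
        class_avg = sum(total(s, event_types) for s in students) / len(students)
        return round(100 * mine / class_avg, 1) if class_avg else 0.0

    return {
        "attendance": pct_of_class_average(["login"]),            # guess: logins only
        "interaction": pct_of_class_average(["reading_click"]),   # guess: reading clicks
        "social engagement": pct_of_class_average(["forum_post", "forum_reply"]),
    }

events = [
    {"student": "me", "type": "login"},
    {"student": "me", "type": "reading_click"},
    {"student": "peer", "type": "login"},
    {"student": "peer", "type": "forum_post"},
]
print(weekly_metrics(events))
```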

You do not really care what others in the class think about you

This is true. But what if I did? No one (and no algorithm) can tell someone how they feel. So why is LARC attempting to do so? What is it hoping to achieve by including this in the ‘feedback’?

A question of trust.

The definition of learning analytics adopted by the First International Conference on Learning Analytics and Knowledge in 2011 is as follows:

“the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs”.

I would like to argue that optimising learning requires a relationship of trust. When we take a multiple-choice quiz we trust that there are discrete answers which a machine is capable of identifying as correct or incorrect. However, when we allow our data traces to be interpreted by a machine into making predictions regarding success, we become understandably cautious. Our relationship with the machine has become distrustful. Perhaps we are frightened that the machine can tell us things about ourselves we would rather not know. Or perhaps we intrinsically understand that Big Data is being done to us. And what does it do to the relationship between student and tutor? I must confess to being a little disappointed at the thought that when my tutor reached out to me to check in after a period of inactivity, she might not have done so because she happened to notice my absence, but because she was prompted to by some red traffic light on her dashboard.

One question Hamish McLeod posed on the forum this week was ‘how might the average student react to this information?’ Learning Analytics ‘solutions’ are often presented as universal. But, I would argue, there is no such thing as an average student. Different demographics will often respond differently to this kind of information. For everyone who wants to know how they are doing compared to their peers, there is someone else who doesn’t. And if we simply make this information available and say ‘it’s your choice if you wish to access’ aren’t we transferring the responsibility of presenting that data from the institution to the individual?

Do hidden algorithms = hidden agendas?

The LARC activity for this week asked us, the IDEL students, to reverse engineer the tool in an attempt to identify the logic behind it. Many of the algorithms used in learning analytics are proprietary and therefore hidden. But are there other (good) reasons why the algorithms should be concealed? One argument is that if they were revealed (to students), it would offer opportunities, like teaching to the test, to ‘game the system’. I’m not sure how convincing this particular argument is. Let’s go on a thought experiment.

I am a student on the IDEL course and I have access to LARC from the start of the course. I generate a report in weeks 1 and 2 and notice that all I have to do to improve my attendance score is to log in to Moodle at least once a day. I do this (because it’s easy and I care about my scores, even if they don’t feed into any summative assessment) but no other change of behaviour is recorded. My reports don’t raise any kind of red flag to my tutor, who continues to feedback on my blog posts.

Has having access to LARC benefitted me (the student) in any way? Would knowing how the attendance score is calculated benefit me in any way? Or would it mean I spend precious, valuable time, trying to improve my score? Time which would be better spent reading course material and interacting on the forums?

As an exercise in trying to understand which metrics were used in the algorithms employed in LARC, I found this week’s activity incredibly useful. But for every metric I could think of, I could think of countless others which aren’t measured (and aren’t measurable). So, bearing in mind we are working with a limited data set, how responsible is it to provide students with this information? How do we know that we are telling students the ‘right’ thing? What is the right thing? Aren’t there as many ‘right things’ as there are people? Is it ethical to capture this data and present it to administrators and tutors but keep it from students? Conversely, is it ethical to present this data to students when it may provoke the opposite of the desired effect of encouraging greater participation and success? In short, is it going to make them feel like the guy in the GIF at the top of this post?

Week 8: Data and Education (visualising IDEL data)

Last week (week 8) we were tasked with visualizing Twitter data from IDEL using TAGS explorer. The task involved searching the hashtag #mscidel. The resulting visualisation looked like this:

TAGS visualisation of #mscidel tweets

I then tried some alternative search criteria and ran the script again. However, the resulting visualisation appeared to confuse the results of both searches. I would need to play more with the script to investigate what the issue was here.

The initial visualisation demonstrated a mismatch between the potential of Twitter data harvesting and the relatively small-scale activity around the #mscidel hashtag. I would imagine a lot more could be gleaned from a MOOC hashtag, for example.
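Out of curiosity I also tried rebuilding the mention network myself, outside the Google Sheet. The sketch below assumes a CSV export of the TAGS archive with “from_user” and “text” columns (that is how my export looked; your column names may differ) and the filename is invented; it uses networkx rather than TAGSExplorer’s own visualisation.

```python
# Rebuilding a simple @mention network from a TAGS archive export.
# The filename and column names ("from_user", "text") are assumptions.
import csv
import re

import networkx as nx

G = nx.DiGraph()
with open("mscidel_archive.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        sender = row["from_user"].lower()
        G.add_node(sender)
        for mention in re.findall(r"@(\w+)", row["text"]):
            G.add_edge(sender, mention.lower())   # sender mentions this account

# Who sits at the centre of the (small) #mscidel conversation?
central = sorted(nx.degree_centrality(G).items(), key=lambda kv: kv[1], reverse=True)
for user, score in central[:5]:
    print(user, round(score, 3))
```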

It’s also worth bearing in mind that all this tells us of course is who has typed #mscidel in a tweet. It doesn’t tell us who is on the course (although it is unlikely that it would be used by someone not enrolled on the course). It also doesn’t tell us everyone enrolled on the course. Many course participants either won’t have a Twitter account or simply won’t have tweeted using that particular hashtag. And how do we account for a possible hashjacking?

So we know what data we have. But what data do we need? Are we using Twitter to harvest data just because we can? What questions are we trying to answer? Who is we? Do we understand why we are collecting this data? Who is ultimately benefitting from the collection of this data? To whom is the data being made available? Finally, and perhaps most importantly, what are the ethical considerations of this activity? Harvesting data from Twitter appears at first uncontroversial. Twitter is an open platform and each tweet can be considered a publication. However, Michael Zimmer raises some interesting points in his blog post Is it ethical to harvest public twitter accounts without consent? Can we really assume that those who tweet do so understanding how their data may be used? And even if we conclude that we don’t need to seek specific consent from Tweeters to harvest their data, how do we suppose this data will be used? I was particularly interested to read of Militello et al.’s (2013) study which showed the contrast between how different groups responded to data (Selwyn 2015 p.71). If Education researchers are to use Twitter APIs, these are the kinds of questions we need to keep at the forefront of our minds.

Week 8: Data and Education (thoughts on key readings)

My studies last week were interrupted by the Presidential election in the USA. Like millions of others, I spent Tuesday evening checking the results as they were first broadcast by the TV Networks. It soon became clear that, not for the first time, the pollsters got it wrong.

As Selwyn reminds us in Data entry: towards the critical study of digital data and education, data in its digital form are now being generated and processed on an unprecedented scale (p.64). But is Big Data the panacea it is often presented as?

In an age when we have access to more data than ever, how useful is this data?

As discussed in the TED Radio Hour episode Big Data Revolution, data is everywhere. But what is the value of Big Data? And which metrics do we overvalue and which do we undervalue? What does this tell us about ourselves?

If we are to maximise the possibilities of Big Data we must first acknowledge that data can be a blunt instrument. As data analyst Susan Etlinger says in the episode ‘data doesn’t create meaning – people do’. We therefore need to spend more time on our critical thinking skills. An important question Etlinger raises is: did the data really show us this? Or does the result make us feel more successful, or more comfortable?

All core readings for this week explored various considerations around what Big Data means for Education. In The rise of Big Data: what does it mean for education, technology, and media research? (2013) Rebecca Eynon argues that ‘as a community we need to shape the (Big Data) agenda rather than simply respond to the one offered by others’ (p.238) and offers three areas requiring particular attention:

  1. what are the ethical considerations surrounding Big Data? Eynon offers a clear example in the shape of using data to predict drop-out rates. If an institution calculates a particular student is likely to drop-out, what do they do with that information?
  2. what data do we have? We can only study data we already hold or can collect, so the (limited) data available restricts what we can research (including the meaning we can infer from it).
  3. how Big Data can reinforce and even exacerbate existing social and educational inequalities.

Eynon also raises the challenge of how we train (future) academics in this field to ensure ‘we use these techniques to empower researchers, practitioners, and other stakeholders who are working in the field’ (p.240). This point is echoed in Learning in the Digital Microlaboratory of Educational Data Science where Ben Williamson references Roy Pea (Stanford University) who has called for a new specialised field in this area and identifies “several competencies for education data science”. The report also calls for ‘new undergraduate and graduate courses to support its development’.

Williamson then goes on to discuss the educational publisher and software vendor Pearson and their Centre for Digital Data, Analytics and Adaptive Learning. Digital microlaboratories such as these ‘relocate the subjects of educational research from situated settings and psychological labs to the digital laboratory inside the computer, and in doing so transform those subjects from embodied individuals into numerical patterns, data models, and visualized artefacts’. What nuances are lost in this?

I was interested to learn of the startup schools Williamson refers to (AltSchool, Khan Lab School, The Primary School) which utilise ‘data tracking and analytics to gain insights into the learners who attend them, in order to both “personalise” their pedagogic offerings through adaptive platforms and also test and refine their own psychological and cognitive theories of learning’.

Also of interest was how Pearson has partnered with Knewton to create the Knewton Adaptive Learning Platform, which uses proprietary algorithms to deliver a personalised learning path for each student.
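Because Knewton’s algorithms are proprietary, we can only guess at what ‘adaptive’ means mechanically. The toy below is not Knewton’s method (or anyone else’s); it is just the crudest possible illustration of the idea: keep a running ability estimate and serve the unseen item whose difficulty sits closest to it.

```python
# A deliberately crude stand-in for an adaptive item-selection rule.
# Item names, difficulties and the update step are invented for illustration.
items = {"intro quiz": 0.2, "primary sources": 0.5, "essay prompt": 0.8}

def update_ability(ability, correct, step=0.1):
    """Nudge the ability estimate up after a correct answer, down otherwise."""
    return min(1.0, ability + step) if correct else max(0.0, ability - step)

def next_item(ability, seen):
    """Pick the unseen item whose difficulty is nearest the current estimate."""
    unseen = {name: d for name, d in items.items() if name not in seen}
    return min(unseen, key=lambda name: abs(unseen[name] - ability))

ability, seen = 0.3, set()
for correct in [True, True, False]:      # an invented response pattern
    item = next_item(ability, seen)
    seen.add(item)
    print(f"serve '{item}' (ability estimate {ability:.2f})")
    ability = update_ability(ability, correct)
```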

The idea of a personalised path for each student reminded me of Todd Rose’s TEDx talk The Myth of Average. It also reminded me of German Chancellor Angela Merkel’s recent warning about the potential of proprietary algorithms to narrow debate.

The paper which discussed the implications of Big Data for education in most detail was Selwyn, N. 2015. Data entry: towards the critical study of digital data and education. Learning, Media and Technology, 40(1). Again, we are reminded that ‘as with most sociological studies of technology, [these] researchers and writers are all striving to open up the ‘black box’ of digital data’ (p.69). Digital sociologists don’t see data as neutral, but rather as inherently political in nature: ‘data are profoundly shaping of, as well as shaped by, social interests’ (p.69). Selwyn argues that educational researchers therefore need to be influencing this new area of sociology. What role is digital data playing in the operation of power? (How) does it reproduce existing social inequalities? How does it reconfigure them?

A key question to ask is therefore ‘who benefits from the collection of this data in education contexts’?

Data surveillance (dataveillance) supports data profiling and crucially, ‘predictive’ profiling (p.74) (echoing Eynon’s point about predicting college drop-outs). Digital surveillance is of course helped, and perhaps made more transparent by the increasing use of VLEs in educational contexts. Whilst this is often framed as an opportunity to evaluate the effectiveness of different aspects of a course, this heightened transparency can lead to ‘coded suspicion’ between academic staff, administrators and students (Knox 2010).

In addition to creating suspicion, analysing data is inherently reductive. Nuanced social meaning is easily lost when data is presented as discrete and finite. We therefore need to consider which reductions matter specifically in relation to education. Selwyn argues that, firstly, we must acknowledge that we tend to measure what we can measure most easily. In an education context this means we measure attendance, student satisfaction and assessment results – all of which can be crude instruments.

Finally, all educational researchers need to be familiar with a variety of data tools and analytics models. In his conclusion Selwyn argues that we need to refuse to take digital data ‘at face value’ but rather recognise the ‘politics of data’ in education and act against it (p.79).

—————————————————————-

Key questions to ask in relation to Big Data in Education

  • what data do we have?
  • what data do we need?
  • how is the data collected?
  • how does the harvesting of data affect relationships between faculty, administrators and students?
  • who benefits from data collection?
  • to whom is the data being made available?
  • who is collecting data in education?
  • what skillsets do data researchers need to better understand data?

 

 

Opportunity costs and badges of honour

Reflections on week 7: infrastructures, credentialing and badging

A summary of Edwards, R. 2015. Knowledge infrastructures and the inscrutability of openness in education. Learning, Media and Technology, 40(3), pp.251-264.

What is the opportunity cost of online education? Although a term traditionally used in accounting, this seems a useful analogy here when discussing the main thrust of Edwards’ argument. “Openness alone is not an educational virtue” (p.253) as the pursuit of openness does not equate to additional educational opportunities. A path taken is a path not taken. Therefore, we need to ask ourselves “what forms of openness are worthwhile, and for whom” (my emphasis) (p.253). Except, what of the circumstances when open education does represent an additional opportunity? I’m thinking of when Dr Emma Smith, Professor of Shakespeare Studies at the University of Oxford, made her Approaching Shakespeare lectures freely available on iTunes. For little to no extra effort on the part of the lecturer, a series of OERs was created and distributed. I am struggling to think what the opportunity cost of this would be.

Edwards makes the important point that the positive claims made for ‘open education’ need to be checked with the following:

  • the availability of electricity and bandwidth (and hardware and software)
  • how the digital selects data, information and knowledge
  • the worthwhileness of the OERs (do they match participants’ goals and aspirations?)
  • what is learnt, rather than what is available (much harder to measure)
  • how is knowledge produced?

The paper then goes on to investigate the concept of knowledge infrastructures. Because there is a selection at play with knowledge infrastructures, we need to pay attention to the ontologies developed and deployed: ‘the digital is not a neutral tool for learning, but is an actor in shaping possibilities for education’ (p.259). This is particularly true when considering the increasing importance of algorithms in our digital lives. Edwards argues that algorithms can’t be contained by the framework of current disciplines (eg computer science, sociology). They are inscrutable (Barocas, Hood, and Ziewitz 2013). This means that ‘teach students to code’ is not a satisfactory answer to the question of hidden knowledge infrastructures.

At this point in the paper, I was thinking, yep, this is great, but there’s a lot of description in this paper, and very little prescription. As such, I was pleased to see the author close with a reference to Edwards et al. (2013) and their ‘strategies for researching the work of the digital in knowledge infrastructures’.

—————————————————————

“Honor is a mere scutcheon” (Falstaff) Henry IV part I.

I’ve been thinking about this (honour is but a badge, as opposed to a badge of honour) when reading Halavais, A.M.C (2012) A Genealogy of Badges: inherited meaning and monstrous moral hybrids, Information, Communication & Society, 15:3, 354-373.

Before reading the article, I thought about what came to mind when someone mentioned badges. I thought of this. And this. And this. So, when Halavais opens his paper with ‘badges have baggage’ I am inclined to agree.

The paper starts as an interesting walkthrough of the history of different types of badges:

  • Badge as persona / identity
  • Badge as achievement
  • Badge as member of a group
  • (Because of the history of badges of dishonour, they are rarely found in the online world)
  • Badge as grading of skill. This has advantages for the organisation (readily identifiable skill-set) as well as the individual (incremental rewards rather than having to wait years for mastery)
  • ‘Campaign badge’. An online equivalent of a campaign badge (the overlay of a Facebook profile pic for example) serves two functions: promoting a political cause and signalling user’s interests and attitudes
  • Fake badges – at present (2011) online badges are not valuable enough to bother faking. I shall have to read further to investigate if this is still the case in 2016.

I particularly enjoyed Halavais’ neat summary ‘part of the problem with badges is simply that they continue to look like badges’ (p.367). In other words, they can carry with them both intended, and unintended value leakage.

The author then introduces Jacobs’ argument in Systems of Survival (1992) that the competing values of the guardian class vs the commercial class are complementary on a social scale, but when the same actors engage in a combination of values from each syndrome, it produces ‘monstrous moral hybrids’ (p.368). This again reminded me of how Shakespeare explores such issues in his contrasting of the valiant Hotspur and pragmatic Falstaff. Both Hotspur and Falstaff need each other to frame what they are *not* as much as what they are. I found the argument that ‘emergent governance’ and ‘stewardship governance’ (Wenger 2004) should not attempt to exert their interests through the same system, else risk ‘significant dysfunction’ (p.369), to be convincing. It also reminded me of one of the contradictions Knox (2013) highlights in Five Critiques of the Open Educational Resources Movement: ‘In proposing that university approval for qualifications will raise the perception of OER, Macintosh, McGreal, and Taylor (2011) appear to acknowledge the status and value of the institution. Yet, in advancing a model of self-directed OER learning, the pedagogical proficiency that undoubtedly contributes to the prestige of the institution is eliminated’ (p.825).

As a postscript, I notice that Mozilla created Open Badges in 2011 – the same year as the Halavais paper. I should like to write a further post on how, and if, we should review the Halavais paper in light of developments in open badges in the last five years.

Cost is not the only barrier (part 2): reflections on enrolling on a MOOC

One of the activities for week 6 was to enrol on a MOOC to allow us to critically evaluate the educational activities it provides. For this task, I chose a MOOC developed by the University of Edinburgh and delivered on the MOOC platform Coursera, titled ‘The Making of the US President: A Short History in Five Elections’.

How did I search for this MOOC?

Well, firstly, I had to know that MOOCs are a thing. It just so happens, because of the line of work I’m in, that I have known about MOOCs since around 2011. In addition, as stated above, this was a key task for the course and we were even directed to some examples. For this exercise, the most important criterion in searching for a MOOC was the start date (an on-demand MOOC was also a possibility). However, I am assuming most people enrol because they are interested in the subject, and will wait for the start date if required.

What happened when I enrolled on the course?

I received an email from the platform welcoming me to the course. The introductory message was nice and short and included the following:

Follow us on twitter @MakingPresMOOC #MakingPresMOOC.

As a frequent Twitter user I liked this. It also served to remind me how Twitter has become an ideal companion to MOOCs. However, while not a prerequisite of the course, it does exclude participants domiciled in countries where Twitter is banned.

I then received a second ‘welcome to the course’ email. This included a prompt to watch a short video – a familiar approach for a MOOC user. However, I was expecting (and hoping for) an introductory video talking about what was going to be covered on the course. Instead, the video was a short lecture which threw us straight into the subject area. Personally, I am a fan of ‘week 0’ type activities for online courses. Give me a list of things to do, which I can tick off, and which can help make me feel that I am prepared for the course, in advance of any actual teaching. This approach therefore left me feeling a little under-prepared.

A note about the user interface

I noticed that Coursera is now providing an ‘interactive’ transcript. This is something Lynda.com has been delivering for years. I once asked a Lynda.com representative at the Learning Technologies Fair if this was openly sourced. Alas no.

I also noticed the ‘thumbs up / thumbs down’ option for each video. It seems a little incongruous here. How am I, a student on the course, going to benefit from this feature? Is there a place where I can find all videos I’ve ‘liked’? Will the videos be ranked in some way? Or is this (as I suspect) simply a way for the MOOC delivery team to review course content?

A note about the video content

Each video lecture is the lecturer talking to camera, with no cutaways. This felt like a missed opportunity. Why not use other assets to enhance the points you are trying to make? It takes the lecture format and transposes it onto the screen, rather than recontextualising the material.

Course format

I notice the second item in the course is ‘Learning Objectives’ which reminded me of Donald Clark’s rant about the dangers of including these.

Bugbear

Several years ago I used to bank with Barclays. If I ever had to use telephone banking I would eventually be put through to a representative who would assist with transferring money / amending a standing order etc. However, the rep would *always* then try and sell a particular product to me before allowing me to finish the call. It was infuriating. I was reminded of this when at the end of each video lecture for this MOOC, I was prompted to pursue the certification route. There are very good reasons to follow the certification route (not least the greater likelihood you will complete the course). But to sell this at the end of every video felt, well, rather vulgar.

Summary

This particular MOOC didn’t feel like it was trying to do anything different to any other MOOC I have enrolled in previously. It consisted of the following:

  • A series of video mini-lectures.
  • A discussion board where participants can post questions / observations.
  • A short video with a Q&A format where a ‘producer’ asks the lecturer some key questions raised on the discussion boards
  • Multiple choice type quizzes are used to assess whether the participant ‘passes’ the course.

If we are to consider Gregory Bateson’s Hierarchy of Learning (as mentioned in the Gardner Campbell Keynote – Ecologies of Yearning – Open Ed 12) when assessing this MOOC, I would argue that it sits somewhere between learning I (change in specificity of response by correction of errors of choice within a set of alternatives) and learning II (learning-to-learn, context recognition). It certainly doesn’t take us to learning III (meta-contextual perspective, imagining and shifting contexts of understanding) – “where we become most human and where we can exercise agency within an ecology of ideas”.

I particularly enjoyed the point raised by Campbell when talking about a quiz he asks students to take prior to each class. The purpose is not to show that they can recall information (although that is useful) but to help them develop a habit of being (like reading assigned material twice, or reading unassigned material). MOOCs are usually around 5-7 weeks in duration. The Making of the US President: A Short History in Five Elections is only three weeks in duration. I would argue that it isn’t possible to help foster a habit of being in this time.
