
Blog posts from my participation in Introduction to Digital Environments for Learning, amongst other stuff


Week 11 activity: what kind of space is IDEL? (part 2)

On reading Cousin, G. (2005), ‘Learning from cyberspace’, in R. Land & S. Bayne (Eds.), Education in Cyberspace (Abingdon: RoutledgeFalmer), I was struck by the following quote from Davis, E. (1998), TechGnosis (Harmondsworth: Penguin):

The moment we invent a significant new device for communication – talking, drums, papyrus… – we partially reconstruct the self and its world, creating new opportunities (and new traps) for thought, perception and social experience [my emphasis].

It was this, the sense that we should be aware of how we (re)construct our world, that led me to the metaphor below.

The obvious metaphor would be a literal virtual campus. Again, as quoted in the Cousin paper, Oxford University invites its students:

… to visualise a virtual learning environment … [as] the plan of the campus of any educational establishment with which you are familiar.

I could have built a plan of the campus, including the library, IT help, student union etc., and used this to represent their online equivalents. In fact, this is frequently how universities do attempt to represent themselves. As mentioned in Bayne, S., Gallagher, M. S., & Lamb, J. (2014), ‘Being ‘at’ university: the social topologies of distance students’, Higher Education, 67, 569-583, this is one of the many ways in which the “privileging of the bounded space of the campus is played out” (p.576).

Furthermore, I wanted to demonstrate how, as Cousin puts it so succinctly, “the medium is the pedagogy” (p.117). Or, as Bayne, Gallagher and Lamb (2014) put it:

“What it means to be ‘on the course’, or to be ‘at’ Edinburgh, is never one thing—it is always multiple, enacted differently for every student almost at every moment.” (p.580)

Here is a very basic interactive graphic demonstrating my IDEL metaphor (explained in previous post).

For this graphic I used Thinglink. The basic (free) version doesn’t allow for the upload of images to accompany the on-hover states. I had hoped to include further images from the 59 Productions video to better explain my concept. Images like this:

Me as actor / camera operator

and this:

Me as mixing desk operator, choosing my ‘shots’

 

I hope the metaphor conveys a sense of layered, textured, multiple perspectives enacting simultaneous multiple realities of what it means to be ‘on’ the IDEL course.

 

Week 11 activity: what kind of space is IDEL? (part 1)

Tasked with thinking about a suitable metaphor for the IDEL course (or rather my experience of the IDEL course) I imagined myself at a mixing desk. A little like this:

Sitting alone with an array of tools at my fingertips, with the recommended texts represented here by the original audio recordings. Each selection on the mixer would produce a different texture, a different quality, would reveal something new. The near-infinite number of possible outputs was exciting, and simultaneously overwhelming. This simultaneousness also struck a chord. If I were to try and distil the main theme of the course into one word it would be “and”. Recognising that something can be both present and distant, closed and open, transparent and hidden, has been essential to my understanding of the themes raised.

Except I’m not really alone, am I? I have a personal tutor to guide me. And a team of tutors who are guiding the whole cohort through the myriad texts, themes and approaches. And then there are my peers. I know they’re there because I can see them interacting on the discussion boards.

So, I tried to think through the mixing desk metaphor a little further and thought about 59 Productions’ Live Cinema performances. Here is a short video about the process of making a “Live Cinema” performance:

What I find fascinating about this approach to live performance is the technique (simultaneously depicting the scene and showing the artificial creation of that scene), and how making transparent the artificiality of the image doesn’t diminish the image but adds to it. It is able to create multiple, synchronous realities.

I think the metaphor continues to work when we consider the IDEL course as a performance. Each instance of the course is a new performance, with new actors (students) – all directed by the IDEL teaching team. I am simultaneously both an actor on stage and the mixing desk operator, selecting which shots to include in the projection.

I heart the Library

I mentioned in an earlier blog post that I have become particularly fond of the 5th floor of the University Library in George Square. In fact I am here again as I type this. I went to check in via Foursquare and noticed the following comment from Pearson – Always Learning:

George Square Library gets the thumbs up

So I am clearly not alone in favouring this spot for some quiet, reflective study. A simple look around me also tells me this. In Week 12 (revision week) there is not a spare desk to be found. It occurs to me that there are multitudinous reasons why this space is favoured:

  • it is warm (it is currently -3°C outside)
  • there is no cost to entry (except of course a valid student or staff ID card)
  • there is no cost to staying
  • it is quiet
  • it has plug sockets for your mobile devices (well, ordinarily it does)
  • it has printers
  • it has books
  • it has lots of other people doing something very similar to yourself.

It is this last point which I think warrants emphasis. Does being surrounded by people engaged in a similar activity help create a sense of community (even if you are not talking to any of them)? I think it definitely helps combat the sense of isolation that is often cited as a downside to online study.

Finally, I want to mention how the sounds of the library (the hushed whispers, the crinkling of a crisp packet, the unzipping of a bag) bring to mind the sounds associated with ASMR. If I were to record these sounds (for the purpose of listening to them in a non-library environment) I should therefore utilise the binaural recording method¹.


¹ @Phil – I would love to hear the soundscape you recorded for the Library project. Where did you place the mics?

I know this blog post has no academic value. I just wanted to log my thoughts. Happy to remove before submitting if you think appropriate.

 

Expectations of privacy

I’m a keen follower of Robert Sharp’s blog. In his speech arguing for a better debate around no platforming on campus, I was struck by the following:

“Having people read over your shoulder chills free expression! In order to write, think, speak freely we need an expectation of privacy” (my emphasis).

This seemed to me to encompass a few of the issues we have covered in recent weeks on the IDEL course. I was surprised to find that the ‘spaces’ theme covered in weeks 10 and 11 didn’t touch on the concept of ‘safe’ spaces (or perhaps it did and I missed it). Does an ODL student require a safe space as much as an on-campus student? What would this look like? Is a statue of Cecil Rhodes just as offensive to an ODL student as it is to an on-campus student who has to walk past it every day? As covered in my recent post on being ‘at’ and ‘in’ Edinburgh, we experience campus in multiple ways.

It also, I think, touches on the discussion we had in weeks 8 and 9 on data analytics. Do data analytics have the same potential to chill free expression? What if, by being transparent about the data the institution collects from the student, we are not empowering the student to make more informed choices but rather running the risk that they will attempt to play the game, and do what they think is expected of them? Or perhaps we should just accept that Big Data is a thing and it is therefore morally incumbent on the institution doing the collecting to help students interpret this (in much the same way that the Managing your digital footprint project tries to do with students’ social media presence)?

Finally, what does an expectation of privacy mean in relation to a class? Are classes private spaces? Does moving them online (with the resultant shift from speaking to writing, or perhaps more accurately the shift from something impermanent to permanent) change how students experience the class? Or in the age of social media do we (as students) only feel that we have experienced learning if we have logged it somewhere digital (ie not via any Ars Memoria technique)? I am reminded of the example given in week 1 of the IDEL course where ‘Joe’ takes a photo of himself reading a course paper and then tweets it to prove that he is studying.

Being at, and in, Edinburgh

This week, I was delighted to have a reason to re-read Bayne, Gallagher and Lamb’s (2014) paper Being ‘at’ university: the social topologies of distance students.

The paper draws on Mol and Law’s four kinds of social space:

  • regional (stable boundaries)
  • networked (stable relations)
  • fluid (shifting boundaries and relations)
  • fire (complex intersections of presence and absence).

and organises the research data into three broad themes:

  • homing and the sentimental campus
  • the metaphysics of presence (campus envy)
  • the imagined campus.

Interviewing students (current and recently graduated) from the MSc in Digital Education programme, the authors found that “the material campus continues to be a symbolically and materially significant ‘mooring’ for a group of students who may never physically attend that campus” (p.581).

This concept of ‘mooring’ is echoed in other parts of the paper. When students talked of travelling to Edinburgh for the graduation ceremony the campus “becomes talismanic, the ‘single present centre'” (p.578). Similarly, there was a “tendency for students to view the campus not so much as a sentimental ‘home’ … but rather as a kind of touchstone—a logos—which functioned as a guarantor of the authenticity of academic experience which was not always easy to articulate” (p.577).

This echoed my own experience with the programme (albeit as a citizen of Edinburgh). A few years ago, working as an instructional designer, I investigated the part-time postgraduate opportunities in my field. Two opportunities presented themselves: an MSc in e-learning (as it was then known) at the University of Edinburgh and an MA in Online and Distance Education with the Open University. The Open University were the originators of distance education, so why did I choose the University of Edinburgh? A (misplaced) sense of prestige? Perhaps. But I think it was something more than that. I had studied as an undergraduate at Edinburgh, so I was familiar with the campus. Even though I would not be attending campus for class, and all I would need is this:

 

wifi icon

and this:

power point

 

I would be picturing this:

Old College

and this:

New College

and of course, this:

The Prime of Miss Jean Brodie opening credits

 

In fact, as I am writing this I am sitting in the ECA Library at Evolution House. Look at me, being all studenty, with my week 10 reading printouts and my laptop:

laptop and print-outs

And if I turn my head and ignore the brutalist Argyle House, I can just about make out Edinburgh Castle and imagine Jean Brodie giving me a history of the Old Town.

View from ECA Library, Evolution House


Images of Old and New Colleges taken from the University of Edinburgh Image Collections

Week 9: Learning Analytics (reflections on the Clow paper)

Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, 18(6), 683–695.

I was particularly interested to read of the example of faculty at Texas A&M University being measured against their net contribution (or not) to the University’s financial position. I am aware of similar concerns amongst academic colleagues at the University of Edinburgh. When the University recently announced that they were investing millions of pounds in a new lecture capture service, academic staff raised concerns that their performance metrics would be used to inform their annual review. It perhaps doesn’t help that the current lecture capture service employed by the University is called Panopto.

It’s always nice to see a diagram in an academic paper. Clow’s Learning Analytics Cycle draws on Campbell and Oblinger’s (2007) five steps in the learning analytics process: capture, report, predict, act and refine.
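To fix the five steps in my own head, I sketched them as a loop in Python. This is my own toy rendering, not Clow’s or Campbell and Oblinger’s formalisation; every function name, data value and threshold below is an invented placeholder.

```python
# A toy rendering (mine, not Clow's) of Campbell and Oblinger's five steps.
# All names, data and thresholds here are invented placeholders.

def capture(vle_log):                        # 1. capture raw activity data
    return {student: len(events) for student, events in vle_log.items()}

def report(counts):                          # 2. report it back
    for student, n in counts.items():
        print(f"{student}: {n} logins this week")

def predict(counts, threshold):              # 3. predict who may be at risk
    return [student for student, n in counts.items() if n < threshold]

def act(at_risk):                            # 4. act: prompt a tutor check-in
    for student in at_risk:
        print(f"Flag {student} for tutor follow-up")

def refine(threshold, at_risk):              # 5. refine the model for next week
    return threshold - 1 if len(at_risk) > 1 else threshold

vle_log = {"ada": ["login"] * 9, "ben": ["login"]}
counts = capture(vle_log)
report(counts)
at_risk = predict(counts, threshold=3)
act(at_risk)
threshold = refine(3, at_risk)
```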

Predictive modelling

Clow outlines the practical differences between predictive modelling and the ‘human’ equivalent of a teacher giving extra help to students they notice may be struggling thus:

  1. “the output of predictive modelling is a set of estimate probabilities and … many people struggle to correctly understand probabilities” (p.687)
  2. the student data is made available to others (not just the teacher)
  3. the data can “trigger actions and interventions without involving a teacher at all”.

This last point feels significant as it corresponds with many teachers’ fears that the locus of power is shifting away from the teacher and towards the faceless administrator. It was therefore interesting to read of the Course Signals project at Purdue University. Perhaps integral to the success of the project is the fact that “the teacher is central to the process and uses their judgement to direct students to appropriate existing resources within the university” (p.688).
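To make Clow’s first point concrete, here is a minimal sketch of a predictive model whose output is an estimated probability rather than a verdict. This is my own illustration, not Course Signals or any real institutional model; the features and outcome labels are invented.

```python
# A minimal sketch of predictive modelling output. Not a real model:
# the features (logins, forum posts) and outcome labels are invented.
from sklearn.linear_model import LogisticRegression

# hypothetical features per student: [logins per week, forum posts per week]
X = [[1, 0], [2, 1], [8, 4], [9, 6], [3, 0], [7, 5]]
y = [0, 0, 1, 1, 0, 1]  # toy labels: 1 = completed, 0 = withdrew

model = LogisticRegression().fit(X, y)

# The output is a pair of estimated probabilities (withdraw, complete) --
# exactly the kind of figure Clow says many people struggle to interpret.
print(model.predict_proba([[4, 2]]))
```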

This discussion also prompted some other questions:

  1. as predictive modelling ensures efforts are aimed at ‘marginal students’, is this at the expense of other students? (the experience of Signals at Purdue suggests this doesn’t have to be the case)
  2. could an unintended consequence of predictive modelling be a trend towards more conservative choices regarding courses? In other words, would institutions end up prioritising existing courses (because we have data for these) against new courses?

 

Social Network Analysis

It’s hard to think of Social Networks without thinking of The Social Network. Nevertheless, it was interesting to read of the SNA projects Clow describes.

What wasn’t discussed here, but is of interest, is if/how students behave differently on ‘professional’ social networks (eg forums where they are being assessed, forums which their tutor can access) and ‘personal’ social networks (eg Facebook). Does ‘editing’ oneself in the former encourage similar behaviour in the latter?

The possibility of a richer (computational) analysis of textual data is an interesting field of study, and Clow refers to the Point of Originality tool which uses the WordNet database to identify originality in key concepts. Clow notes “a strong correlation between originality scores in the Point of Originality tool and the grades achieved for the final assessment and also between the originality of their writing and the quantity of their contributions online” (p.690). However, it is important to remember that correlation does not equal causation.
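I don’t have access to the Point of Originality tool itself, but the WordNet database it draws on can be queried freely. A quick sketch via NLTK (assuming the library is installed and the wordnet corpus downloaded) gives a flavour of how a tool might relate a student’s word choices to concepts in the source material:

```python
# Illustrative only: querying WordNet via NLTK, not the Point of Originality
# tool. Requires `pip install nltk` and a one-off nltk.download('wordnet').
from nltk.corpus import wordnet as wn

# WordNet groups words into synsets (sets of synonyms), so a tool can ask
# whether a student's phrasing is a near-synonym of the source material.
for synset in wn.synsets("original")[:3]:
    print(synset.name(), "->", synset.definition())

# A crude relatedness measure between two concepts (path similarity, 0-1):
print(wn.synset("teacher.n.01").path_similarity(wn.synset("tutor.n.01")))
```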

When Clow suggests that “perhaps the greatest potential benefit [of recommendation engines] lies in more open-ended and less formal learning contexts” (p.691) it’s hard to disagree. However, warnings about the dangers of Filter Bubbles should be heeded here too.

Finally, I was most struck by the following point made by Clow in the Discussion section:

“the opportunity to learn by making mistakes in a safe context can be a powerful learning experience, and not many learners are happy to have their mistakes kept on record for all time” (p.692).

How can we ensure the student data we track and measure, and present to administrators, teachers, and (at times) the students themselves, benefits their learning, if by the very nature of the task we are performing we are creating a relationship of mistrust which compromises the learning at the outset? In other words, the Panopticon (below) doesn’t look to me like the optimal space for learning.

The Panopticon

Your attendance has in general been poor

The GIF above is a pretty accurate metaphor for how I’ve felt for the last two weeks. One week of no study was all it took for me to fall behind. And so, week 9 proved an opportune time to participate in the weekly ‘Larcing’ about with your data activity.

Tasked with generating at least three reports (and given the freedom to choose those ‘themes’ we wished to analyse) I did what I suspect most did: ran a report for every week of the course so far, selecting all themes. If Moodle has collected this much data on my interaction with the VLE, why would I limit myself to viewing only some of the data? Was this a belated attempt to leverage some control over the process?

So, what did the LARC data tell me and how was it presented?

Your attendance has in general been poor

Oh. What about the following week?

Your attendance has in general been poor and this week you logged on less often than usual

Oh.

Hmm. OK, there were some other words but these were the first and this is what I remember.

Interestingly, it is only possible to run weekly reports in LARC. As such, in order to gain a sense of how I was performing (according to the metrics) over the duration of the course, I had to input the quantitative data into a spreadsheet myself. What did this tell me?

  • My ‘attendance’ never reached above 32% of class average.
  • My ‘interaction’ performed much better, nearly always recording as above 100% of class average.
  • My ‘social engagement’ also appeared to be pretty poor (only twice recording as above class average).

So how did this data, presented in this way, make me feel?

The first thing to mention is that the assumptions made by the data and presented back to me don’t quite feel right. Not because it shows me as performing poorly (I expected as much) but because it doesn’t resemble my recollection of my involvement with the course. For example, I haven’t posted in the forums in weeks (as evidenced by the cumulative tally of positive / negative / neutral posts) so why does my ‘social engagement’ score not reflect this? How is LARC calculating social engagement?

I am assuming that ‘interaction’ is calculated by clicks on core and recommended reading. This seems to equate to what I remember.

Is ‘attendance’ calculated simply by the number of times I have logged into Moodle? Surely it’s more complex than that? Does it include the number of clicks performed once logged in? Does it include interaction with the forums? Is there an overlap between the metrics used for measuring attendance, and those used for measuring ‘social engagement’?
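As a thought experiment (LARC’s actual algorithm is hidden, so every figure below is invented), if ‘attendance’ really were just logins expressed as a percentage of the class average, the calculation would be trivial:

```python
# Pure speculation about LARC's hidden metric: attendance as logins
# relative to the class average. All figures are invented.
my_logins = 4
class_logins = [12, 15, 9, 20, 4, 14]   # hypothetical cohort, one week

class_average = sum(class_logins) / len(class_logins)
attendance = 100 * my_logins / class_average
print(f"Attendance: {attendance:.0f}% of class average")  # -> 32%
```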

You do not really care what others in the class think about you

This is true. But what if I did? No one (and no algorithm) can tell someone how they feel. So why is LARC attempting to do so? What is it hoping to achieve by including this in the ‘feedback’?

A question of trust.

The definition of learning analytics adopted by the First International Conference on Learning Analytics and Knowledge in 2011 is:

“the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs”.

I would like to argue that optimising learning requires a relationship of trust. When we partake in a multiple choice quiz we trust that there are discrete answers which a machine is capable of identifying as correct or incorrect. However, when we allow our data traces to be interpreted by a machine to make predictions regarding success, we become understandably cautious. Our relationship with the machine has become distrustful. Perhaps we are frightened that the machine can tell us things about ourselves we would rather not know. Or perhaps we intrinsically understand that Big Data is being done to us. Also, what does it do to the relationship between student and tutor? I must confess to being a little disappointed at the thought that when my tutor reached out to me to check in after a period of inactivity, she might not have done so because she happened to notice my absence, but because she was prompted to by some red traffic light on her dashboard.

One question Hamish McLeod posed on the forum this week was ‘how might the average student react to this information?’ Learning Analytics ‘solutions’ are often presented as universal. But, I would argue, there is no such thing as an average student. Different demographics will often respond differently to this kind of information. For everyone who wants to know how they are doing compared to their peers, there is someone else who doesn’t. And if we simply make this information available and say ‘it’s your choice if you wish to access it’, aren’t we transferring the responsibility for interpreting that data from the institution to the individual?

Do hidden algorithms = hidden agendas?

The LARC activity for this week asked us, the IDEL students, to reverse engineer the tool in an attempt to identify the logic behind it. Many of the algorithms used in learning analytics are proprietary and therefore hidden. But are there other (good) reasons why the algorithms should be concealed? One argument is that if they were revealed (to students), this would offer opportunities, like teaching to the test, to ‘game the system’. I’m not sure how convincing this particular argument is. Let’s go on a thought experiment.

I am a student on the IDEL course and I have access to LARC from the start of the course. I generate a report in weeks 1 and 2 and notice that all I have to do to improve my attendance score is to log in to Moodle at least once a day. I do this (because it’s easy and I care about my scores, even if they don’t feed into any summative assessment) but no other change of behaviour is recorded. My reports don’t raise any kind of red flag to my tutor, who continues to give feedback on my blog posts.

Has having access to LARC benefitted me (the student) in any way? Would knowing how the attendance score is calculated benefit me in any way? Or would it mean I spend precious, valuable time, trying to improve my score? Time which would be better spent reading course material and interacting on the forums?

As an exercise in trying to understand which metrics were used in the algorithms employed in LARC, I found this week’s activity incredibly useful. But for every metric I could think of, I could think of countless others which aren’t measured (and aren’t measurable). So, bearing in mind we are working with a limited data set, how responsible is it to provide students with this information? How do we know that we are telling students the ‘right’ thing? What is the right thing? Aren’t there as many ‘right things’ as there are people? Is it ethical to capture this data and present it to administrators and tutors but keep it from students? Conversely, is it ethical to present this data to students when it may provoke the opposite of the desired effect of encouraging greater participation and success? In short, is it going to make them feel like the guy in the GIF at the top of this post?

Week 8: Data and Education (visualising IDEL data)

Last week (week 8) we were tasked with visualising Twitter data from IDEL using TAGS explorer. The task involved searching the hashtag #mscidel. The resulting visualisation looked like this:

TAGS visualisation of the #mscidel hashtag

I then tried some alternative search criteria and ran the script again. However, the resulting visualisation appeared to confuse the results of both searches. I would need to play more with the script to investigate what the issue was here.

The initial visualisation demonstrated a mismatch between the potential of Twitter data harvesting and the relatively small-scale activity around the #mscidel hashtag. I would imagine a lot more could be gleaned from a MOOC hashtag, for example.

It’s also worth bearing in mind that all this tells us of course is who has typed #mscidel in a tweet. It doesn’t tell us who is on the course (although it is unlikely that it would be used by someone not enrolled on the course). It also doesn’t tell us everyone enrolled on the course. Many course participants either won’t have a Twitter account or simply won’t have tweeted using that particular hashtag. And how do we account for a possible hashjacking?
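TAGS does all of this inside a Google Sheet, but the same harvest could be scripted directly. Here is a minimal sketch against Twitter’s v2 recent-search endpoint (which postdates this post; the 2016-era search API worked differently), assuming you hold a valid bearer token:

```python
# Sketch of harvesting #mscidel tweets via Twitter's v2 recent-search
# endpoint (covers roughly the last 7 days). BEARER_TOKEN is a placeholder.
import requests

BEARER_TOKEN = "YOUR_TOKEN_HERE"
resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={"query": "#mscidel", "tweet.fields": "author_id,created_at"},
)
resp.raise_for_status()
for tweet in resp.json().get("data", []):
    print(tweet["created_at"], tweet["author_id"], tweet["text"])
```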

So we know what data we have. But what data do we need? Are we using Twitter to harvest data just because we can? What questions are we trying to answer? Who is ‘we’? Do we understand why we are collecting this data? Who is ultimately benefitting from the collection of this data? To whom is the data being made available? Finally, and perhaps most importantly, what are the ethical considerations of this activity? Harvesting data from Twitter appears at first uncontroversial. Twitter is an open platform and each tweet can be considered a publication. However, Michael Zimmer raises some interesting points in his blog post Is it ethical to harvest public Twitter accounts without consent? Can we really assume that those who tweet do so understanding how their data may be used? And even if we can conclude that we don’t need to seek specific consent from tweeters to harvest their data, how do we suppose this data will be used? I was particularly interested to read of Militello et al.’s (2013) study which showed the contrast between how different groups responded to data (Selwyn 2015, p.71). If education researchers are to use Twitter APIs, these are the kinds of questions we need to keep at the forefront of our minds.

Week 8: Data and Education (thoughts on key readings)

My studies last week were interrupted by the Presidential election in the USA. Like millions of others, I spent Tuesday evening checking the results as they were first broadcast by the TV Networks. It soon became clear that, not for the first time, the pollsters got it wrong.

As Selwyn reminds us in Data entry: towards the critical study of digital data and education, data in digital form are now being generated and processed on an unprecedented scale (p.64). But is Big Data the panacea it is often presented as?

In an age when we have access to more data than ever, how useful is this data?

As discussed in the TED Radio Hour episode Big Data Revolution, data is everywhere. But what is the value of Big Data? And which metrics do we overvalue and which do we undervalue? What does this tell us about ourselves?

If we are to maximise the possibilities of Big Data we must first acknowledge that data can be a blunt instrument. As data analyst Susan Etlinger says in the episode, ‘data doesn’t create meaning – people do’. We therefore need to spend more time on our critical thinking skills. An important question Etlinger raises is: did the data really show us this? Or does the result make us feel more successful, or more comfortable?

All core readings for this week explored various considerations around what Big Data means for education. In The rise of Big Data: what does it mean for education, technology, and media research? (2013) Rebecca Eynon argues that ‘as a community we need to shape the (Big Data) agenda rather than simply respond to the one offered by others’ (p.238) and offers three areas requiring particular attention:

  1. What are the ethical considerations surrounding Big Data? Eynon offers a clear example in the shape of using data to predict drop-out rates. If an institution calculates that a particular student is likely to drop out, what does it do with that information?
  2. What data do we have? We can only study data we have or can collect, therefore the (limited) data we have restricts what we can research (including inferring meaning).
  3. How Big Data can reinforce and even exacerbate existing social and educational inequalities.

Eynon also raises the challenge of how we train (future) academics in this field to ensure ‘we use these techniques to empower researchers, practitioners, and other stakeholders who are working in the field’ (p.240). This point is echoed in Learning in the Digital Microlaboratory of Educational Data Science, where Ben Williamson references Roy Pea (Stanford University), who has called for a new specialised field in this area and identifies “several competencies for education data science”. The report also calls for ‘new undergraduate and graduate courses to support its development’.

Williamson then goes on to discuss the educational publisher and software vendor Pearson and their Centre for Digital Data, Analytics and Adaptive Learning. Digital microlaboratories such as these ‘relocate the subjects of educational research from situated settings and psychological labs to the digital laboratory inside the computer, and in doing so transform those subjects from embodied individuals into numerical patterns, data models, and visualized artefacts’. What nuances are lost in this?

I was interested to learn of the startup schools Williamson refers to (AltSchool, Khan Lab School, The Primary School) which utilise ‘data tracking and analytics to gain insights into the learners who attend them, in order to both “personalise” their pedagogic offerings through adaptive platforms and also test and refine their own psychological and cognitive theories of learning’.

Also of interest was how Pearson has partnered with Knewton, whose Adaptive Learning Platform uses proprietary algorithms to deliver a personalised learning path for each student.
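Knewton’s algorithms are proprietary, so the following is only a toy illustration of the general idea of adaptive sequencing (serve the item nearest the current ability estimate, then nudge the estimate after each answer). Every name and number is invented:

```python
# Toy adaptive sequencing, not Knewton's algorithm. Invented names/values.

def next_item(ability, items):
    """Serve the item whose difficulty is closest to the ability estimate."""
    return min(items, key=lambda item: abs(item["difficulty"] - ability))

def update_ability(ability, correct, step=0.3):
    """Nudge the estimate up after a correct answer, down after an error."""
    return ability + step if correct else ability - step

items = [{"id": "q1", "difficulty": 0.2},
         {"id": "q2", "difficulty": 0.5},
         {"id": "q3", "difficulty": 0.8}]

ability = 0.5
chosen = next_item(ability, items)
print("Serve:", chosen["id"])                 # -> q2
ability = update_ability(ability, correct=True)
print("New ability estimate:", ability)       # -> 0.8
```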

This reminded me of Todd Rose’s presentation at TedX on The Myth of the Average. It also reminded me of German Chancellor Angela Merkel’s recent warning on the dangers of the potential of proprietary algorithms to narrow debate.

The paper which discussed in most detail the implications of Big Data for education was Selwyn, N. (2015). Data entry: towards the critical study of digital data and education. Learning, Media and Technology, 40(1). Again, we are reminded that ‘as with most sociological studies of technology, [these] researchers and writers are all striving to open up the ‘black box’ of digital data’ (p.69). Digital sociologists don’t see data as neutral, but rather as inherently political in nature: ‘data are profoundly shaping of, as well as shaped by, social interests’ (p.69). Selwyn argues that educational researchers therefore need to be influencing this new area of sociology. What role is digital data playing in the operation of power? (How) does it reproduce existing social inequalities? How does it reconfigure them?

A key question to ask is therefore ‘who benefits from the collection of this data in education contexts’?

Data surveillance (dataveillance) supports data profiling and, crucially, ‘predictive’ profiling (p.74) (echoing Eynon’s point about predicting college drop-outs). Digital surveillance is of course helped, and perhaps made more transparent, by the increasing use of VLEs in educational contexts. Whilst this is often framed as an opportunity to evaluate the effectiveness of different aspects of a course, this heightened transparency can lead to ‘coded suspicion’ between academic staff, administrators and students (Knox 2010).

In addition to creating suspicion, analysing data is inherently reductive. Nuanced social meaning is easily lost when data is presented as discrete and finite. We therefore need to consider which reductions specifically are at play in education. Selwyn argues that firstly we must acknowledge that we tend to measure what we can measure most easily. In an education context this means we measure attendance, student satisfaction and assessment results – all of which can be crude instruments.

Finally, all educational researchers need to be familiar with a variety of data tools and analytics models. In his conclusion Selwyn argues that we need to refuse to take digital data ‘at face value’ but rather recognise the ‘politics of data’ in education and act against it (p.79).

—————————————————————-

Key questions to ask in relation to Big Data in Education

  • what data do we have?
  • what data do we need?
  • how is the data collected?
  • how does the harvesting of data affect relationships between faculty, administrators and students?
  • who benefits from data collection?
  • to whom is the data being made available?
  • who is collecting data in education?
  • what skillsets do data researchers need to better understand data?

 

 

Opportunity costs and badges of honour

Reflections on week 7: infrastructures, credentialing and badging

A summary of Edwards, R. (2015). Knowledge infrastructures and the inscrutability of openness in education. Learning, Media and Technology, 40(3), pp.251-264.

What is the opportunity cost of online education? Although a term traditionally used in economics, this seems a useful analogy here when discussing the main thrust of Edwards’ argument. “Openness alone is not an educational virtue” (p.253) as the pursuit of openness does not equate to additional educational opportunities. Every path taken is another path not taken. Therefore, we need to ask ourselves “what forms of openness are worthwhile, and for whom” (my emphasis) (p.253). Except, what of the circumstances when open education does represent an additional opportunity? I’m thinking of when Dr Emma Smith, Professor of Shakespeare Studies at the University of Oxford, made her Approaching Shakespeare lectures freely available on iTunes. For little to no extra effort on the part of the lecturer, a series of OERs was created and distributed. I am struggling to think what the opportunity cost of this would be.

Edwards makes the important point that the positive claims made for ‘open education’ need to be checked with the following:

  • the availability of electricity and bandwidth (and hardware and software)
  • how the digital selects data, information and knowledge
  • the worthwhileness of the OERs (do they match participants’ goals and aspirations?)
  • what is learnt, rather than what is available (much harder to measure)
  • how knowledge is produced.

The paper then goes on to investigate the concept of knowledge infrastructures. Because there is a selection at play with knowledge infrastructures, we need to pay attention to the ontologies developed and deployed: ‘the digital is not a neutral tool for learning, but is an actor in shaping possibilities for education’ (p.259). This is particularly true when considering the increasing importance of algorithms in our digital lives. Edwards argues that algorithms can’t be contained by the framework of current disciplines (eg computer science, sociology). They are inscrutable (Barocas, Hood, and Ziewitz 2013). This means that ‘teach students to code’ is not a satisfactory answer to the question of hidden knowledge infrastructures.

At this point in the paper, I was thinking, yep, this is great, but there’s a lot of description in this paper, and very little prescription. As such, I was pleased to see the author close with a reference to Edwards et al. (2013) and their ‘strategies for researching the work of the digital in knowledge infrastructures’.

—————————————————————

“Honor is a mere scutcheon” (Falstaff) Henry IV part I.

I’ve been thinking about this (honour is but a badge, as opposed to a badge of honour) while reading Halavais, A.M.C. (2012). A Genealogy of Badges: inherited meaning and monstrous moral hybrids. Information, Communication & Society, 15(3), 354-373.

Before reading the article, I thought about what came to mind when someone mentioned badges. I thought of this. And this. And this. So, when Halavais opens his paper with ‘badges have baggage’ I am inclined to agree.

The paper starts as an interesting walkthrough of the history of different types of badges:

  • Badge as persona / identity
  • Badge as achievement
  • Badge as member of a group
  • (Because of the history of badges of dishonour, they are rarely found in the online world)
  • Badge as grading of skill. This has advantages for the organisation (readily identifiable skill-set) as well as the individual (incremental rewards rather than having to wait years for mastery)
  • ‘Campaign badge’. An online equivalent of a campaign badge (the overlay of a Facebook profile pic for example) serves two functions: promoting a political cause and signalling user’s interests and attitudes
  • Fake badges – at present (2011) online badges are not valuable enough to bother faking. I shall have to read further to investigate if this is still the case in 2016.

I particularly enjoyed Halavais’ neat summary: ‘part of the problem with badges is simply that they continue to look like badges’ (p.367). In other words, they can carry with them both intended and unintended value leakage.

The author then introduces Jacobs’ argument in Systems of Survival (1992) that the competing values of the guardian class and the commercial class are complementary on a social scale, but when ‘the same actors engage in a combination of values from each syndrome’, it produces ‘monstrous moral hybrids’ (p.368). This again reminded me of how Shakespeare explores such issues in his contrasting of the valiant Hotspur and pragmatic Falstaff. Both Hotspur and Falstaff need each other to frame what they are *not* as much as what they are. I found the argument that ‘emergent governance’ and ‘stewardship governance’ (Wenger 2004) should not attempt to exert their interests through the same system, else risk ‘significant dysfunction’ (p.369), to be convincing. It also reminded me of one of the contradictions Knox (2013) highlights in Five Critiques of the Open Educational Resources Movement: ‘In proposing that university approval for qualifications will raise the perception of OER, Macintosh, McGreal, and Taylor (2011) appear to acknowledge the status and value of the institution. Yet, in advancing a model of self-directed OER learning, the pedagogical proficiency that undoubtedly contributes to the prestige of the institution is eliminated’ (p.825).

As a postscript, I notice that Mozilla created Open Badges in 2011 – the same year as the Halavais paper. I should like to write a further post on how, and if, we should review the Halavais paper in light of developments in open badges in the last five years.
