Usually when I blog I try to think of a funky title to depict my brain dump, however today I am feeling less imaginative so the title reflects the content. It's about the OnTask pilot 2020 @ the University of Edinburgh.
If you want to hear someone rant about big data and prediction, please flick through my previous posts for a flavour; today I want to share what the heck OnTask is and provide some details on the University's pilot in 2020.
So let’s go back a few years to Horizon report scanning, big educational data, business intelligence, and the desire to do something with all of these things. With the digitisation of education (and everything else) we generate data as a daily byproduct of the systems used to teach or support teaching. Now imagine that data being used by an academic to provide a learner with personalised feedback based on rules they have created (an IFTTT-style scenario). Sounds pretty awesome!
OnTask is open source software developed by Abelardo Pardo (University of South Australia). It's currently used by institutions in Australia and the USA; the University of Edinburgh is the first UK institution to pilot OnTask. The basic premise of OnTask is to provide learners/students with actionable feedback at key points within a course to help them stay on task. Some of the benefits can include:
- Increase in course completion
- Improved student experience
- Increase in course activity
- Better understanding of topics (via targeted feedback)
- Reduction in academic workload
- Reduction in transactional distance
I am not going to bore you with the technical stuff (links are at the bottom). To summarise, data can either be uploaded directly into OnTask (exported from an LMS, etc.) or pulled via an SQL connection. The uploaded data can be merged with other data sets, transformed (options include derived columns or columns of random numbers) or simply left alone.
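To make the merge-and-transform idea concrete, here is a minimal sketch in plain Python. The data, column names and email addresses are all hypothetical, and this is my own illustration of the concept rather than how OnTask is implemented:

```python
# Hypothetical data: a roster exported from an LMS and a quiz
# export from a second system (column names are made up)
roster = [
    {"email": "a@ed.ac.uk", "first_name": "Ada"},
    {"email": "b@ed.ac.uk", "first_name": "Ben"},
    {"email": "c@ed.ac.uk", "first_name": "Cat"},
]
quiz_scores = {"a@ed.ac.uk": 4, "b@ed.ac.uk": 2}

# Merge the two data sets on a shared key (email), keeping every
# student, and add a derived column flagging who attempted the quiz
merged = []
for row in roster:
    score = quiz_scores.get(row["email"])  # None if no attempt recorded
    merged.append({**row, "quiz_score": score,
                   "attempted_quiz": score is not None})
```

The derived `attempted_quiz` column is the kind of transformation that later lets a rule target non-attempters specifically.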
From the data we can create personalised emails that drop in column values (first name, matriculation number, score) or define rules that control which content each user sees, depending on whether they meet that rule (e.g. quiz score = 0, quiz score between 1 and 3, quiz score of 4 or more). Links, images, videos and standard text formatting can be added, and instructors can preview each email to check both the content and the rule-based content.
Once the personalised email has been finalised it can be scheduled or sent immediately.
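The rule logic above is essentially a chain of conditions over a student's row. A minimal Python sketch of the idea, using the quiz-score bands mentioned above (the function names and feedback wording are my own, not OnTask's):

```python
def feedback_for(score):
    """Select feedback text using IFTTT-style rules on a quiz score."""
    if score is None or score == 0:
        return "It looks like you haven't attempted the quiz yet; have a go before next week."
    if 1 <= score <= 3:
        return "Good start; the revision materials cover the topics you missed."
    return "Great work; you're ready for the next section."

def build_email(student):
    # Column values (first name, score) are dropped straight into the
    # text, just as OnTask substitutes column values into the email body
    return (f"Hi {student['first_name']},\n\n"
            f"{feedback_for(student['quiz_score'])}")

print(build_email({"first_name": "Ada", "quiz_score": 4}))
```

Writing the three feedback texts is the hard part; the rules themselves are simple (see the lessons learnt below).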
OnTask will be piloted in 2020 on three courses, with an evaluation report published in early 2021. It has already been used in anger on one course, with 600 learners receiving feedback dependent on MCQ activity. Interestingly, some learners have responded directly to the email with course feedback or thanks to the tutor, and we have seen an increase in users participating in the related MCQ.
I will blog our thoughts and findings over the next year; some early lessons learnt so far:
- OnTask best fits medium to large courses where academics can struggle to provide personal feedback
- Academic buy-in is key to success
- The feedback text is the gold (we love to talk tech, however the wording, tone and type of feedback are key and can be difficult to write)
- There is a lot of upfront work to identify key points within a course, what data is available, and whether that data is relevant and accurate
- Reflect on whether feedback is useful (does differentiating the total number of posts in a discussion forum tell us anything that forum participation doesn’t?)
- OnTask can be re-used each time the course runs (so it's not the same pain again)
- Can be used as frequently as the academic wants (maybe once maybe every week)
Links to further Information:
Apereo Foundation https://www.apereo.org/projects/ontask
OnTask Webpages https://www.ontasklearning.org/tool/
Github Repo https://github.com/abelardopardo/ontask_b