You can see more about the reasons for the dispute from UCU, and from Edinburgh’s own Staff Student Solidarity Network.
For me, one of the main reasons is the way that pay has been eroded since 2009 (which happens to be around the time I started working for the University of Edinburgh).
This chart will be familiar to colleagues in the School of Mathematics, since I stuck it up in our common room above the sink! It shows the starting salary for a Lecturer, and for a Reader (the next grade up), and how these have been increased each year by employers since 2009. I picked these job titles as examples, but the same pattern is true for other types of roles too.
The chart also shows what would have happened if employers had increased salaries each year to keep pace with inflation (as measured by RPI):
The key things I take from this chart are:
So – pay has been gradually eroded over the past decade or so, to such an extent that university staff are now effectively demoted by one grade on the salary scale.
If all university staff were demoted by one grade overnight, I’m pretty sure there would be outrage! That’s what I feel looking at this data, showing that we’ve been demoted gradually over many years.
The current dispute is about the pay increase for 2022/23. UCU has asked for an increase of RPI+2% for that year, which would go only a little way to closing the gap. Employers have imposed an increase of 5% and refuse to discuss it any further.
A second reason for me taking part in the strike is to show my disappointment with senior management. I know I’m not alone in this – in the recent Staff Engagement Survey, just 23% of staff agreed with the statement “I have confidence in the university leadership”. In the College of Science and Engineering (where I work), it was just 12% of staff.
The dispute about pay is a national one, and our Principal, Peter Mathieson, has pointed to the affordability of pay increases across the university sector as a reason for not supporting inflation-matching pay rises for his staff.
I reached out personally to the Principal on 30th May to ask:
You point to how some institutions may be unable to afford a larger pay increase than the sub-inflationary one that has been imposed. What are you doing as a leader in the sector to resolve this? Our sector should be strong enough to support pay rises in line with inflation!
I’ve not asked permission to share his reply, so I won’t – but in essence it was: “what do you propose?” I found this shocking. I thought he was supposed to be providing the strategic leadership, in return for his £400k+ salary! It’s not even like this is a one-off issue that could have caught management by surprise – as the chart shows, it’s a systemic issue. Why are management content to do nothing about this year after year?
Senior management have also turned down an opportunity to stop the strikes in week 2 of our semester. The local UCU branch offered to cancel the strikes in return for restoring pay that had been withheld from colleagues taking part in the marking and assessment boycott (and who are now being asked to complete the marking since the boycott has ended). The offer from UCU was refused by management. Meanwhile management are painting a different picture for students (emphasis mine):
Those words sound hollow to me, in light of management’s decision to refuse UCU’s offer on 18 September – which has the effect of knocking out 5 working days when colleagues could be getting on with teaching semester 1 courses and dealing with the backlog of marking.
If you’re a student, please let University management know what you think by emailing the Principal, Peter Mathieson (principal@ed.ac.uk) and/or Vice-Principal Students, Colm Harmon (Colm.Harmon@ed.ac.uk).
If you’re a colleague who’s not already in UCU, it’s never too late to join the union!
Roles of e-assessment in course design
Q27. How can formative e-assessments improve students’ performance in later assessments?
Q28. How can regular summative e-assessments support learning?
Q29. What are suitable roles for e-assessment in formative and summative assessment?
Q30. To what extent does the timing and frequency of e-assessments during a course affect student learning?
Q31. What are the relations between the mode of course instruction and students’ performance and activity in e-assessment?
I’m sure that I’ve not explored the full potential of all the Moodle quiz options, but here are some examples of settings that I use in my Fundamentals of Algebra and Calculus course.
For the course materials within FAC, I have put all the feedback settings to the max, including the option to redo individual questions:
The “scores” on those quizzes really don’t matter for anything, and actually most students never even submit the whole quiz to be graded, since they can see the results question-by-question as they go through.
Each week there is an assessed quiz that contributes to the students’ grades (see below) – but before they can take that quiz, they need to score at least 80% on the week’s Practice Quiz.
For the Practice Quiz, students can have an unlimited number of attempts – but there is no way to replace individual questions. Instead, students need to complete the whole quiz and submit it to get their score at the end (as preparation for the way it will work in the assessed quiz).
Each week there is a “Final Test” quiz that contributes to the students’ grades. Scores of over 80% are a “Mastery” result, and students need to get at least 7 Mastery results across the 10 weeks to pass the course. The grading scheme is a bit more complicated than that; you can see the full details in my paper about the course design and my post about how to set it up in the Moodle gradebook.
Having the requirement to score at least 80% on the Practice Quiz (which typically has very similar tasks to the Final Test) means that students should be well-prepared to succeed.
The Final Test itself uses more restrictive settings to control when feedback is available, since I wanted to avoid having worked solutions circulating while other students have yet to complete the quiz. In particular, the “general feedback” (i.e., worked solution) and “right answer” are only available after the quiz is closed:
There is only 1 attempt allowed at this quiz, with a time limit of 90 minutes from when the quiz is opened. Students need to complete each week’s quiz by a regular deadline. However, if students don’t meet the Mastery threshold, there is a resit version that becomes available the next day (again, set up using the “restrict access” feature so that it only appears for students who need it).
As I mentioned, I’ve only scratched the surface of what’s possible with the Moodle quiz settings. I know other colleagues have set up quizzes where students can make multiple attempts, with the grade based on the average of the attempts (so as to incentivise trying hard on the first attempt, while still allowing students to improve on a poor first attempt). It’s also possible to set penalties within questions, so that you can use the interactive quiz mode (like the course materials example above): students can redo an individual question if they’re not happy with the score, but possibly with a penalty (again, to encourage students to take the first attempt seriously).
Our students have a “University User Name” (UUN), an S followed by a 7-digit number (e.g. “S1234567”), which can be used to link different datasets together. I need to replace these identifiers with new IDs, so that the resulting datasets have no personal identifiers in them.
I’ve written a simple R script that can read in multiple .csv files, and replace the identifiers in a consistent way. It also produces a lookup table so that I can deanonymise the data if needed. But the main thing is that it produces new versions of all the .csv files with all personal identifiers removed!
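I won’t paste the whole script here, but the core of the approach is small enough to sketch. Something like the following would do a consistent replacement – note this is a simplified illustration rather than my exact script, and the folder names and the assumption that every file has a column called “UUN” are just for the example:

library(readr)

# Find all the .csv files to be anonymised
files <- list.files("raw_data", pattern = "\\.csv$", full.names = TRUE)

# Collect the UUNs across every file, so each student gets the same new ID in every dataset
all_uuns <- unique(unlist(lapply(files, function(f) read_csv(f)$UUN)))

# Build the lookup table, assigning new IDs in a random order
lookup <- data.frame(
  UUN    = all_uuns,
  new_id = paste0("ID", sprintf("%04d", sample(seq_along(all_uuns)))),
  stringsAsFactors = FALSE
)
write_csv(lookup, "lookup_table.csv")  # keep this somewhere secure, for de-anonymising later

# Write anonymised copies of each file, with UUNs swapped for the new IDs
dir.create("anonymised", showWarnings = FALSE)
for (f in files) {
  d <- read_csv(f)
  d$UUN <- lookup$new_id[match(d$UUN, lookup$UUN)]
  write_csv(d, file.path("anonymised", basename(f)))
}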
One approach to this is to ask proof comprehension questions after giving the proof, but I’ve also tried writing some sequences of questions that lead the students through the proof in a scaffolded/structured way.
Here’s a simple example, of a sketch proof of the Fundamental Theorem of Calculus:
Students can’t see the next part of the proof until they give an answer. Once they have submitted their answer, the next part is revealed:
I’ve used this approach in other places in the course, sometimes with more than one step.
The way to do this in Moodle is by having the quiz settings set to “Interactive with multiple tries”, then using the little padlock symbols that appear at the right-hand side between questions on the “Edit questions” page:
After clicking the padlock, it changes to locked to indicate that students must answer the first question to see the second:
I’ve not done any serious evaluation of this approach, but my intuition is that it’s a good way to direct students’ attention to certain parts of a proof and encourage them to be more active in their reading.
All the Unit Tests are set up as Moodle quizzes, and I needed a way to compute the number of tests completed at Mastery level (and at Distinction level) for each student.
To make matters more complicated, there are 4 different versions of each Unit Test:
Each subsequent attempt replaces the result of previous ones – e.g. if a student with a Mastery result on the first attempt decides to take the Unit Test (Extra Resit) to try to get a Distinction, then they will lose the Mastery result if they do not reach the 80% threshold.
To set this up in the Moodle gradebook, I have given each of the variants an ID with the pattern WnFT, WnFTR, WnFTR2 and WnFTRD (where n is the week number).
Then I have added a calculated grade item called “Number of Mastery results”, with a complicated formula to determine this. It is the sum of 10 terms like this:
ceil([[W1FTRD]]/32)*floor([[W1FTRD]]/25.5) + (1-ceil([[W1FTRD]]/32))*(ceil([[W1FTR2]]/32)*floor([[W1FTR2]]/25.5) + (1-ceil([[W1FTR2]]/32))*floor(max([[W1FT]],[[W1FTR]])/25.5))
This snippet computes the number of Mastery results in week 1 (i.e. it will return either 0 or 1).
Note that the 25.5 appears throughout this expression because that is the threshold for 80% on these tests.
The first part,

ceil([[W1FTRD]]/32)*floor([[W1FTRD]]/25.5)

means “if they took the Resit Diet version, then use their score on that to decide if they got a Mastery result”, while the second part,

(1-ceil([[W1FTRD]]/32))*(...)

means “if they didn’t take the Resit Diet version, then use their other scores to decide”.

This is all quite complicated, I know! It has grown up over time, as the FTR2 and FTRD versions were added after I first set up this approach.
Also, when I first implemented this, our version of Moodle did not support “if” statements in grade calculations – since these are now supported, the calculation could be greatly simplified.
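For example (just a sketch – I haven’t actually rewritten my own gradebook this way, and I’m assuming the if() function takes the form if(condition, value_if_true, value_if_false) and treats the 0/1 output of ceil as false/true), the week 1 term above might reduce to something like:

if(ceil([[W1FTRD]]/32), floor([[W1FTRD]]/25.5),
   if(ceil([[W1FTR2]]/32), floor([[W1FTR2]]/25.5),
      floor(max([[W1FT]],[[W1FTR]])/25.5)))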
This year I’m supervising three undergraduate projects, and I’ve asked them to use the APA style for referencing in their reports.
It took me a while to find a way of doing this in LaTeX that I was happy with, so to smooth the path for my students I shared this version of the project template, where I’d made all the necessary changes to implement APA style:
https://www.overleaf.com/read/yjkyzmpmkcdm
The key parts are as follows.
In the preamble:
% formatting of hyperlinks
\usepackage{url}
\usepackage{hyperref}
\usepackage{xcolor}
\hypersetup{
  colorlinks,
  linkcolor={red!50!black},
  citecolor={blue!50!black},
  urlcolor={blue!80!black}
}

% Use biblatex for references – change style= as appropriate
\usepackage[natbib=true,backend=biber,sorting=nyt,style=apa]{biblatex}
\renewcommand*{\bibfont}{\fontsize{10}{12}\selectfont}

% add your references to this file
\addbibresource{references.bib}
At the end of the document:
\printbibliography{}
And make sure to add references.bib to your project, with all the bibtex references. I’ve found MyBib.com a really useful tool for this, though I mainly use Zotero as my reference manager (which can import easily into Overleaf).
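For illustration, an entry in references.bib might look like this (the citation key hodds2014 is arbitrary – adjust the fields to each of your sources):

@article{hodds2014,
  author  = {Hodds, M. and Alcock, L. and Inglis, M.},
  title   = {Self-Explanation Training Improves Proof Comprehension},
  journal = {Journal for Research in Mathematics Education},
  year    = {2014},
  volume  = {45},
  number  = {1},
  doi     = {10.5951/jresematheduc.45.1.0062}
}

You can then cite it in the text with \citet{hodds2014} or \citep{hodds2014}, since the natbib=true option is set.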
This post is focused on the first of these. Developing students’ abilities to read proofs is something that is not often done explicitly – there may be an assumption that students will pick it up by osmosis. There is some research into how to help students to develop these abilities (e.g., Hodds et al., 2014), and a key part of this is having a good way to measure students’ level of comprehension of a given proof.
Mejia-Ramos et al. (2012) give a framework for assessing proof comprehension, with 7 different types of questions that can be asked:
Local:
- Meaning of terms and statements
- Logical status of statements and proof framework
- Justification of claims

Holistic:
- Summarizing via high-level ideas
- Identifying the modular structure
- Transferring the general ideas or methods to another context
- Illustrating with examples
You can see some more detail about these different categories in a recent talk by Pablo.
The framework is helpful when trying to write questions to assess students’ understanding of a given proof, as it gives ideas for different types of questions you can ask.
A few years ago, I used this framework to put together some multiple-choice proof comprehension questions for our Year 3 course, Honours Analysis.
My experience of these is that students found them quite hard – the mean score was around 75%, so they are not trivial for students to answer.
Hodds, M., Alcock, L., & Inglis, M. (2014). Self-Explanation Training Improves Proof Comprehension. Journal for Research in Mathematics Education, 45(1), 62. https://doi.org/10.5951/jresematheduc.45.1.0062
Mejia-Ramos, J. P., Fuller, E., Weber, K., Rhoads, K., & Samkoff, A. (2012). An assessment model for proof comprehension in undergraduate mathematics. Educational Studies in Mathematics, 79(1), 3–18. https://doi.org/10.1007/s10649-011-9349-7
I learned about this from a bit of googling, which led to this guide to producing a screenshot as an SVG (scalable vector graphic).
Based on that, here’s an easy way to take a screenshot as a PDF:
For the page I was saving, I found that setting the paper size to A2 gave good results. I also set Margins to “Custom” and made the page slightly narrower. I think you just need to play around with the page size, scaling and margins until you are happy.
I also used the developer tools window to tidy up the page a little, e.g. deleting some irrelevant navigation boxes, and instructor-only tools.
Et voilà!
(Screenshot: sketching cubics, from the Polynomials unit)
With a model answer of 4*(cos(pi/3)+i*sin(pi/3)), an answer of 4*(cos((1/3)*pi)+i*sin((1/3)*pi)) would be marked incorrect!
The issue was that I was using the EqualComAss answer test to check whether the student answer (ans1) was equal to the model answer (ta1), and this was failing in the cases above.
The solution I came up with is to add some code to the feedback variables box at the top of the PRT, to replace cos and sin with alternate versions so that Maxima can’t simplify the expressions to cartesian form. I can then use ev(…,simp) to make use of simplification when comparing the expressions:
form_ans1:subst([cos=COSINE, sin=SINE], ans1);
form_ta1:subst([cos=COSINE, sin=SINE], ta1);
proper_form:is(ev(expand(form_ans1-form_ta1),simp)=0);
This will ensure that COSINE(pi/3) and COSINE((1/3)*pi) will cancel out, thanks to the simplification being turned on.
But since Maxima doesn’t know anything about COSINE, it can’t cancel out COSINE(pi/3) and COSINE(5pi/3) (as it would do with cos) if students give their answer with the wrong value for the principal argument.
It was then just a case of replacing the test for EqualComAss(ans1,ta1) in the PRT with a test that AlgEquiv(proper_form, true), and regrading. Out of ~160 attempts this picked up 8 students who deserved full marks!
Update (08/11/2021): One year on, and STACK now has a new feature which makes it easier to grade these answers correctly! The new EqualComAssRules answer test lets you add a list of different algebraic rules so that two answers should count as equivalent if they differ only by those rules – e.g. x and 1*x.
To fix this question, it’s enough to change the first PRT node to the following, using the “Test options” box to specify the list of algebraic rules:
ATEqualComAssRules(ans1, ta1, [ID_TRANS,NEG_TRANS,DIV_TRANS,INT_ARITH]);
We’re using Gradescope to mark 5 of our remote exams at the moment. Here, I’ll outline the process that we’ve used.
As with all our exams, we go through a process to prepare a folder of anonymised PDFs, one for each student.
Gradescope provides two different types of assignment:
Unfortunately neither of these quite fit our situation – we have a set of variable length scripts, but we need to upload them (since we wanted students to have a consistent submission experience across exams, whether or not we used Gradescope to mark them).
Fortunately my colleague Colin Rundel is a wizard with R and he was able to semi-automate the process of uploading each script individually to the “Homework / Problem Set” assignment type. All our marking is done anonymously, so we’re only using the students’ Exam Number in the Gradescope class list, and for each student Colin’s R script uploads their PDF submission.
Once the scripts are uploaded, we still need to identify which questions are on which pages – a process I’ve taken to calling “zoning” since that’s the terminology used in RM Assessor (one of the other tools we’ve been trying).
To do this, we’ve employed several PhD students, who would normally have been helping out with various marking jobs for our 1st/2nd year courses (but those exams were cancelled for this diet).
These PhD students were set up as TAs in the course, and tasked with marking up which questions were on each page, just like the students would normally do in Gradescope. This is a surprisingly difficult workflow in Gradescope, requiring multiple clicks to move between scripts (and there is no summary of which scripts have been “zoned”). To get round this, Colin prepared a spreadsheet with direct links to each script in Gradescope, and the zoners used this to keep track of which ones they had completed (and note any issues). I wrote some very brief instructions on the process (PDF) – this included a short video clip of me demonstrating how to do it, but I’ve redacted that here because it shows student work.
The process in Gradescope is based on using rubrics (see https://www.gradescope.com/get_started). These can work with either positive or negative marking; we have been using the default of negative marking in the exams so far, which is different to our usual practice but seems to work best in this system. Essentially, for each question you develop a set of common errors and the associated number of marks to take off. That way you can then tag responses with any errors that occur, giving more useful feedback about what went wrong.
Each course has worked a little differently, but the basic idea is for the Course Organiser to develop a rubric and make sure it works on the first 10–15 scripts.
Other markers can then be assigned a question (or group of question parts) each, and they go through applying the rubric. We’ve asked that they flag any issues with the rubric to the CO rather than editing it directly themselves (e.g. to add a new item for an error that doesn’t appear already, or if it seems that the mark deduction for one item is too harsh given the number of students making the error).
A nice feature is that you can adjust the marks associated with rubric items, and the change is applied to all previously marked scripts in the same way.
Feedback from markers so far has been very positive. They have found the system intuitive to use, and commented that being able to move quickly between all attempts at a particular question has meant that they can mark much more quickly than on paper. Gradescope have also done some analysis of data from many courses and found that markers tend to get quicker at marking as they work through the submissions:
Once marking is completed, the CO can look through the marking to check for any issues. The two main ways of doing this are:
Gradescope provides the facility to download a spreadsheet showing the mark breakdown for each script, and also a PDF copy of the script showing which rubric items were selected for each question part. We’ll be able to make those available for the moderation and Exam Board process.
Gradescope is clearly a powerful tool for marking, and I think we will need something like this if we are to do significant amounts of on-screen marking in future.
However, it does come with some issues – we had to work around the fact that it is not designed for the way we needed to use it. For long-term use it would make sense to have the students tag up which questions appear on which pages, but that would require integration with our VLE and would add a further layer to the submission process for students (and another tool/system to learn to use). I was also concerned to see news that Gradescope crashed during an exam for a large class in Canada, and there are obvious issues about outsourcing such a sensitive function.