Heavy and light at the same time – the “conflict of efficiencies”
Earlier this month, on the last day of the PebblePad 2024 conference in Edinburgh – as part of my presentation – I had a chance to share my reflections on an interesting issue (interesting to me, that is!) which I call “the conflict of efficiencies”.
Having been involved in designing and supporting e-portfolio implementations for almost two decades, I have come to view a tool like PebblePad as a flexible “net”. In my professional practice, this “net” is then spread to ensure we “catch” most of the requirements of a given e-portfolio-based exercise or workflow.
However, the focus of my presentation was not on how we deploy PebblePad at scale to ensure that it can simply support a particular workflow. This time, I was attempting to capture my observations around the act of casting this “net”. Through this act we design workflows involving the three main groups of users (students, markers and admins). Each of these groups expects their often heavy requirements to be satisfied through a workflow that is as light (efficient) as possible for them. For example, markers need it efficient for marking, students need it efficient when uploading submissions, and admins seek efficiency around setting things up, managing users’ roles, etc.
My analysis was based on a selection of three large-scale PebblePad-based case studies from our university: (1) multilevel workbooks which host PebblePocket-generated evidence; (2) mass peer-review exercises involving thousands of feedback responses; and (3) the “double-blind” marking of dissertations managed simultaneously over two ATLAS workspaces.
The case studies evidenced how we had responded to these projects’ requirements by utilising all of the system’s parts as efficiently as possible – to satisfy the needs of all three groups of users. However, by catering to all, our “efficient deployment” meant introducing more complexity on the instructional as well as the practical level. Any setup designed to satisfy everyone risks becoming satisfactory to no one. Clearly, we are dealing here with an interesting “conflict of efficiencies”, given that perceptions of what is efficient differ across the three user groups and even the system itself!
After my PebblePad conference appearance I kept wondering: is this “conflict of efficiencies” encountered more widely in our field? As learning technologists, we help develop VLE-based courses, online assessment workflows, digital exercises, etc. These tend to be highly customised and complex (i.e. spanning multiple groups of students, utilising both the desktop and mobile versions, allowing multiple marking methods, dealing with very high numbers of submissions, etc.). All of these custom configurations address some very specific and often substantial lists of requirements. Therefore, perhaps I could risk a theory that my “conflict of efficiencies” is observable in our other systems and services? Especially those systems which are spec’d out by our users as: straightforward to access yet with multiple points of access; never too overwhelming yet very powerful; uncluttered and at the same time offering a wide range of options, and so on.
Trying to answer my own question: yes, I can definitely observe a tension, or even a conflict, between the directions in which some of these demands seem to be pulling. For instance, highly bespoke and “all in one place” online offerings can be perceived as efficient by students. At the same time, the drive for administrative efficiency pushes towards more standardised and replicable collections. Similarly, the demands to simplify the interface to benefit new markers or new students are in contrast to the push by administrators who require multiple options to manage and control the submissions and the feedback flexibly. Furthermore, the administrative need for rigorous criteria around all types of deadlines and timestamping often does not blend well with exercises which allow markers to run their formative assessments more efficiently at a time of their choosing.
Once more, such observations seem to point to the challenge which the designers of these workflows face when trying to harmonise the demands for efficiency across the main user groups. On the one hand, the scale of any learning technology system’s operational complexity expands in proportion to the complexity of the original requirements. On the other hand, each group of users requires a different type of efficiency, one which would allow them to interact with the system in ways that are simple and intuitive for them.
Ending my ramblings, I am left wondering whether there is a way in which we could easily avoid these dichotomies. Or perhaps, whilst a somewhat frustrating user experience challenge for all of us, this “conflict of efficiencies” will prove impossible to eliminate entirely?