When teaching moved online due to COVID-19, we had to quickly work out how to deliver our modules online. The main options used to replace in-person classes were:
pre-recorded videos followed by live online tutorials for students to get support while completing exercises;
live online classes offering a mixture of lecturer delivery and student activity.
The first option is good for a module with lots of content delivery, such as when learning new mathematical techniques. In modules with some content delivery but a focus on interaction and discussion, such as mathematical modelling, the second is a good choice.
I felt neither was quite right for my second-year programming module. I opted instead for delivering notes and exercises which students could work through when convenient (which might be in a designated class time or might not) and used my time on the module to write responses to student queries and give feedback on programs written as formative work.
In class, students tend to say they’ve done an exercise correctly, and because you’re walking round a computer room it can be hard to examine their code in detail. When I spent time looking in greater detail at what they submitted as ‘correct’ code, it became clear that there are often subtle issues which can be usefully discussed in considered feedback.
Overall, I think this semi-asynchronous delivery was a much better use of time, and I was able to view more code and give better feedback than I would have in person.
This is part of a special issue of International Journal of Mathematical Education in Science and Technology – Takeaways from teaching through a global pandemic – practical examples of lasting value in tertiary mathematics education. There are loads of articles with useful reflections and good ideas that emerged from pandemic teaching.
If you are interested in pandemic literature in higher education teaching and learning, I’m aware of two other journal special issues you might like:
A while ago I wrote an article based on my work in partially-automated assessment. The accepted manuscript I stored in my university’s repository has just lifted its embargo, meaning you can read what I wrote even if you don’t have access to the published version.
Thinking about assessment, it seems there are methods that are very good at determining a mark that is based on a student’s own work and not particularly dependent on who does the marking (call this ‘reliability’), like invigilated examinations and, to some extent, online tests/e-assessment (via randomised questions that are different for each student). These methods tend to assess short questions based on techniques with correct answers and perhaps therefore are more focused on what might be called procedural elements.
Then there are methods that are probably better at assessing conceptual depth and broader aspects that we might value in a professional mathematician, via setting complex and open-ended tasks with diverse submission formats (call this combination of authenticity and relevance ‘validity’). People are often concerned about coursework because it is harder to establish whether the student really did the work they are submitting (not an unreasonable concern), which impacts reliability.
It is hard to ask students to complete high-validity coursework tasks (that might take weeks to complete) in exam conditions, and diverse submission formats do not suit automated marking, so two ways to improve reliability are not available. The idea with partially-automated assessment is that an e-assessment system can be used to set a coursework assignment with randomised elements which is then marked by hand, gaining the advantageous increase in reliability via individualised questions without triggering the disadvantage of having to ask for submission in a format a computer can mark. The trade-off is that the marking is a bit more complex for the human who has to mark it, because each student is answering slightly different questions.
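The core mechanism here is deterministic randomisation: each student gets their own question parameters, but the same student always gets the same parameters, so the marker can regenerate the question when marking by hand. As a minimal sketch only (the article uses a dedicated e-assessment system; the function, seed scheme, and question here are hypothetical illustrations, not the actual implementation):

```python
import random

def individualised_parameters(student_id: str, seed_base: str = "assignment-2020") -> dict:
    """Derive stable per-student question parameters from their ID.

    Seeding a private Random instance with the student ID means the
    same student always sees the same numbers (so the question can be
    regenerated at marking time), while different students see
    different variants.
    """
    rng = random.Random(f"{seed_base}:{student_id}")
    a = rng.randint(2, 9)
    b = rng.randint(1, 20)
    return {"a": a, "b": b, "question": f"Solve {a}x + {b} = 0 for x."}

# Regenerating at marking time reproduces the student's question:
assert individualised_parameters("s1234567") == individualised_parameters("s1234567")

# A different student ID gives a different variant (with high probability):
variant_a = individualised_parameters("s1234567")
variant_b = individualised_parameters("s7654321")
```

The marker then checks each submission against that student's own regenerated parameters, which is the extra marking complexity mentioned above.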
In the article I write about this method of assessment, use it in practice, and evaluate its use. It seems to go well, and I think partially-automated assessment is something useful to consider if you are assessing undergraduate mathematics.
Read the article: Partially-automated individualized assessment of higher education mathematics by Peter Rowlett. International Journal of Mathematical Education in Science and Technology, https://doi.org/10.1080/0020739X.2020.1822554 (published version; open access accepted version).
I’m grateful to Jemma Sherwood and Rob Low for reading an early draft of this and for their comments thereon. All opinions are, of course, my own.
This post is inspired by something that I see crop up now and again in discussions with other Maths teachers. It usually manifests itself as a rallying cry to use ≡ in place of = in identities and reserve = for equations. My standard response is to mutter something about identities being equations and leave it at that. But in the latest round, Jemma Sherwood challenged me, in the nicest possible way, to explain a bit further. This is that explanation.
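For readers unfamiliar with the convention being argued over, a standard textbook-style illustration (my own example, not one from that discussion) is:

```latex
% An identity: true for every value of x
(x+1)^2 \equiv x^2 + 2x + 1

% An equation: a constraint satisfied only by particular values of x
x^2 = 4 \quad\Rightarrow\quad x = \pm 2
```

The proposal is to reserve the three-bar symbol for the first kind of statement and the equals sign for the second.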
Although I’m going to state my case here, I’m well aware that there are different opinions. In matters of opinion, such as this, agreement and disagreement are less important than that all sides think. So if what I write seems to you wrong, that’s fine, so long as it makes you think about why you think it is wrong.