I notice in the news an issue of whether we should have a different name for early maths. It’s actually quite interesting – and quite a problem – how many different things we call ‘mathematics’.
An incorrect model of the lottery, and when it doesn’t matter
Recently I came across an interesting idea about little mistakes in counting problems that actually don’t amount to much. In A Problem Squared 030, Matt Parker was investigating the question “What are the odds of having the same child twice?” and made some simplifying assumptions when thinking about DNA combinatorics. He justified leaving out a small number of things when counting an astronomical number of things by going through an example from the lottery.
The current UK lottery uses 59 balls and draws 6 of these, so the one in 45 million figure arises from \(\binom{59}{6}=45,\!057,\!474\), and the probability of winning is a tiny
\[ \frac{1}{45057474} = 0.00000002219387620 \text{.}\]
Matt posits the idea that somewhere along the way we forget to include some tickets.
But let’s say along the way while I’m working it out, for strange reasons I go ‘oh you know what, I’m going to ignore all the options which are all square numbers. You know, I just can’t be bothered including them. Yeah, they’re legitimate lottery tickets, but just to make the maths easier I’m going to ignore them’. And people are getting up in arms, and they’re like ‘you can’t ignore them, they’re real options’.
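To see how little this matters numerically, here is a quick Python check of my own (not from the podcast), counting the tickets whose six numbers are all square and comparing the winning probability with and without them:

```python
from math import comb

# Total number of UK lottery tickets: choose 6 balls from 59.
total = comb(59, 6)            # 45,057,474

# Tickets made up entirely of square numbers: the squares up to 59
# are 1, 4, 9, 16, 25, 36, 49 -- seven of them -- so there are C(7, 6)
# such tickets.
all_square = comb(7, 6)        # 7

p_exact = 1 / total
p_ignoring_squares = 1 / (total - all_square)

print(f"exact:            {p_exact:.15e}")
print(f"ignoring squares: {p_ignoring_squares:.15e}")
print(f"relative error:   {(p_ignoring_squares - p_exact) / p_exact:.2e}")
```

There are only \(\binom{7}{6}=7\) all-square tickets, so leaving them out changes the probability by about one part in six million – far too small to matter when the answer is on the order of one in 45 million.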
Adapting my ‘programming for mathematicians’ module for teaching during the COVID-19 pandemic
When teaching moved online due to COVID-19, we had to quickly work out how to deliver our modules online. The main options used to replace in-person classes were:
- pre-recorded videos followed by live online tutorials for students to get support while completing exercises;
- live online classes offering a mixture of lecturer delivery and student activity.
The first option is good for a module with lots of content delivery, such as when learning new mathematical techniques. In modules with some content delivery but a focus on interaction and discussion, such as mathematical modelling, the second is a good choice.
I felt neither was quite right for my second-year programming module. I opted instead for delivering notes and exercises which students could work through when convenient (which might be in a designated class time or might not), and used my time on the module to write responses to student queries and give feedback on programs written as formative work.
In class, students tend to say they’ve done an exercise correctly, and because you’re walking round a computer room it can be hard to examine their code in detail. When I spent time looking in greater detail at what they submitted as ‘correct’ code, it became clear that there are often subtle issues which can be usefully discussed in considered feedback.
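As a purely hypothetical illustration of the kind of subtle issue I mean (not taken from a real submission), a short function can pass the obvious quick checks while still being wrong at the edges:

```python
def is_prime(n):
    """Intended to return True exactly when n is prime."""
    for k in range(2, n):
        if n % k == 0:
            return False
    return True

# Looks right, and passes quick checks in class...
print(is_prime(7), is_prime(10))   # True False

# ...but is_prime(0) and is_prime(1) wrongly return True, and trial
# division all the way to n - 1 is much slower than stopping at sqrt(n).
print(is_prime(1))                 # True, though 1 is not prime
```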
Overall, I think this semi-asynchronous delivery was a much better use of time, and I was able to view more code and give better feedback than I would have in person.
I wrote about my experience delivering this module through the pandemic – the end of one academic year and the whole of the next – with Alex Corner in an open-access article which has just been published as ‘Flexible, student-centred remote learning for programming skills development’.
This is part of a special issue of International Journal of Mathematical Education in Science and Technology – Takeaways from teaching through a global pandemic – practical examples of lasting value in tertiary mathematics education. There are loads of articles with useful reflections and good ideas that emerged from pandemic teaching.
If you are interested in pandemic literature in higher education teaching and learning, I’m aware of two other journal special issues you might like:
- Restarting the new normal in Teaching Mathematics and its Applications.
- Responding to the COVID-19 pandemic in MSOR Connections, a special issue I just edited with Mark Hodds which collects pandemic-related papers from last autumn’s CETL-MSOR Conference.
British Science Week mathematicians poster competition
I wrote a mathematics-themed competition for British Science Week, a UK-wide, ten-day event taking place this month.
The competition calls for individuals or groups to research the life and/or work of a mathematician and produce a poster to share their findings. The six mathematicians available to choose from are:
Introducing hexboard – a LaTeX package for drawing games of Hex
Chris Sangwin and I wrote a LaTeX package for drawing Hex boards and games called hexboard. It can produce diagrams like this.
First: why? Then: how do you use it?
MathsCity Leeds opening weekend
One day in February 2014, I was fortunate enough to battle through London during a Tube strike to attend a reception at the House of Commons for MathsWorldUK – an initiative then just two years in development which aimed “to establish a national Mathematics Exploratorium in the United Kingdom … an interactive centre full of hands-on activities showcasing mathematics in all its aspects for people of all ages and backgrounds”.
That initiative took a huge leap forwards last week with the launch of MathsCity Leeds, which my son and I visited on its opening weekend.
Partially-automated individualised assessment of higher education mathematics
A while ago I wrote an article based on my work in partially-automated assessment. The embargo on the accepted manuscript I stored in my university’s repository has just lifted, meaning you can read what I wrote even if you don’t have access to the published version.
Thinking about assessment, it seems there are methods that are very good at determining a mark that is based on a student’s own work and not particularly dependent on who does the marking (call this ‘reliability’), like invigilated examinations and, to some extent, online tests/e-assessment (via randomised questions that are different for each student). These methods tend to assess short questions based on techniques with correct answers and perhaps therefore are more focused on what might be called procedural elements.
Then there are methods that are probably better at assessing conceptual depth and broader aspects that we might value in a professional mathematician, via setting complex and open-ended tasks with diverse submission formats (call this combination of authenticity and relevance ‘validity’). People are often concerned about coursework because it is harder to establish whether the student really did the work they are submitting (not an unreasonable concern), which impacts reliability.
It is hard to ask students to complete high-validity coursework tasks (that might take weeks to complete) in exam conditions, and diverse submission formats do not suit automated marking, so two ways to improve reliability are not available. The idea with partially-automated assessment is that an e-assessment system can be used to set a coursework assignment with randomised elements which is then marked by hand, gaining the advantageous increase in reliability via individualised questions without triggering the disadvantage of having to ask for submission in a format a computer can mark. The trade-off is that the marking is a bit more complex for the human who has to mark it, because each student is answering slightly different questions.
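As a rough sketch of the individualisation idea (my own illustration, not the e-assessment system used in the article), question parameters can be generated deterministically from a student identifier, so each student sees their own version of a task and the marker can regenerate exactly the same version when marking by hand:

```python
import random

def question_parameters(student_id: str, assignment: str = "coursework-1"):
    """Deterministically generate per-student parameters for a question.

    Seeding on the student ID (plus an assignment label) means each
    student gets their own numbers, and the marker can reproduce the
    same question later without storing anything extra.
    """
    rng = random.Random(f"{assignment}:{student_id}")
    a = rng.randint(2, 9)
    b = rng.randint(10, 30)
    return {"a": a, "b": b}

# Two students get different versions of the same task, e.g.
# "write a program to tabulate f(x) = a*x + b for x = 0, ..., 10".
print(question_parameters("student-001"))
print(question_parameters("student-002"))
```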
In the article I describe this method of assessment, report on using it in practice, and evaluate that use. It seems to have gone well, and I think partially-automated assessment is worth considering if you are assessing undergraduate mathematics.
Read the article: Partially-automated individualized assessment of higher education mathematics by Peter Rowlett. International Journal of Mathematical Education in Science and Technology, https://doi.org/10.1080/0020739X.2020.1822554 (published version; open access accepted version).