This is a story about a particularly awful bit of science reporting. It’s taken me a while to sort out what’s going on: first I thought it was bad maths, then I thought it was bad science, and now I think it might be bad journalism. But it might be none, all, some, or a superset of those. I didn’t really want to write this post, because crowing about other people’s apparent failures isn’t great, but we took a team decision to cover it for entertainment reasons. ((Unlike the BBC, we are not funded in a unique way, so we have no public service remit. I think that’s right.)) I’m going to make an attempt to tell you what I’ve found, a little bit of what I think, and then we can rejoin Team Positive Mental Attitude when it’s all over.
- Immediate feedback. This is linked to learning from mistakes, confidence and motivation. It can also prioritise procedural learning over conceptual understanding.
- Detailed, personalised feedback. Though there is much disagreement in what I have read over whether, in practice, this is better provided by a human, who can respond to individual student work, or by a computer, which will tirelessly generate worked examples using the context of the question asked.
- Individualised assessment. This is achieved through randomisation of questions and is linked to repeated practice and to deterring plagiarism, since students can discuss the method of a piece of work without the risk of copying or collusion (see the sketch after this list).
- Assessing across the whole syllabus. Some methods can’t reach everything; computers, for example, can’t mark every topic.
- Testing application of technique. Whether students can apply some procedure.
- Assessing deep or conceptual learning. For example, open-ended or project work may require a detailed manual review to mark. This is linked to graduate skills development, etc.
- Easy to write new questions. Assume it is easy for a lecturer to write questions that students can answer (it isn’t, but we’re talking in principle here). Difficulty is introduced by having to second-guess an automated system, or having to second-guess students in order to program in their misconceptions.
- Quick to set assessments. Assume that writing a test manually takes time. By quickly, I really mean choosing items from a question bank.
- Quick to mark assessments. Assume that marking by hand is not quick, except perhaps when the assessment is very short and the student answer format very prescribed, in which case the assessment is limited anyway. This is perhaps linked to problems of consistency and fairness when using multiple markers.
- Easy to monitor students. Clearly marking individual work from every student by hand will give great insight, but here I refer to the ability to gain a snapshot of how individuals, and the cohort as a whole, are doing with a concept, perhaps very soon after the lecture that introduced it.
- Perception of anonymity. I’ve read that some students are happier to make their mistakes if only a computer knows. This can reduce stress.
- Testing mathematical writing. Clearly requires hand-written work.
- Testing computer skills. Clearly requires use of a computer.
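To make the ‘Individualised assessment’ point concrete, here is a minimal sketch of how randomised questions might be generated, one variant per student. The function name, the seeding scheme and the question template are all my own hypothetical choices, not any particular assessment system’s API.

```python
# Hypothetical sketch: each student gets the same *type* of question with
# different numbers, so they can discuss the method without copying answers.
import random

def make_question(student_id):
    """Generate a differentiation question whose numbers are seeded by the
    student's ID, so each student reliably sees their own variant."""
    rng = random.Random(student_id)  # same student, same variant, every time
    a = rng.randint(2, 9)
    n = rng.randint(2, 5)
    question = f"Differentiate f(x) = {a}x^{n} with respect to x."
    answer = f"{a * n}x^{n - 1}"  # d/dx of a*x^n is a*n*x^(n-1)
    return question, answer

for sid in ("student-001", "student-002"):
    q, ans = make_question(sid)
    print(sid, "|", q, "| expected:", ans)
```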
Submergence01 by Squidsoup
[vimeo url=http://vimeo.com/57412634]
Submergence01, by Squidsoup.
via NotCot.org
This is an ex-parrot. ‘Is plumage has dimension 1
A paper by Lorenzo Pérez-Rodríguez, Roger Jovani and François Mougeot in Proceedings B, “Fractal geometry of a complex plumage trait reveals bird’s quality”, claims that the measurement of the fractal dimension of a red-legged partridge’s chest plumage is a good indicator of its health.
I know what you’re thinking: another ‘non-mathematicians pick trendy term to describe something rather different’ story, but actually the authors do quite a good job of explaining and justifying their method. I’m convinced!
Plumage patterns are the product of reaction-diffusion systems, which probably don’t produce genuinely fractal patterns, but the researchers needed a fairly easy and consistent way of measuring the complexity of the patterns. A healthier bird can produce more melanin, which can produce more complicated patterns. At the level of detail required, the researchers say that the box-counting method of computing fractal dimension is a quick way of measuring the effect they’re looking for.
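For readers who haven’t met it, box-counting works by overlaying grids of boxes at several scales, counting how many boxes the pattern touches at each scale, and reading the dimension off the slope of a log–log fit. Here is a minimal sketch in Python; this is my own illustration, assuming a binarised NumPy image of the plumage patch, and is not code from the paper.

```python
# Minimal box-counting sketch: `image` is a 2D numpy array of 0s and 1s,
# with 1s marking the dark plumage pattern (an assumption for illustration).
import numpy as np

def box_counting_dimension(image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension from the scaling of occupied box counts."""
    counts = []
    for size in box_sizes:
        # Trim so the image divides exactly into size-by-size boxes.
        h = (image.shape[0] // size) * size
        w = (image.shape[1] // size) * size
        # View the trimmed image as a grid of boxes and count those that
        # contain any part of the pattern.
        boxes = image[:h, :w].reshape(h // size, size, w // size, size)
        counts.append(boxes.any(axis=(1, 3)).sum())
    # N(s) ~ s^(-D), so the slope of log N against log(1/s) estimates D.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```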
Paper: Fractal geometry of a complex plumage trait reveals bird’s quality, by Lorenzo Pérez-Rodríguez, Roger Jovani and François Mougeot, in Proceedings of the Royal Society B.
via Slashdot
Art by Owen Schuh
Manchester MathsJam January 2013 Recap
The first MathsJam of the year was well attended. Despite not being at our usual table (there was no jazz band on this week, so we were allowed a bigger table further into the pub), everyone found us OK, and a few people brought baked goods – always a precursor to an excellent MathsJam.
We started off with some quick mental arithmetic brainteasers: how many straight cuts do you need to make to slice a flat square cake into 196 equally sized square pieces? Several people got the answer quite quickly, while others tried to cheat by stacking cake pieces and moving them around between cuts. No cheating!
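For anyone who wants to check their answer (spoiler ahead), the arithmetic, assuming the cake stays flat and the pieces stay put, runs:

\[ 196 = 14^2 \quad\Longrightarrow\quad (14 - 1) + (14 - 1) = 26 \text{ straight cuts,} \]

that is, 13 full-length cuts in each direction. Stacking the pieces between cuts lets you get away with fewer, which is of course why it’s banned.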
Advantages of assessment – please discuss
I write to share and invite discussion of something I presented at a conference at Nottingham Trent University last week.
I have been thinking a lot about assessment methods and their advantages and limitations for a chapter I am writing for my PhD thesis. For example, I could set a paper test and mark it by hand (indeed, I set one last week and will be marking it when I finish this post). This allows me to give a personal touch and assess students’ written work, but one downside is that I can’t return marks to students very quickly. I could return marks immediately if I used automated assessment, but then setting the assessment would be more difficult and I might be limited in the range of what I could assess. And so on.
I have been trying to classify these advantages and their paired limitations. My thinking is that by viewing different assessment methods as balanced sets of advantages and limitations we can justify different approaches in different circumstances and, particularly for my PhD, explore the advantage/limitation space for any untapped opportunities, which I won’t go into now (but ask me).
Here is my current list of potential advantages that assessment could offer. Each is something that I think some assessment method can provide. My question is: what am I missing? I would be pleased to receive your thoughts on this in the comments.
For example, it might be possible to offer ‘Easy to write new questions’, ‘Assessing deep or conceptual learning’ and ‘Testing mathematical writing’ through a traditional paper-based, hand-marked assessment, but this would preclude, for example, ‘Immediate feedback’.
Similarly, a multiple-choice question bank might offer ‘Quick to set assessments’ and ‘Quick to mark assessments’ at the expense of ‘Assessing across the whole syllabus’ and ‘Assessing deep or conceptual learning’.
And so on. I have loads of these for different assessment types.
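To show the sort of bookkeeping I mean, here is a toy sketch that encodes each method as the set of advantages it offers and asks which advantages nothing on the list currently covers. The method names and the assignments are my own illustrative assumptions, not settled classifications.

```python
# Toy sketch of the advantage/limitation space: which advantages does no
# listed assessment method currently cover? (All assignments here are my
# own illustrative guesses.)
ADVANTAGES = {
    "immediate feedback", "detailed personalised feedback",
    "individualised assessment", "whole syllabus",
    "application of technique", "deep or conceptual learning",
    "easy to write new questions", "quick to set", "quick to mark",
    "easy to monitor students", "perceived anonymity",
    "mathematical writing", "computer skills",
}

METHODS = {
    "paper test, hand-marked": {
        "easy to write new questions", "deep or conceptual learning",
        "application of technique", "mathematical writing",
    },
    "multiple-choice question bank": {
        "quick to set", "quick to mark", "immediate feedback",
        "easy to monitor students", "perceived anonymity",
    },
}

def uncovered(methods):
    """Advantages offered by none of the listed methods: the untapped space."""
    covered = set().union(*methods.values())
    return ADVANTAGES - covered

print(sorted(uncovered(METHODS)))
```

With just these two methods, the uncovered set comes out as detailed personalised feedback, individualised assessment, whole syllabus and computer skills – exactly the kind of gap-spotting I have in mind, if only on toy data.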
My question, really, is: is there anything missing from my list that might be delivered by some assessment method?
The perfect formula for mathsiness
It’s mid-January, which means it’s time for the tabloids to trot out their annual “this is the most miserable day of the whole year!” story — before they spend the rest of the year blaming immigration, youth and political correctness for problems they’ve spent the last year stoking up.
Ahem.


