Today my phone told me that the app Photomath has an update and now supports handwriting recognition. This means I can write something like this:

and Photomath does this with it:

Well. My immediate reaction is to be quite terrified. Clearly this is a fantastic technical achievement and a wonderful resource, but my thoughts go straight to assessment. I remember when Wolfram Alpha was released: I was inputting questions a lecturer had written into an e-assessment system, and realised that every question on the assessment could be answered, with zero understanding, by typing it into Wolfram Alpha. Actually, not quite zero understanding, because at least you had to be able to reliably type the question. Now Photomath closes that gap (or will do soon – of course, it’s not yet perfect).

However, a lot of water has passed under the bridge since I was inputting questions into an e-assessment system. I’m a lecturer at Sheffield Hallam University now, where students who don’t arrive knowing about Wolfram Alpha are told about it, because students are encouraged to learn to use any technology available to them. Indeed, this year I was involved with marking a piece of coursework where engineering students were asked to show by hand how they had worked out their solutions and provide evidence that they checked their answer by an alternative method, usually by Wolfram Alpha screenshot.

It is often the case that lecturers use computers when setting assessments (beyond typesetting, I mean), even when they don’t expect students to use them in answering. I asked about this in a survey for my PhD, and even among people who don’t use e-assessment with their students, about half still use computers when setting questions (to check their answers are correct, perhaps). (Link to PhD thesis, see section 3.4.5 on p. 60.) Perhaps we should encourage our students to embrace technology in the same way.

In the academic year that is about to start, I am to teach on the first year modelling module. This is where our first year mathematics degree students get their teeth into some basic mathematical models, ahead of more advanced modelling modules in the second and final year. If you accept that a lot of mathematics is a process of: understand and formulate the problem, solve it, then translate and understand that solution – then this sort of technology only helps with the ‘solve it’ step. In the case of modelling, taking a real world situation, interpreting that as a mathematical model and extracting meaning from your solution are difficult tasks of understanding which these technologies do not help with, even as they help you get quickly and easily to a solution.

So, should I view Photomath as a terrifying assault on our ability to test students’ ability to apply mathematical techniques? Probably I should view it instead as a powerful tool to add to the mathematician’s toolkit, which hints at a world where handwritten mathematics can be solved or converted to nicely typeset documents, and so allow my students to gravitate from the tedious mechanics of the subject to greater ability to apply and show off their understanding. Probably.

Oh blimey pic.twitter.com/OdKS1MmY1N

— Peter Rowlett (@peterrowlett) September 4, 2016

Year 1, Semester 1: I had three two-hour exams. One was 9am on Monday, the second was 9am on Tuesday and the third was 4.30pm on the same Tuesday.

Year 1, Semester 2: I don’t have this exam timetable, for some reason. (The real question is why I still have five out of six, not why I’m missing one!)

Year 2, Semester 1: six two-hour exams over two weeks. Week 1 started fairly well, with exams on Monday 9am, Wednesday 4.30pm and Friday 4.30pm, then the fourth was Saturday 9am, so I finished at 6.30pm on Friday and took another at 9am the following morning. The remaining two were on the following Tuesday, at 9am and 4.30pm.

Year 2, Semester 2: another six two-hour exams over two weeks. The first week was Tuesday at 4.30pm, Wednesday at 9am, Wednesday at 4.30pm and Thursday at 4.30pm. Notice I am given a whole 22 hours off between the 3rd and 4th, a comparative luxury! Then the last two were Tuesday and Wednesday the following week, both at 9am.

Year 3, Semester 1: much more relaxed this time: five exams, mostly 2.5 hours each, on Monday at 1.30pm, Wednesday at 9am and Thursday at 9am one week, then Monday at 9am and Wednesday at 9am the following week.

Year 3, Semester 2: three 2.5-hour exams: two on Friday, at 9am and 4.30pm, and the other on the following Monday at 9am.

So it seems I was expected to either do minimal revision before each exam or to do revision in advance of the exam period and simply retain a good level of knowledge and practice for, say, six hours of exams on three different subjects in a 34-hour period (Y1, S1) or eight hours of exams on four different subjects in a 50-hour period (Y2, S2).
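As a quick sanity check on those totals, the windows can be worked out with a few lines of Python. This is just an illustrative sketch: the dates are arbitrary placeholders (any week works), since only the gaps between the day-and-time slots from the timetables above matter.

```python
from datetime import datetime, timedelta

exam_length = timedelta(hours=2)

# Y1 S1: Monday 9am, Tuesday 9am, Tuesday 4.30pm (dates are placeholders)
y1s1 = [datetime(2016, 1, 4, 9, 0), datetime(2016, 1, 5, 9, 0),
        datetime(2016, 1, 5, 16, 30)]
# Window from the start of the first exam to the end of the last
y1s1_window = (y1s1[-1] + exam_length) - y1s1[0]
print(y1s1_window)  # 1 day, 9:30:00 – i.e. 33.5 hours, the "34-hour period"

# Y2 S2, week 1: Tuesday 4.30pm, Wednesday 9am and 4.30pm, Thursday 4.30pm
y2s2 = [datetime(2016, 1, 5, 16, 30), datetime(2016, 1, 6, 9, 0),
        datetime(2016, 1, 6, 16, 30), datetime(2016, 1, 7, 16, 30)]
y2s2_window = (y2s2[-1] + exam_length) - y2s2[0]
print(y2s2_window)  # 2 days, 2:00:00 – exactly the 50-hour period
```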

This doesn’t change my sympathy with students who feel their exams could be more spread out. This is important so that they have plenty of time for revision and can fairly represent themselves, refreshed and at their best. It strikes me that with the sort of exam schedules I had, and with the weightings given to exams, if a student woke with a cold that lasted a few days, that could seriously damage half a semester’s work.

I’m trying to tweak what I’ve written above so it doesn’t sound whiny – that isn’t my intention, and I’m aware that others have it worse. I’m reminded of the bit from my George Green talk (listen here): when Green sat the Cambridge Tripos in 1837, it was a five-day examination – 9-11.30am and 1-4pm on Wednesday to Saturday and the following Monday – that determined the order of merit for the Bachelor’s degree!

Another memory confirmed by these papers is that the Monday 9am exam at the end of year 3 marked both the end point of my degree and my 21st birthday. One thing I am genuinely surprised by is that I didn’t take a three-hour exam on any of these timetables – I’ve definitely claimed in recent years, during conversations on exam lengths, to have regularly taken three-hour exams. Funny thing, memory.

I thought it might be interesting (to me, at least) to list the types of assessment I’ve been involved in marking in the 2015/16 academic year.

These are not all of my invention (i.e. some are things I made up in teaching I ran, others are pieces I delivered as part of someone else’s design). In no particular order (numbers are approximate):

- 120 short individual tests (four tests times thirty students) — a series of short, unconnected questions;
- 16 multiple-choice tests;
- 32 group activities (four activities times eight groups) — students had to solve a slightly open-ended question as a group and I marked them on the written description of their solution and how well they had communicated and worked as a group during the task;
- 266 short individual courseworks — well, one was not particularly short, but they were all a series of short, unconnected questions;
- 30 in-depth individual courseworks — this had a series of connected and increasingly open-ended questions to investigate a topic;
- 6 group essays — students worked in groups to research history of maths topics and wrote up their findings as a short (500-word) essay plus a brief (100-word) account of their estimation of the reliability of the sources they used; they did this formatively weekly for half a term before handing one in summatively;
- 25 individual history of maths essays — topic of student’s choice (with agreement);
- 15 group presentations accompanied by two-page handouts — this was to describe the findings of an open-ended group investigation;
- 25 group project plans and minutes of 75 group meetings — for the above investigation;
- 99 self- and peer-reflections on contribution to group work — for the same;
- 36 reflective personal statements discussing career plans, skills relevant to those and ethical issues;
- 10 individual presentations — interim reports on final year projects;
- 6 dissertations — final reports of year-long final year projects, each with a corresponding viva;
- 4 group presentations — to report on findings of a semester-long, open-ended group investigation;
- 16 group posters — to report on the above investigations;
- 1 group report — report of the same;
- one quarter of the questions on 200 group-marked exam scripts (two exams).

Project title: The contribution of multi-disciplinary problem-solving interventions to undergraduate employability skills development

Description: Universities are increasingly keen to emphasise employability skills development. For example, Sheffield Hallam University is ambitious to deliver academically-challenging programmes with an emphasis on professional practice. Graduate professional qualifications including Chartered Engineer, Chartered Mathematician and Chartered Scientist highlight the importance of teamwork, communication and interpersonal skills, both with specialists and non-specialists. This project will explore the contribution of cross-disciplinary working to this agenda by developing learning and teaching interventions with multi-disciplinary groups of undergraduates. The research will focus on the processes by which undergraduate students acquire, apply and disseminate knowledge from different disciplines to solve complex problems.

There are two routes for funding – one is a Graduate Teaching Assistantship, meaning up to six hours’ contact with undergraduates per week, which is how I think I would prefer to do an educational PhD.

There are a lot more details about the scheme and how to apply.

Where do old issues of MSOR Connections live online these days? @peterrowlett?

— Christian Perfect (@christianp) November 26, 2015

It’s complicated, but here is what I know.

Volumes 1-12 (actually 0-12, if you include the ‘Maths, Stats and OR’ newsletter published in 2000 as volume 0) were published by the Maths, Stats and OR Network, which I worked for in its dying days. At that point, the website previously at mathstore.ac.uk was archived by the Plymouth International Centre for Statistical Education at icse.xyz/mathstore. It’s still there, so you can still get volumes 1-12 (published 2001-2012) via its Newsletter archive, which acts as a by-issue index of individual PDFs.

MSOR Connections was relaunched as a peer-reviewed journal by the Higher Education Academy in 2013. These were online at journals.heacademy.ac.uk, and indeed that is currently still where the DOI links direct you, but that site was taken down earlier this year in favour of the Knowledge Hub. So if you know the name of an article, you can find it there – though I’m not sure there is a contents listing of issues.

However, there’s a catch. When I spent some time earlier this year comparing the online archives with my printed copies, I found that not every article is available. Volume 13 appears entirely available in the Knowledge Hub. For volumes 1-12, my fairly blunt approach was basically to look at the articles on mathstore and then, if the number of PDFs differs from the number of articles in my print copy of that issue, investigate why. Mostly that happened because articles were combined in the same PDF, but there were a few times (to my surprise) where the mathstore version missed some articles. In such cases, I was able to find most of the missing articles in the HEA Knowledge Hub. (There are also articles not in the Knowledge Hub that do appear on mathstore; it’s a mess.) Most frustratingly, I couldn’t find the following articles in PDF on either archive:

- ‘The False Revival of the Logarithm’ by Colin Steele 7(1):17-19 (I have found an author pre-print);
- ‘PowerPoint Accessibility within MSOR Teaching and Learning’ by Sidney Tyrrell 7(1):26-29;
- ‘Have You Seen This? RExcel – An interface between R and Excel’ by John Marriott 7(1):43;
- ‘Book Review – SPSS for Dummies by Arthur Griffin’ by Sidney Tyrrell 8(4):38-39.

These are not on the mathstore site or the Hub, but appear in my print copies. If you can locate electronic copies of any of these I would be pleased to hear it.

Volume 13 was the only volume the HEA published before it stopped publishing MSOR Connections and agreed to release the title to a group coordinated by sigma and the University of Greenwich. I am one of the editors of MSOR Connections in its current form, and you should find volume 14 (published in 2015) onwards indexed on the Greenwich journals website.

The format, wholly original and not in any way ripped off by Colin and Dave from anywhere else, saw two teams compete by giving correct and incorrect definitions of a word for the other team to determine who was telling the truth and who was bluffing. Team members challenged the other team to ‘call my bluff’, as it were.

There were three rounds, in which the teams defined first a mathematician, then a constant, then a theorem. Colin’s team included Dominika Vasilkova along with The Aperiodical’s own Christian Lawson-Perfect, with Elizabeth A. Williams and Nicholas Jackson opposing them on Dave’s team.

**Ways to listen**: Listen online. Download. Get the podcast via RSS or via iTunes.

If you enjoy this, you might like other episodes of Wrong, But Useful. At least that’s what Colin’s WordPress thinks:

@icecolbeveridge insightful stuff from WordPress here. pic.twitter.com/Eq8tQheUeJ

— Peter Rowlett (@peterrowlett) November 23, 2015

Get the Being a Professional Mathematician podcast in RSS format.

Get the Being a Professional Mathematician podcast on iTunes.

The wider project includes resources and suggestions for using this audio in teaching undergraduates, including the booklet Being a Professional Mathematician.

Enjoy!

Last week, I decided I would discuss myths and inaccuracies. Though I am aware of a few well-known examples, I was struggling to find a nice, concise debunking of one. I asked on Twitter for examples, and here are the suggestions I received, followed by what I did.

Jason Dyer, @Brobuntu and Rob Eastaway all three suggested an article, Gauss’s Day of Reckoning, which discusses the tale of Gauss as a boy quickly summing the first 100 integers. @Brobuntu also mentioned the story of Hippasus and the Pythagoreans and “the worn story” of Euler debating Diderot, but without sources debunking them. Dan Wood also mentioned the former of these, of the Pythagoreans ordering the death of a student who proved $\sqrt{2}$ to be irrational.
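(For what it’s worth, the trick usually attributed to the young Gauss is to pair up the terms of the sum: the 50 pairs $(1+100), (2+99), \ldots, (50+51)$ each sum to $101$, so

$$1 + 2 + \cdots + 100 = \frac{100 \times 101}{2} = 5050.$$

Whether the schoolroom story is true is, of course, exactly what the article discusses.)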

@haggismaths suggested a blog post, Logic and Madness?, that debunks the idea that thinking about the continuum hypothesis drove Cantor mad.

Nicholas Jackson suggested Galois “frantically scribbling maths the night before his fatal duel”, and the article Genius and Biographers: The Fictionalization of Evariste Galois giving a detailed debunking. @haggismaths also suggested this as a story “much romanticised by Bell”, linking to the same article and also suggesting that “the section on Wikipedia about Galois’ death and final hours is not bad”. Thony Christie made the same suggestion and said the debunking should be covered by Boyer, by which I guess he means A History of Mathematics.

Rob Eastaway suggested his blog post about the Golden Rectangle and Donald Duck.

@theoremoftheday suggested “something on how Nobel did NOT shun mathematicians cos one cuckolded him”, linking to Why is there no Nobel in mathematics?.

John Read suggested “whether Euclid was Greek, African or made up from a collective of people”, although without a debunking source.

So what did I go with? I had planned to give the students a page from E.T. Bell’s Men of Mathematics and a short article debunking it, as a reading exercise. Lacking a short debunking, I instead put together a short lecture giving a typical story of Galois, quoting Bell:

he had spent the fleeting hours feverishly dashing off his scientific last will and testament, writing against time to glean a few of the great things in his teeming mind before the death which he foresaw could overtake him. Time after time he broke off to scribble in the margin ‘I have not time; I have not time,’ and passed on to the next frantically scrawled outline. What he wrote in those desperate last hours before the dawn will keep generations of mathematicians busy for hundreds of years.

Then I gave a brief debunking from Genius and Biographers: The Fictionalization of Evariste Galois, by Tony Rothman (1982) including:

It is unclear how far one can go in forgiving Bell. . . . I believe consciously or unconsciously Bell saw his opportunity to create a legend. The details which are absent in his account . . . are those details which lend a concreteness and a humanness to Galois’s life which a legend must not have. Unfortunately, if this was Bell’s intent, he succeeded.

I also included some discussion from Mathematical Myths by G.A. Miller (1938), who writes that some readers of Bell will think that errors of detail are unimportant, but that

there are others who will be very much annoyed by errors of detail, and whose interest in the book will be greatly diminished when they become convinced that they cannot assume that the author took a reasonable amount of care to avoid misleading remarks even when they are striking.

I also included the story of Archimedes leaping from the bathtub shouting ‘Eureka!’ in order to discuss the place of legends in folklore and the value this can have. For a discussion, I used Life on the Mathematical Frontier: Legendary Figures and Their Adventures by Roger Cooke, who writes:

What is valuable in the story is the picture of the sudden flash of inspiration that mathematicians sometimes experience. Whether true or not, this story will continue to be told because it amuses people and because it expresses some folklore concerning a legendary figure.

To encourage my students to consider such issues when writing their own work, I ended with a quote from Miller suggesting that

the reader should realize that he is in danger of contributing toward the spreading of mathematical myths when he quotes from these writings without verifying the accuracy of statements contained therein.

This journal was published by the Maths, Stats and OR Network 2001-12, then by the Higher Education Academy in 2013. The first new issue for two years, published by a volunteer group coordinated and supported by **sigma** and the Greenwich Maths Centre, is volume 14 issue 1.

This issue includes articles about maths support, active learning of game theory, support for numerical reasoning tests in graduate recruitment, an implementation of the Maths Arcade, and an article about the new maths learning space at Sheffield Hallam University, where I start work later this month, written by my new head of department (you can just about see my office-door-to-be in figure 2).

Submissions are encouraged: these could be case studies, opinion pieces, research articles, student-authored or co-authored articles, resource reviews (technology, books, etc.), short updates (project, policy, etc.) or workshop reports, and should be of interest to those involved in the learning, teaching, assessment and support of mathematics, statistics and operational research in higher education.

There is a second type of DLHE survey, which is longitudinal. This surveys graduates 3.5 years after graduation, and the 2010/11 longitudinal data has just been released. This deserves some investigation and I don’t have time right now, but I did notice a couple of tables that make me proud of my subject.

The first reports the proportions of graduates who are in jobs rated as ‘professional’ and ‘non-professional’. These data are taken from Table 8 of the 2010/11 DLHE longitudinal data set. I’ve chosen all levels (postgrad and undergrad) and ordered the data by percentage in professional jobs (descending). I’ve highlighted mathematical sciences, which includes maths, stats and operational research.

| Level of qualification obtained, mode of study and subject area 2010/11 | Total professional | Total non-professional |
|---|---|---|
| **All levels** | | |
| Medicine & dentistry | 98.8% | 1.2% |
| Veterinary science | 92.9% | 7.1% |
| Subjects allied to medicine | 92.5% | 7.5% |
| Architecture, building & planning | 91.8% | 8.2% |
| Education | 87.7% | 12.3% |
| **Mathematical sciences** | **86.5%** | **13.5%** |
| Computer science | 86% | 14% |
| Engineering & technology | 84.6% | 15.4% |
| Physical sciences | 83.3% | 16.7% |
| Law | 81.7% | 18.3% |
| Social studies | 79.9% | 20.1% |
| Business & administrative studies | 77% | 23% |
| Biological sciences | 76.4% | 23.6% |
| Combined | 73.5% | 26.5% |
| Languages | 72.9% | 27.1% |
| Historical & philosophical studies | 72.5% | 27.5% |
| Mass communications & documentation | 71.6% | 28.4% |
| Creative arts & design | 67.2% | 32.8% |
| Agriculture & related subjects | 55.8% | 44.2% |

The second table shows whether graduates felt the subject they studied was a formal requirement, important or helpful in gaining their current job. These data are from Table 15 of the 2010/11 DLHE longitudinal data set. Again, I’ve chosen all levels and ordered the table by those that felt their subject was not important (ascending). Again, I’ve highlighted maths.

| Level of qualification obtained and subject area 2010/11 | ‘Formal requirement’, ‘Important’ or ‘Not very important but helped’ | Not important |
|---|---|---|
| **All levels** | | |
| Veterinary science | 97.3% | 2.7% |
| Medicine & dentistry | 96.4% | 3.6% |
| Subjects allied to medicine | 93.6% | 6.4% |
| Education | 91.7% | 8.3% |
| Architecture, building & planning | 87.8% | 12.2% |
| Engineering & technology | 87.7% | 12.2% |
| **Mathematical sciences** | **87.5%** | **12.5%** |
| Computer science | 84.7% | 15.3% |
| Law | 81.5% | 18.5% |
| Business & administrative studies | 81.1% | 18.9% |
| Physical sciences | 78.5% | 21.5% |
| Social studies | 77.1% | 22.9% |
| Biological sciences | 76.8% | 23.2% |
| Agriculture & related subjects | 75.9% | 24.1% |
| Mass communications & documentation | 73.2% | 26.8% |
| Creative arts & design | 70.7% | 29.2% |
| Combined | 68.7% | 31.3% |
| Languages | 68.6% | 31.3% |
| Historical & philosophical studies | 57.8% | 42.2% |

Looking at these tables fairly naively, I’d say there are some subjects represented which are really a profession for which you require a degree (medicine, education, architecture, engineering, law). A student might decide before coming to university “I want to be a doctor” and then take medicine. That’s okay, provided you know at that stage what you want to do with your life (I didn’t). Clearly not everyone who takes these subjects goes into the associated profession, but it is reasonable to expect a large number to do so, and therefore a high proportion in professional jobs.

Then there are subjects that I guess are aligned to a job sector, but less closely to a particular job. I’d put Physical sciences, Biological sciences and Computer science into this category. I suppose we’d expect a moderate number to progress from these into the associated job sectors, but many to go into more general employment.

Finally, there are subjects that are extensions of subjects done in school that I imagine are taken out of interest or ability in the subject, but which don’t align to a particular job or job sector. Here is where I’d put maths. We might expect that these students have less of a specific job goal in mind, so may end up further down the tables. And this is why I am proud of maths — as we tend to tell applicants, maths leads to lots of different jobs, and graduates 3.5 years into their career seem to be doing very well. I’d say maths is the top subject not aligned to a particular profession on both proportion in a professional job and proportion saying the subject was helpful or important in gaining their current job.

Well, I think it’s interesting, anyway. Kids: choose maths! ;)
