Oh, email
I feel, increasingly, that I am not coping well with my email, and I have just found some data to illuminate this. So here is what has happened to the raw number of emails sent and received over the past eight years.
This year so far (up to yesterday, 29th October) I have 19,357 emails, more than a thousand more than in either 2010 or 2009. The mean is 65 per day, which seems about right: I get fewer than this at weekends and during some parts of the year, but I have noticed before that on many workdays I send and receive about 90. These numbers seem terrifyingly large.
Even more annoying is that at the end of 2010 I unsubscribed from a load of email discussion lists, removed myself from some quasi-spam circulars sent by companies I’d bought things from and turned off a lot of email alerts from Twitter, Facebook, etc., precisely in order to reduce the volume of email I was processing. The 2011 figure includes this reduction.
Oh, and none of this is spam – these numbers exclude both the spam box and any spam emails that I manually removed from my inbox.
Looking at the 2011 data: let’s assume each email takes me a minute to deal with (to read or write; some are discarded in a few seconds, others might take several minutes; it’s a crude average). Then 19,357 minutes is a little over 322 hours. Assuming a 7.5-hour working day (haha), this is 43 days. There have been 215 weekdays so far this year. Allowing for holiday and sick leave (I’ve had a bad year for the latter), this means I’m spending something like a quarter of my working time processing email.
Some questions that arise from this analysis:
- Does this help me cope with my email? (No, but I feel less like I’m doing something wrong when I’m overwhelmed.)
- Is this normal?
- Is there anything I can do about it?
- Should I have taken the time used to write this post to clear some of my backlog?
- Should I have written a post about something less navel-gazing?
- What did I do with all my free time in 2004–6, when I had 10,000 fewer emails per year to deal with?
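The back-of-envelope email arithmetic above can be checked in a few lines of Python; the one-minute-per-email figure is, as stated, a crude average rather than a measurement:

```python
# A quick check of the email arithmetic. The one-minute-per-email figure
# is the crude average assumed in the text, not a measurement.
emails = 19357          # sent and received so far this year (to 29 October)
minutes_per_email = 1

hours = emails * minutes_per_email / 60
working_days = hours / 7.5      # assuming a 7.5-hour working day
weekdays_so_far = 215

print(round(hours))                                  # about 323 hours
print(round(working_days))                           # about 43 working days
print(round(working_days / weekdays_so_far * 100))   # about 20% of weekdays
```

That is a fifth of all weekdays; allowing for holiday and sick leave, the share of actual working days climbs toward the quarter mentioned above.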
How many barleycorn in a giant slipper?
On Thursday this week you, like I, may have awoken to giant slipper news. The story – that Tom Boddingham ordered a size 14.5 slipper but was sent a size 1,450 after manufacturers failed to spot a decimal point in his order – was widely reported. I first saw it on the Telegraph site, but they’ve done their Ministry of Truth Records Department thing and that link now leads to a story telling you about the hoax, so here’s a link to the same story at the Mirror: “Gigantic XXXXXXXXXXXXXXXXL slipper delivered to man who ordered a size 14.5 but got a size 1,450”. The Mirror story also guesses at a number of X’s, but I don’t recall this from the original so I will ignore it as the Mirror’s addition. The story tells us, in a ‘quote’ from Mr. Boddingham, that the slipper “measures 210 x 130 x 65cms”.
The original story said that the factory in China thought the shoe was for a shop display, so thought nothing of the strange order. It does not explain why a shop display would order a 7ft slipper using shoe size terminology or how the extra materials or shipping costs were met.
There’s a proper bit of journalism over at the Guardian, “Was the 7 foot ‘monster slipper’ really a PR stunt?”, which covers much of what is wrong with the article. The PR company said the story came from the retailer and they assumed it was true; the retailer said it’s definitely true. The Guardian piece also has some discussion of Joseph Jennings, who apparently works for the company, and his visual similarity to the Mr. Boddingham pictured in the giant slipper article.
I feel slightly strange about the idea that missing the decimal point out of 14.5 would naturally lead to 1,450. I suppose the ordering system might always take two decimal places, so that it was really 14.50 that was ordered. I also don’t know why failing to spot a decimal point has to be a “translation error” by the Chinese, rather than a simple oversight. It might be plausible as a translation error given that different cultures use decimal and thousands markers differently, with some using a comma for the decimal mark and a dot for the thousands separator (not China, though). Even so, the dot isn’t three places from the end. In China, according to Wikipedia (so it must be true), the separator is sometimes placed every four digits, but there is no mention of two.
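To illustrate the two-decimal-places idea: if the order form really recorded “14.50” (my assumption, not anything reported in the story), then dropping the dot does give 1,450, whereas dropping it from the one-decimal form gives only 145:

```python
# Illustration only: suppose the order form stored sizes to two decimal
# places. "14.50" is an assumption, not a detail from the story.
two_decimal_order = "14.50"
print(int(two_decimal_order.replace(".", "")))   # 1450

# Dropping the dot from the one-decimal form gives the smaller error:
print(int("14.5".replace(".", "")))              # 145
```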
Anyway, happy that this is a hoax and ready to leave it, I idly decided to check the numbers. According to Wikipedia, UK shoe size is based on the length of the “last”, the foot-shaped template over which the shoe is manufactured, measured in barleycorn, a basic Anglo-Saxon unit used as the legal definition of the inch in many medieval laws in England and Wales. There is no precise relation between the last and the eventual shoe size (this is why two styles of shoe claiming to be the same size can be so different, and why I can’t tell if I’m a 14 or 15; a real pain because shops don’t generally stock those sizes but I don’t know which to order online), but we can safely assume the size is similar to that of the eventual shoe.
Wikipedia gives UK adult shoe size as 3 × (the length of the last in inches) − 25. A barleycorn is about one third of an inch, which is where the 3 comes from, and the −25 takes out the child sizes. Adult size 1 is then based on a last measuring 26 barleycorn, or a bit under 9 inches.
Size 14.5, then, would be based on a last measuring about 13 inches:
3 × n − 25 = 14.5
n = 39.5/3 ≈ 13
Using the same formula, size 1450 would be based on a last of 1475 barleycorn, which is about 492 inches, or 41ft:
3 × n − 25 = 1450
n = 1475/3 ≈ 492
The article claims the giant slipper is 210 cm long, which is a little under 7 ft. The funny thing is that a 14.5 → 145 error (which seems, to me, the more natural misreading) would give a last of almost 5 ft, a more reasonable size for a man-sized slipper:
3 × n − 25 = 145
n = 170/3 ≈ 57
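Putting the three calculations together, here is a short Python sketch of the Wikipedia formula (size = 3 × last length in inches − 25), with a check that the reported 210 cm slipper really is “a little under 7ft”:

```python
# Wikipedia formula: UK adult size = 3 * (last length in inches) - 25.
# Inverting it recovers the last length for each size discussed above.
def last_length_inches(uk_size):
    return (uk_size + 25) / 3

for size in (14.5, 1450, 145):
    inches = last_length_inches(size)
    print(size, round(inches), "inches,", round(inches / 12, 1), "ft")
# 14.5 -> about 13 inches; 1450 -> about 492 inches (41 ft);
# 145 -> about 57 inches (4.7 ft)

# The reported slipper is 210 cm long:
print(round(210 / 2.54 / 12, 1), "ft")   # about 6.9 ft, "a little under 7ft"
```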
Of course, the reason I find this sort of thing interesting is the willingness of newspapers to report stories like this without fact-checking or, seemingly, applying common sense. I’m also annoyed that they don’t bother to check even the most checkable facts – the arithmetic. That the arithmetic doesn’t add up, given all else that is wrong with the story, shouldn’t really surprise me, but still I can’t help feeling that if you’re going to write this up then two minutes with a calculator would reveal a flaw, even if no other alarms were raised.
La somme des hypothèses by Vincent Mauger
ARCHEOLOGY/DIET by STUDIOLAV
Experimental checking
Yesterday I mentioned the importance of experimentally checking mathematical results in a piece over at Second Rate Minds. You may know that Second Rate Minds is the writing-exercise blog on which Samuel Hansen and I take turns writing and editing each other’s pieces while we enjoy playing with different styles. This time I decided to find a press release in the morning and have it written up into a piece within the day (the transatlantic time difference and working hours of my editor notwithstanding). From my experience covering mathematics in the news on the Math/Maths Podcast, I am aware that time and again we see the same story almost word-for-word on different websites, all sourced from the same press release. I didn’t want simply to repeat the press release, so I tried to give it my own spin. The result I found experimentally confirmed an assumption behind diffusion, and I wrote about the importance of work that relates the assumptions behind computations to the scenario being modelled.
Also this week, I arrived home from the IMA East Midlands Branch talk and wrote an account of it over on the IMA Blog, “IMA Maths Blogger”. In this talk Prof Chris Linton of Loughborough gave an engaging account of the discovery of Neptune, which was found following a mathematical prediction based on irregularities in the orbit of Uranus. At the end, Chris also mentioned the prediction of an extra planet, Vulcan, used to attempt to explain a discrepancy in the orbit of Mercury. In fact, Mercury’s orbit differed from the predictions of Newton’s laws because of a limitation of those laws in describing physical reality, and one of the first tests of Einstein’s relativity was that it predicted the orbit of Mercury correctly. Given the recent Nobel Prize awarded to Saul Perlmutter, Brian Schmidt and Adam Riess “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae”, observations that led to the theory of dark energy, Chris left us with the thought: is dark energy a Neptune situation (something out there we can’t yet see) or a Vulcan situation (a limitation of current theory)?
This morning I listened to an interview with Brian Schmidt on the Pod Delusion in which he described the process of discovering the unexpected result. Attempting to check whether data on supernovae showed the universe expanding at a constant rate or slowing down, Schmidt explained they found something altogether more unexpected:
That data showed, jeez, the universe wasn’t slowing down at all, it was speeding up. And so that was a real crazy thing to be confronted with. It didn’t make a lot of sense. It seemed just impossible, so that was a pretty scary time when we first saw that result … Initially you just start looking for problems and checking and rechecking everything, but after a while you’ve done everything you can and nothing’s obviously wrong, so we opened it up to the team and said ‘okay guys, we’ve got this crazy result. Any test you want us to do, we’ll test. We think we’ve tested them all at this point but anything you want to do’. And the group came up with all sorts of things to think about, so we went through and worked more, but at some point it slowly sunk in that the universal acceleration that we were seeing just wasn’t going to go away. So it took a few months, but we did everything we could several times, had several people do it, and everyone just got the same answer.
Imagine my surprise later on today when I heard the same story again, this time from Marcus du Sautoy on his BBC documentary on the recent ‘faster than light’ neutrino discovery. Talking about research which appears to show neutrinos travelling fractionally faster than the speed of light, Marcus said:
Under our current understanding of the universe, this just isn’t possible. The researchers themselves were pretty shocked by the results. They spent many months looking for mistakes. They brought in outside experts. They pored over the figures hundreds of times, searching for an error … but they couldn’t find any mistakes, so they decided to publish.
Of course, the former result has led to major developments in astrophysics over the last decade, while the latter remains to be verified. Still, when you hear about teams rushing to be the first to publish a result, and when people seemed so quick to dismiss the neutrino finding, it’s quite striking to hear how long the original researchers spent privately checking their results.
I think this checking of observations against theory – confirming a theory, rejecting one or finding its limits – and the process behind doing so is interesting, and it points to the importance of relating your theory back to the real-world context it is modelling. You might vindicate the theory or you might not. Either way, who knows, you may discover something remarkable.
Parmenides I by Dev Harlan
[vimeo url=https://vimeo.com/30108920]


