This week, the Freakonomics blog covered research by Stockholm University’s Kimmo Eriksson, which found that including a mathematical equation in the abstract of a research paper made scholars from different fields judge the research to be ‘of higher quality’, even though the equation was unrelated to the work and complete nonsense. The study included 200 participants; the amount by which the equation increased the perceived ‘quality’ of the research varied between disciplines, and it actually caused a slight decrease among people working in mathematics or science subjects.
Insect numeracy standards overtake KS2
In what must surely now be described as a classic maths news item, yet another species of animal has joined the ranks of things which can determine rank. This time it’s the humble fruit fly’s turn to tap its hoof the correct number of times, as these articles in The Metro and Nature (the two standard science references for their respective ends of the credibility spectrum) describe. Props to The Metro for an excellent headline pun.
Dance Your PhD: Cutting Sequences on the Double Pentagon
As a mathematician (and not just any kind of mathematician – a PURE mathematician), I heard of the “Dance Your PhD” contest and immediately burst out laughing. As much as there is some nice pure mathematical dancing out there (see, for instance, this series of videos of different numerical sorting algorithms interpreted through dance), the idea that someone’s mathematical PhD research could be conveyed via bodily gyration was both fantastical and hilarious.
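Still, the algorithms those sorting-dance videos act out are real, and pleasingly simple. As a reminder of just how little material is being danced, here is a sketch of bubble sort, one of the algorithms commonly featured in such videos:

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs until none remain --
    exactly the pairwise exchanges the dancers act out."""
    items = list(items)  # work on a copy, leaving the input untouched
    for n in range(len(items), 1, -1):
        # After each pass the largest remaining element has
        # "danced" its way to position n - 1.
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

print(bubble_sort([5, 1, 4, 2, 3]))  # → [1, 2, 3, 4, 5]
```

Each pass of the outer loop corresponds to one sweep of the dance line; the quadratic number of comparisons is what makes it such a long performance.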
EPSRC very quietly relents on maths funding
The EPSRC has silently updated its table of “areas in which fellowships are available” to include “intradisciplinary research” in mathematical sciences at all career stages. According to a post by Timothy Gowers on Google+, this “means in practice pretty much all of maths.”
P-p-p-publicise a paper!
We love hearing about new maths but keeping up with the literature is difficult. It’s also quite hard to tell if something outside your field of expertise is noteworthy or not. So we want your help directing our attention towards new and noteworthy research, whether it’s on the arXiv or in peer-reviewed journals or just a rumour someone’s worked out something big.
We’re going to call the column Phil. Trans. Aperiodic., and Nathan Barker, who is currently finishing his PhD at Newcastle University, has kindly offered to run it. He’s going to do a fairly regular, fairly serious round-up of the articles you submit.
So, if you’ve seen some good research lately (or you’ve written some, and you’re really really sure it’s good), please go over to the Phil. Trans. Aperiodic. submission page and fill in our form.
Experimental checking
Yesterday I mentioned the importance of experimentally checking mathematical results in a piece over at Second Rate Minds. You may know that Second Rate Minds is the writing-exercise blog on which Samuel Hansen and I take turns writing and editing each other’s pieces while we play with different styles. This time I decided to find a press release in the morning and have it written up as a piece within the day (the transatlantic time difference and working hours of my editor notwithstanding). From my experience covering mathematics in the news on the Math/Maths Podcast, I am aware that time and again we see the same story almost word-for-word on different websites, all sourced from the same press release. I didn’t want simply to repeat the press release, so I tried to give it my own spin. The result I covered confirmed an assumption behind diffusion, and I wrote about the importance of work that relates the assumptions behind computations to the scenario being modelled.
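(As a toy illustration of my own, not the computation from the press release: one standard assumption behind diffusion is that a random walker’s mean squared displacement grows linearly with time, and that is the sort of thing you can check experimentally in a few lines.)

```python
import random

def mean_squared_displacement(steps, walkers=20000, seed=1):
    """Average squared distance from the origin after `steps`
    unit steps of a simple 1-d random walk."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += rng.choice((-1, 1))
        total += x * x
    return total / walkers

# For simple diffusion, <x^2> should grow linearly with time,
# so each of these should come out close to the step count:
for t in (10, 20, 40):
    print(t, mean_squared_displacement(t))
```

If the computed values grow linearly with `t`, the diffusive assumption holds for this model; a systematic deviation would be exactly the kind of mismatch between computation and scenario the piece was about.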
Also this week, I arrived home from the IMA East Midlands Branch talk and wrote an account of it over on the IMA Blog, “IMA Maths Blogger”. In this talk Prof Chris Linton of Loughborough gave an engaging account of the discovery of Neptune, which was found following a mathematical prediction based on irregularities in the orbit of Uranus. At the end, Chris also mentioned the prediction of an extra planet, Vulcan, an attempt to explain a discrepancy in the orbit of Mercury. In fact, Mercury’s orbit differed from the predictions made by Newton’s laws because of a limitation of those laws in describing physical reality, and one of the first tests of Einstein’s relativity was that it predicted the orbit of Mercury correctly. Given the recent Nobel Prize awarded to Saul Perlmutter, Brian Schmidt and Adam Riess “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae”, observations that led to the theory of dark energy, Chris left us with the thought: is dark energy a Neptune situation (something out there we can’t yet see) or a Vulcan situation (a limitation of current theory)?
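The Vulcan story has a satisfying numerical coda. General relativity predicts an extra perihelion advance for Mercury of 6πGM/(a(1−e²)c²) per orbit, and plugging in standard values recovers the famous 43 arcseconds per century that Newtonian gravity could not account for. A quick back-of-the-envelope check (orbital values are approximate):

```python
import math

# Approximate standard values
GM_SUN = 1.32712e20      # gravitational parameter of the Sun, m^3/s^2
C = 2.99792458e8         # speed of light, m/s
A = 5.7909e10            # Mercury's semi-major axis, m
E = 0.2056               # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969     # Mercury's orbital period, days

# GR perihelion advance per orbit, in radians
per_orbit = 6 * math.pi * GM_SUN / (A * (1 - E**2) * C**2)

# Convert to arcseconds per Julian century
orbits_per_century = 36525 / PERIOD_DAYS
arcsec = per_orbit * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec:.1f} arcseconds per century")  # ≈ 43
```

That residual 43″ per century is precisely the discrepancy Vulcan was invented to explain, and relativity accounts for it with no extra planet required.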
This morning I listened to an interview with Brian Schmidt on the Pod Delusion in which he described the process of discovering the unexpected result. Attempting to check whether data on supernovae showed the universe expanding at a constant rate or slowing down, Schmidt explained they found something altogether more unexpected:
That data showed, jeez, the universe wasn’t slowing down at all, it was speeding up. And so that was a real crazy thing to be confronted with. It didn’t make a lot of sense. It seemed just impossible so that was a pretty scary time when we first saw that result … Initially you just start looking for problems and checking and rechecking everything but after a while you’ve done everything you can and nothing’s obviously wrong so we opened it up to the team and said ‘okay guys, we’ve got this crazy result. Any test you want us to do we’ll test. We think we’ve tested them all at this point but anything you want to do’. And the group came up with all sorts of things to think about so we went through and worked more but at some point it slowly sunk in that the universal acceleration that we were seeing just wasn’t going to go away. So it took a few months but we did everything we can several times, had several people do it, and everyone just got the same answer.
Imagine my surprise later on today when I heard the same story again, this time from Marcus du Sautoy on his BBC documentary on the recent ‘faster than light’ neutrino discovery. Talking about research which appears to show neutrinos travelling fractionally faster than the speed of light, Marcus said:
Under our current understanding of the universe, this just isn’t possible. The researchers themselves were pretty shocked by the results. They spent many months looking for mistakes. They brought in outside experts. They pored over the figures hundreds of times, searching for an error … but they couldn’t find any mistakes, so they decided to publish.
Of course, the supernova result has led to major developments in astrophysics over the last decade, while the neutrino result remains to be verified. Still, when you hear about teams rushing to be the first to publish a result, and when people seemed so quick to dismiss the neutrino finding, it’s quite striking to hear how long the original researchers spent privately checking their results.
I think this checking of observations against theory – confirming a theory, rejecting one or finding its limits – and the process behind doing so are interesting, and they point to the importance of relating your theory back to the real-world context it is modelling. You might justify the theory or you might not. Either way, who knows, you may discover something remarkable.