On Wednesday 27th September, the BBC launched a large-scale, mass-participation data-gathering project called Pandemic. The aim of the project is to collect data about how people move around and interact, and who they come into contact with. And they need you!
Watch this bold decision-maker score 100 at the “is this prime?” game
Fan of the site Ravi Fernando has written in to tell us about his high score at the “is this prime?” game: a cool century!
I’ve been a fan of your “Is this prime?” game for a while, and after seeing your blog post from last May, I thought I’d say hi and send you some high scores. Until recently, my record was 89 numbers (last March 12), which I think may be the dot in the top right of your “human scores” graph. But I tried playing some more a couple weeks ago, and I found I can go a little faster using my computer’s y/n buttons instead of my phone’s touch screen. It turns out 100 numbers is possible!
Watch in amazement:
But, to the delight of prime fans everywhere, he didn’t stop there:
Today I even got 107 – good to have a prime record again.
Well done, Ravi!
Now is a good time to point out that the data on every attempt ever made at the game is available to download, in case you want to do your own analysis: at time of writing, there have been over 625,000 attempts, and 51 is still the number that catches people out the most.
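If you fancy poking at the data yourself, here's a minimal sketch of the sort of analysis you might run. To be clear, this is illustrative only: the filename and column names below are hypothetical, so adjust them to match whatever format the actual download uses.

```python
import csv
from collections import Counter

# Count how often each number is judged wrongly.
# NB: 'primegame-attempts.csv', 'number' and 'correct' are hypothetical
# names - check the real download for its actual format.
wrong_answers = Counter()

with open("primegame-attempts.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["correct"] == "false":  # the player got this number wrong
            wrong_answers[int(row["number"])] += 1

# Print the five most deceptive numbers.
for number, misses in wrong_answers.most_common(5):
    print(f"{number} was judged wrongly {misses} times")
```

No prizes for guessing why 51 tops the list: it looks prime but is 3 × 17.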
Urgent review of Government calculations underway
According to an article behind the Times paywall which I haven’t read, an “urgent review is under way into the reliability of some of the Government’s most crucial calculations in the wake of the West Coast Main Line shambles”.
The part of the article above the paywall reports that checks for ensuring the accuracy of models for climate change, income distribution, benefit claims and farming subsidies are all included in the audit. Website PoliticsHome expands on the basic Times link, saying that “every Government department has been required to draw up a list of ‘business critical’ models that they rely on to do their jobs”.
If you subscribe to The Times, you can get the rest of the story on its website: Maths check across Whitehall after West Coast rail line fiasco.
New UK economic and social data service to launch in October
The UK Data Service, due to launch on 1 October 2012, is funded for five years by the Economic and Social Research Council (ESRC) and aims to “support researchers in academia, business, third sector and all levels of government” by providing “a unified point of access to the extensive range of high quality economic and social data, including valuable census data”.
2011 Census figures released
The RSS are reporting that the first figures from the 2011 Census have been released. The big headline is population growth – an increase of 3.7 million (7.1%) since 2001. Other than that,
other key figures in the release show that the percentage of people aged 65 and over was the highest seen in any previous census, standing at 16.4 per cent. The median age of the population was 39 and there were 3.5 million children under five years of age.
While all regions have experienced population growth, the highest was in London, which gained more than 850,000 residents, an increase of 11.6 per cent.
Data from Northern Ireland’s census was also released today, revealing that its population is also the highest it’s ever been, at 1,810,900. The first release of Scottish data from the 2011 census is scheduled for December 2012.
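As a quick back-of-envelope check (the 2001 baseline isn’t quoted above, and the headline figures presumably cover England and Wales, since Northern Ireland and Scotland are reported separately), the two headline numbers are consistent: if a rise of 3.7 million represents 7.1 per cent, the 2001 population must have been roughly
\[ \frac{3.7\ \text{million}}{0.071} \approx 52\ \text{million}, \]
putting the 2011 figure at around \(52 + 3.7 \approx 56\) million.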
Source: First figures from the 2011 Census released.
Find out more: First release of 2011 census data from the Office for National Statistics.
Experimental checking
Yesterday I mentioned the importance of experimentally checking mathematical results in a piece over at Second Rate Minds. You may know that Second Rate Minds is the writing-exercise blog on which Samuel Hansen and I take turns writing and editing each other’s pieces while we enjoy playing with different styles. This time I decided to try to find a press release in the morning and have it written up into a piece within the day (the transatlantic time difference and working hours of my editor notwithstanding). From my experience covering mathematics in the news on the Math/Maths Podcast, I am aware that time and again we see the same story almost word-for-word on different websites, all sourced from the same press release. I didn’t want simply to repeat the press release, so I tried to give it my own spin. The result I covered experimentally confirmed an assumption behind diffusion, and I wrote about the importance of work that relates the assumptions behind computations to the scenario being modelled.
Also this week, I arrived home from the IMA East Midlands Branch talk and wrote an account of it over on the IMA Blog, “IMA Maths Blogger”. In this talk Prof Chris Linton of Loughborough gave an engaging account of the discovery of Neptune, which was found following a mathematical prediction based on irregularities in the orbit of Uranus. At the end, Chris also mentioned the prediction of an extra planet, Vulcan, used to attempt to explain a discrepancy in the orbit of Mercury. In fact, Mercury’s orbit deviated from the predictions of Newton’s laws because of a limitation of those laws in describing physical reality, and one of the first tests of Einstein’s general relativity was that it predicted the orbit of Mercury correctly. Given the recent Nobel Prize awarded to Saul Perlmutter, Brian Schmidt and Adam Riess “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae”, observations that led to the theory of dark energy, Chris left us with a thought: is dark energy a Neptune situation (something out there we can’t yet see) or a Vulcan situation (a limitation of current theory)?
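For a sense of the size of the Mercury discrepancy (this wasn’t part of the talk write-up, it’s just for context): general relativity’s leading correction to the perihelion advance per orbit is
\[ \Delta\phi = \frac{6\pi G M_\odot}{c^2 a (1 - e^2)}, \]
where \(a\) and \(e\) are the orbit’s semi-major axis and eccentricity. For Mercury this works out at about 43 arcseconds per century, which is precisely the anomalous precession that Vulcan had been invented to explain.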
This morning I listened to an interview with Brian Schmidt on the Pod Delusion, in which he described the process of discovering the unexpected result. Attempting to check whether data on supernovae showed the universe expanding at a constant rate or slowing down, Schmidt explained, they found something altogether more unexpected:
That data showed, jeez, the universe wasn’t slowing down at all, it was speeding up. And so that was a real crazy thing to be confronted with. It didn’t make a lot of sense. It seemed just impossible so that was a pretty scary time when we first saw that result … Initially you just start looking for problems and checking and rechecking everything but after a while you’ve done everything you can and nothing’s obviously wrong so we opened it up to the team and said ‘okay guys, we’ve got this crazy result. Any test you want us to do we’ll test. We think we’ve tested them all at this point but anything you want to do’. And the group came up with all sorts of things to think about so we went through and worked more but at some point it slowly sunk in that the universal acceleration that we were seeing just wasn’t going to go away. So it took a few months but we did everything we can several times, had several people do it, and everyone just got the same answer.
Imagine my surprise later on today when I heard the same story again, this time from Marcus du Sautoy in his BBC documentary on the recent ‘faster than light’ neutrino discovery. Talking about research which appears to show neutrinos travelling fractionally faster than the speed of light, Marcus said:
Under our current understanding of the universe, this just isn’t possible. The researchers themselves were pretty shocked by the results. They spent many months looking for mistakes. They brought in outside experts. They pored over the figures hundreds of times, searching for an error … but they couldn’t find any mistakes, so they decided to publish.
Of course, the first result has led to major developments in astrophysics over the last decade, while the latter remains to be verified. Still, when you hear about teams rushing to be the first to publish a result, and when people seemed so quick to dismiss the neutrino result, it’s quite striking to hear how long the original researchers spent privately checking their results.
I think this checking of observations against theory – confirming a theory, rejecting one, or finding its limits – and the process behind doing so is interesting, and it points to the importance of relating your theory back to the real-world context it is modelling. You might end up justifying the theory or you might not; either way, who knows, you may discover something remarkable.