The Hidden Maths of Eurovision

Every year, the Eurovision Song Contest brings with it fresh accusations that the results are affected more by politics than music. But how much of the outcome is in fact determined by mathematics?

On Sunday the Independent ran a story reporting on a ‘voting controversy’: the Austrian entry (who won the overall contest), despite being placed first in the UK’s results, received fewer telephone votes from the British public than the Polish entry. This is because a country’s popular vote is taken alongside the deliberations of a jury of music industry insiders to determine its overall rankings. The ‘story’ was triggered by the fact that the Eurovision organisers have this year taken the creditable step of releasing the rankings of each country’s jury and public votes on their website (in universally-loved .xls format). These figures reveal a rather mundane truth behind the ‘news’: Austria came 3rd with the judges and 3rd with the public, while Poland came 1st with the public but dead last with the judges. Since ameliorating the more eccentric tendencies of a public vote is presumably the reason the juries exist in the first place, this ‘controversy’ turns out to be something of a non-story. The story was nonetheless repeated in The Times and The Guardian.

But a salient fact was missing from all the coverage. Given that this is a story explicitly about how the public’s and the judges’ opinions combine to give the final result, it seems strange that no mention is made of the mechanism by which this is done. Indeed, no explanation was given during the contest itself: both the broadcast and some of the news reports spoke merely of a “50% split” between the two methods. Even the jaunty video provided on the Eurovision website mentions nothing except “combining your country’s televote and jury vote”. It’s as if everybody considers it self-evident how this is done, or doesn’t consider it a question at all, as though ‘combining with a 50% split’ constituted a full explanation in itself.

A very brief explanation of the contest for anyone unfamiliar with its delights: twenty-six countries compete in the final, and the winner is decided by the judgements of forty-odd countries, each of which votes on which of the 26 they prefer (a country that is itself one of the 26 votes between the other 25).

When slightly more detail on the system is provided, it’s mentioned that it’s the rank orders of the panel and public results that are ‘combined’. Getting a ranking from the phone voting is easy, though it throws up some issues: for instance, a bald rank order makes no distinction between a winner who garners 80% of the vote and one who squeezes into first with 10%. Generating a ranking from the five judges’ opinions is more complicated, but we’ll gloss over it for now since it’s just a more complicated version of the main problem: what does it mean to ‘combine’ the two rankings into one final result?
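As a toy illustration of the information a bald rank order throws away, here’s a quick Python sketch (the vote shares and country labels are invented for the example):

```python
def to_ranking(vote_shares):
    # Sort entries by vote share, descending, and assign positions 1, 2, 3, ...
    ordered = sorted(vote_shares, key=vote_shares.get, reverse=True)
    return {entry: pos for pos, entry in enumerate(ordered, start=1)}

landslide = {"A": 0.80, "B": 0.15, "C": 0.05}  # A wins by a mile
squeaker = {"A": 0.35, "B": 0.34, "C": 0.31}   # A scrapes home

print(to_ranking(landslide))  # {'A': 1, 'B': 2, 'C': 3}
print(to_ranking(squeaker))   # {'A': 1, 'B': 2, 'C': 3} -- identical
```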

There is no single canonical method for combining two orderings of preference into one list that reflects both sets of preferences equally. Indeed, it is in general impossible to come up with a method that will never throw up a troubling result under some suitably contrived circumstances (this is the territory of Arrow’s impossibility theorem).

Having found no mention of a methodology after almost four minutes of dedicated Googling, I decided to have a crack at reverse-engineering the system from the results detailed on the website. For each country, the rankings from each juror are provided, along with the combined judges’ ranking after the individual rankings have been passed through the Euromatic Patented Ranking Aggregator Machine (EPRAM). The phone rankings are also given, as well as the overall final ranking after EPRAM has combined the final judges’ ranking with that of the phone vote.

From a brief glance at the website it’s easy enough to break through the conspiracy of silence and determine that Eurovision uses, as you may have guessed, what I’ll call the ‘Strictly Come Dancing protocol’. The sub-rankings for the 25 or 26 countries being judged are considered as numbers from 1 to 25 or 26, and for each country the mean of their sub-rank values is taken — the overall rankings are taken from these averages. So a country placing 3rd with the judges and 8th with the public has an “average rank” of 5.5 and is beaten by a country taking both the fifth-place spots.

| Country | UK Judges’ Ranking | UK Phone Vote Ranking | “Mean Rank” | Overall UK Ranking |
| --- | --- | --- | --- | --- |
| Austria | 3 | 3 | 3 | 1 |
| Malta | 1 | 5 | 3 | 2 |
| The Netherlands | 7 | 2 | 4.5 | 3 |
| Sweden | 5 | 8 | 6.5 | 4 |
| Finland | 2 | 11 | 6.5 | 5 |
| Spain | 4 | 10 | 7 | 6 |
| Iceland | 15 | 4 | 9.5 | 7 |
| Denmark | 13 | 6 | 9.5 | 8 |
| Greece | 14 | 7 | 10.5 | 9 |
| Switzerland | 9 | 13 | 11 | 10 |
| Poland | 25 | 1 | 13 | 11 |
| Hungary | 12 | 14 | 13 | 12 |
| Norway | 11 | 17 | 14 | 13 |
| Russia | 10 | 18 | 14 | 14 |
| Slovenia | 6 | 23 | 14.5 | 15 |
| Ukraine | 18 | 12 | 15 | 16 |
| Romania | 22 | 9 | 15.5 | 17 |
| Azerbaijan | 8 | 24 | 16 | 18 |
| Germany | 16 | 20 | 18 | 19 |
| France | 23 | 15 | 19 | 20 |
| Belarus | 19 | 19 | 19 | 21 |
| Italy | 17 | 22 | 19.5 | 22 |
| Armenia | 24 | 16 | 20 | 23 |
| San Marino | 20 | 21 | 20.5 | 24 |
| Montenegro | 21 | 25 | 23 | 25 |
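To check that this reading really does reproduce the table, here’s a short Python sketch of the mean-rank combination. One caveat: how ties on the mean are broken isn’t documented anywhere I could find, so breaking them in favour of the better phone rank is my assumption, inferred from the results below.

```python
uk_ranks = {  # country: (judges' rank, phone vote rank), from the table above
    "Austria": (3, 3), "Malta": (1, 5), "The Netherlands": (7, 2),
    "Sweden": (5, 8), "Finland": (2, 11), "Spain": (4, 10),
    "Iceland": (15, 4), "Denmark": (13, 6), "Greece": (14, 7),
    "Switzerland": (9, 13), "Poland": (25, 1), "Hungary": (12, 14),
    "Norway": (11, 17), "Russia": (10, 18), "Slovenia": (6, 23),
    "Ukraine": (18, 12), "Romania": (22, 9), "Azerbaijan": (8, 24),
    "Germany": (16, 20), "France": (23, 15), "Belarus": (19, 19),
    "Italy": (17, 22), "Armenia": (24, 16), "San Marino": (20, 21),
    "Montenegro": (21, 25),
}

# Sort by mean rank, breaking ties by phone rank (assumed, not documented).
overall = sorted(uk_ranks, key=lambda c: (sum(uk_ranks[c]) / 2, uk_ranks[c][1]))

for place, country in enumerate(overall, start=1):
    print(f"{place:2d}. {country} (mean rank {sum(uk_ranks[country]) / 2})")
```

Running this reproduces the final column of the table exactly.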

This is, I would guess, the system that most people would naturally implement if they were asked to combine two sets of rankings, since it starts with the intuitively obvious step of adding together the two numbers you’re given. I speculate that the Eurovision organisers, and the producers of Strictly Come Dancing, did not even consider that alternative systems were possible.

The SCD protocol might sound reasonable enough, but it’s worth considering in a bit more detail. Taking the mean is something you do with numbers that denote some quantity. It’s not really meaningful to average a set of ordinal numbers. Certainly Stanley Smith Stevens, who drew up the classic distinction between ordinal and interval scales of measurement, would not be happy with you if you did.

It also fails to account for the importance we place on the different placings within an ordering. Ask a man who won £10 yesterday and £70 today if he’d have been equally happy getting £40 both days, and I’m sure he would. Ask a runner who came 1st yesterday and 7th today if she’d be happy with two fourth-place finishes and I doubt you’d get the same answer. First place means you’re the best. Fourth and seventh are both just also-rans.

We can see this at work in the UK results. Austria wins the UK vote with a 3rd and a 3rd, while Malta gets the runner-up spot with a 1st and a 5th. (This is a tie by the “mean ranks”; ties are evidently broken by the phone vote. So much for a 50/50 split.) Likewise Finland came 2nd among the panel of judges despite gaining four top-3 placements from the five-strong group, more than any other country. I challenge you to find a person who would say that 1st-2nd-3rd-3rd-13th is not a better set of results than 2nd-2nd-3rd-4th-9th (mean ranks 4.4 and 4.0 respectively).

Compare this with the system I thought up in thirty seconds while reading the Independent’s story. The countries are ranked according to their better rank out of the public and judges’ orderings, with ties then broken by comparing their worse ranks. So Malta and Poland come top, for each winning one of the sub-contests, but Malta’s 1st/5th pips Poland’s 1st/25th. Austria finishes 5th in the UK ranking, instead of first: a big difference from changing a system you may not have realised even needed to be chosen.
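Here’s that system as a sketch, restricted for brevity to the five countries that end up on top of the UK vote under it (rank pairs taken from the table above):

```python
# Each country's (judges' rank, phone vote rank) from the UK table.
contenders = {
    "Malta": (1, 5),    # won the judges' vote
    "Poland": (25, 1),  # won the phone vote
    "The Netherlands": (7, 2),
    "Finland": (2, 11),
    "Austria": (3, 3),
}

# Sort by each country's better rank, breaking ties by its worse rank.
ordering = sorted(contenders, key=lambda c: (min(contenders[c]), max(contenders[c])))
print(ordering)
# ['Malta', 'Poland', 'The Netherlands', 'Finland', 'Austria']
```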

Note that this system is also a ‘50/50’ split of the judges’ and public opinions. It also throws up fewer ties and better reflects the cachet placed on finishing first. It certainly has its own problems; alternatively, the existing system could be repaired to iron out some of its flaws. You could, for instance, assign points values to the different ranks, with bigger points differences at the top than the bottom, and add up these scores. You might recognise this idea, since this is what Eurovision does to combine the individual country rankings into the final result. Indeed, the irony is that the scoring system at this final stage, with its douze points and nul points, is perhaps the most famous aspect of the contest. But elsewhere, this little piece of mathematics is either glossed over or assumed not to exist in the first place. And it could have a big impact on the final outcome.
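Purely as a hypothetical illustration (this is not a system Eurovision uses at this stage), here’s what awarding the famous 12, 10, 8, 7, … points to each sub-ranking and summing would do to the top of the UK table:

```python
POINTS = [12, 10, 8, 7, 6, 5, 4, 3, 2, 1]  # the douze-points scale

def points_for(rank):
    # Top ten places score points; everyone else gets nul points.
    return POINTS[rank - 1] if rank <= len(POINTS) else 0

# (judges' rank, phone vote rank) for the UK's top three, from the table.
sample = {"Austria": (3, 3), "Malta": (1, 5), "Finland": (2, 11)}
totals = {c: points_for(j) + points_for(p) for c, (j, p) in sample.items()}
print(totals)  # {'Austria': 16, 'Malta': 18, 'Finland': 10}
```

Even this small sample has Malta overtaking Austria, so the choice of combination rule genuinely matters.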

Since there was a spreadsheet just sitting there, I decided to calculate the revised rankings if the system I outlined above were used first for aggregating the individual judges’ rankings and then for combining that result with the public vote. Disappointingly for my incipient career as an investigative journalist, the top 4 remains unchanged. Unsurprisingly, Poland are the biggest beneficiary, their strong showing with the public shooting them up from 14th place to 5th.

Here is my spreadsheet in case you fancy a fun hour of checking my SUMIFS: