The talk summaries and slides from last November’s MathsJam conference are now online!

MathsJam is a monthly maths night that takes place in over 30 pubs all over the world, and it’s also an annual weekend conference in November. The conference comprises 5-minute talks on all kinds of topics in and related to mathematics, particularly recreational maths, games and puzzles.

The talks archive has now been updated with the 2015 talks – there’s a short summary of what each talk was about, along with any slides, in PPT and PDF format, and relevant links.


I’ve been looking forward to this one: cities in the mathematical domain. This is the kind of applied maths I can really get behind.

Samuel starts with Mike Batty of University College London’s Centre for Advanced Spatial Analysis discussing how cities grow and organise themselves. The structure is frequently fractal; how does one calculate the dimension of a city?

From a top-level view of cities, he moves on to a low-level description of one of the biggest problems in cities: traffic (another thing that fascinates me). We get a glimpse of traffic waves, and the unfairness that the person responsible for the average jam doesn’t suffer from its effects. And we learn that Gábor Orosz (University of Michigan) tests his hypotheses using robots as well as simulations.

The third segment focusses on Thomas Woolley of St John’s College, Oxford, who is a guide for Maths In The City, a series of walking tours around Oxford and London. Woolley gives an overview of the stories he and his colleague tell about, for instance, why bees do it (build hexagons, obviously) and the science behind Christopher Wren’s genius.

Segment four features Lisa Schweitzer of USC Price, who uses maths and statistics to tackle the social aspects of urban planning. My take-away was her line about using maths where appropriate — if it’s the correct tool to make a point, then she’ll use it, and if not, not. Oh, and a nice little dig at physicists, and justifications for pure maths (as if justification were needed).

Last up is a fun one: Samuel Arbesman, author, complexity scientist, and SimCity superfan. It’s a nice link back to the first story talking about the fractal dimension of a city; Arbesman suggests that one way to measure the complexity of a city — or at least, a SimCity — is to use the size of a saved game as a proxy. It’s roughly linear in population. It ends, as all SimCity segments should, with a monster being unleashed on a city.

As with pretty much every episode in this series, the content is excellent but perhaps a little over-Samueled; there were a handful of places where the host explained things that might have been better explained by guests, and at least one where the explanation was immediately paraphrased by the expert. A two-host show (for example, RadioLab) can get away with this by having one host explain to the other; it can also get away with dissing its writer’s poor jokes in a way that Samuel tries to, but a solo host can’t quite manage.

These are relatively minor quibbles, though: one of the key measures of mathematical media for me — podcasts or articles or talks, or anything — is that it makes me want to play with the problems. Obviously, this episode cheated slightly by picking a topic I was interested in to start with, but it still left me wondering how I’d model traffic flow around my area and use it to avoid jams.

**Listen** to Relatively Prime: Principia Metropolica at relprime.com. While you’re there, catch up on Season 1.

*Colin was given early access to Season 2 of Relatively Prime, in return for writing reviews of each episode. Furthermore, Samuel is Aperiodipal numero uno and most of us chipped some money into the Relatively Prime Kickstarter, too. Just so you know.*

新年好 (Happy New Year), everyone! It was Chinese New Year on Monday, starting the year of the monkey. I didn’t really pay attention last year, so I didn’t know that it had been the year of the goat. I also wasn’t aware until just now when I looked it up that next year will be the year of the rooster.

But, now that I know those facts, I know that the Chinese year starting in 2027 CE will be the year of the goat, the next one will be the year of the monkey, and the one after that will be the year of the rooster. That’s because this calendar works on a 12-year cycle, rotating through a fixed list of animals. (There’s more to it than that, but that’s what I’m going to think about). It goes as follows:

| Sign | Animal |
| --- | --- |
| 鼠 | rat |
| 牛 | ox |
| 虎 | tiger |
| 兔 | rabbit |
| 龍 | dragon |
| 蛇 | snake |
| 馬 | horse |
| 羊 | goat |
| 猴 | monkey |
| 雞 | rooster |
| 狗 | dog |
| 豬 | pig |

Now that I’ve looked up all twelve animals, I know that the rabbit comes four years before the goat, and I can work out that I went to Singapore in 1999, because I know I went more than five but less than 20 years ago and I can remember seeing a frankly unwise number of rabbits encaged underneath a massive rabbit statue.

So, it’s a pretty good calendar: time is divided up into periods of 12 years, and you can fairly easily work out when something happened in your lifetime because it’s very easy to estimate how many multiples of 12 years ago it happened, and by remembering where in the cycle you were.
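That “easy to work out” claim is just modular arithmetic, which a few lines of Python can sketch. (This is my own illustration, not from the post: the `sign` function is a name I’ve made up, it uses the Gregorian year as a rough proxy for the Chinese year, which actually begins in late January or February, and it relies on the standard convention that years with $year \equiv 4 \pmod{12}$ are rat years.)

```python
# the traditional twelve-year cycle, rat first
animals = ['鼠 rat', '牛 ox', '虎 tiger', '兔 rabbit', '龍 dragon', '蛇 snake',
           '馬 horse', '羊 goat', '猴 monkey', '雞 rooster', '狗 dog', '豬 pig']

def sign(year):
    """Zodiac sign for a Gregorian year (ignoring exactly when New Year falls)."""
    return animals[(year - 4) % 12]

print(sign(2016))  # 猴 monkey
print(sign(1999))  # 兔 rabbit — those encaged rabbits in Singapore
print(sign(2027))  # 羊 goat
```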

*But what if we made it more complicated?*

At this point, if the Chinese zodiac is significant to you, I’ll have to ask you to set that aside for a moment because I’m going to mess about with it, for maths reasons.

In the traditional cycle, 猴 (monkey) always follows 羊 (goat). You never get 龍 (dragon) next. Maybe I’d like that to happen: every ordered pair of signs in the zodiac should occur in the cycle the same number of times. Is that possible?

My first thought was that I’d need a de Bruijn sequence. Given an alphabet of $k$ symbols, a de Bruijn sequence $B(k, n)$ is a cyclic sequence of those symbols in which every possible subsequence of length $n$ appears exactly once. I’ve got $k = 12$ zodiac signs, and I’d like to see every sequence of $n = 2$ symbols. Job done, right?

Well, I don’t want to see the same symbol twice in a row. As much as I love 猴 (monkey), I reckon it’d start to drag after a while. Since we’re looking at subsequences of length 2, if you’ve got a de Bruijn sequence, you can just remove one of each pair of repeated symbols and get what we’re looking for.
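You can see that collapsing trick on a smaller alphabet (this little check is mine, not from the post):

```python
b = '001021122'  # a de Bruijn sequence B(3, 2) on the symbols {0, 1, 2}

# drop one of each (cyclically) adjacent pair of equal symbols
collapsed = [c for i, c in enumerate(b) if c != b[i - 1]]

# every ordered pair of *distinct* symbols now appears exactly once
pairs = {(collapsed[i], collapsed[(i + 1) % len(collapsed)])
         for i in range(len(collapsed))}
print(''.join(collapsed), len(pairs))  # 010212 6
```

Six symbols, and all $3 \times 2 = 6$ ordered pairs of distinct symbols, each exactly once.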

That’s one way of doing it. What I actually did was to not realise that fact, and go about working out my own algorithm. It goes as follows:

- Start by writing out each cyclic permutation of the numbers $1$ to $n$ in a grid.
- Begin by writing the number $1$, then repeat:
- In the row corresponding to the last number you wrote down, write down the first number which hasn’t been crossed out, and then cross it out.
- The last number you write down will be a $1$, starting the cycle again.

For 5 symbols, that process looks like this:

This works for any number of symbols^{(citation needed)}, and you can construct a de Bruijn sequence from it by just doubling up the first occurrence of each symbol.

A nice thing about the grid presentation is that you can easily see that the sequence on $n$ symbols has length $n (n-1)$.

So I’m ready to put together my new calendar, upending centuries of careful observance for the sake of a mathematical whimsy. Here’s Python code which produces the sequence of years:

```python
def cycle(seq):
    n = len(seq)
    nexts = {seq[i]: [seq[(i+j) % n] for j in range(1, n)] for i in range(n)}
    last = seq[0]
    while nexts[last]:
        yield last
        last = nexts[last].pop(0)

sequence = list(cycle(['鼠', '牛', '虎', '兔', '龍', '蛇', '馬', '羊', '猴', '雞', '狗', '豬']))
print(' '.join(sequence))
```

And that produces

鼠 牛 虎 兔 龍 蛇 馬 羊 猴 雞 狗 豬 鼠 虎 龍 馬 猴 狗 鼠 兔 蛇 羊 雞 豬 牛 兔 馬 雞 鼠 龍 羊 狗 牛 龍 猴 豬 虎 蛇 猴 鼠 蛇 雞 牛 蛇 狗 虎 馬 狗 兔 羊 豬 兔 猴 牛 馬 豬 龍 雞 虎 羊 鼠 馬 鼠 羊 牛 羊 虎 猴 虎 雞 兔 雞 龍 狗 龍 豬 蛇 豬 馬 牛 猴 兔 狗 蛇 鼠 猴 龍 鼠 雞 蛇 牛 雞 馬 虎 狗 馬 兔 豬 羊 兔 鼠 狗 羊 龍 牛 狗 猴 蛇 虎 豬 猴 馬 龍 虎 鼠 豬 雞 羊 蛇 兔 牛 豬 狗 雞 猴 羊 馬 蛇 龍 兔 虎 牛

This cycle has a period of $12 \times 11 = 132$ years – barring advances in medical science, nobody would see the whole thing. And could you work out when something happened just by knowing which sign was associated with the year and roughly how long ago it happened? No! I’ve ruined the best thing about the Chinese calendar.
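If you’d rather not check that by counting symbols, a few more lines of Python will do it. This sanity check is my own addition; it restates the same greedy generator and confirms both the $12 \times 11 = 132$ period and the every-ordered-pair-exactly-once property:

```python
from itertools import permutations

def cycle(seq):
    # the same greedy generator as above
    n = len(seq)
    nexts = {seq[i]: [seq[(i+j) % n] for j in range(1, n)] for i in range(n)}
    last = seq[0]
    while nexts[last]:
        yield last
        last = nexts[last].pop(0)

signs = ['鼠', '牛', '虎', '兔', '龍', '蛇', '馬', '羊', '猴', '雞', '狗', '豬']
seq = list(cycle(signs))

# every adjacent pair, wrapping around from the last year back to the first
pairs = [(seq[i], seq[(i + 1) % len(seq)]) for i in range(len(seq))]

assert len(seq) == 12 * 11                              # the 132-year period
assert sorted(pairs) == sorted(permutations(signs, 2))  # each ordered pair once
print('all', len(pairs), 'ordered pairs appear exactly once')
```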

In conclusion, if my new calendar were to be adopted, there’d be a lot more headaches, and pretty much no benefits.

Some of the best mathematical teasers are those which originate in a real-world problem – although the problem for pure mathematicians is that that happens much less often than it does for applied mathematicians, who are presented with interesting real-world problems all the time. That’s why it’s especially nice when a more pure one pops up, and that’s exactly what happened to mathematician Jacob E Goodman, back in 1975.

Goodman was trying to sort a pile of towels without having anywhere else to put them – he could only lift the top part of the stack, flip it over, and put it back down onto the same stack. He realised that this was exactly the same question as the mathematical problem of sorting a sequence by repeatedly taking different lengths of the first section (a prefix) of the sequence and reversing its order.

Sorting algorithms – ways to systematically put a set of objects into an order – are a hugely interesting mathematical area, and this was a particular subset of the question – you’re only allowed to take the first, or top, section of the pile, and reverse it. Goodman wanted to share this interesting puzzle he’d found, and wrote about it for the American Mathematical Monthly.

He realised a more accessible presentation of the problem was as a stack of differently sized pancakes, sent out by a careless chef in random order, and a harried waiter (Goodman submitted the article under the pseudonym Harry Dweighter, LOL) who can only fix this by using his one spare hand to put a spatula under the top section of pancakes and flip the whole thing.

As a simple example, a stack of three pancakes could come out in any one of six possible orders – shown below. One of these is of course the correct order, and two of them can be fixed in exactly one flip. A further two of the combinations require at least two flips to fix, and there’s one tricky one which takes three flipping operations to repair.

This means the ‘pancake number’ for three pancakes is three – that’s the minimum number of flips needed to correct the most disordered stack of pancakes. (This is one of those tricky maximum minimums that are sometimes difficult to get your head around – it’s the largest number of flips you’ll need if you always do it in the most efficient possible way).

The question then became, what is the pancake number generally? It turns out this is a tricky question. We still don’t have a general formula for the pancake number for $n$ pancakes; we know the pancake number for various $n$, but the question remains unanswered for values of $n$ as small as 20 – the pancake number for 20 pancakes is still an open problem.

| Size of pancake stack | Pancake Number (OEIS A058986) |
| --- | --- |
| 1 | 0 |
| 2 | 1 |
| 3 | 3 |
| 4 | 4 |
| 5 | 5 |
| 6 | 7 |
| 7 | 8 |
| 8 | 9 |
| 9 | 10 |
| 10 | 11 |
| 11 | 13 |
| 12 | 14 |
| 13 | 15 |
| 14 | 16 |
| 15 | 17 |
| 16 | 18 |
| 17 | 19 |
| 18 | 20* |
| 19 | 22* |
| 20 | unknown |
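For small stacks you can compute these values yourself with nothing cleverer than a breadth-first search over all $n!$ stacks, measuring how far each one is from sorted under prefix reversals. (This brute force is my own illustration of the “maximum minimum” idea, not how the larger values in the table were found – those needed far more serious computation.)

```python
from collections import deque

def pancake_number(n):
    """BFS from the sorted stack; since each flip is its own inverse,
    distance from sorted equals distance to sorted."""
    start = tuple(range(1, n + 1))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        stack = queue.popleft()
        for k in range(2, n + 1):
            flipped = stack[:k][::-1] + stack[k:]  # reverse the top k pancakes
            if flipped not in dist:
                dist[flipped] = dist[stack] + 1
                queue.append(flipped)
    return max(dist.values())  # the worst stack's best-play flip count

print([pancake_number(n) for n in range(1, 8)])  # [0, 1, 3, 4, 5, 7, 8]
```

The output matches the first seven entries of the table; beyond $n = 13$ or so, the $n!$ state space makes this approach hopeless.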

Many people have piled in to the problem – from maths giants like John Conway, to Bill Gates (yes, actually that Bill Gates). So far, we’ve managed to establish an upper and lower bound on the pancake number – for $n$ pancakes, $p(n)$ is between $\frac{15n}{14}$ and $\frac{18n}{11}$ (the upper bound being originally proved as $\frac{5n+5}{3}$ by Bill Gates and Christos Papadimitriou, in 1979 (PDF), and improved to its present value in 2009 by researchers at the University of Texas).

There is a simple algorithm for sorting a given stack: put your spatula under the largest pancake that’s not in its proper place and flip it to the top; then, on your next flip, put it where it should be. Repeating these two moves sorts any stack in at most $2n-3$ flips – although the resulting sequence of flips is unlikely to be optimal.
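That greedy “biggest pancake first” strategy is simple enough to sketch in code (my own illustrative version – the function and variable names are made up):

```python
def flip(stack, k):
    """Reverse the top k pancakes (the prefix of length k)."""
    return stack[:k][::-1] + stack[k:]

def pancake_sort(stack):
    """Greedy sort; returns the sorted stack and the list of flip sizes used."""
    stack = list(stack)
    flips = []
    # place the biggest pancake, then the next biggest, and so on
    for size in range(len(stack), 1, -1):
        pos = stack.index(max(stack[:size]))  # largest not-yet-placed pancake
        if pos == size - 1:
            continue                 # already in place: no flips needed
        if pos != 0:
            flips.append(pos + 1)    # flip it to the top...
            stack = flip(stack, pos + 1)
        flips.append(size)           # ...then flip it down into place
        stack = flip(stack, size)
    return stack, flips

sorted_stack, flips = pancake_sort([3, 1, 4, 2])
print(sorted_stack, flips)
```

Each pancake takes at most two flips, except the last pair, which takes at most one – hence the $2n-3$ bound.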

In 2015, the problem of determining the minimal number of moves for a given stack was proven to be NP-hard by a team at the University of Nantes. As well as being a lovely open problem, it’s a great example of a food-based (or towel-based) mathematical problem which has real utility – it’s recently been shown to have applications in parallel processing, providing an effective routing algorithm between processors.

As usual, CLP has been inspired to make a web gadget, so you can play with pancakes:

It gets worse though. As well as being terribly lax about the order in which he plates up the pancakes, the chef has now also burnt every single one of them on one side. The waiter’s sorting task has become much more complex, as each pancake needs to be put in order, but with the burnt sides still facing down.

This problem is known as the Burnt Pancake Problem, and also has its own celebrity special guest – Futurama and Simpsons writer David S Cohen published a paper in 1992, On the Problems of Sorting Burnt Pancakes, with his Berkeley colleague Manuel Blum.

Cohen and Blum worked out that the Burnt Pancake Number has a lower bound of $\frac{3n}{2}$ and an upper bound of $2n-2$. They conjectured that the most difficult configuration to sort in this case is the one in which the pancakes are in the correct size order, but each of them is upside down, and proved that this can be sorted in $\frac{23n}{14}$ flips (later improved by the team in Texas to $\frac{3(n+1)}{2}$). Since any solution to a burnt stack is also a solution to the equivalently ordered unburnt stack, proving this conjecture would actually improve the bound on the unburnt problem.
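For tiny stacks, the burnt version is just as easy to explore by brute force. In this sketch (mine, not the authors’), a stack is a tuple of signed numbers – positive means burnt side down – and a flip reverses a prefix and turns it over, negating every pancake in it; for $n = 2$ the search confirms that the farthest stack from sorted is indeed the sorted-but-all-upside-down one:

```python
from collections import deque

def burnt_distances(n):
    """BFS from the solved stack (1, 2, ..., n), all burnt sides down.
    Flipping the top k reverses the prefix and negates each pancake in it."""
    start = tuple(range(1, n + 1))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        stack = queue.popleft()
        for k in range(1, n + 1):  # flipping a single pancake is allowed here
            flipped = tuple(-p for p in reversed(stack[:k])) + stack[k:]
            if flipped not in dist:
                dist[flipped] = dist[stack] + 1
                queue.append(flipped)
    return dist

dist = burnt_distances(2)
worst = max(dist, key=dist.get)
print(worst, dist[worst])  # (-1, -2) 4
```

There are $2^n \, n!$ burnt stacks, so this blows up even faster than the unburnt search.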

This problem clearly has something fascinating about it, given all these celebrity researchers. But it even turns out that humans are not the only creatures to have worked on the pancake problem. A team of (human) researchers in 2008 worked out a way to use bacteria, capable of cutting out and flipping around sections of DNA inside living cells, to compute answers to the burnt pancake problem.

The team used E. Coli bacteria, and DNA recombination, on sections of DNA that represented simple stacks of pancakes (burnt on one side, since DNA has an orientation). The sequences were designed so that bacteria which had correctly sorted their DNA into the right order would become antibiotic-resistant, and the huge numbers of bacteria present effectively performed massively parallel processing to crunch through the possibilities and find the most efficient sequence of flips. As a method of computation, it’s good, but it does produce antibiotic-resistant bacteria, so it’s probably not the best.

So, if you’re serving pancakes for dinner today, and you find they all come out different sizes (or if you burn all of them on the bottom), don’t despair – see if you can flip them all into a perfect stack, using mathematics.

Pancake Sorting, at Wolfram Mathworld

Flipping pancakes with Mathematics, Simon Singh, The Guardian (Nov 2013)

A slightly hypnotic visualisation of pancake sorting, on YouTube

The Pancake Problems, at Douglas B West’s index of open problems in graph theory and combinatorics

Before Christmas, the benign megasurveillance bods at GCHQ released a set of festive puzzles, in the form of a Christmas card and associated website. An initial nonogram puzzle led to a sequence of increasingly fiendish teasers, and solvers of the final set of puzzles were invited to email in their answers, with the correctest winning a fancy paperweight, signed book and, GCHQ were at pains to stress, not an Imitation-Game-style secret job offer.

GCHQ have now provided the full solution and vague biographical details of the three people closest to a complete set of correct answers. The Guardian have broken the spies’ code of silence, naming and indeed interviewing one of the winners, mathematician and former Fifteen To One winner David McBryan.

The puzzles are still available for anyone who wants to have a go. Aperiodical readers may be particularly interested in Part 4, which consists of three integer sequences (reviews forthcoming), but all parts of the competition include a fair smattering of maths.

A Christmas card with a cryptographic twist for charity – the starting point for the puzzles

Director GCHQ’s Christmas card puzzle – how did you do? – post announcing the results, including a link to the solutions PDF

For about 40 minutes of this week’s episode of Relatively Prime (Number 5 of 8, already? Good heavens!), Samuel Hansen looks like he’s managed to escape from his shameful, borderline criminal past in Las Vegas. But he’s pulled back in for one last job, which is a debacle, of course.

So, we start the second half of the season with a few non-mathematicians calling in to explain what mathematicians do all day: wear down enormous pencils, furrow brows, pick theorems off of the conveyor, and miaow. This isn’t all that far off of what I do, come to think of it. A strong opening, neatly setting the scene and asking the question the rest of the show is to answer.

The first interviewee is Professor Anna Haensch of Duquesne^{1} University in Pittsburgh, co-host of The Other Half podcast, who gives a description of the life of an academic. I only lasted a few years in academia, but it sounds like a pleasant enough job if (like Haensch) you’re cut out for it. She seems incredibly organised, and balances the various strands of her job in a way I’m slightly envious of.

Contrast that to the second interviewee, Kristin Lauter, Principal Researcher and Research Manager for the Cryptography Group at Microsoft Research. Lauter seems to be attempting to spin more plates than can physically be spun, rattling from meeting to conference to interview, managing interns and employees, overrunning, missing emails… oh, and running an important national organisation while shipping five to ten papers a year. That sounded like a stressful routine.

Two good interviews: Samuel stayed discreetly out of the way and jumped in only when needed (he could easily have edited out one of these moments, but I thought the comedy of it justified leaving it in.)

And then… a revival of Combinations & Permutations, even though (according to the blurb) no-one asked for it. I’m pulling that face, you know the one. The one where I don’t want to sound over-critical, but know I’m at best going to damn with faint praise. A discussion with C&P regulars Nathan Rowe, Brandon Metz and Sean Breckling, apparently contemporaries of Samuel’s at grad school he’s slightly lost touch with over the last few years.

In principle (there goes that face again) it’s a good idea: catch up with three early-career mathematicians with something in common and see where they’ve wound up. In practice, it descends into four buddies interrupting each other in gales of laughter about in-jokes.

It’s a pity, because what they’re up to is pretty interesting, from slot machine design to X-ray scanning; it’s just hard to separate the low-down from the high-jinks. As a rule of thumb, if you feel it necessary to apologise to previous guests for the segment to come, it might be worth considering whether it’s up to par.

Overall, a bit of a mixed bag that started with great promise but dropped away sharply for me.

**Listen** to Relatively Prime: Other Duties As Assigned at relprime.com. While you’re there, catch up on Season 1.

*Colin was given early access to Season 2 of Relatively Prime, in return for writing reviews of each episode. Furthermore, Samuel is Aperiodipal numero uno and most of us chipped some money into the Relatively Prime Kickstarter, too. Just so you know.*

- pronounced du-KAYN, and spelt at random

Puzzlebomb is a monthly puzzle compendium. Issue 50 of Puzzlebomb, for February 2016, can be found here:

Puzzlebomb – Issue 50 – February 2016

The solutions to Issue 50 will be posted at the same time as Issue 51.

Previous issues of Puzzlebomb, and their solutions, can be found at Puzzlebomb.co.uk.

On top of the usual disclosures, I should add that Dave Gale and I interviewed Samuel Hansen this week for our Wrong, But Useful podcast, which you might like to listen to for a deeper insight into Samuel’s brain.

During the conversation, he warned me I wouldn’t like Episode 4 of the new Relatively Prime, “Diegetic Plots, Chapter 1”. I don’t know if that was expectation management or an elaborate double bluff, but the joke’s on you, Hansen: I jolly well *did* like it, so there!

Now, I didn’t know what diegetic meant, even after it was explained to me. Luckily, Wikipedia has my back:

Diegesis/ˈdaɪəˈdʒiːsəs/ is a style of fiction storytelling that presents an interior view of a world in which:

1. details about the world itself and the experiences of its characters are revealed explicitly through narrative

2. the story is told or recounted, as opposed to shown or enacted.

Even after that, the geekiness of the joke still passes me by, but no matter! For a podcast subtitled “Stories from the Mathematical Domain”, it makes sense to explore the world of literature — or, more precisely, poetry and prose.

We start with a poem by Gizem Karaali entitled *The Colors of Math*. I’m no poetryologist and don’t have an intelligent comment to make about this, except that there’s not enough about anger in descriptions of maths. Frustration? Sure. Apathy? Whatever. But anger?

There’s plenty of anger in the second segment, my first “I need to hear this again and probably again” part of this season: a reading of Ted Chiang‘s *Division by Zero*, the story of a mathematician who disproves the consistency of arithmetic and her baffled husband, written in the style of a bogus proof.

Third up is a piece of Samuel’s own work, The Patent World Order, an enjoyable sidetrack into a dystopian future where any sufficiently hard maths is patented. (I can sense his smugness at the name of the legal firm, by the way. Sense it from here.)

We’ve then got two excerpts from Stuart Rojstaczer‘s *The Mathematician’s Shiva* and an interview with the author. The book tells the story of Rachela, the greatest mathematician of her generation; the interview tells the story of where the story comes from.

Lastly, JoAnne Growney reads her poem *A Taste of Mathematics*, about a mathematician contemplating the billionth digit of π and how maths relates to jalapeños. If there’s not enough anger in mathematical writing, there are *certainly* not enough jalapeños.

Taking on the (literal) literature of mathematics is a bit of a sidestep from Relatively Prime’s habitual domain, but I think it’s an intriguing one; while I’ve questioned some of Samuel’s editorial judgement elsewhere in this season, it’s only fair to give him a high five when I think he’s got it right. For me, Relatively Prime is at its best when it’s unpredictable; of all the adjectives I could think of to describe Samuel and his work, boring is a long way down the list.

**Listen** to Relatively Prime: Diegetic Plots, Chapter 1 at relprime.com. While you’re there, catch up on Season 1.

*Pythagoria* is a puzzle game for PCs. It’s the same idea as Naoki Inaba’s *Area Maze*: you’re shown a geometric construction, not drawn to scale, and you have to work out a missing length or an area.

Each puzzle is constructed so that it can be solved without ever dealing with fractions, though what exactly that means is up for debate. Whatever it means, it keeps you from breaking out pen and paper to solve a problem algebraically, when you know there should be a way of doing it in your head.

*Pythagoria* elaborates on the Area Maze concept a little, by adding circles and right-angled triangles to the mix, sometimes requiring a little bit of Pythagoras – hence the name. The “only whole numbers” rule is never broken though, so you can always work out that the circle segment you want is half of a rectangle, or that you only need to look at its radius, for example.

There are 60 puzzles to work through, which isn’t very many. The game only costs £1.59, but even for that money you’ll want a few hours of entertainment. I’ve just worked through the whole game in about three hours. That information’s useless to you without a way of judging how good at this sort of thing I am, so I’ll add another data point – Rock, Paper, Shotgun’s John Walker said in his review that he got stuck about halfway through, let down by what seems to be a lack of practice in geometry.

A puzzle game lives and dies by its interface: if anything’s unclear, it instantly becomes frustrating instead of challenging; and if you can’t easily get to the information you need, or entering your solution once you’ve found it is too difficult, the enjoyment is considerably reduced. For the most part, Pythagoria gives you everything you need very clearly. The only real problem I had was in levels where several areas are labelled with question marks, and it wasn’t clear whether they’re all the same and you need to find the area of one of them, or, as it turns out to be the case, you need to work out the total area.

The answer to each puzzle is a whole number between 1 and 9, so entering your solution involves just pressing one of the buttons at the bottom of the screen.

The controls could do with a bit more work – while mouse input works nicely, I’ve got a fancy touchscreen laptop and Pythagoria clearly hasn’t been written with that in mind. Buttons are hard to press – it sometimes takes a couple of taps to make something happen – and the pencil tool doesn’t draw smooth strokes, so eventually I gave up and copied the puzzles out on paper so I could annotate them more easily. The eraser button removes everything you’ve drawn, which is annoying if you just made a little slip-up.

Now that I’ve finished all the levels, I’d like to go back to some that I got more through fluke than understanding, but it’s tricky to find them – the level select screen just shows a grid of numbers. I’d prefer to see little thumbnails of the puzzles.

In the end, I don’t miss my £1.59. I’ve had a pleasant morning of puzzling, user interface problems notwithstanding. I reckon *Pythagoria* would do a lot better on mobile than as a desktop game.

*Review by non-mathmo John Walker at Rock, Paper, Shotgun.*

Alex Bellos posted an absolutely fiendish Area Maze puzzle on his blog last year.

There are free, official Area Maze apps for Android and iOS.

In a remarkable example of us being psychic (or, what’s also known as ‘a coincidence’), our recently posted introduction to the game of Go has been made more topical by actual Go-related news.

The game of Go has long been considered a difficult game for artificial intelligences to play – much more so than chess, which has plenty of computer players. A Wired article from 2014 describes Go as ‘the ancient game that computers still can’t win’. As well as having a much larger set of possible games ($10^{761}$, as opposed to $10^{120}$ in chess), Go also has highly complicated strategy, compared to its simple rules, and moves made early on in the game can result in important changes to the state of the board further down the line.

However, that doesn’t mean people haven’t been working on the problem, and Google has now announced that they’ve created an AI which can beat a high-ranking human Go player. They’ve used a combination of a tree search algorithm, and techniques involving neural networks – feeding the program large amounts of information about human players’ moves, then letting it play millions of virtual games to hone its skills. Using the Google Cloud Computing Platform for the huge amounts of processing power needed, they’ve managed to finally train a program so it’s good enough to beat a decent standard of human player.

The team responsible have published a paper in Nature (the PDF is available directly from Google though) which outlines their methods. The project, called AlphaGo, has an official page on Google Deepmind.

AlphaGo, on Google Deepmind

AlphaGo: using machine learning to master the ancient game of Go, on the Google Official Blog

Mastering the game of Go with deep neural networks and tree search, in Nature
