They say that $\pi$ is everywhere. (They say that about $\phi$ too, but I’m not buying it.) I thought it would be interesting to discuss the most unexpected place I’m aware it’s ever appeared.

This is the Mandelbrot Set:

(All the images in this post are interactive, so you can zoom in on areas of interest or zoom out to get context from a close-up. You can also view the Mandelbrot Set in fullscreen or read the source code.)

It’s formed by taking a point $z_0$ in the complex plane and repeatedly iterating $z_{n+1} = z_n^2 + z_0$. The black area — the actual Mandelbrot Set — is the set of points $z_0$ for which the sequence stays bounded. The other points are coloured by how quickly the sequence diverges: conventionally, by the number of iterations before $|z_n| \geq 2$, because $2$ is roughly infinity, right? (It’s also the honest choice: once $|z_n|$ exceeds $2$, the sequence is guaranteed to head off to infinity.) For a fuller discussion of the algorithm, see Coulton et al’s definitive work on the subject.
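In code, that escape-time colouring looks something like this (a minimal Python sketch; the function name and the iteration cap of $1000$ are my own choices):

```python
def escape_count(z0, max_iter=1000):
    """Count iterations of z -> z**2 + z0 before |z| reaches 2.

    Returns max_iter if the orbit stays small that long, which is how
    points are (approximately) classified as members of the set.
    """
    z = z0
    for n in range(max_iter):
        if abs(z) >= 2:
            return n
        z = z * z + z0
    return max_iter
```

Points inside the set exhaust the cap; points outside return the count used for colouring.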

In 1991 Dave Boll was investigating whether the ‘neck’ of the Mandelbrot Set (see below) was infinitely thin.

While doing so, he found that approaching the point from the side — which is to say from above or below in the picture above — produces this interesting data:

$z_0$ | Number of iterations |
---|---|
$-0.75 + 0.1i$ | $33$ |
$-0.75 + 0.01i$ | $315$ |
$-0.75 + 0.001i$ | $3143$ |
$-0.75 + 0.0001i$ | $31417$ |
$-0.75 + 0.00001i$ | $314160$ |
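Those counts are easy to reproduce (a Python sketch; the exact numbers can shift by one depending on where you start counting):

```python
def neck_count(eps, max_iter=10**7):
    """Iterations of z -> z**2 + z0, for z0 = -0.75 + eps*i, until |z| >= 2."""
    z0 = complex(-0.75, eps)
    z, n = z0, 0
    while abs(z) < 2 and n < max_iter:
        z = z * z + z0
        n += 1
    return n

# Approach the neck from above, shrinking the imaginary offset each time.
for k in range(1, 6):
    print(10.0 ** -k, neck_count(10.0 ** -k))
```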

This seems to be building digits of $\pi$. Having found an interesting phenomenon around the neck, he did what any normal-minded person would do and applied the same technique to the bum of the Mandelbrot Set:

This time he had to approach from a direction we will politely refer to as “from the right”:

$z_0$ | Number of iterations |
---|---|
$0.26$ | $30$ |
$0.2501$ | $312$ |
$0.250001$ | $3140$ |
$0.25000001$ | $31414$ |

That looks like digits of $\pi$ again.

What happens on the real line stays on the real line — you can apply Mandelbrot’s iteration to a real $z_0$ as many times as you like and you’ll never get a complex $z_n$ — so the bum is easier to analyse. (This is where the term ‘analyse’ comes from.) Gerald Edgar has done so for us. He observes that if $z_0$ is just over $0.25$, then $z_n$ creeps through a bottleneck around $\frac{1}{2}$ — almost all of the iterations are spent with $z_n$ close to $\frac{1}{2}$ — and once it’s clear of that bottleneck, it rapidly diverges. That means it doesn’t matter much whether we draw our cutoff at $z_n \geq 2$ or $\geq 5$ or whatever: the number of iterations will be pretty close to the number spent squeezing past $\frac{1}{2}$. That makes sense, because the cutoff of $2$ was arbitrary, and we shouldn’t get $\pi$ from an arbitrary process.
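Edgar’s observation is easy to check numerically — wildly different cutoffs give nearly the same count, because once $z_n$ gets clear of $\frac{1}{2}$ it blows up in a handful of steps (a Python sketch; the bailout values are arbitrary):

```python
def count_until(z0, bailout):
    """Iterate z -> z**2 + z0 on the real line until z reaches the bailout."""
    z, n = z0, 0
    while z < bailout:
        z = z * z + z0
        n += 1
    return n

# For z0 just over 1/4, very different cutoffs give nearly the same count.
for bailout in (2.0, 5.0, 100.0):
    print(bailout, count_until(0.2501, bailout))
```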

To find the magical $n$ that builds $\pi$, then, he defines $y_n = z_n - \frac{1}{2}$ and sets about finding a zero. It’s obvious that

$$ y_{n+1} = z_{n+1} - \frac{1}{2} $$

And since $z_{n+1} = z_n^2 + z_0$, that becomes

$$ y_{n+1} = z_n^2 + z_0 - \frac{1}{2} $$

Which, if we substitute $y_n = z_n – \frac{1}{2}$ back in, gives us

$$ y_{n+1} = \left(y_n + \frac{1}{2}\right)^2 + z_0 - \frac{1}{2} = \left(y_n^2 + y_n + \frac{1}{4}\right) + z_0 - \frac{1}{2} $$

Which we can rewrite as

$$ y_{n+1} = \left(y_n^2 + y_n + \frac{1}{4}\right) + \left(\frac{1}{4} + \epsilon\right) - \frac{1}{2} = y_n^2 + y_n + \epsilon $$

where $\epsilon = z_0 - \frac{1}{4}$ is the distance we are from the point we shall politely call “$z_0 = 0.25$”.

Since $z_n$ is increasing slowly in this region, we can say without offending too many pure mathematicians that

$$ \frac{\text{d}y_n}{\text{d}n} \approx y_{n+1} - y_n = y_n^2 + \epsilon $$

That’s a differential equation, and we know how to solve those. (We use Wolfram|Alpha.)

$$ y_n = \sqrt{\epsilon} \tan \left(n \sqrt{\epsilon} + c \right) $$
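If you’d rather not outsource the calculus, the solution is a one-line separation of variables, using the standard integral $\int \frac{\text{d}y}{y^2 + \epsilon} = \frac{1}{\sqrt{\epsilon}} \arctan{\frac{y}{\sqrt{\epsilon}}} + C$:

$$ \int \frac{\text{d}y_n}{y_n^2 + \epsilon} = \int \text{d}n \quad\implies\quad \frac{1}{\sqrt{\epsilon}} \arctan{\left(\frac{y_n}{\sqrt{\epsilon}}\right)} = n + C $$

and rearranging, with $c = C\sqrt{\epsilon}$, recovers the tangent.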

This blows up — which is to say that $z_n$ starts to grow uncontrollably — when the argument of the tangent hits $\frac{\pi}{2}$. Since $y_0 \approx -\frac{1}{4}$ is enormous and negative compared with $\sqrt{\epsilon}$, the constant $c$ is very nearly $-\frac{\pi}{2}$, so the argument has to sweep through a full interval of length $\pi$ before that happens: the escape time $n$ satisfies $n\sqrt{\epsilon} \approx \pi$, which is exactly what we saw earlier:

$\epsilon$ | $n\sqrt{\epsilon}$ |
---|---|
$0.01$ | $3.0$ |
$0.0001$ | $3.12$ |
$0.000001$ | $3.140$ |
$0.00000001$ | $3.1414$ |
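That last table takes only a few lines to extend (a Python sketch; each extra pair of decimal places in $\epsilon$ multiplies the running time by ten, since the count grows like $\pi/\sqrt{\epsilon}$):

```python
import math

def cusp_count(eps, max_iter=10**8):
    """Iterations of z -> z**2 + z0, for real z0 = 1/4 + eps, until z >= 2."""
    z0 = 0.25 + eps
    z, n = z0, 0
    while z < 2 and n < max_iter:
        z = z * z + z0
        n += 1
    return n

for k in (2, 4, 6, 8, 10):
    eps = 10.0 ** -k
    print(f"eps = 1e-{k}: n*sqrt(eps) = {cusp_count(eps) * math.sqrt(eps):.5f}")
```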

It will never build $\pi$ exactly, because we made some approximations and computing the iterations without losing any precision is a huge task even for modern computers, but it’s nice that there’s a theoretical justification for this quite unexpected apparition of $\pi$.
