They say that $\pi$ is everywhere. (They say that about $e$ too, but I'm not buying it.) I thought it would be interesting to discuss the most unexpected place I'm aware it's ever appeared.
This is the Mandelbrot Set:
(All the images in this post are interactive, so you can zoom in on areas of interest or zoom out to get context from a close-up. You can also view the Mandelbrot Set in fullscreen or read the source code.)
It’s formed by taking a point $c$ in the complex plane, and repeatedly iterating $z_{n+1} = z_n^2 + c$ starting from $z_0 = 0$. The black area — the actual Mandelbrot Set — is the set of points $c$ for which the series stays bounded. The other points are coloured by how quickly the series diverges. Conventionally, it’s the number of iterations before $|z_n| > 2$, because $2$ is roughly infinity, right? For a fuller discussion of the algorithm, see Coulton et al’s definitive work on the subject.
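In code, the escape-time rule reads something like this minimal Python sketch (the function name and iteration cap are my choices; the bailout of 2 is the conventional one):

```python
def escape_time(c, max_iter=1000):
    """Iterate z -> z*z + c from z = 0 and return how many iterations
    pass before |z| exceeds 2, or max_iter if it never does
    (in which case c is probably in the set)."""
    z = 0
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

print(escape_time(0))    # never escapes: 1000
print(escape_time(1))    # escapes quickly: 3
print(escape_time(-1))   # cycles -1, 0, -1, ...: 1000
```

The `max_iter` cap is what makes the black region computable at all: we can never prove divergence won't happen later, so we just give up after enough iterations.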
In 1991 Dave Boll was investigating whether the ‘neck’ of the Mandelbrot Set (see below) was infinitely thin.
While doing so, he found that approaching the point $-\frac{3}{4}$ from the side — which is to say from above or below in the picture above, taking $c = -\frac{3}{4} + \varepsilon i$ for ever-smaller $\varepsilon$ — produces this interesting data: the number of iterations $N$ before escape satisfies $N \times \varepsilon \approx 3.14159\ldots$
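The experiment is easy to repeat: pinch $c$ ever closer to $-\frac{3}{4}$ from above and count iterations. A sketch (the helper name and the particular $\varepsilon$ values are my choices; exact counts can shift by one depending on how you count, but the product behaves the same):

```python
def iterations_to_escape(c, bailout=2.0, max_iter=10**7):
    """Count iterations of z -> z*z + c (from z = 0) until |z| > bailout."""
    z = 0
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > bailout:
            return n
    return max_iter

for eps in (0.1, 0.01, 0.001, 0.0001):
    n = iterations_to_escape(complex(-0.75, eps))
    print(eps, n, n * eps)   # the last column creeps towards 3.14159...
```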
This seems to be building digits of $\pi$. Having found an interesting phenomenon around the neck, he did what any normal-minded person would do and applied the same technique to the bum of the Mandelbrot Set:
This time he had to approach the point $\frac{1}{4}$ from a direction we will politely refer to as “from the right”, taking $c = \frac{1}{4} + \varepsilon$: now it’s $N \times \sqrt{\varepsilon}$ that comes out as $3.14159\ldots$
That looks like digits of $\pi$ again.
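The same experiment at the bum, only now the magic product involves a square root: the counts grow like $\pi/\sqrt{\varepsilon}$ rather than $\pi/\varepsilon$. A sketch along the same lines (again, the helper and the $\varepsilon$ values are my choices):

```python
def iterations_to_escape(c, bailout=2.0, max_iter=10**7):
    """Count iterations of z -> z*z + c (from z = 0) until |z| > bailout."""
    z = 0.0
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > bailout:
            return n
    return max_iter

# Approach c = 1/4 "from the right", staying on the real axis.
for eps in (0.01, 0.0001, 0.000001):
    n = iterations_to_escape(0.25 + eps)
    print(eps, n, n * eps ** 0.5)   # the last column creeps towards 3.14159...
```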
What happens on the real line stays on the real line — you can apply Mandelbrot’s iteration to a real $c$ as many times as you like and you’ll never get a complex $z$ — so the bum is easier to analyse. (This is where the term ‘analyse’ comes from.) Gerald Edgar has done so for us. He observes that if $c$ is just over $0.25$, then $z_n$ increases very slowly until it reaches $\frac{1}{2}$, after which it rapidly diverges. That means it doesn’t matter much whether we draw our cutoff at $2$ or $1000$ or whatever — the number of iterations will be pretty close to the one where $z_n$ blows up to infinity. That makes sense, because the cutoff of $2$ was arbitrary, and we shouldn’t get $\pi$ from an arbitrary process.
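Edgar’s point about the cutoff is easy to check numerically: once the orbit finally clears the slow region near $\frac{1}{2}$, it reaches any cutoff you like almost immediately, so the count barely depends on where you draw the line. A sketch (the three bailouts are arbitrary choices of mine):

```python
def iterations_to_escape(c, bailout, max_iter=10**7):
    """Count iterations of z -> z*z + c (from z = 0) until z > bailout."""
    z = 0.0
    for n in range(1, max_iter + 1):
        z = z * z + c
        if z > bailout:
            return n
    return max_iter

c = 0.25 + 1e-6
# Thousands of iterations are spent crawling near 1/2; the dash from
# there to any large cutoff takes only a handful more.
for bailout in (2, 1000, 10**9):
    print(bailout, iterations_to_escape(c, bailout))
```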
To find the magical $N$ that builds $\pi$, then, he defines $x_N = z_N - \frac{1}{2}$ and sets about finding a zero. It’s obvious that

$$z_{N+1} = z_N^2 + c$$
And since $c = \frac{1}{4} + \varepsilon$, that becomes

$$z_{N+1} = z_N^2 + \frac{1}{4} + \varepsilon$$
Which, if we substitute $z_N = x_N + \frac{1}{2}$ back in, gives us

$$x_{N+1} + \frac{1}{2} = \left(x_N + \frac{1}{2}\right)^2 + \frac{1}{4} + \varepsilon$$
Which we can rewrite as

$$x_{N+1} = x_N + x_N^2 + \varepsilon$$
where $\varepsilon$ is the distance we are from the point we shall politely call “$\frac{1}{4}$”.
Since $x$ is increasing slowly in this region, we can say without offending too many pure mathematicians that

$$\frac{\mathrm{d}x}{\mathrm{d}N} = x^2 + \varepsilon$$
That’s a differential equation, and we know how to solve those. (We use Wolfram|Alpha.)

$$x(N) = \sqrt{\varepsilon}\,\tan\!\left(\sqrt{\varepsilon}\,(N + C)\right) = \sqrt{\varepsilon}\,\frac{\sin\!\left(\sqrt{\varepsilon}\,(N + C)\right)}{\cos\!\left(\sqrt{\varepsilon}\,(N + C)\right)}$$
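For anyone who’d rather not outsource the calculus, separating variables gets there by hand (a standard arctangent integral; $C$ absorbs the constant of integration):

```latex
\frac{\mathrm{d}x}{x^2 + \varepsilon} = \mathrm{d}N
\quad\Longrightarrow\quad
\frac{1}{\sqrt{\varepsilon}} \arctan\!\left(\frac{x}{\sqrt{\varepsilon}}\right) = N + C
\quad\Longrightarrow\quad
x(N) = \sqrt{\varepsilon}\,\tan\!\left(\sqrt{\varepsilon}\,(N + C)\right)
```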
The cosine on the bottom is zero — which is to say that $x$ starts to grow uncontrollably — when $\sqrt{\varepsilon}\,(N + C) = \frac{\pi}{2}$. Our starting point $z_0 = 0$ gives $x_0 = -\frac{1}{2}$, which is huge compared to $\sqrt{\varepsilon}$, so we begin essentially at the opposite asymptote, $\sqrt{\varepsilon}\,C \approx -\frac{\pi}{2}$, and the argument has a whole $\pi$ to sweep through before the blow-up. That happens when $N = \pi/\sqrt{\varepsilon}$, which is to say when $N\sqrt{\varepsilon} = \pi$ — exactly what we saw earlier.
It will never build $\pi$ exactly, because we made some approximations, and computing the iterations without losing any precision is a huge task even for modern computers, but it’s nice that there’s a theoretical justification for this quite unexpected apparition of $\pi$.
This is incredibly awesome. I want to thank you, it’s a very clear explanation. Would it be the same if we used the point z0=(-0.75,X) which Dave Boll had used? I mean, we would have a different differential equation but it’s basically the same thing, right?
I wrote a paper that proves David Boll’s discovery. Check out https://www.worldscientific.com/doi/abs/10.1142/S0218348X01000828