This article is the last of a series of 3 on the topic of Turing patterns:

  • In the first one, I quickly sketched out the main steps in solving the diffusion equation, and I touched on how to interpret its various parameters.
  • In the second one, I defined most of the tools that we are going to use in this one to gather all the conditions necessary for the appearance of Turing patterns in a dynamical system.
  • In this one, I will try to show how all the elements mentioned in the two previous articles come together when solving a system of reaction-diffusion equations.

This series of articles is dedicated to Carlos Gracia Lázaro and Mario Floria, who were both gentle, masterful and inspiring mentors to me. They pointed me towards the elegance of linear systems, and I grew to be a better scientist thanks to their guidance.


Turing patterns are a fascinating phenomenon that gives birth to a plethora of designs found in nature, like fish stripes or stationary structures in chemical reactions. As complex systems amateurs, it is very natural that we want to know more about them: where do they arise from? Can we figure out a set of conditions for their existence? etc.

After learning all about the mysteries of diffusion and taking our first shot at a system of reaction-diffusion equations (RD equations), we will now put on our science shoes and do the thing: figure out the conditions on a model's parameters for Turing patterns to emerge, and study the influence of these parameters on the resulting figure.

The battle plan

We have now seen all the ideas that we will use in our quest for Turing patterns. The time has come to put our battle plan together and figure out exactly what it is that we are about to do.

What are we after, really? We know that Turing patterns are a non-homogeneous stable solution of the RD equations, and that they arise from a perturbation in a homogeneous landscape. We want to know the conditions for such a perturbation to survive for long times (i.e. not to end up like heat on a metal plate in the first simulation of this article, but rather like the beautiful patterns in the Gray-Scott model in the second simulation).

  1. The first step will be to look for a solution of the RD equations around a homogeneous equilibrium: can we express \(u(X,t)\) and \(v(X,t)\) under a tractable form? We will see that indeed we can.
  2. Once we have the general solution, we will look into the conditions of stability of the perturbation, expressed as a Big Sum©: which periodicities do not decay to 0 for long times? Such periodicities are non-homogeneous stable solutions of the RD equations: since they are not flat, it means they draw a pattern, and since they’re stable, it means the pattern is here to stay!
  3. In doing so, we will gain even more insight into the conditions for a Turing pattern to emerge. At this point we will be ready for a full example.

I have tried to keep the math in check so far, but now there’s no sneaking around, no fancy intuitions, no jedi mind tricks that can keep us safe any longer. There is no way but through.

The quest for Turing patterns

Part I. Solving the RD equations: a real gun show

Our initial system of RD equations is:

\[\frac{du}{dt} = f(u, v) + c_u \Delta u\] \[\frac{dv}{dt} = g(u, v) + c_v \Delta v\]

A Turing pattern is a stable non-homogeneous equilibrium emerging from a stable homogeneous equilibrium due to a perturbation. Initially, our system is therefore completely uniform, and \(u\) and \(v\) are constant across the whole domain: they are equal to \(u^*\) and \(v^*\) everywhere. Since \(u\) and \(v\) are constant everywhere, there is no diffusion: we get to write \(\Delta u\vert_{u^*} = 0\) and \(\Delta v\vert_{v^*} = 0\). From this, it immediately follows that, since \(\frac{du}{dt}\vert_{u^*}=0\) and \(\frac{dv}{dt}\vert_{v^*}=0\) (because we are at an equilibrium), \(f(u^*, v^*)=0\) and \(g(u^*, v^*)=0\). No diffusion, no reaction, no evolution.

Now is the time to bring out the Big Guns© and make these two baddies talk. The first Big Gun© is called linearization (which uses another Big Gun© that you might have heard about if you ever took a calculus class: Taylor expansion).

It’s quite a straightforward tool. It states that for any function \(h\) that is differentiable at \(x\):

\[h(x + dx) \approx h(x) + \frac{dh}{dx} dx\]

(If you look at it for long enough, you will eventually realize that it’s simply the definition of the derivative in disguise.)
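This approximation is easy to check numerically. A minimal sketch (the function \(h = \sin\) and the step \(dx\) are arbitrary choices of mine):

```python
import math

def linear_estimate(h, dh, x, dx):
    """First-order Taylor estimate of h(x + dx) around x."""
    return h(x) + dh(x) * dx

# Arbitrary example: h = sin, whose derivative is cos
x, dx = 1.0, 0.01
exact = math.sin(x + dx)
approx = linear_estimate(math.sin, math.cos, x, dx)
print(exact, approx)  # the two values agree to about dx**2
```

Shrink \(dx\) and the error shrinks much faster (quadratically): that is what makes the linearization trustworthy for small perturbations.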

In the Taylor formula, \(dx\) is a small change in \(x\): we will call it a perturbation. We can apply the same principle to our set of equations: if we slightly modify the concentrations of \(u\) and \(v\) around the equilibrium, then we get to learn how our system behaves around that point (sort of like what we did with the Jacobian, except this time diffusion is taken into account). I will only detail the calculations for \(u\), but the process is exactly the same for \(v\). So let’s call \(u_p\) and \(v_p\) our perturbations; we have:

\[\frac{d (u + u_p)}{dt}\vert_{u^*} = f(u^* + u_p, v^* + v_p) + c_u \Delta (u + u_p)\vert_{u^*}\]

Using linearization on \(f\), and since for all functions \(a\) and \(b\) I have \(\frac{d(a + b)}{dx} = \frac{da}{dx} + \frac{db}{dx}\), the above equation becomes (all terms evaluated at the point \((u^*, v^*)\)):

\[\frac{du}{dt} + \frac{d u_p}{dt} = f(u^*, v^*) + c_u \Delta u + \frac{\partial{f}}{\partial{u_p}}u_p + \frac{\partial{f}}{\partial{v_p}} v_p + c_u \Delta u_p\]

Since \(\frac{du}{dt}\vert_{u^*} = 0\), \(f(u^*, v^*)=0\), and \(\Delta u \vert_{u^*} = 0\), the above equation becomes:

\[\frac{d u_p}{dt} = \frac{\partial{f}}{\partial{u_p}}u_p + \frac{\partial{f}}{\partial{v_p}}v_p + c_u \Delta u_p \qquad \text{- eq. 1}\]

This last equation looks strangely familiar: it still has a time derivative on one side, a Laplacian on the other, and a bunch of interaction terms… Gasp ! It’s another reaction-diffusion equation! Except that this time, it tells me about the propagation of \(u_p\), the perturbation that we brought. What we want to know is: will this perturbation disappear (and in doing so, bring us back to homogeneity and eternal flatness), or is it here to stay (and in this case, take us to a new equilibrium, a pattern)? For this we need to solve this new equation and get a proper expression for \(u(X, t)\) and \(v(X, t)\), there’s no escape.

For this, we’ll call a second Big Gun©: the Fourier Series and their variable separation trick, like we saw in the first half of the article. We will assume that the perturbation we just introduced can be expressed as a Big Sum© like so:

\[u_p(X, t) = a_0 + \sum_n \alpha_n(t) \omega_n(X) \qquad \text{- eq. 2.1}\] \[v_p(X, t) = b_0 + \sum_n \beta_n(t) \omega_n(X) \qquad \text{- eq. 2.2}\]

I choose to work with Dirichlet boundary conditions, so there will be only \(\sin\) in the sum (it would have been only \(\cos\) with Neumann conditions, but the rest of the process would have been the same):

\[\omega_n(X) = \sin(\frac{n \pi X}{L})\]

It’s a reasonable assumption to make as long as our systems are physically bounded in space and in value: a valley has a finite surface and can only contain a finite number of foxes and rabbits, and the same goes for a cell and its chemical concentrations. Recalling how we solved the heat equation, introducing eqs. 2.x in eq. 1 and focusing on a single value of \(n\) (i.e. a single wavelength) eventually results in:

\[\frac{d \alpha_n }{dt} = \frac{\partial{f}}{\partial{u_p}}\alpha_n + \frac{\partial{f}}{\partial{v_p}}\beta_n + c_u \lambda_n \alpha_n\] \[\frac{d \beta_n }{dt} = \frac{\partial{g}}{\partial{u_p}}\alpha_n + \frac{\partial{g}}{\partial{v_p}}\beta_n + c_v \lambda_n \beta_n\]

I removed the \((t)\) to make the notation more digestible, but we should not forget that \(\alpha_n\) and \(\beta_n\) are functions of time: they are precisely the functions that we want to know about. If they tend to 0 for long times, then the spatial variations associated with wavelength \(\frac{n \pi }{L}\) are not viable and will eventually disappear (remember how abrupt variations of temperature tend to disappear first because of diffusion: same idea, except now it’s diffusion + reaction). We are interested in the values of \(n\) for which \(\alpha_n\) and \(\beta_n\) do not tend to 0: these values of \(n\), the ones that survive indefinitely, are contained in the mathematical expressions of the stable patterns. Can we figure out what they are? We’re almost there!

Time to call for the third Big Gun©: the vectorization of the previous system of equations. It’s basically considering both the previous equations as a single equation that can be expressed in vector form. It looks like this:

\[\frac{d}{dt} \begin{pmatrix} \alpha_n \\ \beta_n \end{pmatrix} = \begin{pmatrix} \frac{\partial f}{\partial u_p} & \frac{\partial f}{\partial v_p} \\ \frac{\partial g}{\partial u_p} & \frac{\partial g}{\partial v_p} \end{pmatrix} \bullet \begin{pmatrix} \alpha_n \\ \beta_n \end{pmatrix} + \lambda_n \begin{pmatrix} c_u & 0 \\ 0 & c_v \end{pmatrix} \bullet \begin{pmatrix} \alpha_n \\ \beta_n \end{pmatrix}\]

Pretty cool, huh? You can verify, by expanding the dot products, that the first dimension of this vectorial equation is indeed the first equation of our system; same thing for the second dimension and the second equation. You might have recognized our old friend the Jacobian in the middle. By factorizing and setting \(D = \begin{pmatrix} c_u & 0 \\ 0 & c_v \end{pmatrix}\), the diffusion matrix, we get the following nicer expression:

\[\begin{pmatrix} \alpha_n\omega_n \\ \beta_n\omega_n \end{pmatrix} = \exp( [J + \lambda_n D]t ) \begin{pmatrix} \alpha_n^0\omega_n \\ \beta_n^0\omega_n \end{pmatrix}\]

Just like before, I removed the explicit arguments of the variables, but \(\alpha_n\) and \(\beta_n\) are functions of time, \(\omega_n\) is a function of space, and \(\alpha_n^0, \beta_n^0\) are constants that describe the amplitude of the perturbation at \(t=0\).

Phew! We’re there. Battle plan, step 1: check ! We see that the time evolution of each frequency component of \(u_p\) and \(v_p\) follows an exponential evolution, which depends on the corresponding Laplacian eigenvalue \(\lambda_n\). Our destiny is now in the hands of the content of the exponential: \(J + \lambda_n D\). Will this expression result in an exponential decay that will eventually kill the term associated with \(n\) in the Big Sum©? Or will it result in a non-zero limit for longer times, in which case \(\sin(\frac{n \pi X}{L})\) will live forever in the Big Sum©, and be a part of the glorious, ever-sought-after Turing pattern?
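Before moving on, we can watch this exponential at work numerically. The sketch below uses an illustrative Jacobian and diffusion matrix of my own choosing (toy numbers, not values from the article) and computes the largest real part of the eigenvalues of \(J + \lambda_n D\) for a few \(\lambda_n\): a positive value means the corresponding periodicity grows, a negative one means it decays.

```python
import numpy as np

# Illustrative Jacobian and diffusion matrix (my own toy numbers, not from
# the article): an activator-inhibitor pair with a much faster inhibitor.
J = np.array([[1.0, -2.0],
              [3.0, -4.0]])
D = np.diag([1.0, 30.0])  # c_u = 1, c_v = 30

def mode_growth_rate(lam):
    """Largest real part of the eigenvalues of J + lambda_n * D.
    Positive => the mode grows; negative => it decays."""
    return np.linalg.eigvals(J + lam * D).real.max()

for lam in [0.0, -0.05, -0.3, -2.0]:
    print(f"lambda_n = {lam:+.2f} -> growth rate {mode_growth_rate(lam):+.3f}")
```

Notice how only an intermediate band of \(\lambda_n\) grows: this is exactly the wavelength selection that ends up drawing the pattern.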

Part II. Stability Analysis: A balancing act

To help us answer this, we’ll invoke the Jacobian. Remember that Turing’s intuition is to start with a stable homogeneous equilibrium (so, no diffusion since it’s uniformly flat). Therefore around \((u^*,v^*)\), the conditions that we borrowed from the theory of linear dynamical systems are met, and since there is no diffusion, they simply translate to:

\[\mathrm{Tr}(J) < 0\] \[\det(J) > 0\]

Our hope is that, by adding a perturbation that can diffuse, we get to trigger an instability around \((u^*,v^*)\) that would take us to a different, non-homogeneous equilibrium (a Turing pattern!). So really, we are trying to show that by adding the diffusion term, now at least one of the following conditions is false:

\[\mathrm{Tr}(J + \lambda_n D) < 0\] \[\det(J + \lambda_n D) > 0\]

Since \(\lambda_n\) is negative or null (remember: \(\lambda_n = - (\frac{n \pi}{L})^2\)) and since the sum of the diagonal terms of the Jacobian is negative (by virtue of \(\mathrm{Tr}(J) < 0\)), the first condition is always true (dammit!). So we need the second condition to be false: we need \(\det(J + \lambda_n D) < 0\) for at least one \(n\)! And just like that: battle plan, step 2: check !

We now have a set of three conditions to be satisfied at once: two conditions on the Jacobian that have to be satisfied before the diffusion process even starts, and a third one (which must hold for at least one periodicity \(n\)) without which all the periodicities in the Big Sum© will eventually decay, leaving us with a homogeneous (= uniformly flat) solution:

\[\mathrm{Tr}(J) < 0\] \[\det(J) > 0\] \[\det(J + \lambda_n D) < 0\]

But let’s not stop here. Let’s have a deeper look at that third condition and what it means for the Jacobian: there is deep insight to be gained from it if we reformulate it a bit.

\[\mathrm{Tr}(J) < 0\] \[\det(J) > 0\] \[\frac{\partial{f}}{\partial{u}} c_v + \frac{\partial{g}}{\partial{v}} c_u > 2 \sqrt{c_u c_v \det(J)}\]
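Where does this reformulation come from? Viewing \(\det(J + \lambda D)\) as a polynomial in \(\lambda\) makes it transparent:

\[\det(J + \lambda D) = c_u c_v \lambda^2 + \left( \frac{\partial{f}}{\partial{u}} c_v + \frac{\partial{g}}{\partial{v}} c_u \right) \lambda + \det(J)\]

This is an upward-opening parabola in \(\lambda\) (since \(c_u c_v > 0\)), whose minimum sits at

\[\lambda_{min} = - \frac{\frac{\partial{f}}{\partial{u}} c_v + \frac{\partial{g}}{\partial{v}} c_u}{2 c_u c_v}, \qquad \det(J + \lambda_{min} D) = \det(J) - \frac{\left(\frac{\partial{f}}{\partial{u}} c_v + \frac{\partial{g}}{\partial{v}} c_u\right)^2}{4 c_u c_v}\]

For the determinant to dip below 0, and to do so at a negative \(\lambda\) (which is where the \(\lambda_n\) live), the coefficient \(\frac{\partial{f}}{\partial{u}} c_v + \frac{\partial{g}}{\partial{v}} c_u\) must be positive and large enough to beat \(\det(J)\): asking for the minimum above to be negative gives exactly the third condition.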

The first condition and the third condition, taken together, tell us that we need \(\frac{\partial{f}}{\partial{u}}\) and \(\frac{\partial{g}}{\partial{v}}\) to be of opposite signs: We need their sum to be negative so at least one has to be negative, but we also need their sum weighted by their diffusion coefficients to be positive, so at least one of them has to be positive.

The second condition on the other hand, implies that we need the off-diagonal terms of \(J\) to be of opposite signs as well: for \(\sqrt{\det(J)}\) to exist, we need \(\det(J)\) to be positive. \(\det(J)\) is the product of the diagonal terms minus the product of the off-diagonal terms, and we just saw that the product of the diagonal terms is negative. Therefore the only solution for \(\det(J)\) to be positive is for the off-diagonal terms to be of opposite signs.

So in order to see a Turing’s pattern appear, we need the signs of the Jacobian matrix to be one of 4 combinations:

\[\begin{pmatrix} + & - \\ + & -\end{pmatrix} \quad \begin{pmatrix} + & + \\ - & -\end{pmatrix} \quad \begin{pmatrix} - & + \\ - & +\end{pmatrix} \quad \begin{pmatrix} - & - \\ + & +\end{pmatrix}\]

It’s a very elegant result, which tells us that for a Turing pattern to emerge, we need one of the two quantities to be a self-activator (more of \(u\) means increased generation of \(u\), like the cheaters) and the other to be a self-inhibitor (more of \(v\) means decreased generation of \(v\), like the sheep). Independently, we need one quantity to activate the other (more \(u\) means an increased generation of \(v\), like healthy people and covid cases) and one quantity to inhibit the other (more \(v\) means slower generation of \(u\), like the chemical B that inhibits chemical A). All combinations of these two conditions are possible, each corresponding to a specific matrix above.
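The full recipe (trace, determinant and diffusion-weighted sum) fits in a few lines of code. Here is a minimal sketch of such a checker; the example Jacobian and diffusion coefficients are mine, picked to realize the first sign combination above:

```python
import numpy as np

def turing_conditions(J, cu, cv):
    """Evaluate the conditions on a 2x2 Jacobian J at the homogeneous
    equilibrium, with diffusion coefficients cu and cv."""
    tr, det = np.trace(J), np.linalg.det(J)
    fu, gv = J[0, 0], J[1, 1]
    return {
        "stable_without_diffusion": tr < 0 and det > 0,
        "unstable_with_diffusion": det > 0 and fu * cv + gv * cu > 2 * np.sqrt(cu * cv * det),
    }

# First sign combination (+ - / + -), values chosen by hand:
J = np.array([[2.0, -3.0],
              [4.0, -5.0]])
print(turing_conditions(J, cu=1.0, cv=20.0))  # both should be True here
```

With equal diffusion coefficients (`cu = cv = 1`) the very same Jacobian fails the third condition: differential diffusion is essential.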

Part III. Endgame: trials of the pattern

Take a deep breath, go have a walk, have a drink of water: we did it. We just built our first lightsaber: the trio of conditions for Turing patterns to emerge. Now it’s time to put it to the test and go all chop-chop with it. We will solve a system of RD equations with it: the Brusselator. Under this 80s sci-fi villain movie name we find a set of two relatively simple equations (using the nice set of parameters that Begoña Peña, a researcher at the University of Zaragoza, gives us in this paper: \(c_v=8, c_u=1, a=4.5\)):

\[\frac{du}{dt} = a - (b+1)u + u^2v + \Delta u\] \[\frac{dv}{dt} = bu - u^2v + 8 \Delta v\]

We will put our methodology to the test: by testing our three conditions on this system of equations around a homogeneous equilibrium, we will bring a perturbation and look for the periodicities \(n\) that can survive the diffusion process and establish a new non-homogeneous equilibrium (i.e a Turing pattern).

Step 1. Find a homogeneous equilibrium

Remember: homogeneous means flat everywhere, so no diffusion. And equilibrium means \(\frac{dv}{dt}=\frac{du}{dt}=0\). Therefore we are looking for the values \((u^*, v^*)\) that satisfy:

\[a - (b+1)u + u^2v = 0\] \[bu - u^2v = 0\]

I won’t develop the calculations here; it’s your lightsaber after all! We find \(u^* = a\) and \(v^* = \frac{b}{a}\).
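If you would rather not trust me (or your own algebra), a computer algebra system can confirm the result. A quick sketch with sympy, assuming positive parameters and concentrations:

```python
import sympy as sp

u, v, a, b = sp.symbols("u v a b", positive=True)
f = a - (b + 1) * u + u**2 * v
g = b * u - u**2 * v

# Solve f = g = 0 for the homogeneous equilibrium (u*, v*)
sols = sp.solve([f, g], [u, v], dict=True)
print(sols)  # the article's result: u* = a, v* = b/a
```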

Step 2. Check that this homogeneous equilibrium is stable

The Jacobian evaluated at \((u^*, v^*)\) is:

\[J|_{(u^*, v^*)} = \begin{pmatrix} b-1 & a^2 \\ -b & -a^2 \end{pmatrix}\]

Bingo! The diagonal terms (and the off-diagonal terms) are of opposite signs when \(b>1\): in that case we are looking at an inhibitor-activator system, and it is worth our time pursuing the calculations under this assumption. With \(b \in ]1, 1+a^2[\) (the upper bound keeps the trace negative), the first and second conditions on the Jacobian are met:

\[\mathrm{Tr}(J|_{(u^*, v^*)}) = b - 1 - a^2 < 0\] \[\det(J|_{(u^*, v^*)}) = a^2 > 0\]
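Here again, sympy can double-check the Jacobian, its trace and its determinant at \((u^*, v^*)\):

```python
import sympy as sp

u, v, a, b = sp.symbols("u v a b", positive=True)
f = a - (b + 1) * u + u**2 * v
g = b * u - u**2 * v

# Jacobian of (f, g) evaluated at the equilibrium (u*, v*) = (a, b/a)
J = sp.Matrix([f, g]).jacobian([u, v]).subs({u: a, v: b / a})
print(sp.simplify(J))          # Matrix([[b - 1, a**2], [-b, -a**2]])
print(sp.simplify(J.trace()))  # b - 1 - a**2
print(sp.simplify(J.det()))    # a**2
```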

Step 3. Bring a perturbation and check if and how it can break the homogeneous equilibrium.

So \((u^*, v^*)\) is a stable, homogeneous equilibrium. Small perturbations are meant to be absorbed by it and it should, in theory, be able to maintain itself indefinitely. That is, unless we behave very inappropriately and bring a perturbation containing a periodicity \(n\) that breaks this equilibrium (i.e. one that verifies):

\[\det(J + \lambda_n D) < 0\]

\[\Leftrightarrow b > (\frac{a}{2\sqrt{2}} + 1)^2\]

Let’s define the critical value of \(b\) as \(b_c = (\frac{a}{2\sqrt{2}} + 1)^2\).

Ah-ha ! So now we are left with a condition for the survival of periodicity \(n\) that only depends on \(b\), since \(a\) is fixed. OK, so now we know that:

  • if \(b \leq 1\), the system is not an activator/inhibitor system, it cannot sustain any Turing patterns.
  • if \(1 < b < b_c\), the system is an activator/inhibitor system around \((u^*,v^*)\), but if a perturbation is brought, it will not result in a Turing pattern.
  • if \(b = b_c\), then the perturbation associated with \(\lambda_{min} = - \frac{(8b-a^2-8)}{16}\) will be allowed to propagate by diffusion and survive, thus forming a Turing pattern.
  • if \(b > b_c\), then more and more values of \(\lambda_n\) can propagate and last indefinitely in the system, generating more and more intricate patterns.
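A small numerical scan makes these regimes concrete: the sketch below computes \(b_c\) for \(a=4.5\) and lists the periodicities \(n\) whose \(\lambda_n\) fall in the unstable band for \(b\) slightly above \(b_c\). The domain size \(L\) is a choice of mine, not a value from the article.

```python
import numpy as np

a, cu, cv = 4.5, 1.0, 8.0
b_c = (a / (2 * np.sqrt(2)) + 1) ** 2
print(f"b_c = {b_c:.3f}")

def det_perturbed(b, lam):
    """det(J + lambda_n * D) for the Brusselator around (u*, v*)."""
    J = np.array([[b - 1, a**2], [-b, -a**2]])
    D = np.diag([cu, cv])
    return np.linalg.det(J + lam * D)

# Laplacian eigenvalues lambda_n = -(n*pi/L)^2 on a domain of size L;
# a negative determinant means periodicity n survives and grows.
L, b = 50.0, 1.1 * b_c
unstable = [n for n in range(1, 100)
            if det_perturbed(b, -(n * np.pi / L) ** 2) < 0]
print("unstable periodicities n:", unstable)
```

Lowering \(b\) below \(b_c\) empties the list; raising it further widens the band of surviving periodicities, exactly as described in the bullet points above.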

Step 4. Rejoice !

Now that we have worked hard, we deserve that kick of serotonin humans get when they understand something. For this, we will look at the behaviour of the Brusselator model in action, and see that all our hard work was not in vain, since we can explain what is happening using our newfound knowledge. Here I simulate the Brusselator with the parameter values that we just used. Each cell is coloured as a function of its concentration: pure blue means only chemical \(u\), pure yellow means only chemical \(v\), and in-between concentrations are represented by in-between colours. If you change the value of \(b\), you need to reset the simulation (either of the two buttons is fine) to see the effects in action.

Current Border Conditions

Dirichlet: the world goes beyond the boundaries of the window
Neumann: the world stops at the boundaries of the window
Periodic: simulates a torus (left/top border = right/bottom border)

\(b\) Slider

Tune the value of \(b\) to allow Turing patterns to emerge.
Ranges from \(0.8b_c\) to \(2b_c\).

Play/Pause

Pause or resume the animation

Homogeneous reset

Reset the grid to a homogeneous stable equilibrium.

Random reset

Reset the grid to random values of \(u\) and \(v\) centered around the homogeneous stable equilibrium.
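If you cannot run the interactive demo, here is a minimal sketch of the same kind of simulation: an explicit Euler scheme for the Brusselator with periodic boundaries. The grid size, time step and random seed are my own choices, not those of the demo above.

```python
import numpy as np

# Parameters from the article: c_u = 1, c_v = 8, a = 4.5; b chosen above b_c
a, b, cu, cv = 4.5, 8.0, 1.0, 8.0
N, dx, dt, steps = 100, 1.0, 0.005, 20000

def laplacian(Z):
    """5-point Laplacian with periodic boundaries (the torus option)."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z) / dx**2

# Homogeneous equilibrium plus a small random perturbation ("random reset")
rng = np.random.default_rng(0)
u = a + 0.1 * rng.standard_normal((N, N))
v = b / a + 0.1 * rng.standard_normal((N, N))

for _ in range(steps):
    reaction_u = a - (b + 1) * u + u**2 * v
    reaction_v = b * u - u**2 * v
    u = u + dt * (reaction_u + cu * laplacian(u))
    v = v + dt * (reaction_v + cv * laplacian(v))

# If a Turing pattern formed, u is no longer flat across the grid
print("std(u) after t =", steps * dt, "->", float(u.std()))
```

Plotting `u` (for instance with matplotlib's `imshow`) should reveal the spatial structure; the time step is kept below the diffusive stability limit \(dx^2 / (4 c_v)\).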

Going home: the aftermath

To wrap it up:

  • We started with an equation describing a simple diffusion phenomenon;
  • We solved it using Fourier series, a tool introduced by Fourier in 1807;
  • We introduced an interaction term, describing the reaction between two quantities;
  • We showed, using linearization around a stable homogeneous equilibrium, under which conditions this equilibrium can be broken;
  • We expressed such conditions in the case of the Brusselator and examined the effect of parameter \(b\) on them;

And now we’re here. It’s important to realize that what we have done in the past two articles is no small feat; in the early 50s, when Turing introduced the idea that a homogeneous equilibrium could be broken by adding a diffusing perturbation, his idea flew largely under the radar. Nowadays, linear systems, instabilities and symmetry breaking are hotter topics than they were back then, and a whole branch of scientific work forked from Turing’s paper and intuition. There is no doubt that if Alan Turing had lived longer than he did, he would have made even further significant contributions to this poetic piece of work.

Although biologists still struggle to prove it, Turing patterns could be behind many more natural designs than we already know; unfortunately, experiments are hard to set up, and so far we must rely on the resemblance between simulated models and motifs observed on plants and animals. Turing’s original work referred to the breaking of symmetry in an embryo: he argued that it is through a reaction-diffusion mechanism that a symmetrical ball of cells evolves into the distinctive shape shared by many vertebrates on Earth, with four limbs, a tail and a head. In a way, we could all be Turing patterns !

I’ll see you in the sky above
In the tall grass in the ones I love
You’re gonna make me lonesome when you go

Bob Dylan - You’re Gonna Make Me Lonesome When You Go