Posted on December 13, 2019

In this post, I am going to give several examples of Petri nets, and also demo a new tool for investigating Petri nets that I have developed.

Here is the simplest non-trivial Petri net:

We have one type of molecule (as a side note, “type of molecule” is a bit of a mouthful, so we will adopt the standard convention of calling these “species”) and one reaction. The most famous example of this sort of thing is uranium decay. At any given moment, each atom of uranium has a fixed probability of decaying into something else (again, I don’t know chemistry). The important part is that it’s no longer uranium; we don’t care what it turns into.

I remember when I first learned about uranium decay I was mystified by the concept of “half-life”. No matter how large your pile of uranium, the amount of time it takes for the pile to halve in size is a fixed constant. This seemed like an incredible coincidence to me. Why shouldn’t larger piles halve slower or faster than smaller piles? If we run the Petri net model, we can figure this out.

Suppose that an atom has a probability 1/2 of decaying in t seconds. Then we can get a rough idea of the behavior of N atoms in the following way. Start with N coins. Flip all of them and discard any that come up tails. Then repeat until all of the coins are discarded. Each round of the game simulates t seconds of a pile of uranium. We can calculate on average how much uranium is left after round k by multiplying the probability that a given coin is left after round k by N. This probability is the probability that k consecutive coin flips all come up heads, which is 1/2^k. Therefore, by the law of large numbers there will be about N / 2^k coins left after round k. This models the fact that there are on average N/2^k atoms of uranium left after kt seconds.
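To make this concrete, here is a minimal Python sketch of the coin-flipping game (the function name and parameters are my own, just for illustration):

```python
import random

def coin_flip_decay(n_atoms, rounds, p_decay=0.5, seed=0):
    """Simulate the coin game: each round, every surviving atom
    independently comes up tails (decays) with probability p_decay."""
    rng = random.Random(seed)
    counts = [n_atoms]
    alive = n_atoms
    for _ in range(rounds):
        # Keep each surviving atom with probability 1 - p_decay.
        alive = sum(1 for _ in range(alive) if rng.random() >= p_decay)
        counts.append(alive)
    return counts

counts = coin_flip_decay(100_000, 5)
# By the law of large numbers, counts[k] should be close to N / 2^k.
```

Whatever size pile you start with, each round knocks it down by about half — which is exactly the half-life behavior.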

Now we can see that the fact that larger piles and smaller piles take the same time to halve in size is not an “incredible coincidence”; it is an elementary consequence of the law of large numbers and the fact that the uranium atoms don’t interact with each other.

Classically, this type of process is described by the differential equation \frac{dP}{dt} = -rP. This equation says that the rate at which the population decreases is proportional to the current size of the population. The solution is P = P_0e^{-rt}, where P_0 is the population at time t=0. Now we get to see the fun part of this article: I wrote a program that numerically solves the “master equations” for any Petri net and allows you to play with different rates and initial populations. Here is the instantiation for this particular Petri net.
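The applet solves the master equations; for readers who would rather see a couple of lines of code, here is a forward-Euler sketch of just the rate equation (my own stand-in, not the applet's solver):

```python
import math

def euler_decay(p0, r, t_end, dt=1e-4):
    """Forward-Euler integration of dP/dt = -r * P."""
    p = p0
    for _ in range(int(round(t_end / dt))):
        p += dt * (-r * p)
    return p

p_numeric = euler_decay(1000.0, 0.5, t_end=2.0)
p_exact = 1000.0 * math.exp(-0.5 * 2.0)  # the closed form P_0 e^{-rt}
# The two agree up to the O(dt) discretization error.
```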

This solution also has half-lives: if t_h is the time required for the population to halve, then P(t+t_h)=e^{-rt_h}P(t)=\frac{1}{2}P(t) for every t. However, here the population is a continuous value that evolves deterministically, whereas in the Petri net model, the population is a discrete value that evolves stochastically.

We can get from the Petri net model to the continuous model by letting P(t) = E[A(t)], where A(t) is a random variable that represents the number of atoms at time t. This shows that the Petri net model knows more about the behavior of the system than the continuous model does. Specifically, the Petri net model knows that eventually the population will be exactly 0, whereas the continuous model always gives a population that is greater than 0. Using the Petri net model we can compute not only the expected time it takes for a population to halve in size, but also the variance of this time. For large populations, this variance might be negligible, but for small populations there could be nontrivial variance, and this might be important in applications.
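Here is a sketch of what that variance looks like, using a Gillespie-style stochastic simulation of the decay net (the setup and numbers are my own illustration):

```python
import math
import random

def halving_time(n0, rate, rng):
    """Simulate A -> (nothing) exactly: with n atoms each decaying at
    `rate`, the next decay arrives after an Exponential(n * rate) wait.
    Returns the time at which the population first reaches n0 // 2."""
    n, t = n0, 0.0
    while n > n0 // 2:
        t += rng.expovariate(n * rate)
        n -= 1
    return t

rng = random.Random(0)
times = [halving_time(1000, 1.0, rng) for _ in range(200)]
mean = sum(times) / len(times)
var = sum((x - mean) ** 2 for x in times) / len(times)
# The continuous model predicts a halving time of exactly ln(2) / rate;
# the stochastic model agrees on average, but also has a spread.
```

With 1000 atoms the spread is already tiny; rerun with n0 = 10 and the variance becomes quite noticeable.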

However, we have not formally defined the random variables that characterize the Petri net models. It’s not too hard to do this, but I would like to give more examples before diving into the formalism.

Decay happens when each molecule has a fixed chance of dying over each time interval. On the other hand, growth happens when each molecule has a fixed chance of creating a new molecule over each time interval. In the land of Petri nets, this is a reaction where one molecule goes in and two molecules come out.

This model describes, for instance, the behavior of a Petri dish full of bacteria that reproduce by dividing in half. Now, as the population of A becomes larger, at any given moment it is more likely for one of the elements of A to split into two. Therefore, the larger the population of A, the faster the population grows. If we take the expectation of the population, we can write down a differential equation for this behavior as before. In this case, the rate of growth for A is proportional to the size of A, and the equation is therefore \frac{dP}{dt} = rP, with solution P(t) = P_0e^{rt}.
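Here is the same style of exact stochastic simulation for the growth reaction, one molecule in and two out (again my own illustrative code, with arbitrary rates):

```python
import math
import random

def simulate_growth(n0, rate, t_end, rng):
    """Simulate A -> 2A exactly: with n cells, the next division
    arrives after an Exponential(n * rate) waiting time."""
    n, t = n0, 0.0
    while True:
        wait = rng.expovariate(n * rate)
        if t + wait > t_end:
            return n
        t += wait
        n += 1

rng = random.Random(1)
runs = [simulate_growth(50, 1.0, t_end=2.0, rng=rng) for _ in range(300)]
avg = sum(runs) / len(runs)
# The average should track the closed form P_0 e^{rt} = 50 e^2.
```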

This equation is the time reversed version of the previous equation! However, the Petri net for growth does not look like the opposite of the previous Petri net. This would be

This describes linear growth; at any instant there is a fixed chance of adding another cell to the population. Although the differential equations for exponential growth and decay look like opposites, it turns out that you can’t just naively reverse the process for exponential decay to get the process for exponential growth. This is because there is a fundamental difference between inputs and outputs in a Petri net: inputs control the rate of a reaction, and outputs don’t. Therefore, if you simply swap inputs and outputs in a Petri net, the behavior will not simply time-reverse. Maybe this says something deep about the nature of causality and time, but I don’t know what. Anyways, let’s do another example.

There is a standard model, the logistic model, that scientists use to describe the growth of a population in an environment with a carrying capacity. The Petri net for this model looks like

There are two processes here: birth and death. Birth happens at a rate proportional to the size of the population, and death happens at a rate proportional to the *square* of the size of the population. Therefore, at small population sizes, the birth process happens more often, while at larger population sizes, death happens more often. The differential equation describing this is \frac{dP}{dt} = r_GP - r_DP^2, and a solution to this looks like

Notice how at the beginning, the curve looks exponential, but then it smoothly tapers off. As t \to \infty, the population approaches a constant value at which birth and death are precisely balanced.
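You can check both the exponential start and the tapering with a few lines of forward Euler (the rates r_G = 1 and r_D = 0.01 are arbitrary choices of mine):

```python
def run_logistic(p0, rg, rd, t_end, dt=1e-3):
    """Forward-Euler integration of dP/dt = rg * P - rd * P^2."""
    p = p0
    for _ in range(int(round(t_end / dt))):
        p += dt * (rg * p - rd * p * p)
    return p

# Setting dP/dt = 0 gives the nonzero equilibrium P = rg / rd = 100.
p_final = run_logistic(1.0, rg=1.0, rd=0.01, t_end=30.0)
p_early = run_logistic(1.0, rg=1.0, rd=0.01, t_end=1.0)
# p_early is still close to the pure exponential e^1; p_final sits near 100.
```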

The assumption behind this model is that death results from competition between pairs of cells in the population, which is why the death rate scales with the square of the population. However, from a biological perspective, this is not the only process that could limit the population.

For instance, here is another set of assumptions that also models carrying capacity. Suppose that in order for a cell to divide, it requires some food that is consumed in the process of dividing. Food is regenerated at a fixed rate. Moreover, cells eventually die. The Petri net representing this is

And again, here is the applet so you can play with solutions.

Before going on, fiddle with the parameters for a bit and see if you can find parameters that give it oscillatory behavior. Think about why this might happen.

One salient feature of this model is that if we begin with a large amount of food, but the rate of food production isn’t that high, there will be an initial growth spurt followed by a large die-off as the population can’t be sustained. In other words, wake up sheeple! The logistic growth equation carries some dangerously false assumptions about growth, and does not account for population spikes!
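To see the boom and bust outside the applet, here is a rate-equation sketch of my guess at the net's structure (food appears at a fixed rate, division consumes a unit of food, cells die at a constant per-cell rate; all names and numbers are mine):

```python
def food_model(c0, f0, feed, eat, die, t_end, dt=1e-3):
    """Forward-Euler integration of
        dF/dt = feed - eat * C * F     (food appears, and is consumed
                                        whenever a cell divides)
        dC/dt = eat * C * F - die * C  (division minus death)
    Returns the peak population and the final population."""
    c, f = c0, f0
    peak = c
    for _ in range(int(round(t_end / dt))):
        div = eat * c * f
        c, f = c + dt * (div - die * c), f + dt * (feed - div)
        peak = max(peak, c)
    return peak, c

# Lots of initial food, but a low replenishment rate: boom, then bust.
peak, final = food_model(c0=10, f0=500, feed=1.0, eat=0.01, die=0.5,
                         t_end=60.0)
```

With these numbers the population spikes far above anything the ongoing food supply can sustain, then crashes.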

Another, slightly different model for constrained growth involves a recycling process for the food.

This model describes smoother growth than the previous model (at least, I haven’t found any oscillatory behavior), and this is because the more food consumed, the faster recycling happens. However, there still is potential for a population spike at the beginning. Another interesting feature of this model is that the total amount of food plus poop is a *conserved quantity*: every time a unit of food is eaten, a unit of poop is produced, and every time a unit of poop is recycled, a unit of food is produced.
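Here is a rate-equation sketch of my guess at the recycling net (division consumes food and produces poop, poop is recycled back into food, cells die; all names and rates are my own), which makes the conservation law easy to verify:

```python
def recycling_model(c0, f0, w0, eat, rec, die, t_end, dt=1e-3):
    """Forward-Euler integration of
        Cell + Food -> 2 Cells + Poop   at rate eat * C * F
        Poop        -> Food             at rate rec * W
        Cell        -> (nothing)        at rate die * C
    Food + Poop is conserved: at every step, df + dw = 0."""
    c, f, w = c0, f0, w0
    for _ in range(int(round(t_end / dt))):
        div = eat * c * f
        dc = div - die * c
        df = rec * w - div
        dw = div - rec * w
        c, f, w = c + dt * dc, f + dt * df, w + dt * dw
    return c, f, w

c, f, w = recycling_model(c0=5, f0=100, w0=0, eat=0.02, rec=0.3, die=0.5,
                          t_end=40.0)
# f + w stays at the initial total of 100, up to float roundoff.
```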

The last Petri net for today is a classic. In my first blog post on Petri nets, I talked about the Lotka-Volterra predator prey model. At this point, you are an old hand at reading processes off of Petri nets, so I’m just going to give it to you:

Now, you can see that this Petri net is built out of pieces that we’ve seen before. But we could also add other pieces to it. For instance, what if the rabbit population was constrained by some linearly increasing food supply, or a recycling food supply? What if wolves fought each other periodically? What if wolf-poop was an essential fertilizer for the grass that rabbits eat? Well, you can answer these questions yourself soon, when I develop an interface for making your own Petri nets and investigating them using the same graphs and sliders as you have seen in this post.
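In the meantime, here is a forward-Euler sketch of the classic Lotka-Volterra rate equations (my parameterization and rates, not necessarily the applet's):

```python
import math

def lotka_volterra(r0, w0, a, b, c, d, t_end, dt=1e-4):
    """Forward-Euler integration of
        dR/dt = a*R - b*R*W   (rabbits breed; wolves eat rabbits)
        dW/dt = c*R*W - d*W   (wolves breed when they eat, and die)
    Returns the final populations."""
    r, w = r0, w0
    for _ in range(int(round(t_end / dt))):
        dr = a * r - b * r * w
        dw = c * r * w - d * w
        r, w = r + dt * dr, w + dt * dw
    return r, w

def invariant(r, w, a, b, c, d):
    """c*R - d*ln(R) + b*W - a*ln(W) is constant on exact trajectories,
    which is why the populations cycle instead of settling down."""
    return c * r - d * math.log(r) + b * w - a * math.log(w)

params = dict(a=1.0, b=0.1, c=0.05, d=0.5)
r, w = lotka_volterra(20.0, 5.0, t_end=20.0, **params)
drift = abs(invariant(r, w, **params) - invariant(20.0, 5.0, **params))
# With a small enough dt, the invariant barely drifts over a few cycles.
```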

Oh, and if you want to see the code used for these graphs, it’s up on GitHub. Warning: as of now it’s a bit of a mess, because I just wanted to get it to work for this blog post. There is no documentation.

EDIT (2019-12-28): The Petri net editor is now live!