# Two Dimensional Syntax

This post is a follow-up to my post on Petri nets. You should probably go read that post first, because then this post will make more sense, but if you don’t I won’t tell anyone.

## Petri Net Histories

In the post on Petri nets, we investigated a graphical way of displaying a system of transitions between groups of molecules. We also talked about how we can use this system of transitions to evolve a “test tube” computationally. In this post, we will investigate a graphical representation of a “history” of the evolution of a test tube. It looks something like this:

This is called a “string diagram.” To interpret it, we view each horizontal slice of the diagram as a picture of the test tube at an instant in time. Each line in the slice represents a molecule. In the above diagram, red lines are O_2 molecules, green lines are H_2 molecules, and blue lines are H_2O molecules. Therefore, at the beginning we start out with two O_2 molecules and four H_2 molecules.

Each black dot represents the reaction 2H_2 + O_2 \to 2H_2O. The lines connected below each black dot are the inputs to the reaction and the lines connected above are the outputs. Each white dot represents the opposite reaction.

Now, right now you might be thinking that I promised new two-dimensional *syntax*. This is just a graph, and graphs are old news. (Depending on your background, you might have interpreted “graph” in the last sentence as graph in the sense of “bar graph” or “line graph”, or you might have interpreted “graph” in the sense of nodes and edges. Both interpretations are applicable!) What makes string diagrams *syntax* is that you can compose them. For instance, I could add to this graph a couple of other molecules that are just doing their own thing.

Each of the components of the graph below is individually a valid history of a separate world of molecules. Putting them next to each other just means that we are considering both of the worlds together. However, there is another way of composing diagrams, and that is by putting them on top of each other.

Notice that it only made sense to stack these two diagrams because they started and ended with compatible strings. In other words, molecules can’t disappear.

Now that you have an intuition for what these diagrams mean, I am going to start to talk about how to work with them mathematically. We start by giving a description of the diagram-slices. In order to do that, we give each slice a value in a set A. However, we also require that this valuation captures the ability to compose diagram slices by sticking them next to each other, as in the following picture where each slice of the first wire has valuation a and each slice of the second wire has valuation b.

When we compose the two wires, and thus make each slice a composite slice, we should somehow compose the valuations. To do this, we introduce a binary operation \otimes on A, and we say that the valuation of a slice of the above diagram is a \otimes b. (We pronounce \otimes as “with”, as in “a with b”. Yes, this is the same symbol as tensor products. Yes, this does have something to do with tensor products. No, we will not go into the correspondence in this post.) In fact, we can view the composite of the two wires as a single wire, and label it a \otimes b, as in the following diagram.

Going back to the Petri net example, our set of slice valuations is the set of molecule-keyed, integer-valued dictionaries. We compose two slices by adding the two dictionaries together component-wise. The valuation of each wire is a dictionary with 1 in the slot corresponding to the molecule that the wire is representing, and 0 everywhere else. This means that the valuation of a slice depends only on the number of wires of each type in the slice. Thus, in the following diagram,

any slice of the right hand side has a valuation equal to the valuation of any slice of the left hand side.
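As a quick sanity check, here is one way the slice valuations of the water example could look in code. This is a sketch of my own, not code from the post: `Counter` is used so that \otimes is just component-wise addition of dictionaries.

```python
from collections import Counter

# A slice is a molecule-keyed, integer-valued dictionary.
# Counter gives us component-wise addition for free, which is
# exactly the "with" operation (⊗) on slice valuations.
h2 = Counter({"H2": 1})   # the valuation of a single hydrogen wire
o2 = Counter({"O2": 1})   # the valuation of a single oxygen wire

# Putting wires next to each other composes their valuations:
start = o2 + o2 + h2 + h2 + h2 + h2   # 2 O2 ⊗ 4 H2

print(start)  # Counter({'H2': 4, 'O2': 2})
```

Note that this makes the observation above literal: any arrangement of two O_2 wires and four H_2 wires produces the same `Counter`.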

It is beginning to get tiresome to say “the valuation of a slice” all the time. Therefore, we are simply going to identify slices with their valuation; we assume that we are working with a fixed way of assigning valuations. With this identification, I will now say that “any slice of the left hand side is equal to a slice of the right hand side”. Pictorially, this is not true. However, these diagrams are just visualizations of math, and it is entirely possible that one object could be visualized in two different ways.

Now, remember that these slices are only half of the picture! In fact, they are a one-dimensional subspace of the picture! However, they are important because any diagram starts and ends with a slice. Moreover, you can see that it only makes sense to stack two diagrams vertically if the slice that one of them ends with is the *same* slice that the other starts with. Therefore, if we want to talk about composing diagrams, it makes sense to keep track of their beginning and ending slices.

With this in mind, for every pair of slices a and b we give valuations of diagrams with a on the bottom and b on the top in a set P(a,b). Again, we will identify a diagram with its valuation. So really what we are saying is that for each pair of slices, we have a set of stuff that goes between them, represented by diagrams. We call this stuff that goes between slices “processes”. Generically, if a and b are slices and f \in P(a,b), we represent f as

In order for our math to match up with our diagrams, we need to have a way of composing these processes. Remember, there are two ways of composing diagrams. In the first way, we have f \in P(a,b) and g \in P(b,c) and we stack the two diagrams on top of each other, like so.

Therefore, we should have a process f;g \in P(a,c) that represents this vertical composition. We pronounce the semicolon as “then” as in “we do f and then we do g”, or “f then g”.
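As a toy illustration of "then" (this encoding is my own, not from the post), we can pretend a process is just a (bottom, top, name) triple and check the slices when stacking:

```python
# Minimal sketch: a process f ∈ P(a, b) is encoded as the triple
# (a, b, name). Vertical composition checks that the slices line up.
def then(f, g):
    (a, b1, f_name), (b2, c, g_name) = f, g
    # Stacking only makes sense when f's top slice is g's bottom slice:
    assert b1 == b2, "top of f must equal bottom of g"
    return (a, c, f_name + ";" + g_name)

f = ("a", "b", "f")
g = ("b", "c", "g")
print(then(f, g))  # ('a', 'c', 'f;g')
```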

However, there is another way of composing diagrams, namely horizontal juxtaposition.

By our definition before, the bottom slice of this diagram is a \otimes c and the top slice is b \otimes d. To represent the horizontal composition, we take a process f \otimes g \in P(a \otimes c, b \otimes d). Again, we pronounce this as “with”.

There is one last component that we need to formalize. Namely, what is just a plain wire?

It must be some process in P(a,a), because it starts and ends with a. However, it is a special process, because putting it on top of another process doesn’t change that process. Therefore, we require that there is a special element of P(a,a) called 1_a, such that for any f \in P(a,b), 1_a;f = f = f;1_b (this is why we call it 1 — it is the identity with respect to the “then” operator). If we write out this identity in string diagram notation it becomes easy to remember. Essentially, the length of the wire doesn’t matter, only the sequence of beads on the wire.
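The slogan "only the sequence of beads matters" suggests an extreme sketch (again my own encoding): identify a process with the list of beads on its wire, so "then" is concatenation and the plain wire 1_a is the empty list. The identity laws then hold on the nose:

```python
# Sketch: a process is the sequence of beads on its wire,
# "then" is concatenation, and the bare wire 1_a has no beads.
def then(f, g):
    return f + g

identity = []                       # the plain wire 1_a
f = ["first bead", "second bead"]

# 1_a;f = f = f;1_b: the wire's length doesn't matter, only its beads.
assert then(identity, f) == f == then(f, identity)
print("identity laws hold")
```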

Now that we have formalized what string diagrams mean, we can give an interpretation in terms of Petri net histories. For any two states a and b, which you should recall are just dictionaries mapping molecule types to integers, P(a,b) is the set of sequences of events that start in state a and end in state b. For each transition t in the Petri net, we have an “atomic” process in P(I_t,O_t), where I_t is the dictionary describing the input to the transition, and O_t is the dictionary describing the output. We call this “atomic” because it can’t be broken down further. Using these atomic processes, we can build up more complicated processes. In the original example, we have two atomic processes, which represent the same reaction going forwards and backwards. Note that instead of labelling the wires and beads, we just give them different colors.
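Putting all the pieces together, here is a sketch of Petri net histories as processes. The class, the names `forward`/`backward`, and the event-list representation are my own illustration of the structure described above, not a definitive implementation:

```python
from collections import Counter

# A process carries its bottom slice, its top slice, and the list of
# reaction events inside it.
class Process:
    def __init__(self, bottom, top, events):
        self.bottom, self.top, self.events = bottom, top, events

    def then(self, other):      # vertical composition f;g
        assert self.top == other.bottom, "slices must match to stack"
        return Process(self.bottom, other.top, self.events + other.events)

    def tensor(self, other):    # horizontal composition f ⊗ g
        return Process(self.bottom + other.bottom,
                       self.top + other.top,
                       self.events + other.events)

def wire(a):                    # the identity process 1_a
    return Process(a, a, [])

# The two atomic processes of the water example:
forward = Process(Counter({"H2": 2, "O2": 1}),   # black dot
                  Counter({"H2O": 2}),
                  ["2H2 + O2 -> 2H2O"])
backward = Process(Counter({"H2O": 2}),          # white dot
                   Counter({"H2": 2, "O2": 1}),
                   ["2H2O -> 2H2 + O2"])

# Two forward reactions side by side consume 4 H2 ⊗ 2 O2:
both = forward.tensor(forward)
print(both.bottom)   # Counter({'H2': 4, 'O2': 2})
print(both.top)      # Counter({'H2O': 4})
```

The `assert` in `then` is exactly the rule from before: molecules can’t disappear between stacked diagrams.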

You can see that we have built up the more complicated diagrams in the beginning by horizontal and vertical composition of these atomic processes along with identity wires. Moreover, this corresponds to actual mathematical operations on the slices and process sets; playing with these pictures allows us to discuss any mathematical object that can be described in terms of slices, processes, and composition. Instead of writing out arcane formulas like (1_F \otimes \eta) ; (\mu \otimes 1_F), we can draw arcane diagrams like:

Here \eta is a process that goes from nothing to G \otimes F, and \mu is a process that goes from F \otimes G to nothing. So what’s happening is that a particle of type G gets created at \eta and destroyed at \mu. What this diagram *looks like*, however, is a particle going forward in time to \mu and then going backwards in time to \eta. If this were a science fiction movie, we might say that G is the antiparticle of F, and it going forward in time is the same as F going backwards in time. This is science fiction because I don’t actually know what that means. But the diagram sure is suggestive, and I certainly would not have thought of anything like that if I just looked at (1_F \otimes \eta) ; (\mu \otimes 1_F). As a side note, this diagram also represents one half of an equation characterizing adjoint functors in category theory. If that makes no sense to you, don’t worry about it. But if you have seen adjoint functors before, and you know what I am referring to, now when you see adjoint functors you might also think about time travelling particles. I think that’s kind of neat.
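For readers who do know adjoint functors: read in the diagrammatic order used here (conventions vary between sources), the equation this picture is half of is the zig-zag (triangle) identity, which says the snake straightens out into a plain wire:

```latex
(1_F \otimes \eta) ; (\mu \otimes 1_F) = 1_F
```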

My point is, having a graphical way of writing down processes allows us to use our overdeveloped visual cortex to figure things out intuitively, and also serves as a vehicle for connections between different subjects that might be invisible in traditional notation. I hope you find this as cool as I do!

Next time, I will present a completely different interpretation of string diagrams. Stay tuned!