# Categorifying Halting

Posted on August 29, 2019

Note: unlike other posts, where I try to explain something to a reader with little background, this post is an exposition of a specific problem in a specific domain, and it will be unintelligible unless you are comfortable with category theory and computability theory and have read the specific background material.

Anyway, there was a recent n-Category Café post on categorifying Turing machines in the form of Turing categories. To me, one of the central purposes of Turing machines is to formalize and solve the halting problem, so I was surprised that there was no discussion of the halting problem in the post, or in any of the literature that I (briefly) skimmed. So I thought I would categorify the halting problem. Turing categories have been around for at least ten years, so someone has almost certainly thought of this before and I have probably missed something, but for fun I will pretend that this post is original.

To state the halting problem in a Turing category, we first give a definition of what a halting detector, or in the modern parlance a "loop snooper," would look like. This would be a total arrow \mathrm{ls} : A \times A \to 2 such that the following diagram commutes, where the partial map n : 2 \to 1 is defined by n(1) = \ast and n(0) = \bot (using the universal property of the coproduct 2 = 1 + 1), and ! is the unique total map to 1.
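As a sanity check on these ingredients, here is a minimal sketch in Python, where I model partial maps as functions returning `None` for "undefined" (\bot). The names `STAR`, `n`, and `total` are my own illustrative choices, not from any Turing-category library.

```python
STAR = "*"  # the unique element of the terminal object 1

def n(b):
    """The partial map n : 2 -> 1, defined (with value *) on 1 and undefined on 0."""
    return STAR if b == 1 else None

def total(f, domain):
    """A map is total when it is defined on every point of its domain."""
    return all(f(x) is not None for x in domain)
```

Note that n itself is not total: it is defined on 1 but not on 0, which is exactly what lets composition with n turn the boolean answer of \mathrm{ls} into (non)termination.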

Commutativity of this diagram states that \mathrm{ls} sends a pair of programs to 1 whenever the application of the first to the second halts, and to 0 when it does not. Suppose that such an \mathrm{ls} existed in a Turing category. Then define an arrow G' : A \to 1 as the composition of the following morphisms, where \sim(1) = 0 and \sim(0) = 1.
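Concretely, G' is the composite n \circ \sim \circ \mathrm{ls} \circ \Delta. A sketch of this composition, assuming (hypothetically) that someone hands us a total `ls` answering 1 for "halts" and 0 for "doesn't halt" (again modelling \bot as `None`):

```python
STAR = "*"

def neg(b):
    """~ : 2 -> 2, swapping 0 and 1."""
    return 1 - b

def n(b):
    """The partial map n : 2 -> 1: defined on 1, undefined on 0."""
    return STAR if b == 1 else None

def make_G_prime(ls):
    """Build G' = n . ~ . ls . diag from a claimed loop snooper ls."""
    def G_prime(a):
        # diag duplicates a; ls answers 1 = "a(a) halts", 0 = "a(a) doesn't"
        return n(neg(ls(a, a)))
    return G_prime
```

For instance, feeding in the (certainly wrong) snooper that always answers "halts" yields a G' that diverges on every input, as the definition demands.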

In other words, when applied to an element a of A, G' nonterminates if a applied to itself terminates, and terminates if a applied to itself nonterminates. Now, by the Turing property, there is a point h_{G'} : 1 \to A coding G' such that the following diagram commutes.
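The Turing property can be caricatured as follows: programs live in a table, codes are table indices, and application is lookup followed by a function call. Every morphism we can write down gets a code. This is only a toy model of the Turing object A; the names `code_of` and `apply_code` are mine.

```python
TABLE = []  # the "programs" inhabiting the Turing object A

def code_of(f):
    """h_f : 1 -> A, picking out a code for f (the Turing property)."""
    TABLE.append(f)
    return len(TABLE) - 1

def apply_code(c, x):
    """The application map A x A -> A (partial in general: may return None)."""
    return TABLE[c](x)
```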

The paradox comes when we look at the arrow G = G' \circ h_{G'} : 1 \to 1. If G is total, then G' applied to its own code terminates; but by definition of G', it terminates on a code only when that program applied to itself nonterminates, which implies that G is not total. Similarly, if G is not total, then G' applied to its own code must terminate, which implies that G is total. Either way we have a contradiction, so no total \mathrm{ls} can exist. I think I'm missing some bit of formalization, and I'm not sure whether I'm implicitly assuming something that is not true of general Turing categories, but I think the argument should go something like this. I'm linking to this post from the comments section of the Turing category post, so hopefully someone will be able to help me out with this.
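The case analysis in the paradox can be checked by brute force: a total \mathrm{ls} has only two possible answers on the pair (h_{G'}, h_{G'}), and each one is wrong. Here 1 means "halts", 0 means "doesn't halt", and by the construction of G' = n \circ \sim \circ \mathrm{ls} \circ \Delta, G terminates exactly when the answer is 0.

```python
def contradiction():
    """Return the answers ls could give on (G', G') that turn out wrong."""
    failures = []
    for answer in (0, 1):             # the only total answers ls can give
        g_terminates = (answer == 0)  # G' diverges exactly when ls says "halts"
        claim_halts = (answer == 1)   # what ls asserted about G' on itself
        if g_terminates != claim_halts:
            failures.append(answer)
    return failures
```

Both answers land in `failures`, which is the whole content of the diagonal argument.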
