Nontermination

Consider the expression (λx. x x) (λx. x x), which we will refer to as Ω for brevity. Let's try evaluating Ω:

Ω = (λx. x x) (λx. x x) → (λx. x x) (λx. x x) = Ω → ⋯

Evaluating Ω never reaches a value! It is an infinite loop!

What happens if we use Ω as an actual argument to a function? Consider this program:

(λx. λy. y) Ω

If we use CBV semantics to evaluate the program, we must reduce Ω to a value before we can apply the function. But Ω never evaluates to a value, so we can never apply the function. Thus, under CBV semantics, this program does not terminate. If we use CBN semantics, we can apply the function immediately, without needing to reduce the actual argument to a value:

(λx. λy. y) Ω → λy. y

This difference highlights the fact that evaluation orders, like CBV and CBN, can differ in terms of termination: expressions that diverge under one evaluation order may terminate under another.
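We can sketch this difference in Python, which (like CBV) evaluates arguments before applying a function. Wrapping an argument in a zero-argument lambda (a thunk) delays its evaluation, simulating CBN. The names `loop` and `const_zero` are our own illustrative choices.

```python
# A diverging computation: calling loop() never produces a value.
# (In CPython it eventually raises RecursionError rather than spinning
# forever, but it still never returns a result.)
def loop():
    return loop()

# A function that ignores its argument, like (λx. λy. y) applied once.
const_zero = lambda x: 0

# CBN-style: pass the computation unevaluated, as a thunk. Since
# const_zero never forces the thunk, this terminates.
result = const_zero(lambda: loop())
print(result)  # 0

# CBV-style: const_zero(loop()) would evaluate loop() first,
# and so would never reach the function body.
```

The thunk trick is exactly how lazy arguments are simulated in strict languages: the diverging term is passed as code, not as a value.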

Recursion

Ω demonstrates that we can write nonterminating λ-terms, but so far they are not very useful. Can we write recursive functions in the λ-calculus that do actually terminate and produce a value, as we can in most "real" programming languages? It seems difficult at first, because all functions are anonymous: there is no name that a function can use to refer to itself. Surprisingly, however, it is possible to write functions that behave recursively. Roughly, our strategy will build on the "self-replicating" behavior of Ω to let functions invoke copies of themselves an unbounded number of times.

Let’s consider how we would like to write a factorial function:

FACT ≜ λn. IF (ISZERO n) 1 (TIMES n (FACT (PRED n)))

In slightly more readable notation, this is just:

FACT ≜ λn. if n = 0 then 1 else n × FACT (n − 1)

Here, as in the definition above, the name FACT is simply meant to be shorthand for the expression on the right-hand side of the equation. But FACT appears on the right-hand side of the equation as well! Trying to expand the definition would yield an infinite term. This is not a definition, it’s a recursive equation.

Recursion Removal Trick

We can perform a “trick” to define a function FACT that satisfies the recursive equation above. First, let’s define a new function FACT′ that looks like FACT, but takes an additional argument f:

FACT′ ≜ λf. λn. if n = 0 then 1 else n × (f f) (n − 1)

We assume that the argument f will be instantiated with FACT′ itself. Note that when we call f, we pass it a copy of itself, preserving the assumption that the actual argument for f will be FACT′. Now we can define the factorial function FACT in terms of FACT′:

FACT ≜ FACT′ FACT′

Let’s try evaluating FACT on an integer:

FACT 3 = (FACT′ FACT′) 3
       → if 3 = 0 then 1 else 3 × (FACT′ FACT′) (3 − 1)
       → 3 × (FACT′ FACT′) 2
       → ⋯
       → 3 × 2 × 1 × 1
       → 6

So we now have a technique for writing a recursive function f: write a function f′ that explicitly takes a copy of itself as an argument, and then define f ≜ f′ f′.
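The trick transcribes directly into Python (a call-by-value language, like CBV λ-calculus). The names `fact_prime` and `fact` below are our own, standing in for FACT′ and FACT.

```python
# FACT' = λf. λn. if n = 0 then 1 else n * (f f) (n - 1)
# fact_prime takes a copy of itself as its first argument, and passes
# that copy along at every recursive call.
fact_prime = lambda f: lambda n: 1 if n == 0 else n * f(f)(n - 1)

# FACT = FACT' FACT'
fact = fact_prime(fact_prime)

print(fact(3))  # 6
print(fact(5))  # 120
```

Note that `fact_prime` never refers to itself by name; the self-reference is threaded through entirely by the argument `f`.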

Fixed-point combinators

There is another way of writing recursive functions: we can find a higher-order function whose fixed point is the recursive function we want, and then use a combinator to compute that fixed point. We saw this technique earlier in the course when we defined the denotational semantics of while loops.

Let’s consider the factorial function again. The factorial function FACT is a fixed point of the following function:

G ≜ λf. λn. if n = 0 then 1 else n × f (n − 1)

Recall that if 𝑔 is a fixed point of G, then we have G 𝑔 = 𝑔. So if we had some way of finding a fixed point of G, we would have a way of defining the factorial function FACT.
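We can check the fixed-point property concretely. Below, `G` is a Python transcription of the function above, and `fact` is an ordinary recursive factorial; applying `G` to `fact` gives back a function that behaves like `fact` itself.

```python
# G takes a candidate factorial function f and returns a function that
# handles the base case itself, delegating the recursive case to f.
G = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)

# An ordinary recursive factorial, for comparison.
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

# Fixed-point check: G(fact) agrees with fact.
print(G(fact)(5) == fact(5))  # True
```

Of course, this check presupposes that we already have a factorial function in hand; the point of a fixed-point combinator is to conjure one out of `G` alone.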

There are a number of “fixed-point combinators,” and the (infamous) Y combinator is one of them. Thus, we can define the factorial function FACT to be simply Y G, the fixed point of G. The Y combinator is defined as

Y ≜ λf. (λx. f (x x)) (λx. f (x x))

It was discovered by Haskell Curry, and is one of the simplest fixed-point combinators. Note how similar its definition is to Ω.
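Transcribing Y naively into a call-by-value language shows why a variant is needed: the self-application `x(x)` is evaluated eagerly, so `Y(G)` diverges before `G` can ever inspect its argument. In CPython the divergence surfaces as a `RecursionError`.

```python
# Curry's Y, transcribed directly: Y = λf. (λx. f (x x)) (λx. f (x x))
Y = lambda f: (lambda x: f(x(x)))(lambda x: f(x(x)))

G = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)

# Under call-by-value, evaluating Y(G) requires evaluating x(x) first,
# which requires evaluating x(x) first, which ...
try:
    Y(G)
except RecursionError:
    print("Y(G) diverges under call-by-value")
```

This is exactly the question posed below about evaluating Y G under CBV.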

We’ll use a slight variant of the Y combinator, called Z, which is easier to use under CBV. (What happens when we evaluate Y G under CBV?) The Z combinator is defined as

Z ≜ λf. (λx. f (λy. x x y)) (λx. f (λy. x x y))

Let’s see it in action on our function G. Define FACT to be Z G and calculate as follows:

FACT = Z G
     → (λx. G (λy. x x y)) (λx. G (λy. x x y))
     → G (λy. ((λx. G (λy. x x y)) (λx. G (λy. x x y))) y)

The η-expanded argument λy. … y is a value, so the self-application inside it is not evaluated until FACT is actually applied to an argument; when it is, that argument behaves just like FACT itself, and the recursion unfolds one step at a time.
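Since Python is call-by-value, the Z combinator can be transcribed directly and actually runs. The η-expansion `lambda v: ...` is what delays the self-application.

```python
# Z = λf. (λx. f (λy. x x y)) (λx. f (λy. x x y))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# G from the factorial example.
G = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)

fact = Z(G)
print(fact(5))  # 120
```

No function here refers to itself by name; all the recursion is manufactured by Z.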

There are many (indeed infinitely many) fixed-point combinators. Here’s a cute one, due to Jan Willem Klop:

Y_Klop ≜ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ ℒ     (26 copies of ℒ)

where

ℒ ≜ λabcdefghijklmnopqstuvwxyzr. r (t h i s i s a f i x e d p o i n t c o m b i n a t o r)

To gain some more intuition for fixed-point combinators, let’s derive a fixed-point combinator that was originally discovered by Alan Turing. Suppose we have a higher-order function f, and want the fixed point of f. We know that Θ f is a fixed point of f, so we have

Θ f = f (Θ f)

Suppose that Θ has the form A A for some term A. Then we need

A A f = f (A A f)

which we can read off as a definition of A:

A ≜ λx. λy. y (x x y)

Turing’s fixed-point combinator is then

Θ ≜ A A = (λx. λy. y (x x y)) (λx. λy. y (x x y))
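Like Y, Turing's Θ diverges under call-by-value because of the eager self-application `x x y`. The sketch below η-expands that application (our own adaptation, mirroring the move from Y to Z) so it runs in Python.

```python
# Turing's combinator is Θ = A A with A = λx. λy. y (x x y).
# Here the inner application is delayed with an explicit thunk
# (lambda v: ...) to survive call-by-value evaluation.
A = lambda x: lambda y: y(lambda v: x(x)(y)(v))
Theta = A(A)

G = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)

print(Theta(G)(5))  # 120
```

Unfolding once: `Theta(G)` is `G(h)` where `h(v)` re-runs `Theta(G)(v)`, which is exactly the fixed-point equation Θ f = f (Θ f) in action.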