On this page:
4.1 First, some review
4.2 Getting back to business
4.3 Some closing notes

4 Day 3: Moving forward, albeit slowly

Alright. It’s been a few days since I finished up the last section. I ran back to Racket for a while so I could feel safe amongst my brackets and parentheses, but I can’t ignore Haskell much longer. Before I started this project, I think I viewed Haskell as little more than a pure, lazy language with typeclasses for genericity that also happened to leverage monads to perform side effects. Now I’m feeling a little less sure of myself.

4.1 First, some review

I posted this to Reddit and Hacker News, and while the Hacker News discussion ended up, erm, shall we say... "mixed", the Reddit comments were quite helpful.

I think the most exciting thing to find out is that I was actually on the right track trying to get my custom Applicative instance to work. The correct implementation of Functor was the following, helpfully provided to me by tomejaguar.

newtype BinaryFunction a b c = BinaryFunction (a -> b -> c)

instance Functor (BinaryFunction a b) where
  fmap a (BinaryFunction b) = BinaryFunction (\x y -> a (b x y))

Of course, looking back on this, it feels painfully obvious. If pure hoists a value into a function that ignores the two arguments passed to it, fmap simply never passes those arguments to the mapped function at all. With that in place, my original attempt at Applicative should work fine.

instance Applicative (BinaryFunction a b) where
  pure f = BinaryFunction (\_ _ -> f)
  (BinaryFunction f) <*> (BinaryFunction g) = BinaryFunction (\x y -> f x y (g x y))
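
Just to convince myself that these instances actually do something sensible, here’s a tiny check I could run against them (my own throwaway example; unBinary and example are names I’m making up, and on older GHCs <$> and <*> would need to be imported from Control.Applicative):

unBinary :: BinaryFunction a b c -> a -> b -> c
unBinary (BinaryFunction f) = f

-- Combines two binary functions pointwise: (2 + 3) + (2 * 3) == 11
example :: Int
example = unBinary ((+) <$> BinaryFunction (+) <*> BinaryFunction (*)) 2 3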

Now, admittedly, when I was trying to implement BinaryFunction, that original version using Compose was sitting in the back of my head. After all, this is starting to look suspiciously similar, isn’t it? Add some record syntax to BinaryFunction, and you’d basically have the exact same thing. Indeed, as mjmrotek pointed out, Compose is just a more general version of my attempt.

That said, I still don’t really understand how Compose works. Here’s the definition for Compose itself, copied from the source.

newtype Compose f g a = Compose { getCompose :: f (g a) }

Just for completeness, let’s look at the types for the constructor and field accessor.

> :t Compose
Compose :: f (g a) -> Compose f g a
> :t getCompose
getCompose :: Compose f g a -> f (g a)

What exactly does f (g a) mean? I guess both f and g are higher-kinded types which each take a single type parameter. So then what is Compose min?

> :t min
min :: Ord a => a -> a -> a
> :t Compose min
Compose min :: Ord a => Compose ((->) a) ((->) a) a

Alright, so f and g are both (a ->), and a is just a. Composing these together like in the constructor would yield (a -> (a -> a)), which is precisely the type I would expect.
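
Just to convince myself that the wrapping and unwrapping really are inverses here, a quick check in GHCi (my own throwaway experiment):

> getCompose (Compose min) 3 4
3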

Now I can look at the instance for Functor.

instance (Functor f, Functor g) => Functor (Compose f g) where
    fmap f (Compose x) = Compose (fmap (fmap f) x)

In the provided case, both f and g are (a ->), and the type of x is a -> a -> a. For functions, fmap is just (.), so fmap (fmap f) x becomes (.) ((.) f) x. Hmm, those two immediate usages of composition are a little confusing.

> :t fmap (+1) (Compose max)
fmap (+1) (Compose max)
  :: (Ord a, Num a) => Compose ((->) a) ((->) a) a
> :t getCompose $ fmap (+1) (Compose max)
getCompose $ fmap (+1) (Compose max)
  :: (Ord a, Num a) => a -> a -> a
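
The types look right, so here’s a check at the value level too (again, my own quick experiment):

> getCompose (fmap (+1) (Compose max)) 3 5
6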

I think I need to get my head around successive/nested composition a little bit more. Haskell’s "everything takes one value and returns one value" style, which effectively implements auto-currying, can make composition a little more confusing than I’m used to.

Let me break this down a little more.

> :t (.) ((.) (+1)) max
(.) ((.) (+1)) max :: (Ord c, Num c) => c -> c -> c
> :t (.)
(.) :: (b -> c) -> (a -> b) -> a -> c
> :t ((.) (+1))
((.) (+1)) :: Num c => (a -> c) -> a -> c

The type of (.) is, of course, obvious, though looking at it did remind me of something important. I should probably be thinking about function types like a -> a -> a as a -> (a -> a), which would obviously satisfy the type a -> b where b is just a -> a.

With this in mind, what is (.) (+1)? Well, (+1) is just a unary function, so it makes filling in the type for the first argument of (.) obvious. The result of the expression is a function that takes some function producing a number along with an input for it, applies that function to the input, then hands the number off to the incrementing function for the final result.

To keep things clearer as I try to juggle types in my head, I’ll replace that expression with \f x -> (f x) + 1. Substituting that into the original expression yields (\f x -> (f x) + 1) . max. Again, the composition takes the result of the second argument and passes it to the first: applying it to one number partially applies max, and that partially applied function becomes f. Therefore, the resulting function will take two numbers, find the maximum, and then increment the result.
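
To double-check that reasoning, I can evaluate both forms on concrete numbers (my own quick sanity check):

> (.) ((.) (+1)) max 3 5
6
> ((\f x -> f x + 1) . max) 3 5
6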

This explains the behavior of that wacky ((.).(.)) function that someone commented about on the Reddit thread.

> :t ((.).(.))
((.).(.)) :: (b -> c) -> (a -> a1 -> b) -> a -> a1 -> c

This actually seems to be function composition for a unary function with a binary function, which is sort of cool. In Racket, it would just be (compose f g), even if g required two arguments, but in Haskell, all functions are really unary—there’s no luxury of functions which cannot be partially applied.
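
For instance (my own quick check):

> ((.).(.)) (+1) max 3 5
6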

What happens if I add more dots?

> :t (.).(.).(.)
(.).(.).(.)
  :: (b -> c) -> (a -> a1 -> a2 -> b) -> a -> a1 -> a2 -> c

The pattern does, indeed, continue. That’s sort of nice to know. I feel like I should be able to logically understand that of course that’s what composing composition does, but I definitely cannot do that yet.
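
Still, a quick check that the three-dot version does what I think it does (my own example, with a throwaway ternary function):

> ((.).(.).(.)) (+1) (\a b c -> a * b * c) 2 3 4
25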

Now that all that’s figured out, what about Applicative for Compose?

instance (Applicative f, Applicative g) => Applicative (Compose f g) where
    pure x = Compose (pure (pure x))
    Compose f <*> Compose x = Compose ((<*>) <$> f <*> x)

Wow. I have no idea what that does.

Okay, pure (pure x) is obvious (and I almost wish it were written Compose . pure . pure... I think I’m already losing myself to Haskell’s convenient composition), but what about that crazy mess for <*>?

> :t (\f x -> (<*>) <$> f <*> x)
(\f x -> (<*>) <$> f <*> x)
  :: (Applicative f1, Applicative f) =>
     f (f1 (a -> b)) -> f (f1 a) -> f (f1 b)

Wow! I have no idea how that’s implemented, but the type actually seems to make some sense! Given a function from a to b wrapped in two layers of Applicative, plus an a wrapped in the same two layers, it produces a b under those same layers. I still have lots of questions, though. In this example, what is f? What is f1?
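
To make the type concrete with Applicatives I already sort of understand, here’s the same lambda used with Maybe wrapping a list (my own example, nothing to do with Compose yet):

> (\f x -> (<*>) <$> f <*> x) (Just [(+1), (*2)]) (Just [10, 20])
Just [11,21,20,40]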

This is one of those situations where I wish I could somehow have GHC tell me what the type variables get instantiated to at a particular use site. I’m not even really sure what that would mean, though, since the types could clearly be anything. Whatever. Let me just reason it out myself.

I think where this is sort of tripping me up is how the Applicative instance is for Compose f g. Obviously, this is because Applicative wants a type of kind * -> *, so the Compose type constructor needs to be partially applied. But what does that mean? In the original type Compose f g a, what is a?

If Compose is created with (+), then a is simply any type that is a member of the Num typeclass. The a type corresponds to the function’s return value while f and g correspond to its inputs. I guess more generally, f could be a function type, but g could be some entirely unrelated functor.
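
A quick type query seems to back that up (my own check):

> :t Compose (+)
Compose (+) :: Num a => Compose ((->) a) ((->) a) a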

So back to the original Applicative instance. The mixing of infix and prefix operators is still something I find a little confusing, so let me make everything prefix to make things more clear.

> :t \f x -> ((<*>) ((<$>) (<*>) f) x)
\f x -> ((<*>) ((<$>) (<*>) f) x)
  :: (Applicative f1, Applicative f) =>
     f (f1 (a -> b)) -> f (f1 a) -> f (f1 b)

Same thing, just all prefix. Working from the inside-out, what the hell is (<*>) (<$>)?

> :t (<*>) (<$>)
(<*>) (<$>) :: Functor f => ((a -> b) -> f a) -> (a -> b) -> f b

Hmm. I still don’t really get how this works. (Looking at it again, I had the arguments backwards anyway: the piece that actually appears in the prefix expression is (<$>) (<*>), fmap applied to <*>, not the other way around. Either way, I’m lost.)

You know what? Screw it. I could probably spend hours dissecting this tiny, insignificant piece of code. How could such a small snippet be such a headache? I think I can explain this away with some relatively high-level conceptual reasoning.

In the Applicative instance for Compose, both of its first two type parameters are Applicative, so the contained value is wrapped in two layers of Applicative containers. (This "container" metaphor is wrong, and I already know it to be wrong, since the containers are simply manifested by the types and don’t actually box anything, but I think my mental model is sound, at least for this piece.) Therefore, the type for <*> for Compose is obvious: it takes an applicable value wrapped in the two layers of Applicatives, a value that can be supplied to the first value (also wrapped in the same two layers), and finally produces the result of that application (still wrapped in the same two layers).

With functions, this idea of "application" is fairly explicit, since it literally means function application. With other types, though, it could mean various other things. The name Applicative is starting to make some sense to me—it generalizes the idea of function application to other types. Sort of neat, but not quite within my grasp to understand in full just yet.
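
Here’s the whole thing in action for the function case, which at least confirms the "generalized application" intuition (my own example): subtract the result of min from the result of max, both wrapped in Compose.

> getCompose ((-) <$> Compose max <*> Compose min) 3 5
2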

Perhaps I’ll figure it out in time. Now, I think, is not that time.

4.2 Getting back to business

Phew. That was a lot of confusing, abstract thinking about types. I think I’m ready to go back to something simple. Something practical. Alright, random college class, what have you got to show me?

Aha! Writing a programming language interpreter. Should be downright easy in comparison to what I’ve just dealt with.

Alright, so the assignment provides a bunch of datatypes to represent the AST of an arbitrary imperative programming language. The assignment is about manipulating those ASTs. Seems simple enough. The first exercise deals with manipulating States, which represent variable bindings in our mini-language.

A State is actually just a function String -> Int, and an empty state maps anything to 0. That makes it relatively straightforward to write an extend function that adds a new binding on top of a given state.

extend :: State -> String -> Int -> State
extend s identifier value =
  \identifier' -> if identifier == identifier'
    then value
    else s identifier'

Neat and simple. Defining the empty state is obviously trivial.

empty :: State
empty _ = 0
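
A couple of quick checks that these behave the way I expect (my own examples, run against the assignment’s code):

> extend empty "x" 5 "x"
5
> extend empty "x" 5 "y"
0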

Now we can implement an evalE function, simply to evaluate expressions. It will take a State for resolving variable lookups.

evalE :: State -> Expression -> Int
evalE st (Var s) = st s
evalE _  (Val i) = i
evalE st (Op e op e') =
  case op of
    Plus   -> v + v'
    Minus  -> v - v'
    Times  -> v * v'
    Divide -> v `div` v'
    Gt     -> bTi $ v >  v'
    Ge     -> bTi $ v >= v'
    Lt     -> bTi $ v <  v'
    Le     -> bTi $ v <= v'
    Eql    -> bTi $ v == v'
  where v  = evalE st e
        v' = evalE st e'
        bTi b = if b then 1 else 0
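
Trying it out on a tiny expression (my own quick check, using the assignment’s constructors):

> evalE empty (Op (Val 1) Plus (Op (Var "x") Plus (Val 2)))
3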

Using case for the operator pattern-matching makes that relatively clean. Next up, implementing desugaring! "Desugaring" is, after all, the lesser cousin of macros, so this is something I can get behind.

desugar :: Statement -> DietStatement
desugar (Assign s e)    = DAssign s e
desugar (Incr s)        = DAssign s (Op (Var s) Plus (Val 1))
desugar (If e s s')     = DIf e (desugar s) (desugar s')
desugar (While e s)     = DWhile e (desugar s)
desugar (Sequence s s') = DSequence (desugar s) (desugar s')
desugar Skip            = DSkip
desugar (For pre cond post s) =
  DSequence (desugar pre)
            (DWhile cond (DSequence (desugar s)
                                    (desugar post)))

Mmm, my parentheses, how I’ve missed you. Implementing this reminds me how much I like that Lisp expressions are basically just ASTs already.
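
Here’s what desugaring an Incr produces, for instance (my own check; this assumes the assignment’s types derive Show so GHCi can print the result):

> desugar (Incr "x")
DAssign "x" (Op (Var "x") Plus (Val 1))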

Anyway, now we can create a simple statement evaluator.

evalSimple :: State -> DietStatement -> State
evalSimple st (DAssign s e) = extend st s (evalE st e)
evalSimple st (DIf e s s') = if evalE st e /= 0 then evalSimple st s
                                                else evalSimple st s'
evalSimple st (DWhile e s) = if evalE st e /= 0 then evalSimple (evalSimple st s) (DWhile e s)
                                                else st
evalSimple st (DSequence s s') = evalSimple (evalSimple st s) s'
evalSimple st DSkip = st

Quite nice; maybe even pretty! Of course, none of these are really demonstrating any of Haskell’s unique features—all of these could be implemented more or less the same in any functional programming language. This assignment, especially, gives me a very Racket-y vibe. It would definitely be simple to port this almost verbatim to Racket using its extensive pattern-matching facilities.

Now the final bit is to implement a run function to run programs. For some baffling reason, the homework assignment includes this note:

Note: run should be defined in terms of desugar and evalSimple. It should not be implemented from scratch.

After all that, whoever would implement run from scratch must be out of their mind. I’m almost amused to imagine some poor student so lost that they couldn’t see how to build run out of the functions they’d already written, considering how stupidly easy its implementation is.

run :: State -> Statement -> State
run st = evalSimple st . desugar
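
And a tiny program to make sure the whole pipeline hangs together (my own throwaway example):

> let prog = Sequence (Assign "x" (Val 1)) (Sequence (Incr "x") (Incr "x"))
> run empty prog "x"
3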

And that’s it! An extremely simple interpreter for an AST in Haskell. As I stated above, nothing very special going on here, but a cool example of Haskell’s simple functional programming features. And honestly, it was a nice break after all that nonsense about Applicative and Compose.

4.3 Some closing notes

I’m sorry if this was a relatively boring day in comparison to the last one. I almost feel a little guilty for throwing in the towel on understanding the Compose implementation, but I’m in over my head on that one. Perhaps for another time.

Anyway, I’d like to note a few extra things that either happened in between the previous day and this one or things I’ve come across while writing this.

First of all, I had a few people recommend avoiding $. I used it in one spot while writing this assignment, but in general, I’ve made a decision to avoid using it unless it really improves the clarity of my code. At first I was a little afraid to sprinkle parens everywhere considering my Lisp experience, but I think a balance has become a little more apparent.

I’ve come to a similar conclusion about point-free code. I did, for example, write run using point-free style, but in most cases, it doesn’t feel conducive to writing readable code. Even with run, it isn’t truly point-free.

run :: State -> Statement -> State
run st = evalSimple st . desugar

I could implement that as (. desugar) . evalSimple, but... why? That’s completely useless. On the other hand, expressing the second parameter through the composition of evalSimple and desugar, the way I did above, makes logical sense.

Anyway, I think that’s it for now. Honestly, at this point, I’m just excited to get to monads, since I think they’re a really cool concept, and I think I mostly understand how they work. Soon!