## Archive for the ‘Haskell’ Category

Wednesday, August 13th, 2014

Consider the following function:

```
oddness :: Maybe Int
oddness = do
  let a = Just 1 :: Maybe Int
  b <- a
  return b
```

Perfectly fine, albeit contrived, redundant, etc. Bear with me. Now consider what happens if we change the value of `a`:

```
oddness :: Maybe Int
oddness = do
  let a = Nothing :: Maybe Int
  b <- a
  return b
```

This looks odd, because it looks like we’re extracting the value from `a` into `b`, and then passing it to `return` – it looks like there’s some `Int` we extract from `Nothing`, and calling `return` converts it back to `Nothing`.

But of course, we know that this do-notation desugars to something like:

```
oddness :: Maybe Int
oddness =
  let a = Nothing :: Maybe Int
  in a >>= \b -> return b
```

And recall the definition of `(>>=)` for `Maybe`:

```
instance Monad Maybe where
  (Just x) >>= k = k x
  Nothing  >>= _ = Nothing
```

So what’s happening here is that the function on the right of `(>>=)` is never applied: the `Nothing` case of `(>>=)` discards it and produces `Nothing` directly, so no value is ever extracted. Mystery solved.
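A quick way to convince yourself of this is to hand `(>>=)` a continuation that would blow up if it were ever applied (the name `noExtraction` is mine, for illustration):

```
noExtraction :: Maybe Int
noExtraction = Nothing >>= \_ -> undefined
-- evaluates to Nothing: the continuation (and hence undefined)
-- is never touched, because the Nothing case of (>>=) ignores it
```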

### Functional Rainbow

Monday, December 29th, 2008

Thursday, August 9th, 2007

If you’re one of the approx 3 people left in the world who:

I suggest you get over there. It’s quite long, and it gets a bit difficult to follow without slides around 30 minutes into part 2, but it’s well worth watching. SPJ is an entertaining presenter, partly at least because he seems to be a bit of a stereotypical university lecturer type: very enthusiastic, clearly a towering intellect in the FP world, but on the other hand seems to bumble his way through the business of actually using computers!

If he’d been my lecturer back in the day, perhaps I’d have appreciated functional programming a bit more (“To do that, you’d have to solve the halting problem… so… it’s tricky.”).

### More on Tree Folds

Tuesday, July 17th, 2007

After some thought, I think I have the definitions for left and right fold on a tree:

```foldrTree :: (a -> b -> b) -> b -> InternalTree a -> b
foldrTree f z ILeaf = z
foldrTree f z (IBranch a x y) = foldrTree f (f a (foldrTree f z y)) x

foldlTree :: (a -> b -> a) -> a -> InternalTree b -> a
foldlTree f z ILeaf = z
foldlTree f z (IBranch a x y) = foldlTree f (f (foldlTree f z x) a) y```

Of course, these folds are simplified: they don’t have a separate function for dealing with leaves and branches. This leads to things like flatten only working one way, i.e.

```
foldrTree (:) [] t
```

works because `(:)` accumulates values onto a list properly from the right, but

```
foldlTree (:) [] t
```

gives a type error: `foldlTree` wants a function of type `a -> b -> a`, taking the accumulator first, whereas `(:)` takes the element first.
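For what it’s worth, flipping the argument order makes `(:)` usable from the left as well — a small sketch against the definitions above (the example tree `t` is mine):

```
data InternalTree a = ILeaf
                    | IBranch a (InternalTree a) (InternalTree a)

foldrTree :: (a -> b -> b) -> b -> InternalTree a -> b
foldrTree f z ILeaf = z
foldrTree f z (IBranch a x y) = foldrTree f (f a (foldrTree f z y)) x

foldlTree :: (a -> b -> a) -> a -> InternalTree b -> a
foldlTree f z ILeaf = z
foldlTree f z (IBranch a x y) = foldlTree f (f (foldlTree f z x) a) y

t :: InternalTree Int
t = IBranch 2 (IBranch 1 ILeaf ILeaf) (IBranch 3 ILeaf ILeaf)

-- foldrTree (:) [] t        => [1,2,3]  (in-order, consed from the right)
-- foldlTree (flip (:)) [] t => [3,2,1]  (accumulated from the left)
```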

Monday, July 16th, 2007

The first definition is of a tree with values stored only at the leaves, i.e.

```
data Tree a = Leaf a | Branch (Tree a) (Tree a)
```

It then gives a mapTree function (equivalent to map on a list), which is trivial, and functions fringe, treeSize and treeHeight, which return respectively the list of the leaf values, the number of leaves, and the max depth. In the text, these are defined in the usual recursive way. An exercise is to write a Tree equivalent of the fold function for lists. Which I think I managed:

```
foldTree :: (a -> b -> b) -> (b -> b -> b) -> b -> Tree a -> b
foldTree fLeaf _ init (Leaf x) = fLeaf x init
foldTree fLeaf fBranch init (Branch x y) = fBranch x' y'
  where x' = foldTree fLeaf fBranch init x
        y' = foldTree fLeaf fBranch init y
```

The trick was in recognising that I needed two functions, one for dealing with leaves and the other for dealing with branches (because (:) is a different type from (++) and both are used in fringe). Of course, the performance implication of all those list appends makes me think that in the real world, I’d write fringe using only (:):

```
fastFringe :: Tree a -> [a]
fastFringe t = acc t []
  where acc (Leaf x) l = x : l
        acc (Branch x y) l = acc x (acc y l)
```

(It also occurs to me that fringe is more usually called flatten, in my experience.) The next exercises involved trees with values at the internal nodes (not at the leaves), i.e.

```data InternalTree a = ILeaf
| IBranch a (InternalTree a) (InternalTree a)```

This is a bit more like the way I’m used to trees working, and the exercises were easier. InternalTree versions of zipWith and zip (defined in terms of zipWith of course) were easy, as were versions of take and takeWhile. I was impressed with the “magic” of the InternalTree version of repeat:

```
repeatInternalTree :: a -> InternalTree a
repeatInternalTree a = t
  where t = IBranch a t t
```

There are two bits of magic here. First, this is an infinitely recursive definition. Haskell uses lazy evaluation, so this is designed to be used with takeTree or takeTreeWhile. The second piece of magic: where is the base case constructor (ILeaf) specified? It isn’t! Yet

```
takeTree 2 (repeatInternalTree 5)
```

returns

```
IBranch 5 (IBranch 5 ILeaf ILeaf) (IBranch 5 ILeaf ILeaf)
```

as hoped for, if not quite expected. The only things I’m not really grokking right now are the InternalTree versions of foldr and possibly foldl (and possibly a third type of fold). I’ve got a feeling there is some higher-order function extractable that controls pre-, post-, or in-order node traversal on InternalTree, but I’m unsure of the structural differences of foldl and foldr when applied to trees. I am also puzzling a bit over the semantics of zipWith and zip when applied to trees with only leaf-node values.
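The resolution of the second bit of magic is that takeTree itself supplies the ILeaf when the depth runs out. I don’t have the book’s definition to hand, but something like this sketch behaves as the example shows:

```
data InternalTree a = ILeaf
                    | IBranch a (InternalTree a) (InternalTree a)
                    deriving (Show, Eq)

repeatInternalTree :: a -> InternalTree a
repeatInternalTree a = t
  where t = IBranch a t t

-- cut a (possibly infinite) tree off at depth n,
-- plugging in ILeaf at the cut points: this is where
-- the base case constructor comes from
takeTree :: Int -> InternalTree a -> InternalTree a
takeTree 0 _ = ILeaf
takeTree _ ILeaf = ILeaf
takeTree n (IBranch a l r) = IBranch a (takeTree (n-1) l) (takeTree (n-1) r)

-- takeTree 2 (repeatInternalTree 5)
--   => IBranch 5 (IBranch 5 ILeaf ILeaf) (IBranch 5 ILeaf ILeaf)
```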

PS. Gleep = glorp.

Tuesday, July 10th, 2007

Which pattern is more efficient?

Pattern a:

```
circleSumA :: [Int] -> Int
circleSumA (i1 : is) = aux i1 is
  where aux i [] = i + i1
        aux i is = i + aux (head is) (tail is)
```

Pattern b:

```circleSumB :: [Int] -> Int
circleSumB is = sum (is ++ [head is])```

Is the Haskell compiler/interpreter able to make pattern b as efficient as pattern a? What about in the more general case where the operation works on more than 2 list elements? e.g. folding a dot product over a list of points (obviously taking three points at a time to compute the dot product of the two vectors formed). In this case, pattern b would have to perform 2 list appends (`is ++ [head is] ++ [head (tail is)]`). Would these get optimised?

Tuesday, July 10th, 2007

Having read a bit of the next chapter and discovered the zipWith function, I now have a higher-order non-recursive solution to the makeChange problem, viz.

```
makeChange1 :: Int -> [Int] -> [Int]
makeChange1 amt vs = zipWith div modvs vs
  where modvs = scanl mod amt vs
```
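Tracing the intermediate scanl makes it clear why this works: scanl mod threads the amount still owed past each denomination, and zipWith div (which stops at the end of the shorter list, discarding the final remainder) turns each running amount into a coin count:

```
makeChange1 :: Int -> [Int] -> [Int]
makeChange1 amt vs = zipWith div modvs vs
  where modvs = scanl mod amt vs

-- scanl mod 99 [5,1]         => [99,4,0]  (amount still owed at each step)
-- zipWith div [99,4,0] [5,1] => [19,4]    (coins of each denomination)
-- makeChange1 99 [5,1]       => [19,4]
```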

Sunday, July 8th, 2007

Define a function makeChange that, given a nonnegative integer amount and a list of coin denominations, returns a list of the number of coins of each denomination required to make up the amount. For example:

```
makeChange 99 [5,1] => [19,4]
```

i.e. 19 nickels and 4 pennies equals 99 cents. To make life easier, you may assume that the list of denominations is in descending order, and that the penny is always included.

I have the following, which works:

```
makeChange :: Int -> [Int] -> [Int]
makeChange _ [] = []
makeChange amt (v:vs) = n : makeChange newamt vs
  where n = amt `div` v
        newamt = amt - n * v
```

However, the problem is at the end of a chapter about using higher order functions like map, foldl, foldr, zip, etc. I have a feeling there’s a way to do this problem that way too. But it’s late and Mrs Elbeno is calling me to bed. AOAP.

Monday, May 21st, 2007

Well, this weekend I dived in with Haskell and started to make my way through The Haskell School of Expression. So I’ve been exercising neural pathways that haven’t seen a treadmill in a while, C++ programmer that I am. Overall, it’s been going well, and it’s giving me the feeling that Haskell is a very nice language. Also, Haskell mode for emacs is just fine. However, there are a few areas where Haskell seems rough around the edges.

First, the Haskell Graphics Library doesn’t compile for me – no x86_64 target exists! So I had to switch from GHC to Hugs, which had it built in. I still harbour hopes of a native-compiled HGL, but when I become an experienced Haskell programmer, I foresee a future of library and FFI pain.

Second, although I’m generally not too bothered with the Haskell way of indentation being vital to syntax, I have hit a problem that seems to be very common (even bothering experienced Haskellers). And to boot, Hugs’ error message is obscure (“unexpected ;”? I don’t even have any ;’s!).

Anyway, I’m up to the stage of drawing snowflake fractals and Sierpinski triangles/gaskets.

### A Functional Weekend

Tuesday, January 16th, 2007

For some time now I’ve been aware of the fact that 12 years’ professional C and C++ programming has moulded my programming thinking habits. This has been brought home particularly in the last 3 years when I’ve been doing quite advanced C++ and therefore reaching the limits of compilers and indeed the limits of C++ itself. So with that realisation, I resolved some time ago to learn more languages and broaden my current programming horizons.

I am fortunate that I learned ML before I learned C (both at university), although of course like many Brits of my generation I first learned BASIC on a BBC micro (and in fact a little 6502 assembler!). Perhaps that is why I realise the limitations of my C++ thinking in the first place. Anyway, over the last year or so I’ve been studying a bit of Lisp, and lately I’ve been studying Haskell.

Haskell reminded me of ML, or to be more accurate, reminded me of my ML course back in the day. Particularly the way it specifies function types, and although my tutorial hasn’t covered it yet, I know that curried functions are only a page or two away. Anyway, thinking back to my ML course, I remembered it really coming alive near the end when one of the exercises we had was in doing lazy list evaluation.

You are of course familiar with the Sieve of Eratosthenes. I remembered my ML exercise as using this method: i.e. lazily evaluating an infinite list of integers, then filtering out the multiples of successive list head elements. This seemed to me like a good test application for a functional language – more interesting than a factorial function (the “Hello, World!” of functional languages), and of appropriate complexity density to get a feeling for the power of the language.

So I spent some time digging in the basement and uncovered all my university lecture notes. A veritable treasure trove! But more of that another time. I found the original ML exercise (number 9 in a series of 10), which says:

Here is a definition of integer streams.

```
datatype stream = Item of int * (unit -> stream);
fun cons (x, xf) = Item (x, xf);
fun head (Item (i, _)) = i;
fun tail (Item (i, xf)) = xf ();
fun makeints n = cons (n, fn () => makeints (n + 1));
```

You can see where this is going: a stream is a tuple of (int, function to produce the rest of the stream) and repeatedly tailing the stream is repeated application of the function. Thus you get a lazily-evaluated infinite stream of integers. Then we can define a function that maps a function onto every item in the stream:

```
fun maps f xs = cons (f (head xs), fn () => maps f (tail xs));
```

From this basis it is fairly easy to write a function that filters out all multiples of n from the stream and hence to proceed to an implementation of the Sieve of Eratosthenes.

Having refreshed my memory, I set about solving the same problem in Haskell. I needed to read a few pages ahead in the tutorial, but Haskell has two great things working for it for this problem: built-in lazy evaluation, and list comprehensions. After a bit of reading, I discovered the following:

```
numsFrom n = n : numsFrom (n+1)
```

Which produces our infinite list of integers starting from n. Because it cannot of course be completely evaluated, it’s useful to use the take function to get a given number of elements from the front:

```
take 10 (numsFrom 1)
```

gives

```
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

From here it was a hop and skip (by way of a list comprehension) straight to the sieve:

```
primes n = n : [x | x <- primes (n+1), x `mod` n /= 0]
```

(n is the list head, the rest of the list is made from n+1 etc with multiples of n filtered out.) And that’s it!

```
take 10 (primes 2)
```

gives

```
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Haskell passes the test. The Sieve of Eratosthenes in one line.