Sunday, July 08, 2018

Inside the paper: Build Systems a la Carte

Summary: How we went about writing a build systems comparison paper, how we learnt what we learnt, and why the end result surprised us. A glimpse inside writing a paper.

The final version of the Build Systems a la Carte paper has just been sent to ICFP 2018 - see an overview from one of my co-authors. The paper is a collaboration between Andrey Mokhov at Newcastle University, Simon Peyton Jones at Microsoft Research and me (working in industry). Below is the tale of how we discovered what we discovered, hopefully slightly demystifying the research process. While the paper is a collaboration, this blog post is my view and mine alone.

The paper started with the idea of comparing and contrasting build systems. There were two motivating factors, I wanted a blueprint for writing Cloud Shake, while Simon wanted to compare build systems (Andrey wanted a bit of both). The thing we all agreed on was that Haskell is a great executable specification for describing build systems, and that refactoring is a powerful tool. Armed with that approach, we went off to try and implement various build systems, chosen based on our familiarity with them and the perceived differences between them. You can see our progress in the git repo, starting 20th Feb (less than a month before the ICFP deadline!).

All of us came to the table with some predefined notions of what should and shouldn't be in the model. Andrey brought the Store abstraction. I brought the ideas of monadic vs applicative dependencies. We iterated and quickly made our first "breakthrough", a task abstraction which nicely modelled user rules, including the difference between monadic and applicative dependencies:

type Tasks c k v = forall f . c f => (k -> f v) -> (k -> f v)

Essentially, given a way to build dependencies, I can give you a way to build any key. By parameterising the Tasks by c (of type Constraint) we can produce Tasks Monad and Tasks Applicative, nicely capturing the differences in power. It was only later when preparing an artefact for evaluation that we noticed Docker is a Tasks Functor build system. We made a number of iterations on this Tasks type (adding and removing newtype, how to represent input files, where to put the second k etc) - but fundamentally had a model to work with.
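To make the difference in power concrete, here is a sketch along the lines of the paper's spreadsheet examples: sprsh1 needs only Applicative because its dependencies are statically known, while sprsh2 chooses which cell to depend on based on the value of another cell, so it needs Monad. The naive build runner and the store values are my own illustration, not from the paper:

```haskell
{-# LANGUAGE RankNTypes, ConstraintKinds #-}
import Data.Functor.Identity

type Tasks c k v = forall f . c f => (k -> f v) -> (k -> f v)

-- Applicative tasks: B1 = A1 + A2, B2 = B1 * 2.
-- The dependencies are fixed before any values are computed.
sprsh1 :: Tasks Applicative String Integer
sprsh1 fetch "B1" = (+) <$> fetch "A1" <*> fetch "A2"
sprsh1 fetch "B2" = (* 2) <$> fetch "B1"
sprsh1 fetch key  = fetch key

-- Monadic tasks: which cell B1 depends on is computed from the
-- value of C1, so Applicative is not enough.
sprsh2 :: Tasks Monad String Integer
sprsh2 fetch "B1" = do
    c1 <- fetch "C1"
    if c1 == 1 then fetch "A1" else fetch "A2"
sprsh2 fetch key = fetch key

-- A deliberately naive "build": recompute everything on demand,
-- reading the listed input keys straight from a store function.
-- (Unknown non-input keys would loop - this is just a demo.)
build :: [String] -> (String -> Integer)
      -> Tasks Monad String Integer -> String -> Integer
build inputs store tasks = runIdentity . fetch
  where fetch k | k `elem` inputs = Identity (store k)
                | otherwise       = tasks fetch k
```

Note that sprsh1, despite being Tasks Applicative, can be passed where Tasks Monad is expected - every Monad is an Applicative, so monadic build systems can run applicative tasks, but not vice versa.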

The next step was to start writing build systems. We picked Make, Shake, Excel, Ninja and Bazel as our first set to get working. Implementing these systems effectively became a four-dimensional optimisation problem:

  • Closeness of the model to the underlying system it was based on.
  • Simplicity of code for each individual system.
  • Reuse of code across build systems.
  • Reuse of abstractions across build systems.

The first versions were separate monoliths of code, reusing a handful of helper functions, with a fairly arbitrary set of what to model and what to exclude. Since we had executable specifications, with tests, we came up with possible improvements, tried them, and decided whether to keep them or discard them. We iterated, as individuals, as pairs (all three possible pairs) and as a group - making improvements along various dimensions. For a good few weeks Andrey and myself had competing directories in the repo, with different underlying ideas but stealing refinements from each other. I think there were about 25 separate "breakthroughs" to move the code to where we ended up. As the code became clearer, we started to understand the commonalities behind build systems, which helped the code become clearer - a virtuous cycle. Simon's role was to say "We have to make this simpler" or "I don’t understand that". Some of the time it couldn't be simpler; and we had to make sure the explanations were really understandable. But most of the time we really did make it simpler and the exposition is much better as a result.

The most academically interesting breakthrough was to realise that build systems can be split into something that decides what to rebuild, and something that orders the rebuilding, putting build systems in a two-dimensional table. While the result feels natural (if you carefully structure your description of a build system it even falls out grammatically!), it was entirely non-obvious beforehand, and emerged naturally by following the abstraction opportunities presented by the code.

By the end we were able to faithfully model details of Make/Excel/Shake that initially eluded us, with each build system being just two functions, where all functions could be combined to produce working build systems. As an example, Shake is:

shake = suspending vtRebuilder

The suspending is also used by Nix, and the vtRebuilder is also used by Ninja. Shake is just putting two existing things together, so we have great reuse of code and abstractions between build systems. In some places the code is more complex than I'd like, but you can't have everything (or maybe you can - we may well improve the models further).
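As a sketch of how the scheduler/rebuilder split composes, here is a heavily simplified pure model. The types are hypothetical - the paper's real ones carry constraints and persistent build information such as verifying traces - and perpetualRebuilder (which always rebuilds) stands in for vtRebuilder to keep the model free of state:

```haskell
-- Simplified, stateless versions of the paper's abstractions:
type Task k v      = (k -> v) -> v            -- build one key, given its dependencies
type Tasks k v     = k -> Maybe (Task k v)    -- Nothing marks an input key
type Rebuilder k v = k -> Task k v -> Task k v
type Scheduler k v = Rebuilder k v -> Tasks k v -> (k -> v) -> (k -> v)

-- A rebuilder that always rebuilds, ignoring any saved information.
perpetualRebuilder :: Rebuilder k v
perpetualRebuilder _key task = task

-- A suspending scheduler: compute each key on demand, recursively
-- suspending to fetch its dependencies first.
suspending :: Scheduler k v
suspending rebuilder tasks store = fetch
  where fetch key = case tasks key of
            Nothing   -> store key              -- input: read from the store
            Just task -> rebuilder key task fetch

-- Every build system is then scheduler-applied-to-rebuilder:
busyShake :: Tasks k v -> (k -> v) -> (k -> v)
busyShake = suspending perpetualRebuilder
```

Swapping perpetualRebuilder for a rebuilder that consults verifying traces is what turns this busy recomputation into a minimal, Shake-like rebuild.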

After submitting the paper to ICFP 2018, we also put a draft online, which led to a deluge of comments from experts in many of the systems we talked about - the acknowledgements in the paper start to show how much excellent feedback we got. The most interesting feedback was that we'd misclassified Bazel - it's actually more like Excel than we realised. What was particularly nice is that our framework was able to describe what we thought Bazel was in enough detail that people involved with Bazel could correct us - a clear sign we were modelling interesting details.

Now that the paper is done, I hope the abstractions can start proving their value. In the context of Shake, I hope it can serve as a design document. Ever since the earliest days of Shake, I've used a two-timestamp approach to recording what happened with a key, as described in §2.3.1 of the original paper. Unfortunately, whenever I've tried to explain this trick to someone in person, their eyes glaze over. Fortunately, given a clear model, I now know that what I've really implemented is an optimisation over vtRebuilder. Furthermore, I now have the framework to talk about the benefits and costs of the optimisation, making it much easier to understand and justify.

My goal before writing the paper was to turn Shake into Cloud Shake, and to do so in a principled way. Now the paper is finished I can resume that quest, with a fairly good understanding of how to do it. One thing the paper sensibly abstracts over is all the technical details (parallelism, network traffic etc) - armed with the right abstractions those technical details are what I'll be focusing on for Cloud Shake.

Thinking more broadly, the things this paper taught me (or that I already thought but it confirmed):

  • Follow the Simon Peyton Jones how to write a paper guidelines, of which number 1 is most important. "Writing papers is a primary mechanism for doing research (not just for reporting it)".
  • Innovation isn't thinking in isolation, it's thinking of a process that gives you the right problems, the right tools, and the right direction. With those things in place, the chances of ending up somewhere interesting increase dramatically.
  • Deadlines spur writing papers. It feels like we should be better, and not need the external deadlines, but it seems to help in practice.
  • Simplicity is valuable in its own right. The main research contribution of this paper sounds obvious a minute after explaining it, which makes me very happy.
  • Co-authors matter. As a set of co-authors we agree on some things (e.g. Haskell), but disagree strongly on others (e.g. two parallel branches of development, 6 rejected pull requests). I am sure the paper would have been significantly worse with any one of us removed (these are my conclusions, I don't guarantee my co-authors agree!).
  • Feedback is super valuable, whether it comes from peer reviewers or Reddit comments. The feedback improved the paper, and also motivated us.

Hopefully this post lets people in on the secret that writing academic papers isn't magic, that papers don't emerge fully formed, and that it involves a lot of work and refinement.

Sunday, May 13, 2018

The end of Bake

Summary: I no longer develop Bake, my continuous integration tool.

In 2014 I started a new project of a continuous integration system, named Bake. The selling point of Bake was that it provided always-correct master development, but didn't require running all tests on all patches, allowing the development to scale much faster than the test resources. Over time I refined the model and figured out exactly how to optimise throughput. The experiments were promising, but I'm no longer working on Bake, because:

  • I wrote Bake with an eye to a particular set of challenges for my employer. I then changed jobs, and the challenges have changed. I don't have a strong need for Bake, and I don't think a project like Bake can be successful without dogfooding.
  • I have plenty of other projects that are fighting for my attention. While I think Bake is cool, it's certainly at the early stages, I think it needs a lot of work to have any meaningful impact.
  • Bake has a clever algorithm (I <3 algorithms!), and now needs a ton of evangelism, polish and web UI work (I was a webdev in a previous life, but it's not something I want to spend my spare time on).
  • The problem space that Bake targets is a bit weird. Open-source projects with a small contributor base (fewer than 5 people full time) are well served by Travis/Appveyor etc, which I happily use for all my open-source projects. Big tech companies (e.g. Google and Facebook) can afford to throw hardware at the problem and have custom solutions. That leaves Bake with the niche of 5-500 commercial programmer teams, which isn't such a natural fit for open-source software.

What I didn't find is any design flaw in the approach. I still think the ideas behind Bake are valuable, so would love to see them go somewhere, but leave it to others. The code remains on GitHub with a stack.yaml and shell.nix that should (in theory) let people play with it indefinitely, but I won't be accepting any patches - other than to point at forks.

Monday, April 30, 2018

Don't Fear the Monad - T-shirts

Summary: I made some t-shirts.

For Christmas my brother-in-law got me the classic "Don't Fear the Monads" t-shirt, which comes complete with the Monad functions printed on it. Of course, while one adult geeky t-shirt is awesome, a child geeky t-shirt is even better, and a whole family full of them is best. I made some SVG designs for "Don't Fear the Functor" (for my son) and "Don't Fear the Applicative" (for my wife), available here (I followed the style of the Monad one, even if I disagree with some of the technical choices in the original). You can turn these into real t-shirts with the Cafe Press design your own feature. The result is pictured below.

Wednesday, April 18, 2018

Ghcid with colors

Summary: I've just released ghcid-0.7, which provides a much better user experience, including colors.

Ghcid is now over three years old, with 28 versions, but I'm particularly pleased with the improvements in the latest version. The focus has been on better defaults and a more polished user experience, some things you might spot:

Color output: GHC 8.2 added colored output, with important information highlighted. Previously Ghcid would explicitly disable that color. Now Ghcid embraces that color, turning the flag on for GHC versions that support it and ensuring any output munging is aware of the colors. It also enables colors in Hspec and colors the "All good" message green.

Color defaults: While enabling more color, it also provides --color=never to disable colors, and auto-detects when colors are likely to work well.

Error spans: Ghcid has always recommended that people turn on the -ferror-spans flag, but now it does it for you. For people using the VS Code addin that will provide a smoother experience out of the box.

Parallel compilation: Ghcid now passes -j to ghci, which I find speeds up compilation by about one third. Not a huge speedup, but still useful.

Tracking files: Ghcid now tracks both the .ghcid file (which you can use to specify the command line you want to use with ghcid) and .ghci file (which configures ghci). If either change it will cause Ghcid to restart, picking up the changes.

Absolute paths: The internals of Ghcid have been rewritten to always use absolute file paths, rather than relative paths. If your ghci wrapper changes directory (as I believe multi-project cabal new-repl does) Ghcid will continue to work.

Enabling IDE improvements: I have improved the integration features for editor plugins - you can now output a .json file with the parsed messages, including start/end position, and escape codes. There is a new --setup flag for sending initial messages to the underlying ghci. I haven't modified any of the IDE plugins to take advantage of these new features, but that's phase 2.

Ctrl-C and cleaning up processes: Ghcid is a continual fight to deal properly with Ctrl-C and close down all appropriate processes at the right time. In this release I've fought the battle in a few more corners, seemingly with some level of success.

Crazy extensions: GHC 8.4 is now able to deal with both RebindableSyntax and OverloadedStrings and still start ghci. I've modified Ghcid so it can also deal with this composition.

Together these changes make for a much more pleasant user experience.

Saturday, March 24, 2018

Adding Package Lower-bounds

Summary: I hacked Cabal so I could spot where I was missing package lower bounds. The approach has lots of limitations, but I did find one missing lower bound in HLint.

Cabal lets you constrain your dependencies with both upper bounds and lower bounds (for when you are using a feature not available in older versions). While there has been plenty of debate and focus on upper bounds, it feels like lower bounds have been somewhat neglected. As an experiment I decided to modify cabal to prefer older versions of packages, then tried to compile a few of my packages. The approach seems sound, but would require a fair bit of work to be generally usable.

Hacking Cabal

By default Cabal prefers to choose packages that are already installed and have the highest bound possible. The code to control that is in cabal-install/Distribution/Solver/Modular/Preference.hs and reads:

-- Prefer packages with higher version numbers over packages with
-- lower version numbers.
latest :: [Ver] -> POption -> Weight
latest sortedVersions opt =
    let l = length sortedVersions
        index = fromMaybe l $ L.findIndex (<= version opt) sortedVersions
    in  fromIntegral index / fromIntegral l

To change that to prefer lower versions I simply replaced the final expression with fromIntegral (l - index) / fromIntegral l. I also removed the section about giving preferences to currently installed versions, since I wanted the lowest bound to be chosen regardless.
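The flip can be seen in a toy mirror of that function, with versions as plain Ints sorted newest-first (lower weight = more preferred). This is illustrative only - the real code works over Cabal's Ver and POption types:

```haskell
import Data.List (findIndex)
import Data.Maybe (fromMaybe)

-- Toy weights mirroring Cabal's preference function.
weightLatest, weightOldest :: [Int] -> Int -> Double
weightLatest sorted v =
    let l = length sorted
        i = fromMaybe l $ findIndex (<= v) sorted
    in  fromIntegral i / fromIntegral l           -- the original: newest wins
weightOldest sorted v =
    let l = length sorted
        i = fromMaybe l $ findIndex (<= v) sorted
    in  fromIntegral (l - i) / fromIntegral l     -- the kabal flip: oldest wins
```

With versions [3,2,1], weightLatest gives version 3 the best (lowest) weight, while weightOldest gives version 1 the best weight - exactly the inversion the experiment needs.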

So that I didn't mess up my standard copy of Cabal, I changed the .cabal file to call the executable kabal.

Testing Kabal on Extra

To test the approach, I used my extra library, and ran kabal new-build all. I used new-build to avoid polluting my global package database with these franken-packages, and all to build all targets. That failed with:

Failed to build Win32-
In file included from dist\build\Graphics\Win32\Window_hsc_make.c:1:0:
Window.hsc: In function 'main':
Window.hsc:189:16: error: 'GWL_USERDATA' undeclared (first use in this function)
C:\ghc\ghc-8.2.2/lib/template-hsc.h:38:10: note: in definition of macro 'hsc_const'
     if ((x) < 0)                                      \

So it seems that Win32- claims to work with GHC 8.2, but probably doesn't (unfortunately it's not on the Linux-based Hackage Matrix). We can work around that problem by constraining Win32 to the version that is already installed with --constraint=Win32== With that, we can successfully build extra. For bonus points we can also use --enable-test, checking the test suite has correct lower bounds, which also works.

Testing Kabal on Shake

For Shake we start with:

kabal new-build all --constraint=Win32== --enable-test

That worked perfectly - either I had sufficient lower bounds, or this approach doesn't do what I hoped...

Testing Kabal on HLint

Trying HLint with our standard recipe we get:

Failed to build ansi-terminal-0.6.2.
[3 of 6] Compiling System.Console.ANSI.Windows.Foreign ( System\Console\ANSI\Windows\Foreign.hs, dist\build\System\Console\ANSI\Windows\Foreign.o )
System\Console\ANSI\Windows\Foreign.hs:90:20: error:
    Ambiguous occurrence `SHORT'
    It could refer to either `System.Win32.Types.SHORT',
                             imported from `System.Win32.Types' at System\Console\ANSI\Windows\Foreign.hs:41:1-25
                          or `System.Console.ANSI.Windows.Foreign.SHORT',
                             defined at System\Console\ANSI\Windows\Foreign.hs:59:1

So it seems ansi-terminal-0.6.2 and Win32- don't cooperate. Let's fix that with another constraint, ansi-terminal==0.7. Now we get:

Preprocessing library for cmdargs-0.10.2..
Building library for cmdargs-0.10.2..
[ 1 of 25] Compiling Data.Generics.Any ( Data\Generics\Any.hs, dist\build\Data\Generics\Any.o )

Data\Generics\Any.hs:65:17: error:
    Variable not in scope: tyConString :: TyCon -> String
65 | typeShellFull = tyConString . typeRepTyCon . typeOf
   |                 ^^^^^^^^^^^

Oh dear, now it's the fault of cmdargs, which is one of my packages! Checking the Hackage Matrix for cmdargs we see that 0.10.2 to 0.10.9 don't compile with GHC 8.2. We solve that by going to the maintainers corner and editing the .cabal file of released versions to produce a revision with better bounds - replacing base == 4.* with base >= 4 && < 4.10. Finding the translation from GHC version 8.2 to base version 4.10 involved consulting the magic page of mappings.

After waiting 15 minutes for the package tarballs to update, then doing cabal update, I got to a real error in HLint:

src\Hint\Duplicate.hs:44:37: error:
    * Could not deduce (Default (String, String, SrcSpan))
        arising from a use of `duplicateOrdered'

Looking at the data-default library I see that the Default instance for triples was only introduced in version 0.3. Adding the bounds data-default >= 0.3 to the hlint.cabal dependencies fixes the issue, allowing HLint to compile cleanly.

Next, looking at the commit log, I noticed that I'd recently added a lower bound on the yaml package. I wondered if I removed that bound then it could be detected?

Resolving dependencies...
    Dependency on unbuildable library from yaml
    In the stanza 'library'
    In the inplace package 'hlint-2.1'

Alas not - Cabal says the library is unbuildable - I don't really know what that means.

Testing Kabal on Ghcid

Trying Ghcid with our standard recipe and accumulated constraints we get:

Preprocessing library for Win32-notify-0.2..
Building library for Win32-notify-0.2..
[1 of 2] Compiling System.Win32.FileNotify ( dist\build\System\Win32\FileNotify.hs, dist\build\System\Win32\FileNotify.o )

src\System\Win32\FileNotify.hsc:29:9: error:
    Ambiguous occurrence `fILE_LIST_DIRECTORY'

So it seems Win32-notify-0.2 and Win32- don't cooperate. With that discovery I had used up all the time I was willing to spend and stopped the experiment.


Conclusions

By modifying Cabal to select for older packages I was able to find and fix a lower bound. However, because not all my dependencies are lower-bound safe, it became a somewhat manual process. To be practically useful the principle of correct lower-bounds needs adopting widely. Some notes:

  • The Hackage Matrix provides a large amount of actionable intelligence - a great experience. However, fixing the issues it discovers (actually adding the bounds) is frustratingly manual, requiring lots of clicks and edits in a textbox.
  • Using cabal new-build caused each directory to gain a .ghc.environment.x86_64-mingw32-8.2.2 file, which silently reconfigured ghc and ghci in those directories so they stopped working as I expected. Not a pleasant experience!
  • I ran my tests on Windows, and most of the dependencies with incorrect bounds were Windows-specific issues. Maybe Linux would have had fewer lower-bound issues?
  • I used a pretty recent GHC, which excludes a lot of older versions of packages because they don't work on newer GHC versions - picking the oldest-supported GHC would probably have found more bounds.
  • Are lower bounds actually useful? If you ignore which packages are globally installed (which both stack and cabal new-build effectively do) then the only reason to be constrained to an older version is by upper bounds - in which case solving excessive upper-bounds is likely to give more actual benefit.
  • I'm not the first person to think of constraining cabal to use older versions - e.g. Cabal bug 2876 from 2015.
  • The Trustee tool can infer minimum bounds, but it's Linux only so doesn't work for me. It is probably better for people who want to do their own bound checking.
  • Compiling kabal required a bit of trial and error; I eventually settled on compiling each dependent Cabal package in turn into the global package database, which wasn't ideal, but did work.

Friday, March 09, 2018

Safe Library with better Stack Traces

Summary: The safe library now provides error messages with short and informative stack traces on errors.

The Haskell base library contains a number of functions that crash under certain circumstances, e.g. tail []. The safe library attempts to tame these functions, providing tailDef (which uses a default value on []) and tailNote (which gives you a chance to provide some extra information if there is a failure). Since GHC 7.10 there has been support for adding stack traces to exceptions, where if any function with an appropriate annotation calls error it will include some stack trace.

Just over a year ago the safe library introduced a Partial constraint in Safe.Partial to declare that a function is partial, which also provides the annotations for error. The signature of tailNote became:

tailNote :: Partial => String -> [a] -> [a]

This signature has two benefits - it is visible in the documentation and it provides the stack trace if something goes wrong. If you typed tailNote "uhoh" [] in ghci you got:

Prelude Safe> tailNote "uhoh" []
*** Exception: Safe.tailNote [], uhoh
CallStack (from HasCallStack):
  error, called at .\Safe\Util.hs:23:44 in safe-0.3.16-9YcgrXj17kg79mfNx7tCoF:Safe.Util
  fromNoteModule, called at Safe.hs:65:12 in safe-0.3.16-9YcgrXj17kg79mfNx7tCoF:Safe
  fromNote, called at Safe.hs:108:17 in safe-0.3.16-9YcgrXj17kg79mfNx7tCoF:Safe
  tailNote, called at <interactive>:5:1 in interactive:Ghci1

Great - we can see the final line says we were on line 5 of the interactive and ran tailNote. Useful, but with the new version of safe it's even better:

*Main> tailNote "uhoh" []
*** Exception: Safe.tailNote [], uhoh
CallStack (from HasCallStack):
  tailNote, called at <interactive>:1:1 in interactive:Ghci1

We still get the interesting final line, but all the internal details of safe, e.g. the fact that tailNote calls fromNote have disappeared.

To get the stack traces just add Partial to any function you believe to be partial - it's easy. If you are happy to stick with GHC 8.0 and above you can use HasCallStack from GHC.Stack without depending on safe. I am slowly adding annotations to my packages, for example the extra library has Partial annotations.

Suppressing the internal functions was a lot more work. I think it's worth it for a package that is all about nice handling of errors, but I probably won't bother in any of my other packages. The change to the code was going from:

tailNote note = fromNote note "tailNote []" . tailMay

to:

tailNote note x = withFrozenCallStack $ fromNote note "tailNote []" $ tailMay x

The main change is we've added withFrozenCallStack, which freezes the call stack at this point and stops new entries from being accumulated inside the bowels of our library. The other change was to eta-expand the definition by adding x on both sides, so that withFrozenCallStack gets to block the actual error, and not merely a function that later produces an error.
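A self-contained miniature of the final pattern is below - fromNote and tailMay here are stand-ins for illustration, not safe's real definitions. The point is that the public entry point freezes the stack, so callers see tailNote in their trace but not the helpers beneath it:

```haskell
import GHC.Stack

-- Internal helper: turns a Maybe failure into an error with a note.
fromNote :: HasCallStack => String -> String -> Maybe a -> a
fromNote note func Nothing  = error (func ++ ", " ++ note)
fromNote _    _    (Just x) = x

-- Public entry point: eta-expanded so withFrozenCallStack wraps the
-- actual error, freezing the stack before the internals run.
tailNote :: HasCallStack => String -> [a] -> [a]
tailNote note x = withFrozenCallStack $ fromNote note "tailNote []" $ tailMay x
  where tailMay []     = Nothing
        tailMay (_:xs) = Just xs
```

Calling tailNote "uhoh" [] from ghci then reports only the tailNote call site, as in the trace shown above.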

Partial constraints are very powerful, and I hope in time they are adopted universally throughout the Haskell ecosystem. One day I hope the Prelude.tail function will also have a Partial constraint.

Tuesday, February 27, 2018

Switching to HTTPS

Summary: All my domains are now on HTTPS. In this post I describe how I did it.

It's quite clear everyone should be moving their domains to HTTPS, or face the consequences. I have recently converted three domains to HTTPS - two static sites and one Haskell server. Converting was mostly a case of finding the right how-to guide and following it, so in this post I'll link to the "right" guides.

Static Websites

I have two static domains, both of which follow a similar pattern, so I'll focus on the Shake website. The steps are:

  • Get a domain name: I bought a domain name from Soho UK, who later sold up and became Cloud Mega. I've been using them for websites for a very long time, and never had any complaints.
  • Write some content: My static websites are based on source material that is then converted via custom scripts to generate the final website. For Shake, the source is Markdown files and the converter is a Haskell program. In the case of Shake, I use the markdown package with custom tricks like hyperlinking all identifiers (see these code samples). After running the program on the Markdown files I have HTML/CSS that can be served directly.
  • Serve the content: I host and serve the content using GitHub Pages, which lets you either serve content off the branch gh-pages or a separate GitHub repo - I use the latter option. I then use the custom domain name feature so that requests to my domain are served by GitHub Pages over HTTP.
  • Serve with HTTPS: The previous steps get us an HTTP website, but last weekend I did the work to get to HTTPS. I followed these instructions, which use Cloudflare as an intermediary - serving over HTTPS and providing a cache. I have configured things to always redirect away from the www and always use HTTPS. The only minor hiccup was the HTTPS certificate for Shake took about 3 days to be issued (it should take less than 24 hours, my other domain took 15 minutes) - but it went away on its own.
  • Collect email: The final step was to get email to the domains working - in general I'd prefer people email me directly at Gmail, but it's always good for email to work. I used these instructions, which use Mailgun to collect and forward emails. The only difficulty is that sending Gmail emails to yourself via a custom domain leaves the email in the Sent mail with no indication it was delivered - I had to test using a different email account.

With that, we have a static website served over HTTPS. It's quite remarkable that such a pipeline can be built using free services.

Dynamic Website

I maintain the server which provides a search engine for Haskell libraries. This website is dynamic, executing Haskell code to return suitable results for each search.

  • Write the program: I wrote Hoogle over the last 14 years, and when run as hoogle server it spawns a web server which can serve requests, using the Warp package to do the actual serving.
  • Configure the server: The server is kindly provided by the Haskell Infrastructure Committee, where I have a VM which runs Hoogle. My setup instructions for that server are in the Hoogle repo. Of note, I forward port 80 to 8080, allowing me to serve HTTP pages with a non-root program.
  • Serve static content over CDN: The static content of Hoogle (images, CSS files) could be served up by the normal server, but it's just one small server in one single location, so I make things go faster by sending most static requests to Raw GitHack, which itself is just a wrapper around Cloudflare.
  • Obtain certificates: To serve over HTTPS you need certificates that prove you own the domain. I got the certificates from Let's Encrypt, using the Certbot client. Since I run a custom server I opted for the Standalone challenge (which spawns a web server on your box), over HTTP, serving on port 8080 to account for the redirection I had put in place. Unfortunately, generating the certificates required taking Hoogle down briefly.
  • Serving over HTTPS: Fortunately a PR was submitted to Hoogle some time ago allowing users to pass a certificate at startup and serve Hoogle over HTTPS. I passed the certificates obtained in the previous step, and spawned Hoogle on 8443 (which port 443 redirected to), giving me an HTTPS server.
  • Redirecting HTTP traffic: For the static websites redirecting HTTP traffic to HTTPS was as simple as checking a box on Cloudflare. For my own server I needed to run a server on port 8080 that did the redirect. I found the Haskell program rdr2tls which is small, simple, and works very well.
  • Renewing the certificate: The Let's Encrypt certificate expires every 90 days, so will need renewing. I know the approximate steps, but currently am intending to manually renew the certificate.

Switching Hoogle to HTTPS was fairly painless.

Sunday, February 18, 2018

Atomic Expressions Generically

Summary: For certain hints HLint needs to determine if a Haskell expression is atomic. I wrote a generic method to generate expressions and test if they are atomic.

With HLint, if you write a statement such as:

main = print ("Hello")

You get the hint:

Sample.hs:1:14: Warning: Redundant bracket
Found:
  ("Hello")
Why not:
  "Hello"

One of the ways HLint figures out if brackets are redundant is if the expression inside the brackets is "atomic" - if you never have to bracket it in any circumstances. As an example, a literal string is atomic, but an if expression is not. The isAtom function from haskell-src-exts-util has a list of the types of expression which are atomic, but the Exp type from haskell-src-exts has 55 distinct constructors, and I don't even know what many of them do. How can we check the isAtom function is correct?
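To make "atomic" concrete, here is a toy expression type and its isAtom - illustrative only, since the real function works over haskell-src-exts's full Exp:

```haskell
-- A four-constructor stand-in for the 55-constructor Exp.
data Expr = Lit String | Var String | App Expr Expr | If Expr Expr Expr

isAtom :: Expr -> Bool
isAtom Lit{} = True    -- "Hello" never needs brackets
isAtom Var{} = True    -- foo never needs brackets
isAtom _     = False   -- f x and if .. then .. else .. sometimes do
```

The question the script answers is whether the real list of atomic constructors, chosen by hand, is actually right.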

One approach is to use human thought, and that's the approach used until now, with reasonable success. However, I've recently written a script which solves the problem more permanently, generating random expressions and checking that isAtom gives the right value. In this post I'm going to outline a few features of how that script works. There are basically three steps:

1) Generate a type-correct Exp

The first step is to generate a random Exp which follows the type definition. Fortunately the Data class in Haskell lets us generate values. We define:

mkValue :: forall a . Data a => Int -> IO a
mkValue depth
    | Just x <- cast "aA1:+" = randomElem x
    | Just x <- cast [-1 :: Int, 1] = randomElem x
    | Just x <- cast [-1 :: Integer, 1] = randomElem x
    | AlgRep cs <- dataTypeRep $ dataTypeOf (undefined :: a) =
        if depth <= 0 then throwIO LimitReached else fromConstrM (mkValue $ depth - 1) =<< randomElem cs

Here we are saying that given a depth, and a result type a, we generate a value of type a. Note that the a argument is the result, but we don't pass anything in of type a. The first three lines of the body follow the pattern:

    | Just x <- cast [list_of_element] = randomElem x

This tries to convert list_of_element to [a] by using runtime type information. If it succeeds, we pick a random element from the list. If it doesn't we continue onwards.

The final case uses dataTypeRep/dataTypeOf to get a list of the constructors of a. Note that we don't have a value of a, so we make one up using undefined :: a - but that's OK because dataTypeOf promises not to look at its argument. Given a list of constructors, we pick one at random, and then call fromConstrM - which says how to create a value of the right constructor, using some argument to fill in all the fields. We pass mkValue as that argument, which causes us to recursively build up random values.

One immediate problem is what if we are building a [Int] and the random generator often picks (:)? We'll take a very long time to finish. To solve this problem we keep a depth counter, decrement it in every recursive call, and when it runs out, throwIO an exception and give up.
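The machinery underneath - cast for the base cases, generic construction for the rest - can be seen in a deterministic miniature. mkFirst below is hypothetical, not from the script: it handles one base type via cast and otherwise always picks the first constructor, whereas mkValue picks constructors and constants at random and bounds the recursion depth:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Data.Data

-- Build a value of any Data type: Ints become 7 (via cast),
-- everything else takes its first constructor, with fields
-- filled in recursively by fromConstrB.
mkFirst :: forall a . Data a => a
mkFirst
    | Just x <- cast (7 :: Int) = x
    | otherwise = case dataTypeRep $ dataTypeOf (undefined :: a) of
        AlgRep (c:_) -> fromConstrB mkFirst c
        _            -> error "mkFirst: unsupported type"
```

So mkFirst :: (Int, Int) gives (7, 7), and mkFirst :: Maybe Int gives Nothing (the first constructor). mkValue is the same shape, but with randomElem choosing among constructors and fromConstrM threading IO so the depth counter can be checked.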

2) Generate a parsing Exp

Now we've got a valid Exp value, but just because an Exp can be represented in the AST doesn't mean it corresponds to a valid Haskell fragment. As an example, consider Var (UnQual (Ident "Test")). That's a valid value of type Exp, but if you pretty print it you get Test, and if you parse it back you'll get Con (UnQual (Ident "Test")) - variables must start with a leading lower-case letter.

To ignore invalid expressions we try pretty printing then parsing the expression, and ignore all expressions which don't roundtrip.

3) Determine if the Exp is atomic

Now we've got a valid Exp, which we know the user could have typed in as a source program, we need to figure out if isAtom is correct. To do that, given an expression x, we check whether its self-application x x roundtrips. As a positive example, foo (a variable) roundtrips, as foo foo is foo applied to itself. However, if b then t else f applied to itself gives if b then t else f if b then t else f, which parses back more like if b then t else f (if b then t else f), and so is not atomic.

Putting it all together

Now we've got a random expression, and we know if the atomicity agrees with what we were expecting, we can report any differences. That approach has identified many additional patterns to match, but it's not perfect, in particular:

  • Most values either exceed the depth limit or fail to roundtrip. For 10,000 if expressions I typically get 1 or 2 which roundtrip properly. For non-if expressions it's usually 100 or so. The advantage of random testing is that throwing more time at a problem solves such issues without thinking too hard.
  • For some expressions, e.g. ParComp, I've never managed to get a valid value created. Perhaps haskell-src-exts can't parse it, or perhaps it requires constants I don't have in my hardcoded list - none of these were particularly common examples.
  • haskell-src-exts has a bug where -1 is pretty printed as (-1), which is then parsed as a paren and -1. That fails step 2, so we don't test with negative literals. As it happens, non-negative literals are atomic, but negative literals aren't, so we need to take care.
  • There are some patterns which appear to roundtrip successfully on their own, but not when surrounded by brackets - secretly they are just very weird. For example do rec\n [] parses successfully, but with source positions that are error values, and when applied to itself pretty prints incorrectly. There's at least one haskell-src-exts bug here.
  • The program appears to leak progressively more memory. I solved that by running slices of it at a time, and didn't look too hard. I've seen cases of blowup in Data constructors when recursing, so it could be that, but it needs investigating.

As a result of all this work a future HLint will spot unnecessary brackets for 20 more types of expression, 8 more types of pattern and 7 more types of type.