Sunday, January 17, 2021

Recording video

Summary: Use OBS, Camo and Audacity.

I recently needed to record a presentation which had slides and my face combined, using a Mac. Based on suggestions from friends and searching the web, I came up with a recipe that worked reasonably well. I'm writing this down to both share that recipe, and so I can reuse the recipe next time.

Slide design: I used a slide template with a vertical rectangular hole at the bottom left, so I could overlay my video feed. It took a while to find a slide design that looked plausible, and to make sure callouts/quotes etc didn't overlap into this area.

Camera: The best camera you have is probably the one on your phone. To hook up my iPhone to my Mac I used a £20 Lightning to USB-C cable (next day shipping from Apple) along with the software Camo. I found Camo delightfully easy to use. I paid £5 per month to disable the logo and because I wanted to try out the portrait mode to blur my background - but that mode kept randomly blurring and unblurring things in the background, so I didn't use it. Camo is useful, but I record videos infrequently, and I'm not a fan of software subscriptions - £5/month is way too steep for my usage, so I'll remember to cancel. Since subscribing and cancelling each time is a hassle, I'll probably just put up with the logo next time.

Composition: To put it all together I used OBS Studio. The lack of an undo feature is a bit annoying (click carefully), but otherwise everything was pretty smooth. I put my slide deck (in Keynote) on one monitor, and then had OBS grab the slide contents from it. I didn't use presentation mode in Keynote as that takes over the whole screen, so I just used the slide editing view, with OBS cropping to the slide contents. One annoyance of slide editing view is that spelling mistakes (and variable names etc.) get red dotted underlines, so I had to go through every slide and mark the flagged spellings as ignored. Grabbing the video from Camo into OBS was very easy.

Camera angle: To get the best camera angle I used a lighting and phone stand kit (which contains an impressive array of stands, clips, extensions etc) that I'd already bought, positioning the camera right in front of me. Unfortunately, putting the camera right in front of me made it hard to see the screen, which is what I present from. It was awkward, and I had to make a real effort to keep looking into the camera - using my reflection on the shiny back of the iPhone to make sure I stayed in the right position. Even then, watching the video afterwards, you can see my eyes dart to the screen to read the next slide. There must be something better out there - or maybe it's only a problem if you're thinking about it, and most people won't notice.

Recording: For the actual recording there are two approaches - record perfectly in one take (which may take many tries, or mean accepting lower quality), or repeatedly record each section and edit it together afterwards. I decided to go for a single take, which meant that if I stumbled a few slides in, I restarted. Looking at my output directory, I see 15 real takes, with a combined runtime of about an hour, for a 20 minute talk. I did two complete run-throughs, one before I noticed that spelling mistakes were underlined in dotted red.

Conversion to MP4: OBS records files as .mkv, so I used VLC to preview them. When I was happy with the result, I converted the file to .mp4 using the OBS feature "Remux recordings".

Audio post processing: After listening to the audio, there was a clear background hum, I suspect from the fan of the laptop. I removed that using Audacity. Getting Audacity to open a .mp4 file was a bit of an uphill struggle, following this guide. I then cleaned up the audio using this guide, saved it as .wav, and reintegrated it with the video using ffmpeg and this guide. I was amazed and impressed how well Audacity was able to clean up the audio with no manual adjustment.

Sharing: I shared the resulting video via Dropbox. However, when sharing via Dropbox I noticed that the audio quality was significantly degraded in the Dropbox preview on the iOS app. Be sure to download the file to assess whether the audio quality is adequate (it was fine when downloaded).

Sunday, November 15, 2020

Data types for build system dependencies

Summary: Monadic and early cut-off? Use a sequence of sets.

In the Build Systems a la Carte paper we talk about the expressive power of various types of build systems. We deliberately simplify away parallelism and implementation concerns, but those details matter. In this post I'm going to discuss some of those details, specifically the representation of dependencies.

Applicative build systems

In an applicative build system like Make, all dependencies for a target are known before you start executing the associated action. That means the dependencies have no ordering, so are best represented as a set. However, because they can be calculated from the target, they don't usually need to be stored separately. The dependencies can also be evaluated in parallel. To build a target you evaluate the dependencies to values, then evaluate the action.
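
To make that concrete, here's a minimal sketch in Haskell - the names Rule, dependencies, action and build are mine for illustration, not Make's internals or the paper's definitions:

import qualified Data.Map as Map
import qualified Data.Set as Set

data Rule k v = Rule
    {dependencies :: Set.Set k      -- known before the action runs, unordered
    ,action :: Map.Map k v -> v     -- given the dependency values, produce a result
    }

-- Build a key by first building all its dependencies (no caching,
-- purely to show the shape of the recursion).
build :: (k -> Rule k v) -> k -> v
build rules k = action r $ Map.fromSet (build rules) $ dependencies r
    where r = rules k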

Early cut-off is when an action is skipped because none of its dependencies have changed value, even if some dependencies might have required recomputing. This optimisation can be incredibly important for build systems with generated code - potentially seconds vs hours of build time. To obtain early cut-off in applicative systems, after evaluating the dependencies you compare them to the previous results, and only run the action if there were changes.
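
Continuing the sketch above, early cut-off is just an equality test on the freshly evaluated dependency values against those stored from the previous run (where that store lives is elided here):

-- If no dependency value changed, reuse the old result without
-- running the action; otherwise run the action on the new values.
earlyCutoff :: (Eq k, Eq v) => Rule k v -> Map.Map k v -> Map.Map k v -> v -> v
earlyCutoff r oldDeps newDeps oldResult
    | oldDeps == newDeps = oldResult
    | otherwise = action r newDeps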

Monadic build systems

In monadic build systems like Shake, the representation of dependencies is more complex. If you have an alternative mechanism of detecting whether a rule is dirty (e.g. reverse dependencies) you don't need to record the dependencies at all. If the key is dirty, you start executing the action, and that will request the dependencies it needs. The action can then suspend, calculate the dependencies, and continue.
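
As a sketch (again my names, not Shake's actual internals), a monadic action can be modelled as a function handed a fetch callback, requesting keys only as it reaches them - so the dependencies can only be discovered by running the action:

-- An action is given a way to fetch dependencies and produces a result.
type Action k v a = (k -> IO v) -> IO a

-- Requesting a dependency is just calling the callback, which the
-- build system implements by building (or looking up) that key.
need1 :: k -> Action k v v
need1 k fetch = fetch k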

If you want early cut-off in a monadic build system, you need to rerun the dependencies in advance, and if they all have the same result, skip rerunning the action. Importantly, you probably want to rerun the dependencies in the same order that the action originally requested them -- otherwise you might pay a severe and unnecessary time penalty. As an example, let's consider an action:

opt <- need "is_optimised"
object <- if opt then need "foo.optimised" else need "foo.unoptimised"
link object

This rule is monadic, as whether you need the optimised or unoptimised dependency depends on the result of calculating some is_optimised property. If on the first run is_optimised is True, then we build foo.optimised. On the second run, if is_optimised is False, it is important we don't build foo.optimised as that might take a seriously long time and be entirely redundant. Therefore, it's important when checking for early cut-off we build in the order that the previous action requested the dependencies, and stop on the first difference we encounter.

(If you have unlimited resources, e.g. remote execution, it might be profitable to evaluate everything in parallel - but we're assuming that isn't the case here.)

Provided a rule performs identically between runs (i.e. is deterministic and hasn't been changed), everything that we request to check for early cut-off will still be needed for real, and we won't have wasted any work. For all these reasons, it is important to store dependencies as a sequence (e.g. a list/vector).
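
In sketch form, that sequence is a list of (key, value) pairs in request order, and checking walks it, stopping at the first change (fetch is the same callback idea as before):

-- True if every recorded dependency still evaluates to its old value.
-- Stops at the first difference, so later recorded dependencies -
-- which may no longer be relevant - are never evaluated.
unchanged :: Eq v => (k -> IO v) -> [(k, v)] -> IO Bool
unchanged _ [] = return True
unchanged fetch ((k, old):rest) = do
    new <- fetch k
    if new == old then unchanged fetch rest else return False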

Monadic build systems plus parallelism

Applicative build systems naturally request all their dependencies in parallel, but monadic build systems are naturally one dependency at a time. To regain parallelism, in build systems like Shake the primitive dependency requesting mechanism takes a set of dependencies that are computed in parallel. While requesting dependencies individually or in bulk gives the same result, in bulk gives significantly more parallelism. (In Shake we use lists to track correspondence between requests and results, but it's morally a set.)

As we saw previously, it is still important that for early cut-off you reproduce the dependencies much like they were in the action. That means you request dependencies in the order they were requested, and when they were requested in bulk, they are also checked in bulk. Now we have a sequence of sets to represent dependencies, where the elements of the sets can be checked in parallel, but the sequence must be checked in order.
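
Extending the sketch, the stored dependencies become a list of groups (each group morally a set), where a group is checked in parallel - here using mapConcurrently from the async package - but the groups are checked strictly in order:

import Control.Concurrent.Async (mapConcurrently)

unchangedGroups :: Eq v => (k -> IO v) -> [[(k, v)]] -> IO Bool
unchangedGroups _ [] = return True
unchangedGroups fetch (grp:rest) = do
    oks <- mapConcurrently (\(k, old) -> (== old) <$> fetch k) grp
    if and oks then unchangedGroups fetch rest else return False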

Monadic build systems plus explicit parallelism

What if we add an explicit parallelism operator to a monadic build system, something like parallel :: [Action a] -> Action [a] to run arbitrary actions in parallel (which is what Shake provides)? Now, instead of a sequence of sets, we have a tree of parallelism. As before, it's important when replaying that the dependencies are requested in order, but also that as much is requested in parallel as possible.

What Shake does

Shake is a monadic build system with early cut-off, parallelism and explicit parallelism. When building up dependencies it uses a tree representation. The full data type is:

data Traces
    = None                      -- no dependencies
    | One Trace                 -- a single dependency
    | Sequence Traces Traces    -- one set of traces, then another
    | Parallel [Traces]         -- several traces in parallel

Sequenced dependencies are represented with Sequence and the traces captured by parallelism use Parallel. Importantly, constructing Traces values is nicely O(1) in all cases. (Shake v0.19.1 used a different representation and repeatedly normalised it, which could have awful time complexity - potentially O(2^n) in pathological cases.)

While these traces store complete information, actually evaluating that trace when checking for rebuilds would be complicated. Instead, we flatten that representation to [[Trace]] for writing to the Shake database. The outer list is a sequence, the inner list is morally a set. We have the invariant that no Trace value will occur multiple times, since if you depend on something once, and then again, the second dependency was irrelevant. To flatten Parallel computations we take the first required dependency in each parallel action, merge them together, and then repeat for the subsequent actions. If you run code like:

parallel [
    need ["a"] >> parallel [need ["b"], need ["c"]]
    need ["d"]
]

It will get flattened to appear as though you wrote need ["a","d"] >> need ["b","c"]. When checking, it will delay the evaluation of b and c until after d completes, even though that is unnecessary. But simplifying traces at the cost of marginally less rebuild parallelism for those who use explicit parallelism (which is not many) seems like the right trade-off for Shake.
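
The flattening itself can be sketched in a few lines (my reconstruction from the description above, not Shake's actual code, and ignoring the deduplication invariant): Sequence concatenates, while Parallel transposes its branches, merging the first group from each branch, then the second group, and so on:

import Data.List (transpose)

flatten :: Traces -> [[Trace]]
flatten None = []
flatten (One t) = [[t]]
flatten (Sequence a b) = flatten a ++ flatten b
flatten (Parallel xs) = map concat $ transpose $ map flatten xs

On the example above, this produces exactly the a/d then b/c grouping described.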

Conclusion

Applicative build systems should use sets for their dependencies. Monadic build systems should use sets, but if they support early cut-off, should use sequences of sets.

Monday, November 09, 2020

Turing Incomplete Languages

Summary: Some languages ban recursion to ensure programs "terminate". That's technically true, but usually irrelevant.

In my career there have been three instances where I've worked on a programming language that went through the evolution:

  1. Ban recursion and unbounded loops. Proclaim the language is "Turing incomplete" and that all programs terminate.
  2. Declare that Turing incomplete programs are simpler. Have non-technical people conflate "terminates quickly" with "terminates eventually".
  3. Realise lacking recursion makes things incredibly clunky to express, turning simple problems into brain teasers.
  4. Add recursion.
  5. Realise that everything is better.

Before I left university, this process would have sounded ridiculous. In fact, even after these steps happened twice I was convinced it was the kind of thing that would never happen again. Now I've got three instances, it seems worth writing a blog post so for case number four I have something to refer to.

A language without recursion or unbounded loops

First, let's consider a small, simple, statement-orientated, first-order programming language. How might we write a non-terminating program? There are two easy ways: write a loop, while (true) {}, or write recursion, void f() { f() }. We can ban both of those, leaving only bounded iteration of the form for x in xs { .. } or similar. Now the language is Turing incomplete and all programs terminate.

The lack of recursion makes programs harder to write, but we can always use an explicit stack with unbounded loops.

The lack of unbounded loops isn't a problem provided we have an upper bound on how many steps our program might take. For example, we know QuickSort has worst-case complexity O(n^2), so if we can write for x in range(0, n^2) { .. } then we'll have enough iterations available that the loop bound never cuts the algorithm short.
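
Combining the two tricks, here's a sketch in Haskell - where foldl over [1 .. bound] stands in for the restricted language's bounded for loop - summing a binary tree without recursing over the tree, by keeping an explicit stack of pending subtrees:

data Tree = Leaf Int | Node Tree Tree

-- bound must be at least the number of nodes in the tree; surplus
-- steps are idled away once the stack is empty.
sumTree :: Int -> Tree -> Int
sumTree bound t = snd $ foldl step ([t], 0) [1 .. bound]
    where
        step ([], total) _ = ([], total)
        step (Leaf x : stack, total) _ = (stack, total + x)
        step (Node l r : stack, total) _ = (l : r : stack, total)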

But what if our programming language doesn't even provide a range function? We can synthesise it by realising that in a linear amount of code we can produce exponentially large values. As an example:

double xs = xs ++ xs -- Double the length of a list
repeated x = double (double (double (double (double (double (double (double (double (double [x])))))))))

The function repeated 1 makes 10 calls to double, and creates a list of length 2^10 (1024). A mere 263 more calls to double and we'll have a list long enough to contain each atom in the universe. With some tweaks we can cause doubling to stop at a given bound, and generate numbers in sequence, giving us range to any bound we pick.
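
As a sketch of that tweak (reusing repeated from above, so only valid for bounds up to 1024 - add more calls to double for bigger bounds), we can count along the oversized list and cut off at the bound, with take and scanl standing in for the restricted language's bounded iteration:

range :: Int -> [Int]
range n = take n $ scanl (\i _ -> i + 1) 0 $ repeated ()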

We now have a menu of three techniques that lets us write almost any program we want:

  1. We can encode recursion using an explicit stack.
  2. We can change unbounded loops into loops with a conservative upper bound.
  3. We can generate structures of exponential size with a linear amount of code.

The consequences

Firstly, we still don't have a Turing complete language. The code will terminate. But there is no guarantee on how long it will take to terminate. Programs that take a million years to finish technically terminate, but probably can't be run on an actual computer. For most of the domains I've seen Turing incompleteness raised, a runtime of seconds would be desirable. Turing incompleteness doesn't help at all.

Secondly, after encoding the program in a tortured mess of logic puzzles, the code is much harder to read. While there are three general purpose techniques to encode the logic, there are usually other considerations that cause each instance to be solved differently. I've written tree traversals, sorts and parsers in such restricted languages - the result is always a lot of comments and at least one level of unnecessary indirection.

Finally, code written in such a complex style often performs significantly worse. Consider QuickSort - the standard implementation takes O(n^2) time worst case, but O(n log n) time average case, and O(log n) space (for the stack). If you take the approach of building an O(n^2) list before you start to encode a while loop, you end up with O(n^2) space and time. Moreover, while in normal QuickSort the time complexity is counting the number of cheap comparisons, in an encoded version the time complexity relates to allocations, which can be much more expensive as a constant factor.

The solution

Most Turing incomplete languages with the standard complement of if/for etc gain no benefit from the restriction. One exception is domains where you are proving properties or doing analysis, for example:

  1. Dependently typed languages such as Idris, which typically have much more sophisticated termination checkers than just banning recursion and unbounded loops.
  2. Resource bounded languages such as Hume, which allow better analysis and implementation techniques by restricting how expressive the language is.

Such languages tend to be a rarity in industry. In all the Turing incomplete programming languages I've experienced, recursion was later added, programs were simplified, and programming in the language became easier.

While most languages I've worked on made this evolution in private, one language, DAML from Digital Asset, did so in public. In 2016 they wrote:

DAML was intentionally designed not to be Turing-complete. While Turing-complete languages can model any business domain, what they gain in flexibility they lose in analysability.

Whereas in 2020 their user manual says:

If there is no explicit iterator, you can use recursion. Let’s try to write a function that reverses a list, for example.

Note that while I used to work at Digital Asset, these posts both predate and postdate my time there.

Tuesday, September 22, 2020

Don't use Ghcide anymore (directly)

Summary: I recommend people use the Haskell Language Server IDE.

Just over a year ago, I recommended people looking for a Haskell IDE experience to give Ghcide a try. A few months later the Haskell IDE Engine and Ghcide teams agreed to work together on Haskell Language Server - using Ghcide as the core library, with the plugins/installer experience from the Haskell IDE Engine (by that stage we were already both using the same Haskell setup and LSP libraries). At that time Alan Zimmerman said to me:

"We will have succeeded in joining forces when you (Neil) start recommending people use Haskell Language Server."

I'm delighted to say that time has come. For the last few months I've been both using and recommending Haskell Language Server for all Haskell IDE users. Moreover, for VS Code users, I recommend simply installing the Haskell extension which downloads the right version automatically. The experience of Haskell Language Server is better than either the Haskell IDE Engine or Ghcide individually, and is improving rapidly. The teams have merged seamlessly, and can now be regarded as a single team, producing one IDE experience.

There's still lots of work to be done. And for those people developing the IDE, Ghcide remains an important part of the puzzle - but it's now a developer-orientated piece rather than a user-orientated piece. Users should follow the README at Haskell Language Server and report bugs against Haskell Language Server.

Monday, August 31, 2020

Interviewing while biased

Interviewing usually involves some level of subjectivity. I once struggled to decide about a candidate, and after some period of reflection, the only cause I could see was that I was biased against them. That wasn't a happy realisation, but even so, it's one I think worth sharing.

Over my years, I've interviewed hundreds of candidates for software engineering jobs (I reckon somewhere in the 500-1000 mark). I've interviewed for many companies, for teams I was managing, for teams I worked in, and for other teams at the same company. In most places, I've been free to set the majority of the interview. I have a standard pattern, with a standard technical question, to which I have heard a lot of answers. The quality of the answers falls into one of three categories:

  • About 40% give excellent, quick, effortless answers. These candidates pass the technical portion.
  • About 50% are confused and make nearly no progress even with lots of hints. These candidates fail.
  • About 10% struggle a bit but get to the answer.

Candidates in the final bucket are by far the hardest to make a decision on. Not answering a question effortlessly doesn't mean you aren't a good candidate - it might mean it's not something you are used to, you got interview nerves or a million other factors that go into someone's performance. It makes the process far more subjective.

Many years ago, I interviewed one candidate over the phone. It was their first interview with the company, so I had to decide whether we should take the step of transporting them to the office for an in-person interview, which has some level of cost associated with it. Arranging an in-person interview would also mean holding a job open for them, which would mean pausing further recruitment. The candidate had a fairly strong accent, but a perfect grasp of English. Their performance fell squarely into the final bucket.

For all candidates, I make a decision, and write down a paragraph or so explaining how they performed. My initial decision was not to go any further in interviewing the candidate. But after writing down the paragraph, I found it hard to justify that decision. I'd written other paragraphs that weren't too dissimilar, but had decided to continue onwards. I wondered about changing my decision, but felt rather hesitant - I had a sneaking suspicion that this candidate "just wouldn't work out". Had I spotted something subtle I had forgotten to write down? Had their answers about their motivation given me a subconscious red flag? I didn't know, but for the first time I can remember, I decided to hold my internal interview report overnight rather than sending it straight away.

One day later, I still had a feeling of unease, but still had nothing to pin it on. In the absence of a reason to reject them, I decided the only fair thing to do was to get them onsite for further interviews. Their onsite interviews went fine, I went on to hire them, they worked for me for over a year, and were a model employee. If I saw red flags, they were false flags; more likely, I saw nothing.

However, I still wonder what caused me to decide "no" initially. Unfortunately, the only thing I can hypothesise is that their accent was the cause. I had previously worked alongside someone with a similar accent, who turned out to be thoroughly incompetent. I seem to have projected some aspects of that behaviour onto an entirely unrelated candidate. That's a pretty depressing realisation to make.

To try and reduce the chance of this situation repeating, I now write down the interview description first, and then the decision last. I also remember this story, and how my biases nearly caused me to screw up someone's career.

Monday, July 27, 2020

Which packages does Hoogle search?

Summary: Hoogle searches packages on Stackage.

Haskell (as of 27 July 2020) has 14,490 packages in the Hackage package repository. Hoogle (the Haskell API search engine) searches 2,463 packages. This post explains which packages are searched, why some packages are excluded, and thus, how you can ensure your package is searched.

The first filter is that Hoogle only searches packages on Stackage. Hoogle indexes any package which is either in the latest Stackage nightly or Stackage LTS, but always indexes the latest version that is on Hackage. If you want a Hoogle search that perfectly matches a given Stackage release, I recommend using the Stackage Hoogle search available from any snapshot page. There are two reasons for restricting to only packages on Stackage:

  • I want Hoogle results to be useful. The fact that the package currently builds with a recent GHC used by Stackage is a positive sign that the package is maintained and might actually work for a user who wants to use it. Most of the packages on Hackage probably don't build with the latest GHC, and thus aren't useful search results.
  • Indexing time and memory usage is proportional to the number of packages, and somewhat to the size of those packages. By dropping over 10 thousand packages we can index more frequently and on more constrained virtual machines. With the recent release of Hoogle 5.0.18 the technical limitations on size were removed to enable indexing all of Hackage - but there is still no intention to do so.

There are 2,426 packages in Stackage Nightly, and 2,508 in Stackage LTS, with most overlapping. There are 2,580 distinct packages between these two sources, the Haskell Platform and a few custom packages Hoogle knows about (e.g. GHC).

Of the 2,580 eligible packages, 77 are executables only, so don't have any libraries to search, leaving 2,503 packages.

Of the remaining 2,503 packages, 40 are missing documentation on Hackage, taking us down to 2,463. As for why a package might not have documentation:

  • Some are missing documentation because they are very GHC-internal and, while mentioned, aren't properly present on Hackage, e.g. ghc-heap.
  • Some are Windows only and won't generate documentation on the Linux Hackage servers, e.g. Win32-notify.
  • Some have dependencies not installed on the Hackage servers, e.g. rocksdb-query.
  • Some have documentation that appears to have been generated without generating a corresponding Hoogle data file, e.g. array-memoize.
  • Some are just missing docs entirely on Hackage for no good reason I can see, e.g. bytestring-builder.

The Hoogle database is generated and deployed once per day, automatically. Occasionally a test failure or dependency outage will cause generation to fail, but I get alerted, and usually it doesn't get stale by more than a few days. If you add your package to Stackage and it doesn't show up on Hoogle within a few days, raise an issue.

Wednesday, July 15, 2020

Managing Haskell Extensions

Summary: You can divide extensions into yes, no and maybe, and then use HLint to enforce that.

I've worked in multiple moderately sized multinational teams of Haskell programmers. One debate that almost always comes up is which extensions to enable. It's important to have some consistency, so that everyone is using similar dialects of Haskell and can share/review code easily. The way I've solved this debate in the past is by, as a team, dividing the extensions into three categories:

  • Always-on. For example, ScopedTypeVariables is just how Haskell should work and should be enabled everywhere. We turn these extensions on globally in Cabal with default-extensions, and then write an HLint rule to ban turning the extension on manually. In one quick stroke, a large amount of {-# LANGUAGE #-} boilerplate at the top of each file disappears.
  • Always-off, not enabled and mostly banned. For example, TransformListComp steals the group keyword and never got much traction. These extensions can be similarly banned by HLint, or you can default unmentioned extensions to being disabled (there's a sketch of such a configuration after this list). If you really need to turn on one of these extensions, you need to both turn it on in the file, and add an HLint exception. Such an edit can trigger wider code review, and serves as a good signal that something needs looking at carefully.
  • Sometimes-on, written at the top of the file as {-# LANGUAGE #-} pragmas when needed. These are extensions which show up sometimes, but not that often. A typical file might have zero to three of them. The reasons for making an extension sometimes-on fall into three categories:
    • The extension has a harmful compile-time impact, e.g. CPP or TemplateHaskell. It's better to only turn these extensions on if they are needed, but they are needed fairly often.
    • The extension can break some code, e.g. OverloadedStrings, so enabling it everywhere would cause compile failures. Generally, we work to minimize such cases, aiming to fix all code to compile with most extensions turned on.
    • The extension is used rarely within the code base and is a signal to the reader that something unusual is going on. Depending on the code base that might be things like RankNTypes or GADTs. But for certain code bases, those extensions will be very common, so it very much varies by code base.
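
As a sketch of what the HLint side can look like (the extension names and module are placeholders for whatever your team picks - check the HLint documentation for the exact syntax), a .hlint.yaml along these lines captures the piles:

- extensions:
  - default: false                      # unmentioned extensions are banned outright
  - name: [LambdaCase, TupleSections]   # the sometimes-on pile, enabled per file
  - {name: CPP, within: Some.Platform.Module}   # always-off, bar an explicit exception

The always-on pile doesn't appear at all: those extensions live in default-extensions in Cabal, and with default: false any attempt to re-enable one with a {-# LANGUAGE #-} pragma gets flagged.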

The features that are often most debated are the syntax features - e.g. BlockArguments or LambdaCase. Most projects should either use these extensions commonly (always-on), or never (banned). They provide some syntactic convenience, but if used rarely, tend to mostly confuse things.

Using this approach, every large team I've worked on has had one initial debate to classify the extensions, and every few months someone suggests moving an extension from one pile to another. But it has pretty much entirely silenced the issue in normal discussion, leaving us to focus on actual coding.