Friday, November 02, 2012
Henry David Mitchell
I am delighted to announce that Henry David Mitchell was born 29th October 2012 weighing 4.02kg (8lb 14oz). Mother and baby are both doing well. Here is a picture of him in his first outfit.
Labels:
henry
Sunday, October 21, 2012
Haskell Exchange 2012 talks
Summary: The talks from the Haskell Exchange 2012 are online.
The Haskell Exchange 2012 took place last week. We had a good range of speakers, who all gave excellent talks. I would like to say thanks to all the speakers and to Skills Matter for organising and running the event. The talks are all available online, and I thought I'd just go through them now:
Simon Peyton Jones argued that purity and types are the key to Haskell. The purity thing is without doubt true - people cheat if they are allowed to, but Haskell discourages impurity both through the IO monad (effects are explicit) and community pressure (don't use unsafePerformIO unsafely). I also think that laziness, combined with an aggressively optimising compiler that messes up unsafePerformIO at the slightest opportunity, has also helped (impurity always comes back to bite).
Simon Marlow explained the async package, showing how you can turn complex problems into simple ones with a simple mechanism that looks highly reusable. I have previously used a similar idea in my F# coding (although not nearly as thoroughly thought out), and been happy with the results - I suspect Simon's version will be even more useful. I particularly enjoyed the use of STM - it's rarely a concurrency solution I think of (I'm a threads and locks kind of person), but seeing it used so elegantly makes me want to experiment more.
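To make that concrete, below is a minimal sketch (not from the talk) of the style the async package encourages, using its concurrently function to run two IO actions at the same time and wait for both results - slowTask is a made-up stand-in for real work:

import Control.Concurrent.Async (concurrently)

main :: IO ()
main = do
    -- run both actions concurrently, waiting for both results
    (a, b) <- concurrently (slowTask 3) (slowTask 4)
    print (a + b)
  where
    slowTask :: Int -> IO Int
    slowTask x = return (x * x)   -- imagine a network call or file read here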
Lennart Augustsson talked about embedded languages - where you write a language that looks a lot like Haskell, inside a Haskell program, only to find out later that it wasn't Haskell at all. Lennart is clearly the expert at these languages, having covered C, Basic and now financial combinators. Once you have these inner languages, you can take them in all kinds of directions, including right down to optimised assembly code.
Blake Rain gave an introduction to Yesod. I had read the Yesod documentation a while back, but had trouble figuring out where to start, and what it was that mattered. Blake did both the sales pitch and the beginners guide - I'll certainly be trying Yesod at some point. In particular I really liked the type-safe routing, which would certainly have come in handy in my ASP developer days.
Duncan Coutts gave updates on the Cloud Haskell work, which, in only a year, has gone from a research project to a practical library/tool that can be distributed at scale. The work covers details like how you distribute, why you distribute and how the model was designed (basically, copy Erlang). Another interesting aspect was the real-world development challenges, both the diverse nature of network/cloud computing and how you fund turning an academic idea into a usable tool.
Rob Harrop showed how Ruby, Erlang and Haskell can all communicate using a message passing framework (AMQP). I certainly prefer Haskell, and go fully Haskell where possible, but a heterogeneous environment provides an easier migration path. Rob showed how to start and stop Haskell processes, switch elements from Ruby to Haskell and back again, all while a website was continuously responding to requests. This development style certainly provides an easy route in for Haskell, but also highlighted that Haskell still lacks some of the nicer sugar over AMQP that Ruby provides.
The final session was an open session where people discussed topics relating to Haskell. This session was fun, but I felt it could be better. I think it meandered at some points, and had a core of people who talked, but a large number of people who just watched. I'm not sure how I'd do things better, but it felt like some of the questions after the talks (the Simon Peyton Jones and Duncan Coutts talks in particular) led to more thorough discussions. I know this can be a problem for Haskell Symposium "Future of Haskell" discussions too, so perhaps there is some scope for tweaking the format?
Thursday, October 04, 2012
Haskell Exchange next Wednesday
The Haskell Exchange 2012 is happening all day next Wednesday, October 10th, in London. There is still time to register, and with the discount code HASKELLX-2012-TE1 you save £50, making it £175. This event is very much about learning how people use Haskell, and how you can use Haskell. It isn't academics giving their plans for the future, or perfecting the small details of the language, it is all about what you can do today.
I've been helping Skills Matter organise the program since the start of this year. The final list of speakers is Simon Peyton Jones, Simon Marlow, Lennart Augustsson, Duncan Coutts, Blake Rain and Rob Harrop. You can find details of their talks on the main page. I'm very happy with both the variety and quality of the speakers. It includes people who were there at the beginning of lazy functional languages and also people actively making their living developing things for clients with Haskell.
I've seen four of these speakers talk many times before, and they are always fun and informative. The two speakers who might be less familiar to some in the Haskell community are talking about topics which sound particularly interesting:
- Blake Rain will be talking about Yesod, one of the three Haskell web frameworks. I've read the excellent documentation on Yesod, but I've never seen the overview. I ran an ASP web development company many years ago, and want to see how I could have avoided the problems of untyped development. I want to see how the ideas behind Haskell web frameworks work in practice, with real end-users who care far more about the shade of blue than type-safe URLs.
- Rob Harrop will be talking about integrating Haskell components into an existing system via message passing over the network. I hear more and more people structuring complex systems as separate processes that talk to each other with messaging interfaces. Once you have process boundaries, then using a different language for each piece becomes easy. Translating 1 million lines of code to Haskell isn't usually an option, but prototyping one new process might be - this approach seems like a great gateway for Haskell.
I look forward to seeing some of you there! Register now (discount code HASKELLX-2012-TE1).
Monday, September 03, 2012
Haskell eXchange 2012
Skills Matter are having a 1-day Haskell conference in London on October 10th:
Haskell Exchange 2012
The conference is dedicated to practical Haskell, not academic Haskell. Each session will teach you about how Haskell is used in the real world, and include techniques you can apply to your Haskell programming. Speakers include Simon Peyton Jones, Simon Marlow, Lennart Augustsson, Duncan Coutts, Blake Rain and Rob Harrop. Topics include types, parallelism, high-performance concurrency, LLVM, EDSLs, Yesod and Cloud Haskell.
Registration is already open, and the early registration discount applies until 9th September (6 more days). I've been working with Skills Matter since May developing the program, and am very happy with the end result. I'll be there with 69% probability, and am really looking forward to it.
Wednesday, August 29, 2012
Shake User Feedback
I'll be at ICFP 2012 in a few weeks, talking about Shake. One thing I'd like to include on the slides is some information about how Shake is used by other people. Do you use Shake? If so:
- What is the project? How do you use Shake?
- Any comments on Shake, what you like, what you didn't?
Please either leave a comment on the blog, or email me - I'm mainly looking for snippets to put on slides. As always, if you have any other feedback, I'm always interested (this applies to any of my projects).
Shake Updates
Since I'm writing about Shake, I've made a number of enhancements in the darcs repo, which will be included in the next version. These changes include:
- Faster database serialisation
- Faster modification time checking
- GHC 7.6 compatibility
- Significant improvements to profiling output
- Improved error messages
Anyone is welcome to try the darcs version out, but I may change the database format again before the final release (causing complete rebuilds), so I don't particularly recommend it - these are all refinements, not rewrites.
I look forward to seeing some Shake users in Copenhagen!
Labels:
shake
Tuesday, August 07, 2012
Standard Chartered is hiring
Standard Chartered, the bank where I work, is hiring – there are two vacancies for functional programmers in either London or Singapore. We’re looking for programmers to use Haskell full time (or more specifically, the Mu Haskell dialect), with the results used by many people, often the very next day. We write compilers, libraries, applications, web servers, DSLs and much more besides. All of the work is for use in house, and is usually geared towards finance, but no finance background is required. Standard Chartered has been using Haskell for over four years, and has hired lots of well-known Haskell programmers, including Lennart Augustsson, Ravi Nanavati, Malcolm Wallace, Roman Leshchinskiy and Don Stewart.
We’re looking for proper hackers – people who can quickly write top-quality code in response to what our users need. We have a large existing Haskell code base, which lots of people are continually developing, so experience with larger Haskell projects would be a plus. To apply, send a CV to neville.dwyer AT sc DOT com, and make sure the CV includes links to anything you’ve written on programming (Twitter, StackOverflow, blogs, academic papers) and links to any open-source software you may have written (github, hackage). If you have any more general questions feel free to email me.
Labels:
jobs
Sunday, July 29, 2012
Adding the Lens derivation to Derive
Derive is a tool for generating code from Haskell data types - you specify the data type, and it automatically derives code, often instance declarations. I was asked how to add derivations for the data-lens library, and this post is the answer. I have also released derive-2.5.10 which supports Lens derivations. The Derive tool should contain derivations for as many patterns as possible, and I welcome new contributions.
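To give a flavour of what users see, below is a minimal sketch of invoking a derivation, assuming Derive's Template Haskell interface (Data.DeriveTH) and the data-lens library - the Computer type matches the example used by the test suite later in this post:

{-# LANGUAGE TemplateHaskell #-}
import Data.DeriveTH          -- Derive's Template Haskell interface
import Data.Lens.Common       -- Lens type from the data-lens package

data Computer = Computer {speed :: Int, memory :: Int, weight :: Int}

-- Generates lensSpeed, lensMemory and lensWeight at compile time.
$(derive makeLens ''Computer)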
Step 1: Checkout and run Derive
The first step is to fetch the code with darcs:
$ darcs get http://community.haskell.org/~ndm/darcs/derive
Then follow the instructions in the README.txt file to regenerate the derivations locally and run the test suite:
$ cd derive
$ ghci
> :main --generate
> :reload
> :main --test
Like all my projects, Derive contains a .ghci file which sets up include paths and other useful utilities for working with each package. We first run :main --generate which autogenerates all the boilerplate in the Derive tool. We then run :reload to reload the freshly generated code, and finally :main --test which runs the test suite. In order to run the test suite it is likely that you will need to install some packages, see derive.html for a list of the likely packages.
Step 2: Add the Lens derivation
The first step when adding a derivation is to find a similar derivation (there are 35 existing derivations, so there is probably something reasonably close) and copy it. For the Lens derivation, it turns out that Ref is quite similar, so we copy Data/Derive/Ref.hs to Lens.hs and start modifying. There are several sections we need to tweak.
Firstly, we update the Haddock comments at the top of the file. These comments are purely for the users of our library, and should explain how to use the derivation.
Secondly, we modify the test section. This section is in comments just below the module header. We add import "data-lens" Data.Lens.Common then, working from the data types listed in Data/Derive/Internal/Test.hs, we modify the test to match what we expect as the output. This test section is tested with :main --test and contributes to the documentation (mainly the import line).
Thirdly, we modify the top-level names in this module, so the module name is Data.Derive.Lens and the main function is makeLens.
Step 3: Test and fix
When developing new derivations I follow test driven development. The tests are written, but the code has not been changed from the Ref derivation, so we expect the tests to fail. We follow the three standard steps in GHCi and :main --test complains with:
*** Exception: Results don't match!
Expected:
lensSpeed :: Lens Computer Int
lensSpeed = lens speed (\ x v -> v{speed = x})
Got:
refSpeed :: Ref Computer
refSpeed = Ref{select = speed, update = \ f v -> v{speed = f (speed v)}}
As expected, the test fails, showing us that the copied Ref code does not do what we want the Lens code to do. Modifying the code, it takes a few minutes to arrive at:
lensSpeed :: Lens Computer
lensSpeed = lens speed (\ x v -> v{speed = x})
The result is not quite right - the Int argument is missing from Lens, but so far we have been renaming and juggling existing code. Now we need to find the type of the field at hand, given the name of the field and the DataDecl of the type (DataDecl is from the haskell-src-exts library). Hunting around some of the Derive utility modules, particularly Language.Haskell, we can come up with:
typ = tyApps (tyCon "Lens") [dataDeclType d, fromBangType t]
Just t = lookup field $ concatMap ctorDeclFields $ dataDeclCtors d
We rerun :main --test and the test passes. This command checks that the examples match, then checks that the result of the derivation type-checks for a larger range of examples. We have now added Lens derivations to Derive.
(If you are lucky, and your derivation can be added by example, then you might not have to write any code at all - simply modifying the test case automatically generates the right code. See the Eq type class for such an example.)
Step 4: Final test
While we have satisfied the test suite, it is always reassuring to run some tests by hand. Using the Example.hs file in the root of the repo we can try:
> :main Example.hs --derive=Lens
This command prints out the expected result:
lensMemory :: Lens Computer Int
lensMemory = lens memory (\ x v -> v{memory = x})

lensSpeed :: Lens Computer Int
lensSpeed = lens speed (\ x v -> v{speed = x})

lensWeight :: Lens Computer Int
lensWeight = lens weight (\ x v -> v{weight = x})
Step 5: Send upstream
Before sending a patch, update CHANGES.txt with a one-line summary of what you have done, then you should see:
$ darcs add Data\Derive\Lens.hs
$ darcs w -s
M ./CHANGES.txt +1
M ./Data/Derive/All.hs -1 +2
A ./Data/Derive/Lens.hs
M ./derive.cabal +1
M ./derive.htm +1
Create a patch (using the standard darcs workflow) and send it to me. The more derivations the merrier.
Labels:
derive
Sunday, July 08, 2012
Shake ICFP paper
My ICFP 2012 Shake paper is now online: Shake Before Building - Replacing Make with Haskell. From the abstract:
Most complex software projects are compiled using a build tool (e.g. make), which runs commands in an order satisfying user-defined dependencies. Unfortunately, most build tools require all dependencies to be specified before the build starts. This restriction makes many dependency patterns difficult to express, especially those involving files generated at build time. We show how to eliminate this restriction, allowing additional dependencies to be specified while building. We have implemented our ideas in the Haskell library Shake, and have used Shake to write a complex build system which compiles millions of lines of code.
There are two primary sources of documentation for Shake, the ICFP paper (as above) and the package documentation. The ICFP paper covers the theory, including how Shake relates to other tools (specifically make) and general remarks about how Shake is designed/implemented and how you can build things on top of it. The package documentation gives concrete examples of using the package and an exhaustive list of all functions available.
Labels:
shake
Sunday, June 17, 2012
Shake Storage Layer
Summary: Shake maintains metadata as the build progresses. This metadata must remain up to date and consistent even if the process is killed. This post describes my new strategy for doing that.
Shake is an advanced build system, and in common with nearly all advanced build systems, it maintains extra metadata about rules - when the rule was last run, what the dependencies were, how long it took etc. If the metadata associated with a rule is not available, the rule must be rerun, which is often expensive. Any build system is likely to be interrupted on a regular basis - both due to failing rules (compile errors) and the user aborting a build. As a result, it is important that the metadata is robustly stored to disk as soon as it is produced.
In this post, I outline the old solution to maintaining metadata, along with the new solution available in shake-0.3, which I just released. The new solution has a number of benefits:
- Reduces time loading/saving metadata by up to 75%. In practice this is unlikely to make a significant difference unless no rules need running.
- Exceptions at any point will not cause file handles to be left open.
- Previously there were very small windows where if the process died suddenly all metadata would be corrupted. These have been eliminated.
- I removed all knowledge of the build system from the storage layer, making it properly decoupled.
Most of these improvements have been driven by people using Shake in new ways. When used as a replacement for Make, with one invocation per run, many of these issues are theoretical. Now people are running Shake in background threads and forcibly killing and restarting it on a regular basis, these issues can be observed in practice. However, the improvements will benefit everyone.
The Old Solution
The old solution has remained basically the same since the very first version of Shake, over three years ago. Shake maintains two files - the database contains the metadata, while the journal contains a list of metadata updates that can be appended to. The sequence of steps is:
- Load the database
- If the journal exists then:
- Replay the journal into the database
- Save the database
- Delete the journal
- Run the build, storing any updates to the journal
- Save the database
- Delete the journal
This solution works well, but has a couple of flaws. Whenever we save the database, if it gets corrupted half-way through, we lose the entire database, causing the build to start from scratch. Another problem is that if we are building nothing, we read in all the metadata, then write it all out again with only one single modification (incrementing the build time step). Since serialisation takes 3x longer than deserialisation (in benchmarks on the Shake metadata) about 75% of the time associated with the metadata is wasted. Even when we have made many updates, the data is already stored in the journal, so rewriting the database is not strictly necessary.
The New Solution
The new solution keeps a single database, containing a list of key/value pairs, which can be appended to. At certain points a backup file is made, simply a copy of an existing database. The sequence of steps is:
- If the backup file exists, delete the database and use the backup file
- Read all records from the database
- Put the records into a Map
- If the Map is significantly smaller than the number of records then
- Rename the database to the backup
- Resave the database
- Delete the backup
- Run the build, storing any updates to the database
In this method we never save the data after a successful run, but just close the file handles. The database accumulates key/value pairs, but only the last value associated with any key in the database is useful - earlier values are ignored. At some point the database will contain a significant number of keys that are no longer useful, and at that point we rewrite the database, taking care to make a backup before starting.
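To make the scheme concrete, here is a minimal sketch of the recover/load/compact logic, assuming a toy textual record format - the real code (Development.Shake.Storage, mentioned below) uses binary serialisation and is far more careful about versioning, exceptions and file handles:

import qualified Data.Map as Map
import System.Directory (doesFileExist, removeFile, renameFile)
import Control.Monad (when)

type Key = String
type Value = String

-- Recover from an interrupted rewrite, load the records, and compact the
-- database if too many records are stale. Returns the live key/value pairs.
loadDatabase :: FilePath -> FilePath -> IO (Map.Map Key Value)
loadDatabase db backup = do
    hasBackup <- doesFileExist backup
    when hasBackup $ do
        -- a leftover backup means a rewrite was interrupted, so the
        -- database may be partial - restore from the backup instead
        hasDb <- doesFileExist db
        when hasDb $ removeFile db
        renameFile backup db
    hasDb <- doesFileExist db
    records <- if hasDb then fmap (map read . lines) (readFile db) else return []
    length records `seq` return ()   -- ensure the file has been fully read
    let live = Map.fromList records  -- later records override earlier ones
    when (length records > 2 * Map.size live) $ do
        renameFile db backup         -- keep a backup while rewriting
        writeFile db $ unlines $ map show $ Map.toList live
        removeFile backup            -- rewrite succeeded, drop the backup
    return live

-- During the build, each update is appended to the database immediately.
saveRecord :: FilePath -> (Key, Value) -> IO ()
saveRecord db record = appendFile db (show record ++ "\n")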
This post outlines the general steps, omitting details such as version stamps and consistency checks, which are highly important for a robust build system. These details are taken care of in the full implementation, available in the source as Development.Shake.Storage, taking about 100 lines.
Labels:
shake
Monday, June 04, 2012
The Flavours of MVar
Update: These functions have been cleaned up, improved with respect to error conditions and async exceptions, and put in the extra package.
The MVar is a flexible and powerful locking primitive, used extensively in Haskell. An MVar is like a box which is empty (has zero elements inside) or full (has one element inside). You block when trying to take from an empty MVar or put to a full MVar. On top of MVars, lots of interesting concurrent programs can be written. However, with such a flexible mechanism, there is scope for confusion. Every MVar can block on either a take or a put, but for any individual MVar it is likely you expect it to block on only one of those operations. In my programs I usually restrict my MVars to one of three flavours, each of which is described below.
Lock
The Lock guarantees single-threaded access, typically to some system resource.
type Lock = MVar ()

newLock :: IO Lock
newLock = newMVar ()

withLock :: Lock -> IO a -> IO a
withLock x = withMVar x . const
And as an example:
lock <- newLock
let output = withLock lock . putStrLn
forkIO $ do ...; output "hello"
forkIO $ do ...; output "world"
Here we are creating a lock to ensure that when writing output our messages do not get interleaved. This use of MVar never blocks on a put. It is permissible, but rare, that a withLock contains a withLock inside it - but if so, watch out for deadlocks.
Var
The Var operates on a mutable variable in a thread-safe way.
type Var a = MVar a

newVar :: a -> IO (Var a)
newVar = newMVar

modifyVar :: Var a -> (a -> IO (a, b)) -> IO b
modifyVar = modifyMVar

modifyVar_ :: Var a -> (a -> IO a) -> IO ()
modifyVar_ = modifyMVar_

readVar :: Var a -> IO a
readVar = readMVar
And as an example:
hits <- newVar 0
forkIO $ do ...; modifyVar_ hits (+1); ...
i <- readVar hits
print ("HITS",i)
Here we have a variable which we modify atomically, so modifications are not interleaved. This use of MVar never blocks on a put. No modifyVar operation should ever block, and they should always complete in a reasonable timeframe. A Var should not be used to protect some external resource, only the variable contained within. Information from a readVar should not be subsequently inserted back into the Var.
Barrier
A barrier starts with no value, is written to once, and read one or more times.
type Barrier a = MVar a

newBarrier :: IO (Barrier a)
newBarrier = newEmptyMVar

signalBarrier :: Barrier a -> a -> IO ()
signalBarrier = putMVar

waitBarrier :: Barrier a -> IO a
waitBarrier = readMVar
And as an example:
bar <- newBarrier
forkIO $ do ...; val <- ...; signalBarrier bar val
print =<< waitBarrier bar
Here we create a barrier which will contain some computed value. A thread is forked to fill the barrier, while the main thread waits for it to complete. A barrier has similarities to a future or promise from other languages, has been known as an IVar in other Haskell work, and in some ways is like a manually managed thunk. It is an error to signal a barrier more than once and a deadlock to never signal it. Since the barrier goes from empty to full, it never blocks on a put, unless you incorrectly call signal more than once.
Combining MVar Flavours - Once
The previous three MVar wrappers are the flavours of MVar which I use regularly. These can be combined into higher-level abstractions specific to certain situations. I now give two examples, intended to show how to combine these primitives.
The once function takes an action, and returns a new action. If the action is never called the argument action will never be executed, but if it is called more than once, it will only be executed once. We can write this function as:
once :: IO a -> IO (IO a)
once act = do
    var :: Var (Maybe (Barrier a)) <- newVar Nothing
    return $ join $ modifyVar var $ \v -> case v of
        Nothing -> do b <- newBarrier; return (Just b, do x <- act; signalBarrier b x; return x)
        Just b -> return (Just b, waitBarrier b)
Here we create a variable to store the result, whose state is either Nothing (we have not yet started computing) or Just a barrier (we have started computing, use this barrier to get the result out). I have found 'join $ modifyVar' is a common idiom, used to defer a blocking action (often waitBarrier) until after a modifyVar has completed, ensuring we preserve our invariant of not blocking inside a modifyVar. When running the resulting action, if the variable is a Nothing we create a new barrier, store it, and then start an action (after leaving the modifyVar) to compute the result, signal the barrier and return. If we already have a barrier, we just wait for this barrier.
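As a quick usage sketch (the putStrLn is just a stand-in for expensive work), once gives back an action that runs the underlying computation at most one time, however many times it is demanded:

main :: IO ()
main = do
    getConfig <- once $ do putStrLn "loading config..."; return (42 :: Int)
    a <- getConfig   -- runs the action, printing "loading config..."
    b <- getConfig   -- returns the cached result without running it again
    print (a + b)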
[Note that you can implement once in terms of MVar directly, using only one MVar, but that violates the simple rules of the restricted MVars - rightly so, you have to use the MVar empty state to mean both atomic access to shared state, and to mean computation in progress.]
Combining MVar Flavours - Queue
As another practical example of using these restricted MVars, let us consider a special kind of queue. Messages arrive individually, but are collected in bulk. When someone tries to retrieve messages, if there are any messages waiting they are sent immediately. If there are no messages, the read blocks until either a message arrives or until a new reader arrives, in which case the old reader is sent away with nothing. This can be implemented as:
type Queue a = Var (Either [a] (Barrier [a]))

arrive :: Queue a -> a -> IO ()
arrive q x = modifyVar_ q $ \q -> case q of
    Left xs -> return $ Left $ xs ++ [x]
    Right b -> do signalBarrier b [x]; return $ Left []

collect :: Queue a -> IO [a]
collect q = join $ modifyVar q $ \q -> case q of
    Left xs@(_:_) -> return (Left [], return xs)
    _ -> do
        case q of Right b -> signalBarrier b []; _ -> return ()
        b <- newBarrier
        return (Right b, waitBarrier b)
The type of Queue tells us most of what we need to know about the invariants - Queue has a mutable state, which is either Left (zero or more messages waiting) or a Right (someone waiting to collect messages). If we had used MVar instead of both Var and Barrier, the invariant and design would be far less clear. With these invariants clearly stated, the code just follows directly.
Creating New Flavours
I find the three MVar wrappers (Lock, Var, Barrier) much easier to understand since the rules are simpler, making maintenance easier. I have also found that most projects benefit from higher-level abstractions in some places. As an example, I defined Queue in one recent project, and Shake defines a Resource type, on top of which the resources feature is implemented. Concurrency is hard, but robust abstractions split the complexity, and thus simplify the programs.
Sunday, June 03, 2012
Hoogle Update
Summary: I just updated the Hoogle website. It looks nicer on the iPhone and the source is on GitHub.
The Website
The Hoogle website is (as always) at http://haskell.org/hoogle. I've just uploaded a fresh Hackage index (currently a manual operation, but one I'm intending to automate imminently). I've also made a number of improvements if you are using Hoogle over the iPhone - to simulate the iPhone experience click here.
The Source
The source code has moved to Github: https://github.com/ndmitchell/hoogle. While darcs is a much nicer version control system than Git, GitHub offers a lot of nice features, so I'm using Hoogle as an experiment. I've been promised that projects on GitHub get lots of contributions, so now I wait!
I'm leaving the bug tracker in Google code for the moment, and am considering where the Hoogle manual should live, but a GitHub wiki site is currently looking likely.
Monday, March 26, 2012
Finding Two unordered-containers Bugs
Summary: Over the past few weeks I found two bugs in unordered-containers. I strongly recommend everyone upgrades to unordered-containers-0.2.1.0 or later, which fixes these bugs.
Both Shake and Uniplate use the unordered-containers library, which provides fast Map and Set types. Both my libraries had bugs reported against them which turned out to be introduced by unordered-containers-0.2.0.0. As soon as these bugs were identified Johan released a new version (0.2.1.0) fixing the bugs, and both my packages now work as expected.
Bug 1: Shake breaks because delete fails
Just over two weeks ago Evan Laforge reported he'd started seeing "thread blocked indefinitely in an MVar operation" if there was a compile error. Shake goes to a lot of effort to clean up nicely, so something was going wrong. Since Evan's project compiles on Linux, and I only have access to Windows machines, I tried replicating it on one of my projects. I perturbed the project in several ways and did manage to replicate the error once, but the same perturbation next time made it succeed - something was clearly non-deterministic (which is the norm in a highly concurrent program such as Shake).
I'd been wanting to write a random build system generator and tester for a while, to thoroughly test Shake, and used this bug as motivation. I wrote the tester, and usually within 5 random runs (about 60 seconds of churning) the error was reproduced on my desktop. However, the error wouldn't reproduce on my laptop even after an hour (because I was on an older version of unordered-containers, I now know), and as I was off travelling I had to put the bug away to look into on my return.
On my return I investigated further. Figuring out the command line to run the random test took about an hour (note to self - write these things down). Once I had that, it still reproduced reliably in about 60 seconds. I cut out the random parts of the test that I didn't think were contributing to the bug (tests for building successfully, minimal rebuilding etc) and cut the reproduction time down to about 5 seconds. I then started augmenting the code with print statements. The cleanup code for exceptions is almost exclusively within the thread pool implementation (see Development.Shake.Pool if you are reading the source). In particular, the code that cleans up after an exception is:
t <- myThreadId
mapM_ killThread $ Set.toList $ Set.delete t $ threads s
signalBarrier done $ Just e
This code finds all threads in the thread pool, with the exception of this thread (the Set.delete t), and kills them. After that has finished, it signals the thread that first called the thread pool that everything has finished with an exception.
Adding trace statements at every step showed that the exception started being cleaned up, a handful of threads were killed, but not all the threads were killed and the barrier was never signaled. An initial guess was that my thread was being killed, and thus that Set.delete had failed to remove t. Since I had debugged a separate unordered-containers bug a week before (bug 2 started after and finished before this one), I went to the unordered-containers commit list and immediately found a bug fixed against delete. I upgraded, and ran my random tester for 3 hours without failure.
Bug 2: Uniplate breaks because filter fails
The Uniplate bug was reported just over a week ago, while I was travelling, by Aleksey Khudyakov. This bug report included a relatively small test case demonstrating a clear bug, along with the vital piece of information that the test case worked with unordered-containers-0.1, but not 0.2. With that information the question became whether Uniplate was using some non-guaranteed behaviour (such as assuming the result from Set.toList is ordered), or whether unordered-containers had a bug. The supplied test case was very sensitive - it involved three modules, and simply changing any module name caused the bug to disappear. The code was also exercising a very sensitive and complex part of Uniplate. After 12 hours of cooperative debugging between myself, Aleksey and Johan, I came up with the reduced test case:
let broken = Set.filter (`elem` good) $ Set.fromList $ good ++ bad
working = Set.fromList $ Set.toList broken
print $ Set.member (head good) broken
print $ Set.member (head good) working
This printed False/True, when it should print True/True. Of course, the choice of the good and bad lists is critical, and was based on the Hashable instance for TypeRep, which uses the fully qualified type name, and thus changes if you rename the modules.
Once I had provided Johan with the reduced test case, it was fixed the same day.
Sunday, March 04, 2012
NSIS: Windows installers through an EDSL
Summary: I've just released the NSIS library on Hackage. It's useful for building Windows installers, but is also a call-by-name embedded two-stage programming language.
I've just released a new library on Hackage, named NSIS. It's a library for building Windows installers (see an example at the top of the documentation), based on NSIS (the Nullsoft Scriptable Install System). The basic idea of NSIS is that you write a program which defines your installer - it can search for files, create registry keys, create groups of files etc. But, at its heart, it is a full programming language, merely optimised to the job of writing installers. The original NSIS system has an excellent backend, producing small but featureful installers which run well on a variety of Windows platforms. Unfortunately, the front-end programming language is probably best described as assembly code with gentle influences from scripting languages. My NSIS library defines an embedded Haskell language, in the style promoted by Lennart Augustsson, which can produce scripts to be fed into the original NSIS compiler.
Why might you want to use the original NSIS backend?
I've tried several installer generators over the past few years, and even written my own from scratch. Of all these, I prefer NSIS mainly for two reasons:
Why might you want to use my frontend?
There are many advantages to my frontend. If your script is simple it is likely to look relatively similar in either system. If your script is complex it could end up 100 lines shorter and far clearer in my system. However, there are three primary advantages.
MUI2 by default
If you are writing a simple NSIS installer, the biggest difference is that my system will use the MUI2 settings by default. As NSIS has evolved there are now three separate ways to define your user interface, using the Classic builtin commands, using MUI1 or using MUI2. Of these, MUI2 is by far the nicest and should always be used, but the Classic interface is built in by default, and MUI2 is defined on top using macros. To specify that you want to insert a particular page into the installer you can write either of:
However, with my front end, you simply write:
Flow control
While NSIS installer scripts are very powerful, they aren't easy for script authors. Taking the example from earlier, let's warn before installing into the Windows directory or the System directory:
Shockingly, in 1995 someone wrote a scripting language with only goto, and hasn't added any flow control since then. In my frontend, you can write:
All control flow can be accomplished with structured programming.
Variables
The original NSIS system has global mutable variables, 16 register variables and a stack - it directly mirrors assembly programming. In my system, variables are properly scoped and named. The difference for complicated functions is significant. As an example, let us define substring replacement in the original NSIS script:
Alternatively, with my frontend, you can write:
The difference is immense - the first is hard to follow without plenty of paper. The second is a fairly obvious imperative algorithm for string replacement (and is included in the package for reuse). Note that even though we're using a functional host language, our embedded language is very much imperative.
What technologies went into the frontend?
I really enjoyed writing this installer frontend. I got to dust off many techniques from compiler design that I haven't used for a while. Below is a flavour of some of the techniques I used:
If any of these techniques are particularly interesting, please comment below, and I'll expand on that area in a future post.
I've just released a new library on Hackage, named NSIS. It's a library for building Windows installers (see an example at the top of the documentation), based on NSIS (the Nullsoft Scriptable Install System). The basic idea of NSIS is that you write a program which defines your installer - it can search for files, create registry keys, create groups of files etc. But, at its heart, it is a full programming language, merely optimised for the job of writing installers. The original NSIS system has an excellent backend, producing small but featureful installers which run well on a variety of Windows platforms. Unfortunately, the front-end programming language is probably best described as assembly code with gentle influences from scripting languages. My NSIS library defines an embedded Haskell language, of the style promoted by Lennart Augustsson, which can produce scripts to be fed into the original NSIS compiler.
Why might you want to use the original NSIS backend?
I've tried several installer generators over the past few years, and even written my own from scratch. Of all these, I prefer NSIS mainly for two reasons:
- It's a full programming language. The installer is capable of expressing any program, which means that as you need to do custom things in the installer, you can. For example, you can prohibit installing into the Windows directory. Equally, you can calculate the 1000th prime number. You are never artificially limited by the installer.
- The installers are slick and robust. As the NSIS documentation says, "Being a user's first experience with your product, a stable and reliable installer is an important component of successful software". NSIS consistently delivers a smooth end-user experience, provided you select the MUI2 (Modern User Interface 2) settings.
Why might you want to use my frontend?
There are many advantages to my frontend. If your script is simple, it is likely to look much the same in either system; if your script is complex, it could end up 100 lines shorter and far clearer in mine. The three primary advantages are described below.
MUI2 by default
If you are writing a simple NSIS installer, the biggest difference is that my system will use the MUI2 settings by default. As NSIS has evolved there are now three separate ways to define your user interface, using the Classic builtin commands, using MUI1 or using MUI2. Of these, MUI2 is by far the nicest and should always be used, but the Classic interface is built in by default, and MUI2 is defined on top using macros. To specify that you want to insert a particular page into the installer you can write either of:
page Directory # classic NSIS
!insertmacro MUI_PAGE_DIRECTORY # MUI2
However, with my frontend, you simply write:
page Directory
Flow control
While NSIS installer scripts are very powerful, they aren't easy for script authors. Taking the example from earlier, let's warn before installing into the Windows directory or the System directory:
StrCmp $WINDIR $INSTDIR bad 0
StrCmp $SYSDIR $INSTDIR bad 0
Goto skip
bad:
MessageBox MB_OK|MB_ICONEXCLAMATION "Warning: installing into the Windows directory"
skip:
Shockingly, in 1995 someone wrote a scripting language with only goto, and hasn't added any flow control since then. In my frontend, you can write:
iff_ ("$INSTDIR" %== "$WINDIR" %|| "$INSTDIR" %== "$SYSDIR") $
    alert "Warning: installing into the Windows directory"
All control flow can be accomplished with structured programming.
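Of course, something has to turn that structured code back into labels and jumps, since jumps are all the NSIS backend understands. Here is a toy sketch of that lowering step, written against a made-up AST; it is only an illustration of the idea, not the library's real internals:

import Control.Monad.State

-- A toy statement language, just enough to show the lowering
data Stmt
    = Exec String               -- an ordinary NSIS command, already rendered
    | Iff String [Stmt] [Stmt]  -- rendered condition, then-branch, else-branch

type Gen = State Int

freshLabel :: Gen String
freshLabel = do n <- get; put (n + 1); return ("lbl" ++ show n)

-- Structured if/else becomes a conditional jump, two labels and a Goto
lower :: Stmt -> Gen [String]
lower (Exec s) = return [s]
lower (Iff cond yes no) = do
    lblElse <- freshLabel
    lblEnd  <- freshLabel
    yes' <- concat `fmap` mapM lower yes
    no'  <- concat `fmap` mapM lower no
    return $ ["StrCmp " ++ cond ++ " 0 " ++ lblElse]
          ++ yes' ++ ["Goto " ++ lblEnd, lblElse ++ ":"]
          ++ no'  ++ [lblEnd ++ ":"]

compile :: [Stmt] -> [String]
compile ss = concat $ evalState (mapM lower ss) 0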
Variables
The original NSIS system has global mutable variables, 16 register variables and a stack - it directly mirrors assembly programming. In my system, variables are properly scoped and named. The difference for complicated functions is significant. As an example, let us define substring replacement in the original NSIS script:
Push "Hello World"
Push "Hello"
Push "Goodbye"
Call StrRep
Pop $R0
MessageBox MB_OK $R0
; Taken from http://nsis.sourceforge.net/Replace_Sub_String_(macro)
Function StrRep
;Written by dirtydingus 2003-02-20 04:30:09
Exch $R4 ; $R4 = Replacement String
Exch
Exch $R3 ; $R3 = String to replace (needle)
Exch 2
Exch $R1 ; $R1 = String to do replacement in (haystack)
Push $R2 ; Replaced haystack
Push $R5 ; Len (needle)
Push $R6 ; len (haystack)
Push $R7 ; Scratch reg
StrCpy $R2 ""
StrLen $R5 $R3
StrLen $R6 $R1
loop:
StrCpy $R7 $R1 $R5
StrCmp $R7 $R3 found
StrCpy $R7 $R1 1 ; - optimization can be removed if U know len needle=1
StrCpy $R2 "$R2$R7"
StrCpy $R1 $R1 $R6 1
StrCmp $R1 "" done loop
found:
StrCpy $R2 "$R2$R4"
StrCpy $R1 $R1 $R6 $R5
StrCmp $R1 "" done loop
done:
StrCpy $R3 $R2
Pop $R7
Pop $R6
Pop $R5
Pop $R2
Pop $R1
Pop $R4
Exch $R3
FunctionEnd
Alternatively, with my frontend, you can write:
alert $ strReplace "Hello" "Goodbye" "Hello World"

strReplace :: Exp String -> Exp String -> Exp String -> Exp String
strReplace from to str = do
    from <- constant_ from
    to <- constant_ to
    rest <- mutable_ str
    res <- mutable_ ""
    while (rest %/= "") $ do
        iff (from `strIsPrefixOf` rest)
            (do
                res @= res & to
                rest @= strDrop (strLength from) rest)
            (do
                res @= res & strTake 1 rest
                rest @= strDrop 1 rest)
    res
The difference is immense - the first is hard to follow without plenty of paper. The second is a fairly obvious imperative algorithm for string replacement (and is included in the package for reuse). Note that even though we're using a functional host language, our embedded language is very much imperative.
What technologies went into the frontend?
I really enjoyed writing this installer frontend. I got to dust off many techniques from compiler design that I haven't used for a while. Below is a flavour of some of the techniques I used:
- Phantom types - all expressions have a phantom type, indicating what type the expression returns. In truth, all NSIS types are strings, but the phantom types let us make conversions explicit and catch user errors (see the small sketch after this list).
- Call-by-name - the programming language I've implemented is call-by-name, which seems to be a particularly convenient choice for embedded languages.
- Optimisation - since the target is a programming language, I wrote eight optimisation passes, which I iterate over. The result is that the string replacement function above becomes 21 lines of compiled code, while the original hand-coded version is 32 lines. There's no real need for the optimisation pass (the code is rarely a bottleneck or a significant size cost), but writing optimisers is fun.
- Generics - the front end and optimisation passes are heavily based on generic programming, in particular using Uniplate.
- Two-stage programming - the first program runs and generates a second program that is then rerun. It is possible to do computation at either level, and most errors are only caught at one or other of those levels, not both.
- Goto programming - while I provide concepts such as iff and while, the target language is exclusively goto based, so the structured constructs are all translated down to labels and jumps.
- Overloaded literals - I use overloaded strings and numbers extensively. Haskell doesn't permit overloaded booleans, but if it did I'd use those too.
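As a flavour of the phantom-type point above, here is a sketch using made-up definitions; the real Exp in the package is monadic and call-by-name rather than a plain newtype, and the actual code generation is omitted:

-- Every value is a string at the NSIS level, but the phantom parameter records
-- what the expression is supposed to denote, so mixing them up is a type error.
newtype Exp t = Exp String

strLength :: Exp String -> Exp Int
strLength (Exp x) = Exp x   -- code generation omitted; only the phantom type changes

intToStr :: Exp Int -> Exp String
intToStr (Exp x) = Exp x    -- conversions must be written explicitly

-- e.g. strLength (strLength x) is now rejected by the type checker, even though
-- at the NSIS level everything would have been a string.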
If any of these techniques are particularly interesting, please comment below, and I'll expand on that area in a future post.
Labels:
nsis
Tuesday, February 28, 2012
Four Interesting Build Tools
Summary: This post describes four interesting build systems, namely Redo, Ninja, Tup and Fabricate.
While developing Shake I investigated many existing build tools. Build tools can be divided into two categories -- those which target single-language projects with fixed rules (e.g. ocamlbuild, ghc --make, Visual Studio projects), and those which allow user-specified rules (e.g. make and Shake). Focusing on the second category, the de facto standard is make, but there are many make competitors (notably Ant, CMake, Jam, Scons and Waf). Most of these tools read a list of rules, generate a dependency graph, then execute commands while traversing that graph.
Since the number of build tools is vast, I will restrict my discussion to four build tools which take different approaches (Redo, Ninja, Tup and Fabricate). Interestingly, one thing all four systems have in common is that they require a database of build data, in addition to the rules and the file system. Unlike Shake, all these build systems are limited to files.
Redo
The Redo build system has a similar dependency theory to Shake. Rules are run starting at the target. A rule may call redo-ifchange (similar to need) to ensure that this rule is repeated if any of the file arguments change. A rule can build either a specific named file, or a set of files ending with a particular extension.
While Redo has similarities to Shake, the practical implementation is significantly different. Instead of a single rule store, Redo stores each rule in a separate file, and the script language is simply shell script. The advantage of separate files is that Redo is able to depend on the actual rule used to build a result, meaning that build system changes are properly tracked. However, separating build rules makes it harder to reason about the build system, and eliminates many potential uses of abstraction. Redo does not work on Windows and has no support for multiple outputs.
Ninja
The Ninja build system is designed as a two-stage build system -- users specify their build rules in a high-level manner, which is then translated to a set of Ninja build rules. As a result, the Ninja build system is not designed to be general purpose and configuration choices are expected to be resolved by the first level. The Ninja target language supports three dependency features beyond make. Firstly, a rule can depend on the list of files contained in another file, allowing additional dependencies at build time. Secondly, the command line for each rule is tracked, resulting in a rebuild if the rule itself changes. Thirdly, a rule can generate multiple outputs, which are properly tracked.
Tup
The Tup build system is designed as an incremental build system. Tup has a similar dependency structure to make, but a significantly different implementation. Instead of scanning all dependencies, it expects the operating system to supply a list of changed files, avoiding the overhead of checking which files have changed. For large build systems the result can be a significant speed improvement if there are only a few files to rebuild. We believe a similar implementation strategy could be applied to Shake.
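To make the change-driven idea concrete, here is a small sketch of my own (nothing to do with Tup's actual implementation): given a reverse dependency map and the set of files the operating system reports as changed, the dirty targets can be computed without examining anything else:

import qualified Data.Map as Map
import qualified Data.Set as Set

type File = String

-- Reverse dependency map: for each file, the targets built directly from it
type RevDeps = Map.Map File [File]

-- Everything reachable from the changed files is dirty; nothing else is examined
dirty :: RevDeps -> Set.Set File -> Set.Set File
dirty rdeps changed = go Set.empty (Set.toList changed)
  where
    go seen [] = seen
    go seen (f:fs)
        | f `Set.member` seen = go seen fs
        | otherwise = go (Set.insert f seen) (Map.findWithDefault [] f rdeps ++ fs)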
Another difference from make is the treatment of dead build results. If a rule to build foo is deleted from the rule list, then Tup automatically deletes the file foo. The problem of dead build results is serious, resulting in builds succeeding that should have failed, and that will fail as soon as a clean build is performed (to reduce this risk, we suggest an overnight build which starts from scratch). However, it is often useful to have build modes which generate skeleton files that are then modified by the user -- deleting these files would be most unwelcome. It would be easy to add support for deleting dead build results to Shake, but we have chosen not to.
Fabricate
The key innovation in the Fabricate build system is that dependencies do not need to be stated explicitly. A build system is a Python program, which primarily executes system commands in order. While executing the commands, Fabricate uses system tracing (strace on Linux) to record which files are accessed. In future runs, if the same system command is reached but none of the files it used have changed, the command is skipped. The resulting build systems are simple, and avoid the difficulties of correctly specifying dependencies.
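The core decision can be sketched in a few lines (an illustration of my own, not Fabricate's Python code -- the tracing and digesting, which are the hard parts, are left abstract):

import qualified Data.Map as Map

type Digest = String

-- What a previous run recorded for one command: the files it accessed and their digests
type Trace = Map.Map FilePath Digest

-- Skip the command if every file it touched last time still has the same digest
canSkip :: Trace -> (FilePath -> IO Digest) -> IO Bool
canSkip old digest = fmap and $ mapM same $ Map.toList old
  where same (file, d) = fmap (== d) (digest file)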
There are two inherent difficulties for build systems without explicit dependencies. Firstly, the system tracing mechanisms on different platforms are varied, and on Windows are somewhat fragile. Secondly, parallelism cannot be inferred automatically -- Fabricate requires explicit grouping annotations to use parallelism.
Labels:
shake
Saturday, February 11, 2012
Shake: A Better Make
Summary: I have just released Shake, a library that allows you to write build systems in Haskell (think Make, but much better).
At the Haskell Implementors Workshop 2010 I described a Haskell build system named Shake (video here), hoping an implementation would be available "soon". Better late than never, I am delighted to announce that Shake is now available on Hackage. This version is a from-scratch reimplementation, based on an improved underlying theory, all completed in my spare time. Several users have already experimented with this version, so it is reasonably robust.
A Simple Example
As a simple example of a Shake build system, let us build the file result.tar from the files listed by result.txt:
import Development.Shake
import Development.Shake.FilePath

main = shake shakeOptions $ do
    want ["result.tar"]
    "*.tar" *> \out -> do
        contents <- readFileLines $ replaceExtension out "txt"
        need contents
        system' "tar" $ ["-cf",out] ++ contents
We start by importing the modules defining both Shake and routines for manipulating FilePath values. We define main to call shake with the default shakeOptions. As the second argument to shake, we provide a set of rules. There are two common forms of rules, want to specify target files, and *> to define a rule which builds a file pattern. We use want to require that after the build completes the file result.tar should be ready.
The *.tar rule describes how to build files with the extension .tar, including result.tar. We call readFileLines on result.txt (the output name with its .tar extension changed to .txt), binding contents to the list of files that should go into result.tar. Next, we depend on (need) all the files in contents, so if any of them change the rule will be rerun. Finally we call the tar program. If either result.txt changes, or any of the files listed by result.txt change, then result.tar will be rebuilt.
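As an illustration of how further rules slot in (my own example, not from the original post), one could add a compilation rule alongside the tar rule using exactly the same combinators:

import Development.Shake
import Development.Shake.FilePath

-- Sketch: each .o file is built from the .c file of the same name
main = shake shakeOptions $ do
    want ["result.o"]
    "*.o" *> \out -> do
        let src = replaceExtension out "c"
        need [src]
        system' "gcc" ["-c", src, "-o", out]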
If you are interested in writing a build system using Shake, I suggest you read the documentation. One thing noted in the documentation is that if ghc --make or cabal is capable of building your project, use that instead. Custom build systems are necessary for many complex projects, but many projects are not complex.
As an example of how to do this task in Make, see this StackOverflow question. I'll cover the differences and similarities with other build tools in a forthcoming blog post.
Labels:
shake
Sunday, January 08, 2012
Pascal's Triangle in Haskell
Summary: I'm continually amazed how concise and powerful Haskell is, compared to mainstream languages. This post describes how to write Pascal's Triangle, and gives some of the advantages of using Haskell.
Often, when programming in Haskell, I feel like I'm cheating. As an example, I recently came across this article by William Shields, which suggests that prospective interview candidates be given simple programming tasks like generating Pascal's Triangle. William gives examples in Python, including some of the answers a typical candidate might give.
Pascal's Triangle in Haskell
William describes Pascal's Triangle as:
"The root element is 1. Every other element is the sum of the one or two above it (diagonally left and diagonally right)."
As a Haskell programmer, the obvious technique to use is induction. The first row of the triangle is [1], and each row can be computed from the previous row by adding the row shifted left, and the row shifted right:
next xs = zipWith (+) ([0] ++ xs) (xs ++ [0])
pascal = iterate next [1]
Here, we define next to take one row and produce the next row. We then use the iterate function to repeatedly apply next starting at the root element. The solution is short, and follows from the definition.
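As a quick sanity check, the definitions can be wrapped in a main that prints the first few rows (the output shown is what I expect from running it, not taken from the post):

next xs = zipWith (+) ([0] ++ xs) (xs ++ [0])
pascal = iterate next [1]

main :: IO ()
main = mapM_ print $ take 5 pascal
-- prints:
-- [1]
-- [1,1]
-- [1,2,1]
-- [1,3,3,1]
-- [1,4,6,4,1]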
Laziness for Modularity
William originally posed three questions:
- Print out the triangle to a specific row: print $ take 100 pascal
- Return a given row of the triangle: pascal !! 50
- Return a given element (by row and index) of the triangle: pascal !! 10 !! 5
Thanks to laziness, we can concisely answer all these questions in terms of the original pascal definition. In contrast, using a language such as Python, the best solution (Dynamic Programming from the original article) can only perform the first task.
Interview problems
The original article was not about the choice of programming language, but about choosing suitable questions for interviewing programmers. I agree with William that Pascal's Triangle is a suitable problem - it isn't a trick puzzle, it isn't an API knowledge quiz - it's about understanding how to program. Given how much easier the problem is to solve in Haskell, I wonder if using Haskell in a job interview should be considered cheating? ;-)
Hiding Win32 Windows
Summary: This post describes how to hide windows on the Windows operating system, by using the Win32 API from Haskell.
Imagine that whenever your computer restarts, it pops up a message box:
If you interact with this window, your computer will be destroyed. You can't kill the process, or dismiss the window harmlessly. (This scenario isn't hypothetical...) The solution is to hide the window, so it still exists but is out of the way of misplaced clicks. To hide the window, we first find its OS handle, then we call some Win32 API functions.
Find the Window
To find the handle of the window, we use Spy++. Spy++ comes with Visual Studio, and is bundled in the Express edition (the free version) from 2010 onwards. Start Spy++, go to Search, Find Window, then use the finder tool to select the window in question. Check that the caption of the window matches what Spy++ reports:
The important information is the handle: 0004061E.
Hide the Window
To hide the window you need a programming language capable of making Win32 API calls. In the past I have used Word VBA as the host language, but Haskell is probably easier. Start GHCi, and type:
$ import System.Win32
$ import Graphics.Win32
$ showWindow (castUINTToPtr 0x0004061E) sW_HIDE
Replace 0x0004061E on the final line with 0xyour-handle. The final line should cause the window to be hidden, saving your computer from destruction.
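If you find yourself doing this regularly, a tiny compiled wrapper saves retyping in GHCi. This is a sketch of my own, not from the post; it relies on Haskell's read accepting hex literals such as 0x0004061E:

import System.Environment
import System.Win32
import Graphics.Win32

-- Sketch: pass the handle Spy++ reported, e.g.  hidewindow 0x0004061E
main :: IO ()
main = do
    [arg] <- getArgs
    _ <- showWindow (castUINTToPtr (read arg)) sW_HIDE
    return ()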
Thanks: Thanks to Roman Leshchinskiy for reminding me that there were better solutions than just trying not to click the window. Thanks to the Win32 Haskell developers - the Win32 binding was a lot of work, which not many people ever see.