Perspectives on DSLs: Software Quality
Software quality is a huge field: there are lots of different aspects that could be labeled as being part of “good software”. These include:
- fitness for purpose (there might be a competitive word processor that has way more formatting options, so it is “better quality”)
- number of errors (sure, that competitor has a lot more features, but it fails to save the document every other time — not good quality!)
- a wide range of non-functional concerns such as performance, memory consumption and security (this other word processor takes so much memory I can’t run it on my machine)
- and the internal quality of the codebase, which affects your future ability to achieve the previous three (sure, Word is great, but we had to spend a lot of money to get rid of code that was 30 years old)
In the remainder of this little essay I will discuss how DSLs affect each of these.
Fitness for Purpose
Ensuring fitness for purpose is also known as validation: the process of checking whether the software actually fulfills the needs of its users, and how well. It is important to understand that this is not just a matter of checking the implementation against the requirements, because those requirements might themselves be incomplete or faulty; this is one of the motivations behind agile development, which deemphasizes the explicit and separate capture of requirements independent of the implementation. In other words, the quality of the requirements themselves is part of the overall software quality.
So having effective means of expressing those requirements is key, and this is where the main benefit of domain-specific languages appears. You can move away from expressing requirements as prose text or in other informal representations. Instead you provide your users with a precise language to “directly write the code”, but in a language that is high-level enough for them to understand, even if they are not professional software engineers.
And even if your users are not able or willing to write in that language, a well-designed DSL is much easier for them to review, enabling effective collaboration between the future users and those who write the models using the DSL. Additionally, suitably formal expressions of a system’s expected behavior allow tool-based validation and simulation, further helping to ensure quality.
A term used by the engineering community is front-loading: you validate more in the earlier stages of development, and on more abstract representations, thereby enabling a much more thorough understanding of the system’s behavior. And it is well known that it is much cheaper to fix “errors” early in a system’s development.
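To make this concrete, here is a minimal sketch in Python. The state-machine notation, the model and the checks are all hypothetical illustrations I made up for this essay (real language workbenches such as Xtext or MPS provide far richer notations and checks), but they show the principle: a future user can read and review the model, and a tool can validate it long before any implementation exists.

```python
# A minimal sketch of front-loading: validating a domain-level model
# before any implementation exists. Notation and checks are hypothetical.

MODEL = """
state Idle
  on insertCard -> Active
state Active
  on ejectCard -> Idle
  on startCall -> Calling
state Calling
  on hangUp -> Active
"""

def parse(text):
    """Parse the toy syntax into {state: {event: target}}."""
    machine, current = {}, None
    for line in text.strip().splitlines():
        tokens = line.split()
        if tokens[0] == "state":
            current = tokens[1]
            machine[current] = {}
        elif tokens[0] == "on":  # "on <event> -> <target>"
            machine[current][tokens[1]] = tokens[3]
    return machine

def validate(machine):
    """Domain-level consistency checks, run at modeling time."""
    errors = []
    targets = {t for trans in machine.values() for t in trans.values()}
    for state, transitions in machine.items():
        for event, target in transitions.items():
            if target not in machine:
                errors.append(f"{state}: '{event}' leads to unknown state '{target}'")
    for state in list(machine)[1:]:  # first declared state is the initial one
        if state not in targets:
            errors.append(f"state '{state}' is unreachable")
    return errors

print(validate(parse(MODEL)) or "model is consistent")
```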
Implementation Errors
Let’s say a word processor does not have a button to make text bold. This might be because the developers didn’t realize that this was a requirement, or because the requirement was written in a way the developer could not understand. In these two cases I would say the missing button is a problem with fitness for purpose, as discussed in the previous section.
But let’s say the program does have such a button, but whenever it is rendered, an error happens: for example, the icon file cannot be found, a null-pointer exception occurs, and the button never shows up. This is not a problem with the requirements; it’s a bug in the implementation. Checking that the implementation performs correctly relative to what is required is usually called verification, and it is done through testing, code reviews and (sometimes) formal approaches such as model checking.
DSLs can help in many ways here, from generating test cases based on the (domain-level) semantics in the model, through redundant execution, to generating the input for verification tools. But the main benefit is the fact that the implementation is automatically generated, or a generic interpreter “runs” the model. This kills lots of opportunities for developers to make random errors. It does require that the generator or interpreter be correct (so they do not introduce errors themselves), but experience demonstrates that this can be achieved relatively easily, because any such error is systematic and therefore usually easier to track down. And once fixed, the fix applies in all cases.
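As an illustration, here is a sketch of such a generator, continuing the hypothetical state-machine example from above; the template is deliberately naive, and a real generator would of course write files rather than exec strings.

```python
# A minimal sketch of generating the implementation from a model.
# MACHINE is the parsed form of the hypothetical state-machine DSL above.

MACHINE = {
    "Idle":    {"insertCard": "Active"},
    "Active":  {"ejectCard": "Idle", "startCall": "Calling"},
    "Calling": {"hangUp": "Active"},
}

def generate(machine):
    """Emit a Python class from the model. Every transition comes from the
    same template, so a template bug is systematic: fix once, regenerate."""
    lines = ["class StateMachine:",
             "    def __init__(self):",
             f"        self.state = {next(iter(machine))!r}",
             "",
             "    def handle(self, event):"]
    for state, transitions in machine.items():
        for event, target in transitions.items():
            lines += [f"        if self.state == {state!r} and event == {event!r}:",
                      f"            self.state = {target!r}",
                      "            return"]
    lines.append("        raise ValueError(f'{event} not allowed in {self.state}')")
    return "\n".join(lines)

namespace = {}
exec(generate(MACHINE), namespace)  # stands in for writing a source file
sm = namespace["StateMachine"]()
sm.handle("insertCard")
assert sm.state == "Active"
```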
The buzzword here is correctness by construction: you don’t fix errors later; you guarantee, through tools, that you never make them in the first place. While correctness by construction in its full generality is a very strong claim and hard to achieve, DSLs, models and generation certainly contribute to this goal.
Non-Functional Properties
Well-executed DSL-based development generates code against (or runs an interpreter on) a platform that provides infrastructure services; I am vague here because the nature of this platform varies greatly, depending on the domain and other constraints. But the point is that the primitives used by the DSL are usually implemented (manually) in the platform. Whether these primitives are fast or slow, scale well, consume too much memory, offer the potential for malicious exploits or expose a privacy risk is not really determined by the language, but by the platform. A DSL (and its generation or interpretation) effectively forces the architecture to use such platforms, reducing the reliance on (potentially sub-optimal) ad-hoc implementation choices.
Similarly, for the things not “enforced” by the platform, the generator produces code based on templates that embody patterns and other best practices, reducing the potential for misusing the platform or for “forgetting” to consider, say, privacy.
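Here is a sketch of what that might look like, with plain sqlite3 standing in for the platform; the entity model and the template are hypothetical. Because the parameterized query is baked into the template, no piece of generated code can “forget” it.

```python
# A minimal sketch of a template that bakes a best practice into all
# generated code; sqlite3 stands in for the platform.
import sqlite3

ENTITIES = {"Customer": ["name", "email"], "Order": ["item", "amount"]}

TEMPLATE = '''\
def insert_{table}(conn, {args}):
    # Parameterized query: the template makes SQL injection impossible;
    # no developer can "forget" this best practice in generated code.
    conn.execute("INSERT INTO {table} ({cols}) VALUES ({marks})", ({args},))
'''

def generate(entities):
    return "\n".join(
        TEMPLATE.format(table=name.lower(),
                        cols=", ".join(fields),
                        args=", ".join(fields),
                        marks=", ".join("?" * len(fields)))
        for name, fields in entities.items())

namespace = {}
exec(generate(ENTITIES), namespace)  # stands in for writing a source file
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name, email)")
namespace["insert_customer"](conn, "Ada", "ada@example.com")
print(conn.execute("SELECT * FROM customer").fetchall())
```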
Maybe the biggest benefit comes from the separation of model and implementation through a generator or interpreter: if, over time, you realize that there are better ways to use the implementation technology (to make the system faster, more memory-efficient or more secure), you make the change only in the platform and the generators, and then regenerate the system from the models. This is a general benefit of reuse (you improve the reused artifact and all its clients benefit), but generators are “reusable decisions of how to implement something”, an opportunity for reuse you don’t have when you write code manually.
There’s another consideration: there are lots of tools for addressing particular non-functional properties, from performance and scalability analysis to security. Most of them require a particular input format optimized for whatever analysis they perform. If you describe your system through DSLs, it is usually easy to generate the input for such tools and exploit their analyses.
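Again a minimal sketch: the hypothetical state-machine model from above, exported as Graphviz DOT so that standard graph tooling can render and inspect it. Model checkers such as SPIN or NuSMV have their own input formats, but the principle of deriving them from the model is the same.

```python
# A minimal sketch of deriving an analysis tool's input from a model:
# the hypothetical state machine from above, exported as Graphviz DOT.

MACHINE = {
    "Idle":    {"insertCard": "Active"},
    "Active":  {"ejectCard": "Idle", "startCall": "Calling"},
    "Calling": {"hangUp": "Active"},
}

def to_dot(machine):
    edges = [f'  {source} -> {target} [label="{event}"];'
             for source, transitions in machine.items()
             for event, target in transitions.items()]
    return "digraph Machine {\n" + "\n".join(edges) + "\n}"

print(to_dot(MACHINE))  # feed into dot(1) or any graph analysis tool
```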
Internal Code Quality and Maintenance
In terms of code quality and maintainability, there is of course a huge advantage because of the clear separation of concerns: domain stuff lives in the language and the models, and technical stuff in the generator/interpreter and the platform. It’s much easier to evolve them separately this way than when both are “mixed” in implementation code. As an extreme case, it is certainly feasible to port the whole application to a new platform. The famous scrambled eggs example comes to mind.
There’s a long-standing discussion about whether the quality (in the sense of style, not in the sense of non-functionals) of the generated code matters. On the one hand, nobody needs to read it; it’s like assembly. On the other hand, people actually do have to read it while they build and validate the generators, and perhaps later when they have to fix bugs. My opinion: generate “nice” code unless this results in significantly higher complexity of the generators. If you have to trade off generator maintainability/simplicity against generated-code maintainability/simplicity, you should opt for the generator, because that is what you will have to maintain manually.
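To illustrate what is at stake, here are two hypothetical outputs a generator might produce for the same transition of the state-machine example; both behave the same, but only the second is pleasant to step through in a debugger.

```python
# Hypothetical generated code in two styles, same transition semantics.

# Variant 1: what a generator emits when nobody optimizes for readers.
def f0(s0, e0):
    if s0 == "Idle" and e0 == "insertCard": return "Active"
    raise ValueError()

# Variant 2: "nice" generated code: real names, structure, error messages.
def handle(state, event):
    if state == "Idle" and event == "insertCard":
        return "Active"
    raise ValueError(f"{event} not allowed in {state}")

assert f0("Idle", "insertCard") == handle("Idle", "insertCard")
```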
TANSTAAFL
There ain’t no such thing as a free lunch, of course. To get all these benefits, you have to account for the effort of building and maintaining languages, IDEs and generators. While this is probably less effort than you might expect, thanks to the availability of language workbenches, you have to make the decision consciously. I have discussed this aspect here.
Summing up: DSLs, and the automated construction of the implementation they enable, can have significant benefits for software quality.