Having been the unsuspecting Programmer on a couple of these projects, I can report that the simple solution does not work.
The typical pattern was for the interested stakeholder to think up a few possible directions they might take the business or the software in the next version or two. These would be delivered as a kind of soft 'keep this in mind' requirement. Accommodating them demanded flexibility beyond what the base system actually needed, and the complexity that flexibility introduced increased the amount of code to design and write. In effect, a down payment was made on a future version or two, but only the initial version was ever delivered.
This situation would have been fine, except that when the future arrived, it had the unpleasant habit of taking a completely unanticipated path. Frequently, it took a path that violated some key assumption that was never supposed to change in the original system. Which, of course, meant that the assumption had been coded right into the system with no thought for flexibility at all. The down payments on the other features went to waste, and no time was saved in the end.
Of course, the stakeholder seemed to see this as a failure to be sufficiently flexible. It made me wonder, though: how much flexibility is needed to keep future surprises from forcing a major re-engineering effort? How far can flexibility ultimately be pushed?
Lisp again
It seemed like the best way to make changing the system as easy as possible would be to split the application into layers. In a shopping-cart system, one layer would provide things like invoices, line items, shipping-cost estimation, and so forth; the next layer up would link these facilities together into a cohesive whole. What would that upper layer consist of? Why, it would be almost like source code to the base-level interpreter, and all those lovingly crafted objects like the invoice would appear as something akin to host objects in JavaScript: pre-populated values providing access to an otherwise unreachable set of functionality. As a speed hack, both layers could hypothetically be written in the same language, but then it would take some discipline to keep them cleanly separated. Well, it would've taken my 24-year-old self some discipline, anyway.
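To make the layer split concrete, here is a minimal sketch in Python. Every name in it (`Invoice`, `LineItem`, `estimate_shipping`, `checkout_script`) is invented for this illustration; the point is only the shape: the engine layer defines the domain objects, and the script layer receives them as pre-populated "host objects" rather than constructing or importing anything itself.

```python
# --- "engine" layer: domain objects for a hypothetical shopping cart ---

class LineItem:
    def __init__(self, description, unit_price, quantity):
        self.description = description
        self.unit_price = unit_price
        self.quantity = quantity

    def total(self):
        return self.unit_price * self.quantity


class Invoice:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def subtotal(self):
        return sum(item.total() for item in self.items)


def estimate_shipping(invoice):
    # Placeholder rule: flat rate plus a per-item charge.
    return 5.00 + 1.50 * len(invoice.items)


# --- "script" layer: glue that only touches what the engine hands it ---

def checkout_script(engine):
    invoice = engine["Invoice"]()
    invoice.add(engine["LineItem"]("widget", 9.99, 3))
    return invoice.subtotal() + engine["estimate_shipping"](invoice)


# The engine exposes a dictionary of host objects to the script,
# much like a browser exposes `document` to JavaScript.
host_objects = {
    "Invoice": Invoice,
    "LineItem": LineItem,
    "estimate_shipping": estimate_shipping,
}
print(checkout_script(host_objects))  # 3 * 9.99 + 5.00 + 1.50 * 1
```

Keeping both halves in one language, as here, is exactly the speed hack mentioned above: nothing but discipline stops the script layer from reaching around the `engine` dictionary.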
Thinking of these layers as engine and script, I realized I had actually seen this before. These were domain-specific languages, and in a shocking parallel realization, UnrealScript was also a domain-specific language. That old, oft-repeated advantage about creating mini-languages in Lisp suddenly began to take on real meaning, once I could see that these languages were so useful that they were implemented outside of Lisp, too.
Worse, the Java+XML world that I was always making fun of, being way too smart to bother with such obviously silly nonsense, began to make sense. In the Kingdom of Nouns, Java classes implemented domain-specific semantics in order to execute an XML script. Java's XML facilities provided a standard, runtime-controllable reader for languages written in XML syntax, just like Lisp provided runtime control for its reader of S-expressions. Stupid Java, not being dumb after all.
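The XML-as-script idea can be sketched in a few lines. This is a hedged illustration, not anyone's real schema: the element and attribute names (`order`, `item`, `price`, `discount`) are invented, and the host code supplies their semantics, just as the Java classes did for their XML.

```python
import xml.etree.ElementTree as ET

# The "script": element names form a tiny domain-specific vocabulary.
script = """
<order discount="0.10">
    <item price="9.99" quantity="3"/>
    <item price="24.50" quantity="1"/>
</order>
"""

def run_order(xml_text):
    # The "engine": it walks the tree and gives each element meaning.
    root = ET.fromstring(xml_text)
    subtotal = sum(
        float(item.get("price")) * int(item.get("quantity"))
        for item in root.findall("item")
    )
    discount = float(root.get("discount", "0"))
    return subtotal * (1 - discount)

print(round(run_order(script), 2))
```

The standard XML parser plays the same role the Lisp reader does: it turns text into a tree at runtime, so the interpreter only has to supply meaning, not syntax.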
After those revelations, it seemed that the ultimate level of flexibility is also a domain-specific language. As long as the necessary primitives are in place, anything computable can be computed in the language. (Though that is sometimes a uselessly academic distinction, if the operations are too awkward or cumbersome to actually use.)
I didn't get to try out my newly-discovered ideas at the time, and I haven't since; back then, I preferred the devil I knew, and in most of the places I have worked, the management has not been nearly so interested in technical awesomeness as simple, maintainable code. I wonder how things would go if I tried to create a DSL today.