Why MDA Is Like Waterfall Process And Is Doomed To Suffer The Same Fate

It occurred to me the other day that Model Driven Architecture (MDA) is a lot like waterfall processes. For those of you who don’t know, MDA is a somewhat different approach to writing software systems. The basic idea is to define a software system in a platform-independent way (using some kind of DSL, for example) and then, once the system is fully defined, your MDA tools will be able to generate a platform-specific version of your system which you can then deploy and run (yeah, as simple as that :)). To simplify it even more, the MDA ideal is to be able to represent a system diagrammatically (i.e. draw it), press a button, and get a fully operational version of the software system you were after. If you still need to know more, there is Google or Wikipedia.
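To make the idea concrete, here is a toy sketch of the generate-from-a-model workflow in Python. This is my own illustration, not any real MDA toolchain: the "model" is just plain data standing in for a platform-independent model, and the "generator" emits platform-specific (Python) source.

```python
# Toy sketch of the MDA idea (not any real MDA tool): declare the system as
# a platform-independent model, then generate platform-specific code from it.

MODEL = {  # the platform-independent model (PIM)
    "entity": "Customer",
    "fields": ["name", "email"],
}

def generate_python(model):
    """Generate platform-specific source (a Python class) from the model."""
    fields = model["fields"]
    lines = [f"class {model['entity']}:"]
    lines.append(f"    def __init__(self, {', '.join(fields)}):")
    for f in fields:
        lines.append(f"        self.{f} = {f}")
    return "\n".join(lines)

source = generate_python(MODEL)
namespace = {}
exec(source, namespace)  # "press the button"
customer = namespace["Customer"]("Ada", "ada@example.com")
```

The point of the sketch is only the shape of the workflow: model in, running system out, with the generator as the middleman.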

MDA

MDA has been around for a few years now, and when the idea first reared its head it was (like many others) touted as a potential ‘silver bullet’ solution for building software quickly and efficiently. The surprising thing is how many people have bought into this; I mean, there are companies out there who have built a business around MDA. The reason I am surprised is that waterfall processes tried to do to process what MDA tries to do to software implementation. Any developer worth their salt knows exactly how ‘well’ waterfall processes work, so what made anyone believe that doing the same thing to the actual implementation of the software system would be any more successful?

How Are MDA and Waterfall Similar?

Waterfall processes try to fully define all the steps needed to create a software system from a process perspective. They attempt to create a perfect roadmap that will cover every contingency you’re likely to encounter when building a software system. The great delusion here is that no one can ever fully take into account every possible combination of domain, people, circumstances, organizational differences, etc. – and did I mention people? Of course it quickly became apparent that, for all those reasons, no process can create a perfect roadmap. Although when I say quickly, I mean it in cosmic terms, a veritable blink of an eye – in human terms it was a few decades. And it will take a few more before we put waterfall processes in the grave where they belong.

What MDA tries to do is create an abstraction of the system you’re trying to build. It is either a visual abstraction or a DSL-based one; either way it is still an abstraction, and you then try to generate a system based on it. Leaving aside any issues related to deploying what you generated, in order for your system to do what you need, your abstraction would have had to be spot on. So, in essence, you’re trying to create a perfect system metaphor. I am not sure if that’s an oxymoron, but I do know that I’ve never been able to come up with a perfect metaphor for anything, and I don’t know anyone who has. But that is exactly what we’re striving towards when we do MDA. While the metaphor holds up, everything is fine, but as soon as it breaks down – as it must – what do we do?

What DO We Do When The Metaphor Breaks Down?

We do the only thing we can do: we go back to the tried and true. We crack open an editor and try to fill in the gaps by hand. But of course we can no longer use our DSL for this, or our nice picture; we must work within the bounds of our platform. And this, of course, is where the system gets orders of magnitude more complicated. MDA tools generate the platform-specific version of our system, so we must find a way to hook our little ‘abstractional patches’ into the generation process. And of course we need to make sure that it all still fits together as the system evolves. Not to mention the fact that it is really difficult to visualize how our patches fit into the overall architecture (since we are now trying to fit a little bit of reality into an abstraction, and we’re just not wired to do that easily). And should I even bother to mention that we’re no longer platform independent!
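One common workaround for this (sketched hypothetically here, not taken from any particular MDA product) is the "protected region": the generator owns the file, and hand-written patches must live between markers that the generator carries forward on every regeneration.

```python
# Hypothetical sketch of the 'abstractional patch' problem: hand-written
# fixes survive regeneration only if they sit inside marked regions.
import re

TEMPLATE = (
    "def total(order):\n"
    "    value = sum(price for price in order)\n"
    "    # BEGIN HAND-WRITTEN\n"
    "{patch}"
    "    # END HAND-WRITTEN\n"
    "    return value\n"
)

def regenerate(previous=""):
    """Re-emit the generated code, preserving any hand-written patch."""
    m = re.search(r"# BEGIN HAND-WRITTEN\n(.*?) *# END HAND-WRITTEN",
                  previous, re.DOTALL)
    return TEMPLATE.format(patch=m.group(1) if m else "")

v1 = regenerate()                                 # fresh generation, no patch
patched = v1.replace("# BEGIN HAND-WRITTEN\n",
                     "# BEGIN HAND-WRITTEN\n"
                     "    value *= 0.9  # manual fix the model cannot express\n")
v2 = regenerate(previous=patched)                 # the patch survives
```

Even in this tiny sketch you can see the cost: the hand-written line is platform-specific, invisible to the model, and the generator has to be taught to carry it along forever.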

What you end up with is an environment where, the further you go along, the more you feel like everything is a little bit out of control. In reality it may not actually be as bad as some systems that are developed in the normal way (should I say ‘traditional’? I’d better not :)). But you can’t hold the two separate representations of parts of the system in your mind at the same time, and that creates a very real sense that you can’t quite get a handle on what you’re doing.

All this brings us back to me being really surprised that MDA got even the little bit of traction that it has. We had an almost perfect example of a situation where a similar solution just didn’t work (waterfall processes); what’s more, it was pretty much in the same domain. But that’s the thing: what we had was an abstraction, a metaphor if you prefer. It was a good one but it wasn’t perfect, and so many of us missed it and MDA was born – so I guess I shouldn’t really be surprised. We’re not incompetent though; we’ve twigged to the fact that waterfall is not the way to go, and so it is going the way of the dinosaur. MDA is going to go the same way – eventually. It’s just that it takes us a cosmic eye-blink to get our act together.

  • http://www.modeldrivensoftware.net/ Mark Dalgarno

    I think you’ve put together a bit of a straw man here.

    The best and most common (AFAIK) approach is to evolve the DSL and associated code generator if you hit a dead-end. Plenty of people are also using agile methods to develop model-driven software; evolving their DSLs and code generators iteratively as their understanding of the problem domain grows.

    • http://www.skorks.com Alan Skorkin

      Maybe, but in my opinion it goes a little bit against the grain to evolve the tool that generates your system every time your understanding of your domain grows (which is potentially all the time). Why not just evolve your system to start with and bypass the middleman?

      I am not surprised that people are using agile processes to develop model-driven software, from what I understand doing it that way would be one of the few things that would keep it from becoming a total disaster.

      Look, I am more than happy to be convinced that I am wrong, but at the moment the only thing that MDA brings to the table, from what I can see, is an unnecessary level of complexity.

      • MarkusQ

        Evolving both the DSL and the system written in it in parallel makes perfect sense.

        As an internal interface the DSL represents a factoring of the total complexity, allowing two simpler systems to do the work of an otherwise much more complicated whole. It splits the problem in two, and lets you think about the two in relative isolation. It’s the same reason we don’t write in machine code, or even assembly language any more (well OK, I do, but most people don’t).

        Consider the analogous case with integers. 39916800 is an awkward number to reason about directly, but if we factor it into, say, 332640 * 120, each piece becomes a little bit easier to hold in our heads. And some decompositions are more useful than others. Splitting it into 77*25*9*9*8*8*2*2, for example, while more typing, certainly gives us a better handle on the number’s internal structure. And as we play around with it, we may want to refactor our decomposition, looking for that sweet spot – the ah-ha! moment where we see that it’s “really” 11*10*9*8*7*6*5*4*3*2*1, which we can just write as 11! for a tremendous simplification.
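That arithmetic is easy to verify mechanically; a quick Python sanity check of the decompositions (added here as an editorial aside):

```python
import math

n = 39916800
assert n == 332640 * 120                      # a coarse two-piece factoring
assert n == 77 * 25 * 9 * 9 * 8 * 8 * 2 * 2   # a finer decomposition
assert n == math.factorial(11)                # the ah-ha!: n is just 11!
```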

        I’m no fan of the waterfall process precisely because the decomposition was fixed. I like MDA-like ideas in so far as they retain this fluidity to factor problems along natural fault lines, finding clean internal interfaces (be they DSLs, APIs, or whatever else you want to call them) to reduce large problems into small chunks.

        — MarkusQ

        • http://www.skorks.com Alan Skorkin

          I agree that evolving a DSL and a system written in it together makes sense; what doesn’t make sense to me is to write the WHOLE system using a DSL in the first place. There are places where a DSL might be helpful, but representing the whole domain in a DSL just doesn’t sit well with me.

          What you say about numbers makes sense, but 11! is an exact representation, albeit a simplified one. Doing that with a number is relatively simple (it can be hard, I know :)). But trying to do that to a large system…

          • MarkusQ

            > Doing that with a number is relatively simple (it can be hard, I know :)). But trying
            > to do that to a large system…

            Sure, if you compare large systems to small integers, but when the sizes are comparable (same number of bits, say) I’d say they’re about the same.

            As for writing the “whole system” in a DSL, what do you think C is? Or Haskell or Ruby or whatever? All systems are developed in a DSL; the only question is whether the domain is “general computing with a von Neumann machine” or “string manipulation” or “first-order logic” or what. It isn’t like you have an alternative. So it boils down to choosing an appropriate abstraction.

            Personally, I favor composable DSLs and adding a rich level of abstraction centered on the decomposition of the problem (X belongs here, written in this language, because it’s part of this particular aspect of the system) – like MVC on steroids.

            — MarkusQ

  • objbuilder

    Yes, I must agree with Markus that there’s nothing inherent in MDA that mandates a waterfall process. Writing your own PIMs (platform-independent models) is perhaps the greatest reason why MDA (or DSLs, MDD, etc.) is succeeding so well compared to CASE tools. They *were* tied to the waterfall methodology; MDA is not.

    I also agree with Alan that trying to force all of your system’s development into an MDA tool is insane. Only use MDA where it makes sense, like the domain model.

    As someone who uses MDA, I believe it’s the next best way to move up the abstraction level. The tools are still young, but EMF is looking better all the time.

  • http://www.skorks.com Alan Skorkin

    > As for writing the “whole system” in a DSL, what do you think C is? Or Haskell or Ruby or
    > whatever? All systems are developed in a DSL, the only question being is the domain “general
    > computing with a VonNeumann machine” or “String manipulation” or “first order logic” or
    > what? It isn’t like you have an alternative. So it boils down to choosing an appropriate
    > abstraction.

    I agree with what you say here; languages are a type of DSL, and I don’t have any objection to using a language to represent a system (that would be insane considering my profession). My objection stems mainly from creating a domain-specific DSL for a problem, then using that to generate another DSL-like representation (Java, Ruby, etc.).

    Of course there may be cases where that makes sense, but as objbuilder – above – pointed out, forcing the development of the whole system into an MDA tool is not ideal. However, doing it to discrete parts of the system – the domain model was a good example – may be beneficial in some situations.

    Which I guess is sort-of like what you said about MVC on steroids :).