Building Quality In Legacy Systems — The Art of Asking Questions

Mufrid Krilic
4 min read · Mar 31, 2020


The notion of quality in software is more a matter of debate and opinion than of strict definition. One of the more distinguished attempts to put the topic in perspective was presented by Gojko Adzic in his blog post “Redefining Software Quality” some years ago. In that post Gojko draws an analogy to Maslow’s pyramid of needs, suggesting that investments in software quality at the lower levels of the pyramid are fundamental before proceeding to higher levels, yet over-investing at the lower levels yields smaller gains than intended.

In my experience this definition proved particularly useful in the context of legacy systems in an enterprise environment, where teams need to deal with customer expectations built up through years of decisions and knowledge that are often dispersed across the organization and the code base. In this post I will present a toolkit based on Gojko’s definition, built as part of my work as a coach, to help teams navigate that landscape while building quality into the legacy system along the way.

The toolkit consists of a set of questions that applied broadly to the legacy system my team was working on. The questions let us look at the legacy system through the lens of the following perspectives:

  • Delivery process
  • Conditions of acceptance
  • Code and product maintainability
  • Security and performance
  • Domain-specific context in existing operational environments

We found that by asking questions within these categories we would challenge our perception of the depth of the work in front of us and do our best to anticipate the unknown-unknowns of the legacy system. Moreover, it turned out that each perspective can be mapped to a corresponding level in Gojko’s pyramid of software quality:

Starting from the bottom-most layer, it is obvious that you should invest in understanding the requirements, combined with practices like TDD that form the backbone of software quality. However, unexpected tasks tend to pop up when discussing the legacy system’s delivery process. Usually this means taking into account a number of manual steps performed by your team or the customer’s operations staff to get things installed in the production environment. Hence particular attention should be devoted to, first, understanding the delivery process and, then, automating it as much as possible.
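As a minimal illustration, and not the actual process from our system, even a small script that captures the known manual steps in order makes the delivery process explicit and repeatable, and gives a natural starting point for further automation. The step names and commands below are hypothetical placeholders:

```python
# Minimal sketch: capturing manual delivery steps as an explicit, ordered script.
# Step names and commands are hypothetical placeholders, not an actual process.
import subprocess
import sys

DELIVERY_STEPS = [
    ("run database migrations", ["python", "migrate.py", "--env", "production"]),
    ("copy binaries to the app server", ["rsync", "-a", "build/", "appserver:/opt/legacy-app/"]),
    ("restart the application service", ["ssh", "appserver", "systemctl restart legacy-app"]),
    ("run smoke tests", ["python", "smoke_tests.py", "--env", "production"]),
]

def deliver() -> None:
    """Execute each documented step in order and stop at the first failure."""
    for description, command in DELIVERY_STEPS:
        print(f"==> {description}")
        if subprocess.run(command).returncode != 0:
            sys.exit(f"Step failed: {description}")

if __name__ == "__main__":
    deliver()
```

Even this level of automation turns tribal knowledge about “how we install things” into something the whole team can read, review and improve.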

As far as security and performance implications are concerned, I suggest, apart from the standard questions pertaining to authentication, authorization, auditing, etc., spending some time on code analysis to discover patterns that could lead to unexpected behavior under high load in production. Issues that can be preempted with careful code analysis include consuming asynchronous messages out of order, race conditions and concurrency conflicts with functionality elsewhere in the system.
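To make the out-of-order scenario concrete, here is a minimal sketch, with hypothetical message and handler names, of a consumer that guards against stale or duplicate messages by tracking a per-entity version number:

```python
# Minimal sketch: rejecting stale or duplicate asynchronous messages by comparing
# a per-entity version number. All names and the in-memory store are hypothetical.
from dataclasses import dataclass

@dataclass
class PriceUpdated:
    product_id: str
    version: int        # monotonically increasing sequence per product
    new_price: float

class PriceProjection:
    def __init__(self) -> None:
        self._prices: dict[str, float] = {}
        self._versions: dict[str, int] = {}

    def handle(self, message: PriceUpdated) -> None:
        last_seen = self._versions.get(message.product_id, 0)
        if message.version <= last_seen:
            # Out-of-order or duplicate delivery: ignore it instead of
            # overwriting newer state with older data.
            return
        self._prices[message.product_id] = message.new_price
        self._versions[message.product_id] = message.version
```

The same idea, whether expressed through version numbers, optimistic concurrency checks or idempotent handlers, is what careful code analysis should be looking for in the existing consumers.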

When we get to deciding on the usability and usefulness of the software, the hardest questions to answer are domain-specific.

The two main patterns to consider, related to usability and usefulness, are discovering affected functionality elsewhere in the legacy system that either provides value-added services or lets your customer keep their core services intact. It is here that old projects and decisions make their mark. It is therefore advisable to spend some time investigating the whys and hows of old features, as one of them could suddenly turn out to be something essential that your customer takes for granted your new code will support as well! Nurturing a learning culture in your organization really shines at this level, as openness and knowledge sharing can greatly help teams in the discovery process.

You may have noticed from the figure above that the top-most level, Successful, does not have a corresponding set of questions. When discussing software quality at this level, it turned out that the discussion was closely related to how well our software models overarching business goals and processes. This is by no means easy to pull off; however, I have witnessed that long-term investment in strategic Domain-Driven Design and Impact Mapping is the right choice to guide you.
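As one concrete flavour of that strategic investment, and purely as my own illustration with hypothetical names, an anti-corruption layer keeps the model of a new bounded context from being shaped by legacy data structures:

```python
# Minimal sketch of an anti-corruption layer: translating a legacy record into
# the new bounded context's own model. All names and codes are hypothetical.
from dataclasses import dataclass

# Shape of the data coming from the legacy system (flat, database-oriented).
LegacyOrderRow = dict  # e.g. {"ORD_NO": "A-42", "CUST": 1001, "STAT": 3}

@dataclass(frozen=True)
class Order:
    """Domain model in the new bounded context, expressed in its own language."""
    order_number: str
    customer_id: str
    is_cancelled: bool

class LegacyOrderTranslator:
    """The only place allowed to know legacy field names and status codes."""
    _CANCELLED_STATUS_CODE = 3

    def to_domain(self, row: LegacyOrderRow) -> Order:
        return Order(
            order_number=row["ORD_NO"],
            customer_id=str(row["CUST"]),
            is_cancelled=row["STAT"] == self._CANCELLED_STATUS_CODE,
        )
```

Confining the legacy vocabulary to a translator like this is what lets the new code model the business goals and processes on its own terms.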

Wrapping up, I would like to stress that this toolkit of questions is not sufficient in itself; it could and should be revised and adapted to the context of the legacy system you are working on. Further developments may, for example, include building a decision tree on the basis of the toolkit. Nevertheless, it seems clear to me that to succeed in legacy environments one should become a relentless learner, using our inherent inquisitiveness to constantly challenge our assumptions and predispositions.

P.S. For a lightning talk on this topic, see my video from NDC Oslo 2020.


Written by Mufrid Krilic

Domain-Driven Design Coach and one of Coworkers at CoWork, Norway. First Lego League mentor. My views are my own. Speaking gigs: https://sessionize.com/mufrid/
