Aligning Bounded Contexts with Subdomains in Legacy Code

Mufrid Krilic
Jul 14, 2021 · 7 min read

The quality of a boundary in a system can be measured by the number of logical dependencies between the parts of the system on each side of that boundary.

The use of the adjective logical alongside the noun boundary is by no means coincidental. I am referring to the 4+1 View Model of Software Architecture, where logical dependencies are distinguished from the other kinds of dependencies that exist in the development, process and physical views of a system. Enforcing a higher degree of logical separation at each boundary increases decoupling.

In order to visualize logical dependencies, a useful approach comes from strategic Domain-Driven Design, with its focus on the desirable alignment between subdomains, from the problem space, and bounded contexts, from the solution space.

Four Subdomains of Bernini

This post will guide you through an attempt to improve a legacy system by achieving alignment between the perceived subdomains, the logical boundaries and the source code structure. The background story is of a team I was a part of, working in the patient medication domain. We worked on a legacy system that provided a rich set of features for physicians to prescribe medications and integrated with systems for prescription delivery in pharmacies, with a focus on putting patient needs and safety first.

During the discovery process we learned, among other things, that national authorities pre-approve each medication for prescription, yet a physician is allowed to prescribe a non-approved medication on the condition that she/he applies for approval. Sending an application for approval appeared at the time to be a subdomain and a good candidate for a bounded context.

There were two reasons for this:

  • We discovered that sending an application is an activity performed as a consequence of a seemingly pivotal event, NonApprovedMedicationPrescribed
  • There were already traces of a boundary in the source code as there existed separate .NET projects/assemblies supporting this feature.
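To make the first observation concrete, here is a minimal sketch of how the pivotal event and its consumer could look. Only the event name NonApprovedMedicationPrescribed comes from our discovery sessions; every other type and member is hypothetical:

```csharp
using System;

// Hypothetical sketch; only the event name comes from the domain discovery.
public record NonApprovedMedicationPrescribed(
    Guid PrescriptionId, string MedicationCode, Guid PhysicianId);

public record ApprovalApplication(string MedicationCode, Guid PhysicianId);

// Lives in the (candidate) Application bounded context and reacts to the
// event instead of being called directly from Prescription code, so the
// dependency points from Application towards Prescription, not the reverse.
public class ApprovalApplicationHandler
{
    public ApprovalApplication Handle(NonApprovedMedicationPrescribed e) =>
        new(e.MedicationCode, e.PhysicianId);
}
```

The point of the sketch is the direction of knowledge: the handler knows about the event, while nothing in the prescribing code needs to know that an application will be sent.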

When further analysis revealed that the dependencies between the projects appeared to point from the Application domain towards the Prescription domain, we decided to strengthen the boundary by splitting the code for sending an application for non-approved medications into its own module, i.e. a separate Visual Studio Solution.

It was an essential assumption that the dependencies pointed outwards from the code being extracted, meaning that the code of the Prescription bounded context would be unaffected by the refactoring.

We put ourselves under a time constraint of a one-week time-box to perform the refactoring and summarize the lessons learned regardless of the outcome. The process itself can be described in three stages, as follows.

Stage 1

We started out by establishing a new VS Solution in the same repository and setting up a CI chain that included the new module. As we already had separate .NET projects, we moved them over quickly using git mv and the Exclude/Include Project feature in the VS Solution Explorer. The focus was on having code that compiled and passed unit tests at all times during the refactoring.

Decision time: At this point we asked ourselves whether or not we were getting valuable enough feedback from the unit tests. At some level, unit tests usually rely on mocking dependencies out, and hence cannot warn us if some piece of code expects to dynamically inject a dependency implementation that has been moved somewhere else.
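A contrived sketch of that blind spot, with a hand-rolled resolver standing in for whatever DI container the system actually used (all names are hypothetical): a unit test that mocks the interface stays green, while runtime resolution fails because the assembly holding the implementation has been moved to another module.

```csharp
using System;
using System.Collections.Generic;

public interface IApprovalSender { void Send(string medicationCode); }

// Stand-in for a DI container: registrations are populated at startup,
// typically by scanning assemblies. After the implementing project is moved
// to the new module, the registration silently disappears — but a unit test
// that mocks IApprovalSender never exercises this resolution path.
public static class Resolver
{
    public static readonly Dictionary<Type, Func<object>> Registrations = new();

    public static T Resolve<T>() =>
        Registrations.TryGetValue(typeof(T), out var factory)
            ? (T)factory()
            : throw new InvalidOperationException(
                  $"No implementation registered for {typeof(T).Name}");
}
```

Only a test that drives the real composition root — an end-to-end test — would surface the missing registration.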

We did have a suite of end-to-end integration tests that were harder to maintain but could provide feedback on runtime behavior. Would it be more valuable to try to keep the end-to-end tests green?

  • Integration tests would provide early feedback and probably discover pieces of code that need closer attention during refactoring
  • On the other hand, our progress on establishing the boundaries could be significantly slowed down by the amount of refactoring that was nevertheless needed in order for the integration tests to run.

We decided to leave end-to-end tests for a later stage in the process.

Stage 2

With that decision behind us, we proceeded with cleaning up the project references and using statements that were presumed obsolete after moving the code in Stage 1. This proved to be a rather important activity, as it uncovered dependencies that challenged our assumption of unidirectional dependencies from the Application bounded context to the Prescription bounded context.

It turned out that there were some class and interface definitions within the Application bounded context that were used in both contexts. Still, we felt quite optimistic, because we knew that these dependencies had quite probably been partially introduced by IDE IntelliSense tools, which are far too quick to suggest adding a reference to another .NET project. Moreover, we could reach for the Stable Abstractions Principle and add abstractions, both at the class level and at the .NET project level, that both the Application bounded context and the Prescription bounded context could depend on.
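As a sketch of that move, assuming hypothetical names: an interface originally defined inside the Application context is lifted into a small, stable abstractions project, so that the Prescription side can depend on the abstraction without referencing the Application project at all.

```csharp
namespace SharedKernel
{
    // Stable abstraction that both bounded contexts may reference.
    // Extracted from the Application project into its own assembly.
    public interface IMedicationCatalog
    {
        bool IsApproved(string medicationCode);
    }
}

namespace Prescription
{
    // Depends only on the SharedKernel abstraction, never on the
    // Application project; each context keeps its own implementation.
    public class PrescriptionPolicy
    {
        private readonly SharedKernel.IMedicationCatalog _catalog;

        public PrescriptionPolicy(SharedKernel.IMedicationCatalog catalog)
            => _catalog = catalog;

        public bool RequiresApplicationForApproval(string medicationCode)
            => !_catalog.IsApproved(medicationCode);
    }
}
```

The abstractions project is maximally stable (everything depends on it, it depends on nothing), which is exactly what the Stable Abstractions Principle asks of it.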

As depicted on this Context Map, we had effectively introduced the Shared Kernel pattern. This could be a perfectly valid option, especially as we were still a single team, although the pattern is less effective for multiple autonomous teams.

Stage 3

At this point we believed that the references between the modules were distributed according to the principles of stability and abstraction, and we felt ready to tackle the end-to-end tests.

The most important feedback we got from the end-to-end tests was a runtime behavior that went against our assumption that the dependencies pointed outwards from the code being extracted.

It turned out that there was a business requirement where physicians, while prescribing a medicament, needed to be aware of prior applications for approval that had been sent for the same medicament, usually in the case where a medicament is prescribed on a life-long basis. This requirement led to a logical dependency from the Prescription bounded context to the Application bounded context. It may well also have been related to the Shared Kernel discovery from Stage 2 of the refactoring process.
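Sketching the requirement with hypothetical names makes the cycle visible: the prescribing flow now needs an answer that only the Application context can give, while the Application context already reacts to events raised by Prescription.

```csharp
using System.Collections.Generic;

// Needed by the Prescription bounded context: warn the physician about
// prior applications for approval for the same medicament...
public interface IPriorApplicationLookup
{
    IReadOnlyList<string> PriorApplicationsFor(string medicationCode);
}

// ...but the only natural home for the data behind this interface is the
// Application bounded context — which in turn depends on events raised by
// Prescription. Wherever the implementation lives, the two modules now
// need each other: a circular dependency across the boundary.
public class PrescribingWorkflow
{
    private readonly IPriorApplicationLookup _lookup;

    public PrescribingWorkflow(IPriorApplicationLookup lookup) => _lookup = lookup;

    public bool ShouldWarnPhysician(string medicationCode)
        => _lookup.PriorApplicationsFor(medicationCode).Count > 0;
}
```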

This turned out to be a show-stopper for our attempt to establish a new bounded context in the system, as it would create a circular dependency across the boundaries. We did consider moving all the integration tests to a separate module that would have available, at runtime, all the DLLs present in production, and could thus inject all the necessary dependencies. This would take care of the deployment dependencies (the Process View in the 4+1 Model) and make the tests green. On the other hand, it would mask the logical dependencies between the bounded contexts (the Logical View in the 4+1 Model) and hence invalidate the very reason for creating the new bounded context.


In the end we learned a lot about the effort needed to create a boundary in a legacy system. One of the most important lessons was the realization that we as a team would benefit from strengthening the investment in overall architectural improvement in our daily routines. In particular, we would pay more attention to any new .NET project references and using statements discovered during code review. That could hopefully lead to discussions about logical dependencies: why they were introduced, and the direction of dependencies between different parts of the system.

Code review, with a focus on the Stable Abstractions Principle and a categorization of modules by their intended level of abstraction, could also be a good place for discovering circular dependencies, both across VS Solutions in the same repository and through NuGet packages across repositories.

The real potential, however, lies in applying the 4+1 View Model and trying to discover logical boundaries early on, preferably through Domain-Driven Design collaborative modeling methods. Establishing the logical, though not necessarily development-view, boundaries would need to be accompanied by the discipline in the team to preserve the logical structure even though the code may reside in the same repository. This is also based on the notion that it is easier to pull different pieces of code together than to split them apart, or, as Kent Beck explains in his session from DDD Europe 2020, it is always easier to achieve higher cohesion. Splitting early puts you in a position to explore this option.

The refactoring itself required a great deal of documentation along the way, and we chose to put an emphasis on rich, prose-style git commit messages focused on explaining why each step was necessary. Another improvement was to map the continuous feedback we got at each step of the refactoring against our goal of establishing a new bounded context, in particular when our assumptions about the existing direction of dependencies were challenged:

  • Are we still on the right path and able to employ further refactorings, or should we stop and re-validate our perception of the alignment between the subdomains and the proposed boundaries?

This boils down to being open to the feedback that there are other boundaries, or other perspectives on boundaries, to consider that could have led to a different outcome.

It is of course challenging to look for alternatives, particularly in a legacy environment, yet in my opinion it is essential to try. Surely not every option will appear as a low-hanging fruit; still, it is possible to set a target for architectural improvement and then gradually but decisively move the code towards that target.



Mufrid Krilic

Domain-Driven Design Coach and one of the Coworkers at CoWork, Norway. First Lego League mentor. My views are my own.