Start with the Monolith
That’s right, forget about actually implementing the new Microservice. We’ll get to that later.
Refactoring legacy code is an inescapable reality
It’s tempting to jump straight into implementing the new Microservice, but this is a mistake.
I understand the temptation to start with the new Microservice. This is the fun part after all. You get to write new code from scratch! It’s a developer’s dream. You’re enamored with the idea of writing the cleanest code you’ve written thus far, with excellent automated test coverage, completely decoupled from the shackles of legacy code, able to be deployed continuously and independently. But you forgot one thing:
It won’t provide any value if you can’t plug it into the legacy system
How much value do you get out of a flawlessly executed Microservice that you can’t actually use? None.
You might have thought that creating the new Microservice first would save you from having to do much refactoring of the legacy code. You thought wrong.
Your new Microservice will almost certainly have an API that isn’t compatible with the existing structure of the legacy code that needs to call it. Something’s gotta give.
You have two options:
- Change the new Microservice so that its API is compatible with the legacy system
- Refactor the legacy system so that it’s compatible with the API of the new Microservice
If option 1 sounds insane, that’s because it is.
If you do this, you’ve failed to achieve any meaningful decoupling. Changes to the behavior of the Microservice will necessarily force changes to the legacy system (the very exercise you sought so much to avoid by starting with the Microservice) and, worse, force the deployments of both to be carefully orchestrated. Essentially, all you’ve done is take an existing composition of function calls in your legacy system and make that workflow slower, more expensive, and more difficult to trace: your legacy system behaves exactly as it did before, except it now must transfer state across a network boundary via a stateless protocol as it enters and exits functions that used to be entirely in-process.
Don’t corrupt your new design with the existing design
It’s easy to see that option 2 is the correct answer. But if you implement the new Microservice first because you’re intentionally avoiding the pain of refactoring the legacy system, there’s another, more subtle risk: plugging into the legacy system will be on your mind the whole time, and that will necessarily steer the design of your new Microservice into a shape that supports the legacy system at the API level. You’ve just picked the insane option 1 without even meaning to.
Bring the pain to the front
This is a guiding principle of software development: if it hurts, do it more often.
Refactoring legacy code is one of the most painful tasks for any developer, but that’s precisely why it’s so valuable for achieving the goals you set out to accomplish with this Microservice initiative in the first place.
You can’t independently deploy services that are structurally coupled because their APIs were designed in tandem, with knowledge of each other’s implementation details.
Your service calls will necessarily be inefficient if you don’t properly wrangle the state inside the Monolith before calling the Microservice. Remember, HTTP (and, in fact, every external communication method I’ve seen used for service communication thus far) is a stateless protocol. That is, your Microservice won’t have the Monolith’s session state, nor its memory, its stack, its heap, or even its database access. Whatever Monolith state the Microservice must have has to be transmitted over the wire; otherwise, the Microservice will have to query for that information itself, adding even more inefficiency to your service communication.
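To make that concrete, here’s a minimal sketch (all names are hypothetical, not from any particular system): the caller gathers everything the service needs from its own state and passes it in the request, so the service never has to reach back into the Monolith.

```java
// Hypothetical sketch. The request carries every piece of Monolith state the
// service needs, because a stateless call can't see the caller's session,
// heap, or database.
record PriceQuoteRequest(String customerTier, int quantity, long unitPriceCents) {}

record PriceQuote(long totalCents) {}

interface QuoteService {
    PriceQuote quote(PriceQuoteRequest request);
}

// In-process for now; this could later sit behind HTTP unchanged, because it
// depends only on what arrives in the request.
class SimpleQuoteService implements QuoteService {
    public PriceQuote quote(PriceQuoteRequest r) {
        long total = r.unitPriceCents() * r.quantity();
        if ("GOLD".equals(r.customerTier())) {
            total = total * 90 / 100; // illustrative 10% tier discount
        }
        return new PriceQuote(total);
    }
}

public class Demo {
    public static void main(String[] args) {
        QuoteService svc = new SimpleQuoteService();
        PriceQuote q = svc.quote(new PriceQuoteRequest("GOLD", 3, 1000));
        System.out.println(q.totalCents()); // prints 2700
    }
}
```

Notice that the request object is doing the “wrangling” for you: assembling it forces the Monolith to gather all the required state up front, instead of letting the service issue follow-up queries.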
Wrapping up, for now
The simple heuristic to be gleaned from this advice is as follows: do not create an external Microservice until such a time as your Monolith has been refactored to appear as if it’s already calling that Microservice.
I’ll cover the ins and outs of applying this heuristic in more depth in future posts, but for now, here are things to do that strongly signal you’ve satisfied it:
- Create a module within your Monolith that encapsulates all the behavior of the as-yet nonexistent Microservice
- This new module should not be able to read application state from the Monolith (session, stack, heap, DB) unless this state is passed in via the API
- Add a referential boundary between this new module and the rest of the Monolith so that the new module can be deployed independently via assembly deployment (even if you won’t actually deploy assemblies related to your Monolith independently, the ability to do that is what we’re after)
- Invert the dependency between this new module and the Monolith by having the Monolith define its own API for communicating with the new module (the “anti-corruption layer” pattern)
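To illustrate the last two steps together, here’s a minimal sketch of that inverted dependency (all names are hypothetical): the Monolith defines the port it wants to call in its own terms, and a small adapter translates that port into the module’s API. Extracting the module later only means replacing the adapter with an HTTP client.

```java
// Hypothetical sketch of the anti-corruption layer. The Monolith owns this
// interface and depends only on it, never on the module directly.
interface InventoryPort {
    int unitsAvailable(String sku);
}

// The future Microservice, living as an isolated module for now. It knows
// nothing about the Monolith and receives all state via its API.
class InventoryModule {
    private final java.util.Map<String, Integer> stock = java.util.Map.of("SKU-1", 5);

    int lookup(String sku) {
        return stock.getOrDefault(sku, 0);
    }
}

// Adapter translating the Monolith's port into the module's API. When the
// module is extracted, only this class changes (e.g. into an HTTP client).
class InventoryAdapter implements InventoryPort {
    private final InventoryModule module = new InventoryModule();

    public int unitsAvailable(String sku) {
        return module.lookup(sku);
    }
}

public class Checkout {
    public static void main(String[] args) {
        InventoryPort inventory = new InventoryAdapter(); // Monolith sees only the port
        System.out.println(inventory.unitsAvailable("SKU-1")); // prints 5
    }
}
```

Because the Monolith compiles against `InventoryPort` alone, the referential boundary is enforced by the compiler: nothing in the Monolith can reach into `InventoryModule`, and the module can be deployed, versioned, or extracted on its own.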