Migrating a monolith
When you start a new project, a monolith is much easier to begin with. Everything is in one place, it’s fast, and all team members understand the code. If you plan your application from the start as a distributed system in a microservice-like style, you add significant overhead. Don’t do it; monoliths have their advantages. Once another team starts working on the same project or more functionality is added, splitting up the monolith might be the way to go. And even if you only look at ways to migrate, you may end up making better design decisions for your monolith.
If you are still trying to find your domain model, I would not recommend migrating towards smaller services. It is very hard to move domain objects across service boundaries, if only because of simple things like missing IDE support. Make sure you understand your business before moving in that direction.
Your monolith could be one of:
- one executable only
- multiple executables (services) but with heavy dependencies
How to move away
An identified bounded context is a good candidate to start isolating into a microservice. You usually find one by talking (a lot) to a domain expert. From there, start migrating one context after the other. A microservice does not have to be tiny in terms of code size; rules like ‘It must be 300 lines of code or less.’ do not help anybody. Let your new service do one thing and one thing only. That is a good start.
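To make ‘one thing only’ a bit more concrete, here is a minimal sketch of a single-purpose service owning one bounded context. The domain (invoicing), the route and the port are made up for illustration, not a recommendation about naming or size.

```go
// Minimal sketch: one bounded context, one responsibility.
// The "invoicing" domain and the endpoint are hypothetical examples.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Invoice struct {
	ID     string  `json:"id"`
	Amount float64 `json:"amount"`
}

func main() {
	// The whole service does exactly one thing: serving invoices.
	http.HandleFunc("/invoices/", func(w http.ResponseWriter, r *http.Request) {
		inv := Invoice{ID: "42", Amount: 99.90}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(inv)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```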
Another way to start is to extract what hurts the most. If one part of the software is constantly causing pain and slowing you down, try to isolate the problem and migrate it. You can do that either by putting it into a new service or by building a facade around it (an anti-corruption layer). This is the way to go if you need to start working with a legacy system very quickly. Don’t forget to follow up later: either rip the functionality out of the old system, or, once you understand the legacy monolith better, remove the facade and refactor the monolith.
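A rough sketch of such an anti-corruption layer: a small facade that translates between your clean domain model and the awkward legacy interface, so the rest of the code never touches the legacy types. The legacy client, its method and its field names are invented for illustration.

```go
// Sketch of an anti-corruption layer around a hypothetical legacy CRM.
package billing

// Customer is the model the rest of the new code works with.
type Customer struct {
	ID    string
	Email string
}

// legacyCRM stands in for the messy legacy system we don't want to
// expose to the rest of the code base (method and fields are made up).
type legacyCRM interface {
	GetCustRec(custNo string) (map[string]string, error)
}

// CustomerFacade is the anti-corruption layer. Callers never see the
// legacy naming or data structures.
type CustomerFacade struct {
	crm legacyCRM
}

func (f *CustomerFacade) CustomerByID(id string) (Customer, error) {
	raw, err := f.crm.GetCustRec(id)
	if err != nil {
		return Customer{}, err
	}
	// The translation from legacy fields happens in exactly one place.
	return Customer{ID: id, Email: raw["MAIL_ADDR"]}, nil
}
```

If the facade later turns out to be the wrong cut, you only have to change this one translation point, not every caller.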
If you started by building a monolith (which I would recommend in most cases), you usually end up with something like shared services (try to avoid them!). These are among the hardest things to get rid of. They might not even be services but shared database views or something like a shared enterprise service bus. You can tackle this problem by establishing one team that is responsible for the shared service, and that does not change. Not Team A today, Team B tomorrow, and Team C in three months. No, the same team stays on it for a long time. This will take a while, since it is usually a process of understanding the business model better and migrating the domain to the right places. As mentioned above, moving domain objects across service borders is possible, but you need to be patient; it is not an easy task at all.
The monolith might live on
Some part of the monolith will survive. That’s not so bad. It might not be worth the effort to split it up any further. If this part of your system does not change very often, you can leave it that way. Just make sure you automate the deployment (even if it is a deployment with downtime) and this can be perfectly fine.
Communication
If you start by extracting a bounded context of your application, what do you do next? How does it communicate with the rest of your system? This can be a pain. Discovering how another system works can involve a lot of probing interfaces and testing responses, and if the API changes you might be in trouble. Good API design pays off.
One idea to solve this is to use a language-agnostic approach like REST. Combined with HATEOAS (Hypermedia As The Engine Of Application State) it opens up some powerful possibilities for service communication. Applications can discover where to go all by themselves. Changing the URLs of a service would not matter anymore, since everybody (hopefully) follows the links.
This is of course theory and sounds all too nice. You still maintain a contract: you define keywords that can’t be changed if you want to keep providing the same functionality, and you need to return a defined format from the API calls so the client knows what to do with it. You can’t delete functionality, but adding or (within limits) modifying is possible without breaking the clients. By using HTTP you get a lot of functionality for free: working with HTTP status codes, content types and links can save you a lot of effort. REST is not a silver bullet, but for certain use cases it fits very well. If you have different requirements, your mileage may vary.
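A rough sketch of what following links instead of hard-coding URLs can look like. The ‘order’ resource, the ‘invoice’ link relation and the URLs are invented for illustration; only the link relation is part of the contract, the href can change freely.

```go
// Sketch: a client follows a named link from a HATEOAS-style response
// instead of hard-coding the next URL.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type link struct {
	Rel  string `json:"rel"`
	Href string `json:"href"`
}

type order struct {
	ID    string `json:"id"`
	Links []link `json:"links"`
}

func main() {
	resp, err := http.Get("http://orders.internal/orders/42") // hypothetical URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var o order
	if err := json.NewDecoder(resp.Body).Decode(&o); err != nil {
		panic(err)
	}

	// Follow the link the server advertised; if the invoice service
	// moves, only the href in the response changes, not this client.
	for _, l := range o.Links {
		if l.Rel == "invoice" {
			fmt.Println("fetching invoice from", l.Href)
			// http.Get(l.Href) ...
		}
	}
}
```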
Moving to microservices
Of course there is some stuff you need to take care of. Moving to a distributed system throws you into the realm of unreliable systems. A system might be down, just starting up, or running; you need to handle all of that and make your system much more robust at the application layer. In a monolith everything is either there or not. Pretty simple. In a distributed system you need to:
- make sure your clients can handle the situation where another system is not available and can still make sense of it
- handle requests to a dead system in an intelligent way, for example with a circuit breaker pattern (see the sketch below); otherwise the server that eventually starts up again will be ‘slashdotted’ (do you still say that?) by the queued-up requests from the clients
Resilience just knocked on the door.
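As a rough idea of the circuit breaker mentioned above, here is a minimal sketch. It is not production code; the failure threshold and cool-down duration are arbitrary example values, and real implementations add half-open probing and per-endpoint state.

```go
// Minimal circuit breaker sketch: after too many consecutive failures
// the breaker opens and calls fail fast for a cool-down period, giving
// the struggling service a chance to come back.
package breaker

import (
	"errors"
	"sync"
	"time"
)

var ErrOpen = errors.New("circuit breaker is open")

type Breaker struct {
	mu       sync.Mutex
	failures int
	openedAt time.Time

	maxFailures int           // e.g. 5
	coolDown    time.Duration // e.g. 30 * time.Second
}

func New(maxFailures int, coolDown time.Duration) *Breaker {
	return &Breaker{maxFailures: maxFailures, coolDown: coolDown}
}

// Call runs fn through the breaker.
func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.failures >= b.maxFailures && time.Since(b.openedAt) < b.coolDown {
		b.mu.Unlock()
		return ErrOpen // fail fast, don't hammer the dead service
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.openedAt = time.Now() // (re)open the breaker
		}
		return err
	}
	b.failures = 0 // a success closes the breaker again
	return nil
}
```

The idea is to wrap every remote call in something like breaker.Call(...) and treat ErrOpen as ‘this service is unavailable right now’, so clients degrade gracefully instead of piling up requests.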
The complexity of your system just moved up one level. The tools for handling it there are not yet on par with the tools for handling complexity inside a monolith. Make a conscious decision about it.
A migration is hard to do. But if you don’t do it, your system might grow so complex that nobody can handle it anymore. With microservices your system will still be complex, just on a different level: you need to handle the communication between the systems, but at least a team can understand one subsystem all by itself.