I am working on a set of experimental microservices, and I am using the approach described below to address some common problems I have experienced with code versioning, API versioning and different 'environments'.
Previously I have used a concept of different 'environments' for different uses. Mainly these environments have been called 'Development', 'Test' and 'Production', each one a complete copy of the system I am working on. My development pipeline was Dev to Test to Prod, with various levels of testing occurring at each stage. It works well for monolithic systems with slow deployment cycles; gets complicated when I add a 'User Acceptance Testing' environment; and very quickly falls apart when I start integrating separate monolithic systems! Where I work at the moment it is getting so complicated that we are starting to use the term eco-system to describe these environments - the word environment doesn't seem big enough!
This approach won't work when the monoliths are broken down into microservices, so I am experimenting with ideas that might replace it. I would like to introduce three concepts that each microservice I build will implement:
| Concept | What is it separating? | How? |
| --- | --- | --- |
| Tenant | Data | Each endpoint has a TenantName parameter which translates to a TenantName column in the datastore. Queries against data are executed with TenantName as a parameter. |
| Major Version | API contract version | All endpoints have the major version as a parameter. There is at least one container running each major version, and the reverse proxy points at a different container depending on the version requested. |
| Minor Version | Code versions | Minor versions are changes to the behavior of the system that do not change its guaranteed behavior. Multiple containers run behind the reverse proxy, and after testing the endpoint is switched to new versions as they are released. |
Where we require different independent 'instances' of systems with different data, we can use the concept of a tenant to separate them. This may be required because we want dev, test and prod data kept separate; because individual projects may want their own data; or because we may need different data for different customers.
Individual tenants can be accessed via different URLs, but the actual code being executed to get at the data is exactly the same. This can be implemented in a relational data store with a simple where clause, or in an object store by making the tenant string part of the key. Also, extra environments are now very cheap, so you have a lot of options: test tenants, special tenants just for projects, tenants as trials for customers - there are lots of possibilities!
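As a minimal sketch of both implementations, assuming an invented `orders` table and invented item data (only the TenantName column comes from the design above):

```python
import sqlite3

# In-memory relational store with the TenantName column described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (TenantName TEXT, item TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("prod", "widget"), ("test", "dummy-widget"), ("prod", "gadget")],
)

def orders_for_tenant(tenant):
    # Every query carries the tenant as a bound parameter - the code path
    # is identical for all tenants; only the data returned differs.
    rows = conn.execute(
        "SELECT item FROM orders WHERE TenantName = ?", (tenant,)
    ).fetchall()
    return [r[0] for r in rows]

# In an object store, the tenant string instead becomes part of the key:
def object_key(tenant, object_id):
    return f"{tenant}/{object_id}"
```

The same pattern covers the cheap extra environments: a "trial-customer-x" tenant is just another value in that column, with no new infrastructure.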
Services will be used by multiple clients each with their own deployment cycles. This means that when breaking changes to an API are required the old version needs to still be available, at least until all the clients have updated and deployed their code.
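The reverse-proxy side of this can be sketched as a simple routing table keyed on the major version in the path. The service name and container addresses here are invented for illustration:

```python
# Map each live API major version to the container serving it.
# Both versions stay routable until every client has moved off the old one.
UPSTREAMS = {
    "v1": "http://orders-v1:8080",   # container running major version 1
    "v2": "http://orders-v2:8080",   # container running major version 2
}

def route(path):
    """Map a request path like '/v2/orders/123' to its upstream container."""
    parts = path.lstrip("/").split("/", 1)
    version = parts[0]
    if version not in UPSTREAMS:
        raise ValueError(f"unknown API major version: {version}")
    rest = parts[1] if len(parts) > 1 else ""
    return f"{UPSTREAMS[version]}/{rest}"
```

Retiring a major version is then just deleting its entry once the last client has migrated.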
Note that two separate containers containing different versions of the code are now running. I have worked with systems where the same code base supports both API versions, and I bear many scars from those experiences! When redeploying V1 of the service you must guarantee that the full behavior of the service is the same, not just that it fits a particular interface. I have seen many occasions where suppliers have put a façade in front while, deep down in their code, the same libraries are called. The end result has always been numerous changes that break my downstream app. In these situations suppliers frequently break the old versions of the service as they deploy new ones.
I think I can avoid all this using containers. This doesn't get around the fact that both containers use the same datastore, but I think implementing the following precautions would make this workable:
- Remove all logic from the data store itself and use it purely as a store of state
- Make new versions of the code only change data structures in compatible ways
- When compatible changes are no longer possible create a completely new service
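The second precaution is the subtle one, so here is a sketch of what a compatible change looks like. The `customers` table and column names are invented; the point is that the new container only ever adds nullable columns, so the old container's queries and inserts keep working:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Schema as the old (V1) container knows it.
conn.execute("CREATE TABLE customers (TenantName TEXT, name TEXT)")

# The old container writes rows naming only the columns it knows about.
conn.execute(
    "INSERT INTO customers (TenantName, name) VALUES ('prod', 'Alice')"
)

# Compatible change shipped with the new container: an additive, nullable
# column. Nothing is renamed, dropped, or given a new meaning.
conn.execute("ALTER TABLE customers ADD COLUMN email TEXT")

# The old container's insert still succeeds after the migration.
conn.execute(
    "INSERT INTO customers (TenantName, name) VALUES ('prod', 'Bob')"
)
```

A rename or a type change would break the still-running old container; under this rule, that is the signal to create a completely new service instead.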
All code has bugs, especially mine. (Not as much as before, I am getting better - honest!) Therefore there must be a way to deploy a new version of your service with bug fixes. This is especially important when those fixes are security fixes!
A big concern I have here is how I can keep the service stable for clients during this process. My gamble is that I can use service-level testing to make sure instability doesn't happen. In this way, if clients code against behaviors that are tested, I can guarantee them a stable service:
This is nice and simple: two neatly independent code containers, with the API Gateway switching from one to the other. This even allows for rollback if there's a problem, and for A/B testing. (I also have a V1_TEST endpoint and an automated test robot to sign off deployments, making sure my service is stable.)
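The switch itself can be sketched as a tiny endpoint table at the gateway. The minor-version container addresses are invented; only the V1/V1_TEST endpoint names come from the setup above:

```python
# Gateway endpoint table: live traffic and the candidate under test.
endpoints = {
    "V1": "http://orders-v1.2:8080",       # serving live clients
    "V1_TEST": "http://orders-v1.3:8080",  # candidate, exercised by the robot
}

def promote():
    """After the test robot signs off, point live traffic at the candidate.

    Returns the old address so a rollback stays possible."""
    endpoints["V1"], previous = endpoints["V1_TEST"], endpoints["V1"]
    return previous

def rollback(previous):
    # If a problem surfaces in production, restore the old container.
    endpoints["V1"] = previous
```

Because both containers keep running throughout, promotion and rollback are both just a pointer swap, with no redeploy in the critical path.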
I see a big gotcha here - a bug in production is now a feature; service users may have coded against it and come to rely on it. So when does a minor change become a major one? If I assume I have full test coverage, then I can define a minor change as a change that only requires new tests to be added to the suite. If I have to change an existing test, I am altering the guaranteed behavior of the service, and that should be a major version change.
Another interesting feature here is that my automated test robot can use the special test data tenant mentioned earlier to keep its tests independent from real data.
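A sketch of how the robot and the test tenant fit together, using an in-memory dictionary as a stand-in for the service (the tenant name and item are invented):

```python
TEST_TENANT = "robot-test"

# Stand-in for the service's datastore, keyed by tenant as described earlier.
store = {}

def create_item(tenant, item):
    store.setdefault(tenant, []).append(item)

def list_items(tenant):
    return store.get(tenant, [])

def run_smoke_test():
    """Exercise the service through the test tenant only.

    Real tenants are never read or written, so the robot can run against
    the V1_TEST endpoint on every deployment without touching live data."""
    create_item(TEST_TENANT, "probe-item")
    return "probe-item" in list_items(TEST_TENANT)
```

The robot's sign-off then becomes just another tenant-scoped client of the service, exercising exactly the same code paths as real traffic.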
I am hoping these concepts will take the place of the environments/ecosystems I currently work with. I think they will be manageable and hopefully easy to understand. They take advantage of the fact that containers let me separate different versions of code. One of my goals in building my experimental microservices is to see if this all hangs together. I have a feeling that the right testing at the right points in the deployment cycle will give me enough confidence to make a very stable service platform.