Wherever your opinion lies on Microservices, be it a nonchalant roll of your eyes as you dismiss them as another marketing buzzword or the excitement of a kid on Christmas day, the fact is that they are not going away any time soon. And I, for one, don’t want them to. But it’s not so much the technology or the products that excite me most. It’s the required mindset shift. It’s the tools. It’s the scalability that can be built in with ease. It’s the deployment approach you can take. For the right problem, it can be a fantastic solution, but it’s not an esoteric recipe. It’s a way of thinking. For me, Microservices are to distributed architecture what Agile is to project planning. That can be a great thing, but it also means, as with Agile, that it can be executed so poorly for the sake of labeling your approach that you end up giving Microservices a bad name!

The skill is being able to see where you can break things down into individual components and then still be able to recognise what they can do as a whole. It’s like one of those complex Lego sets where they split the stages into numbered bags for you, except, unlike your Lego, with Microservices you never get to fit them all together. Instead you have to suspend them in mid air in the right place and expect them to interact through magic. Or a message queue. Or an event bus. Probably not magic.

There are plenty of products out there synonymous with Microservices: Kubernetes, Docker, Service Fabric, Azure Container Instances, RabbitMQ, Azure Service Bus. With so many options, it can be difficult to grasp what Microservices are trying to solve as an architectural approach without getting caught up in the intricacies of the product you have chosen to use.

One way of trying to understand it is to take something simple and determine whether or not it could be described as a Microservice. Take my previous posts about a requirement to automatically transfer an export of a client’s data across to their domain storage. Let’s first analyse the requirements:

  1. Export SQL Azure database on a schedule
  2. Copy the export backup from source to the client domain

Incredibly simple requirements. Let’s add another one:

  3. Notify the client via email once the backup export is complete.

Let’s imagine a non-Azure way of completing this with an on-premise solution. Let’s say we create a daemon application that polls on a schedule and, once a week, connects to the client database and creates an export. It then copies the export file over to a target network destination before creating an email and sending it through to the client with a URL link to the export file.
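To make the contrast concrete, here’s a minimal Python sketch of what that monolithic daemon might look like. The paths, schedule and export step are all placeholders rather than anything from a real implementation:

```python
# A rough sketch of the on-premise daemon: paths, the schedule and the export step
# are placeholders, and a real version would need error handling and logging.
import shutil
import time
from datetime import datetime, timedelta

def export_client_database(export_path: str) -> None:
    """Placeholder: connect to the client database and write an export file."""
    raise NotImplementedError

def run_daemon() -> None:
    next_run = datetime.now()
    while True:
        if datetime.now() >= next_run:
            export_path = r"\\fileserver\exports\client-export.bacpac"
            export_client_database(export_path)               # requirement 1: export
            shutil.copy(export_path, r"\\clientserver\drop")  # requirement 2: copy across
            # requirement 3: email the client a link (omitted in this sketch)
            next_run = datetime.now() + timedelta(weeks=1)
        time.sleep(60)
```

Everything lives in one process, so any change, however small, means touching and redeploying the whole thing.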

Pretty simple, but there are a number of areas where things could go wrong, so we need to think about error handling, logging and so on, and if the requirements change, a re-deployment of the whole solution. Still, not a big ask.

So let’s consider the Azure options for a minute. Two are already in place:

  1. Automation Runbook triggered on a schedule to execute and create an export file in a configured blob container
  2. Azure Function with a Blob trigger that reacts to the creation of a blob and copies the blob across to a configured target container
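As a rough illustration of that second piece, the copy step might boil down to something like the Python sketch below, using the azure-storage-blob SDK. The container names and connection-string settings are assumptions, and in practice this logic sits inside the blob-triggered Azure Function rather than being called by hand:

```python
# A minimal sketch of the blob-copy step (container names and settings are assumptions).
# In practice this would run inside a blob-triggered Azure Function.
import os
from azure.storage.blob import BlobServiceClient

def copy_export_to_client(blob_name: str) -> None:
    """Copy a newly created export blob from the source container to the client's container."""
    source = BlobServiceClient.from_connection_string(os.environ["SOURCE_STORAGE_CONNECTION"])
    target = BlobServiceClient.from_connection_string(os.environ["CLIENT_STORAGE_CONNECTION"])

    source_blob = source.get_blob_client(container="exports", blob=blob_name)
    target_blob = target.get_blob_client(container="client-exports", blob=blob_name)

    # Server-side copy: the target account pulls the blob from the source URL.
    # If the source container is private, the URL would need a SAS token appended.
    target_blob.start_copy_from_url(source_blob.url)
```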

Let’s add in the extra requirement:

  3. Azure Function with a blob trigger that reacts to the creation of a blob in the target container and sends an email with a link to the blob to a configured recipient.
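And the third piece, the notification Function, might look roughly like this. The SMTP settings and addresses are invented for the sake of the sketch, and a real implementation could just as easily use SendGrid or an output binding:

```python
# A minimal sketch of the notification step (SMTP settings and addresses are assumptions).
# In practice this would run inside a second blob-triggered Azure Function,
# reacting to the export blob landing in the client's container.
import os
import smtplib
from email.message import EmailMessage

def notify_client(blob_url: str) -> None:
    """Email the configured recipient a link to the newly copied export blob."""
    msg = EmailMessage()
    msg["Subject"] = "Your weekly data export is ready"
    msg["From"] = os.environ["NOTIFY_FROM"]
    msg["To"] = os.environ["CLIENT_EMAIL"]
    msg.set_content(f"Your export is available here: {blob_url}")

    with smtplib.SMTP(os.environ["SMTP_HOST"], 587) as smtp:
        smtp.starttls()
        smtp.login(os.environ["SMTP_USER"], os.environ["SMTP_PASSWORD"])
        smtp.send_message(msg)
```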

You could argue that requirement 3 could be added as an output binding on the first Azure Function, and that would be a valid approach, but humour me for a while.

I would argue this approach is a Microservice solution. Why? Because you have a solution comprised of multiple components, each of which knows nothing of the others and would not fail should any of the other components break or be removed. There are no queues, no containers, no event bus, no web APIs or guest executables and not a cluster in sight, but I still feel this constitutes a Microservice. Each component does a job in its own right, but what connects them together as a Microservice is that you want (not need) all three to complete the solution, yet they all work independently.

So, if I developed the second Azure Function to email a link that included the account key just to gain access to the export blob, then worked out this was a really bad idea and decided to rip it out quickly and change the account key in question, what happens? The scheduled export continues as before and the automated transfer picks it up and copies it across. It continues to work absolutely fine. We just lose the email notification. That means we can then fix the second Azure Function to instead create a SAS token with read-only access for the specific blob that lasts for perhaps a week (which is when the new export will be generated), and deploy that back into place (there’s a rough sketch of generating such a token after the list below). It kicks in when it sees a new file in the target container and all is well. That’s how a Microservice should work – separate but connected. Independent but collaborative. If you look at some of the common tenets of Microservices, this approach ticks all the boxes:

  • Quickly iterate and release frequently
  • Easy for new devs to join and be productive
  • Increased dev velocity (3 devs could be working separately on each component)
  • Shorter test cycles
  • Polyglot development is possible (each component could use a different language or runtime)
  • Independently scalable components
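For the curious, the SAS-token fix described above might look roughly like this in Python; the container name and the seven-day expiry are assumptions based on the weekly export schedule:

```python
# A sketch of building a read-only, time-limited link to a single export blob,
# rather than handing out the storage account key (names and expiry are assumptions).
from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

def build_read_only_link(account_name: str, account_key: str, blob_name: str) -> str:
    sas = generate_blob_sas(
        account_name=account_name,
        container_name="client-exports",
        blob_name=blob_name,
        account_key=account_key,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(days=7),  # roughly until the next export lands
    )
    return f"https://{account_name}.blob.core.windows.net/client-exports/{blob_name}?{sas}"
```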

The decisions that need to be weighed up concern deployment effort and monitoring, and this is the balance. The above approach has 3 deployments and 3 processes to monitor and log, and I can see why this would put people off. But that’s why I say Microservices are a mindset shift more than anything else, and why the tools you use are just as important. If you’ve not implemented a DevOps pipeline and you don’t have a managed logging and metrics solution built in, then having 3 deployments instead of one would be a disadvantage. As long as you can weigh up the pros and cons, it’s up to you how granular you make your components. But imagine having the foresight to define clear boundaries within your solution requirements. Think how well this could then feed into an Agile process where iterations can be about adding either more components or more value to existing components, rather than bulking out the functionality of existing components. That makes the delivery and testing of complete stories far more likely to succeed within a sprint. That’s got to be a good thing, right?


Ben

Certified Azure Developer, .Net Core/ 5, Angular, Azure DevOps & Docker. Based in Shropshire, England, UK.
