This article is part of the series on Microservices Architecture.
DevOps is a software delivery process that emphasizes Development and Operations teams working collaboratively to automate deployments and infrastructure updates. The premise of microservices is to build technologically heterogeneous, distributed systems in which individual services can be deployed, updated, and scaled as frequently as needed without impacting other services. Automation and a DevOps process therefore become critical.
Note: Although the importance of DevOps is highlighted here in the context of microservices, monoliths can benefit from a DevOps process just as well.
Some important concerns that need attention as part of the DevOps process are:
- Deployment configurations and configuration management:
Services need to be configured for their dependencies on each other, for handling inconsistencies between environments, and for other application and security settings. Management involves versioning, sharing, and storing configuration. In many cases the configuration drives the deployment steps and definitions.
- Monitoring:
Monitoring the services helps in taking reactive actions, such as automatically scaling services up or down or detecting a failure.
- Failure handling and self-recovery:
A microservices architecture has to treat failures as a requirement, not an afterthought. Services need to be developed to handle failures, and at the same time the infrastructure needs to respond to and recover from resource failures as quickly as possible.
- Scaling services on demand:
Auto-scaling is necessary to meet the performance and availability SLAs between services.
- Analytics:
Service usage analysis can help in scaling specific services. It also acts as feedback for iterating, so that configurations can be optimized and adapted over time.
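Configuration management across environments often comes down to layering environment-specific overrides on top of a base configuration. A minimal sketch in Python; the keys, values, and environment names below are purely illustrative:

```python
def load_config(env: str) -> dict:
    """Merge a base configuration with environment-specific overrides.
    All keys and values here are illustrative, not a real service's settings."""
    base = {"db_pool_size": 5, "timeout_seconds": 30, "debug": False}
    overrides = {
        "development": {"debug": True},
        "production": {"db_pool_size": 20},
    }
    # Later dicts win, so environment overrides replace matching base keys.
    return {**base, **overrides.get(env, {})}

print(load_config("production"))  # larger pool in production, debug off
```

Versioning such configuration alongside the code is what lets the configuration drive repeatable deployment steps.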
Containerization
Containerization is at the center of the evolution of DevOps processes. It brings repeatability and portability to the automated system, which helps in managing the inconsistencies that arise between environments: Development, Test, Staging, and Production. Docker is an open platform leading the containerization movement with its ecosystem of tools and products. Docker containers enjoy almost universal support across popular infrastructure platforms, which makes them the preferred unit of infrastructure. There are various services/platforms to pick from that support most of the requirements mentioned above, with containerization at the center. The general features of such platforms include:
- Support for container orchestration
- Container clustering, resource management and scheduling
- High Availability by monitoring and self-healing when containers fail.
- Scalability on various metrics such as memory, CPU consumption, number of requests
- Configuration management
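The scaling feature these platforms offer usually reduces to a proportional rule over an observed metric. As a rough sketch (the helper below is illustrative and not any platform's actual API, though Kubernetes' Horizontal Pod Autoscaler uses a similar formula):

```python
import math

def desired_replicas(current: int, utilization: float,
                     target: float = 0.5, max_replicas: int = 10) -> int:
    """Scale the replica count proportionally to observed vs. target
    utilization (of CPU, memory, requests per second, etc.),
    clamped to the range [1, max_replicas]."""
    if utilization <= 0:
        return 1
    return max(1, min(max_replicas, math.ceil(current * utilization / target)))

print(desired_replicas(4, 0.75))  # overloaded: scale out to 6
print(desired_replicas(4, 0.25))  # underused: scale in to 2
```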
Some of the notable cloud/non-cloud solutions currently available are:
- AWS ECS
Amazon ECS is a service that provides out-of-the-box support for containers and the workflows around them. As with everything on AWS, you are covered on scalability and other aspects, and if you are deploying your microservices on AWS, ECS has to be one of your top considerations.
- Kubernetes
Kubernetes, by Google, is open source and one of the earliest platforms to support containers. It provides cluster management and scheduling solutions to support the deployment, operation, and scaling of your infrastructure. Moreover, it offers the flexibility of running on-premise, hybrid, or cloud infrastructure.
- Marathon
A container orchestration platform from Mesosphere, built on the mature Datacenter Operating System (DC/OS) and Apache Mesos.
- Azure Container Service
The Microsoft Azure platform also added support for containers with Azure Container Service.
- DC/OS
Another open source platform based on Apache Mesos. It can be used on bare-metal infrastructure, in a private cloud, or in a public cloud.
- Docker Swarm
A service by Docker with clustering, auto-scaling capabilities, and high availability; it is also supported by Azure Container Service.
Service Discovery
The network location of the servers hosting the services can change quite often in an auto-scaled environment, as a result of failures or changes in load. It is important to isolate and abstract clients from the challenge of locating, or discovering, the services amid these changes. Service discovery features a service registry, which keeps a mapping of services to their actual network locations: the IP address and port of the server instance hosting each service.
There are two commonly used methods for automatic service discovery:
- Server-side discovery:

(Figure: Server-side discovery)
The clients make requests to a particular service through a router. The router maintains its own service registry and routes each request to an available instance. Load balancers are typically good candidates to act as routers. This approach is usually simple to implement and distributes the registry across one load balancer per service. However, it does not protect against load balancer failures.
AWS Elastic Load Balancer and Azure Load Balancer are good examples of server-side discovery; both provide internal as well as external (internet-facing) load balancer services.
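The routing behavior described above can be sketched in a few lines of Python. This is a toy model, not a real load balancer; the `Router` class and its round-robin policy are illustrative assumptions:

```python
import itertools

class Router:
    """Toy server-side discovery: the router owns the service registry
    and picks an instance for each request, as a load balancer would."""
    def __init__(self):
        self._registry = {}  # service name -> list of "host:port" strings
        self._cursor = {}

    def register(self, service, address):
        self._registry.setdefault(service, []).append(address)
        # Rebuild the round-robin cursors whenever the instance set changes.
        self._cursor = {s: itertools.cycle(a) for s, a in self._registry.items()}

    def route(self, service):
        # Round-robin over the registered instances of the service.
        return next(self._cursor[service])

router = Router()
router.register("orders", "10.0.0.5:8080")
router.register("orders", "10.0.0.6:8080")
print(router.route("orders"))  # 10.0.0.5:8080
print(router.route("orders"))  # 10.0.0.6:8080
```

The key property is that clients only ever see the router's address; the registry and the balancing policy stay on the server side.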
- Client-side discovery:

(Figure: Client-side discovery)
In this approach the service instances register their network locations in a common registry, which clients probe for the exact location of a service instance when making requests. Although more complex to implement, this approach provides protection against load balancer failures (provided the service registry itself is scalable and configured for high availability).
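Client-side discovery can be sketched similarly; here the registry is shared and the client picks an instance itself. `ServiceRegistry` is a hypothetical stand-in for a real registry such as Consul, etcd, or ZooKeeper, not their actual APIs:

```python
import random

class ServiceRegistry:
    """Shared registry: instances register/deregister their own locations."""
    def __init__(self):
        self._instances = {}  # service name -> set of "host:port" strings

    def register(self, service, address):
        self._instances.setdefault(service, set()).add(address)

    def deregister(self, service, address):
        self._instances.get(service, set()).discard(address)

    def lookup(self, service):
        return sorted(self._instances.get(service, set()))

class Client:
    """Client-side discovery: the client queries the registry and
    chooses an instance itself (here, uniformly at random)."""
    def __init__(self, registry):
        self._registry = registry

    def resolve(self, service):
        instances = self._registry.lookup(service)
        if not instances:
            raise LookupError(f"no known instances of {service!r}")
        return random.choice(instances)

registry = ServiceRegistry()
registry.register("payments", "10.0.0.7:8080")
registry.register("payments", "10.0.0.8:8080")
client = Client(registry)
print(client.resolve("payments"))  # one of the two registered addresses
```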
Infrastructure Failures:
Part of the responsibility of deployments in a microservices architecture is also to assume and deal with infrastructure failures. It is important to implement built-in health monitoring and self-recovery mechanisms. Amazon CloudWatch alarms combined with auto-scaling groups, along with Netflix Hystrix, provide solutions for adding resiliency and fault tolerance to microservices architectures.
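Hystrix popularized the circuit breaker pattern for this kind of fault tolerance. A minimal sketch of the idea in Python (illustrative only; the real Hystrix is a Java library with a much richer model):

```python
import time

class CircuitBreaker:
    """After `failure_threshold` consecutive failures the circuit opens and
    calls fail fast; after `reset_timeout` seconds one trial call is let through."""
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Failing fast while a downstream dependency is unhealthy keeps threads and connections from piling up, which is what prevents one failing service from cascading into others.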
Security has to be another obvious consideration when developing applications with a Microservices Architecture.