Proxy services are like translators between microservices. A service mesh provides a logically consistent way to handle service interactions.
A lot has been written about service meshes, and in a way we already have them: they are proxy services. Think of every tool used to connect services as a proxy — application gateways, ELBs, and now Istio. A service mesh provides a logically consistent way of handling service interactions, which have historically been very difficult to tackle. Many companies are building complex distributed systems out of microservices, and the trend is growing. To best understand how to structure your services, it helps to know the difference between a service mesh and a proxy service.
By definition, a service mesh deploys and configures the proxy services (that is, the translators), often called sidecars. These proxies act as intermediaries between your application services and external entities such as databases, queues, and more, and their configuration is automated via metadata. When I was first introduced to the word ‘mesh’ in ‘service mesh’, the first thing I thought of was a network topology. It turns out that’s pretty accurate: a service mesh connects all your systems together, acting as a router and traffic cop for your services. With a mesh you don’t have to dedicate individual engineers to build and maintain these components — the mesh does the job automatically, in a standard way that you specify abstractly (through policies), so different teams can build on and interact with other teams’ services without getting stuck.
Services are like islands: each develops and operates independently, without any knowledge of its neighbors. If each island could only talk to its direct neighbors, navigating between them would become tedious and difficult in the long term. To keep navigation smooth and responsive, microservices communicate through a service mesh, which abstracts the underlying network details behind a single entry point.
All-in-one solution
A service mesh is typically an all-in-one solution for a multi-component/microservice architecture. It implements application-level proxying and solves security (SSL/TLS termination), service discovery, resiliency (rate limiting, circuit breaking, retries), and traffic management (routing, load balancing, and service release control). This approach reduces the complexity of the application software, since most of the microservice patterns are already implemented on the service mesh side.
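To make the resiliency point concrete, here is a minimal sketch of one pattern the mesh sidecar implements for you: a circuit breaker. After a number of consecutive failures the circuit “opens” and calls fail fast without hitting the upstream service. The class name and threshold are illustrative assumptions, not any particular mesh’s API — the point is that with a mesh, none of this lives in your application code.

```python
# Minimal sketch of a circuit breaker, one of the resiliency patterns
# a mesh sidecar handles so application code doesn't have to.
# Names and numbers are hypothetical, not a real mesh's API.

class CircuitBreaker:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold  # consecutive failures before opening
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.threshold:
            # Circuit is open: fail fast, don't touch the upstream.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

With the mesh owning this logic, every team gets the same behavior through policy instead of each service re-implementing it.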
The obvious trade-off is the extra latency and traffic to and from the service mesh, as well as a slightly higher probability of network failures due to doubling the number of hops.
The non-obvious trade-off is the abstraction leak. Your service needs to be aware of the service mesh and be programmed as if a mesh is always running somewhere. Likewise, your code and design need to account for which features and approaches the mesh provides. Two examples:
1. Service A creates a transaction in service B but also implements its own retry logic, while the service mesh has its own retry policy. This can produce duplicated transactions in service B’s records.
2. Service A creates a transaction in service B, and the service mesh has its own retry and routing policy. This can produce duplicated or inconsistent transactions in service B’s records.
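The first failure mode above can be sketched in a few lines. This is a hedged toy model, not real mesh behavior: the retry counts and function names are hypothetical, and the “mesh” is simulated in-process. The point is that application-level retries multiply with sidecar retries, so one logical call can hit service B many times.

```python
# Toy model: application retries stack multiplicatively on top of
# mesh sidecar retries. All names and numbers are illustrative.

APP_RETRIES = 3   # retry loop coded into service A
MESH_RETRIES = 3  # retry policy configured in the mesh sidecar

def mesh_call(succeed_on_attempt, attempts_made):
    """The sidecar retries the upstream call on failure."""
    for _ in range(MESH_RETRIES):
        attempts_made[0] += 1
        if attempts_made[0] >= succeed_on_attempt:
            return True
    raise ConnectionError("upstream unavailable")

def app_call(succeed_on_attempt):
    """Service A's own retry loop, unaware of the mesh policy."""
    attempts = [0]
    for _ in range(APP_RETRIES):
        try:
            mesh_call(succeed_on_attempt, attempts)
            return attempts[0]  # total requests service B actually saw
        except ConnectionError:
            pass
    raise ConnectionError("giving up")

# If service B only succeeds on the 9th attempt, service A still
# "succeeds" -- but B has seen 9 requests for one logical call.
print(app_call(succeed_on_attempt=9))  # -> 9
```

If the endpoint in service B is not idempotent, each of those extra requests is a potential duplicated transaction — which is why the retry policy should live in exactly one layer.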
The resulting service availability also differs slightly from plain service-to-service availability, by a factor of the mesh’s availability.
Example: to oversimplify, let’s assume each component has 99% (0.99) availability and we need to call service B from service A through the mesh:
Resulting availability = (srv A) 0.99 × (mesh) 0.99 × (srv B) 0.99 ≈ 0.97
Service to service:
Resulting availability = (srv A) 0.99 × (srv B) 0.99 ≈ 0.98
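The arithmetic above can be checked in a couple of lines, under the same simplifying assumption that service A, the mesh, and service B fail independently and each is 99% available:

```python
# Availability of a call chain is the product of each hop's
# availability, assuming independent failures.
a = mesh = b = 0.99

via_mesh = a * mesh * b  # extra hop through the mesh sidecars
direct = a * b           # plain service-to-service call

print(round(via_mesh, 2))  # -> 0.97
print(round(direct, 2))    # -> 0.98
```

So the mesh costs roughly one percentage point of availability in this toy model — small, but worth knowing it exists.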