What is Service Mesh and Where Did it Come From?
Over the past few months, you may have noticed the explosion of industry chatter and articles surrounding service mesh and the future of software architecture. These discussions have been highly polarizing, with tribes forming around specific vendors. While this partisan trend is to be expected, the common thread among these discussions is the rapid transformation of how APIs are used in the enterprise, and what this means for the topology of our traffic.
In a short period of time, service APIs went from being primarily an edge interface connecting developers outside of the organization with internal systems to the glue that binds those internal systems (microservices) into a functioning whole. Consequently, one of the unavoidable results of microservice-oriented architectures is that internal communication within the data center will increase. Service mesh arose as a potential solution to the challenges posed by this increased East-West traffic by providing a different framework for deploying existing technology.
As CTO of Kong, and an active participant in these conversations, I have noticed a common misconception about what service mesh is. In the hope of dispelling confusion and advancing discussions, I want to unequivocally state the following: service mesh is a pattern, not a technology.
Service Mesh is a Pattern, Not a Technology
In the same way that microservices are a pattern and not a specific technology, so too is service mesh. Distinguishing between the two sounds more complex than it is in reality. If we think about this through the lens of Object Oriented Programming (OOP), a pattern describes the interface – not the implementation.
In the context of microservices, the service mesh deployment pattern becomes advantageous due to its ability to better manage East-West traffic via sidecar proxies. As we are decoupling our monoliths and building new products with microservices, the topology of our traffic is also changing from primarily external to increasingly internal. East-West traffic within our datacenter is growing because we are replacing function calls in the monolith with network calls, meaning our microservices must go on the network to consume each other. And the network – as we all know – is unreliable.
What service mesh seeks to address through a different deployment pattern are the challenges associated with increased East-West traffic. With traditional N-S traffic, 100ms of middleware processing latency was not ideal but may have been acceptable; in a microservice architecture with E-W traffic, it can no longer be tolerated. The reason is that the increased east-west traffic between services compounds that latency, resulting in perhaps 700ms of latency by the time the chain of API requests across different services has been executed and returned.
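The compounding effect above can be sketched with simple arithmetic. The 100ms per-hop figure and the 7-hop chain below are illustrative assumptions drawn from the example, not measurements:

```python
# Minimal latency model: each hop in a chain of sequential service calls
# pays the middleware processing overhead again, so overhead compounds
# across the request chain.

def total_overhead_ms(per_hop_ms: float, hops: int) -> float:
    """Total middleware overhead for a chain of `hops` sequential calls."""
    return per_hop_ms * hops

# A single N-S edge call pays the overhead once.
edge_overhead = total_overhead_ms(100, 1)      # 100 ms

# An E-W chain of 7 internal calls pays it on every hop.
internal_overhead = total_overhead_ms(100, 7)  # 700 ms
```

The model ignores network transit time and parallel fan-out, which is exactly why it understates the problem: any fixed per-hop cost is multiplied by the depth of the call chain.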
In an effort to reduce this latency, sidecar proxies running alongside each microservice process are being introduced to remove an extra hop in the network. Sidecar proxies, which correspond to data planes on the execution path of our requests, also provide better resiliency, since we no longer have a single point of failure. However, sidecar proxies bear the cost of running an instance of our proxy for every instance of our microservices, which means each proxy must have a small footprint to keep resource consumption to a minimum.
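The deployment topology described above can be sketched in a few lines. The names (`Sidecar`, `orders-1`, `payments`) are hypothetical, and a real data plane would proxy over the network with retries, mTLS, and metrics rather than return a string; the point is only the one-proxy-per-instance shape:

```python
# Sketch of the sidecar data-plane pattern: every service instance gets its
# own co-located proxy, so there is no shared central gateway on the request
# path and no single point of failure -- at the cost of one proxy per instance.

from dataclasses import dataclass

@dataclass
class Sidecar:
    """Data-plane proxy deployed next to exactly one service instance."""
    service_instance: str

    def forward(self, target_service: str, payload: str) -> str:
        # A real sidecar would resolve `target_service`, apply policy,
        # then proxy the request over the network.
        return f"{self.service_instance} -> {target_service}: {payload}"

# One proxy instance per microservice instance -- the footprint cost
# mentioned in the text.
instances = ["orders-1", "orders-2", "orders-3"]
mesh = {name: Sidecar(name) for name in instances}

print(mesh["orders-1"].forward("payments", "charge #42"))
```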
From a feature perspective, however, most of what service mesh introduces has been provided for many years by API Management products. Features such as observability, network error handling, health-checks, etc. are hallmarks of API management. These features don’t constitute anything novel in themselves, but as a pattern, service mesh introduces a new way of deploying those features within our architecture.
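Two of the features named above, network error handling and health-checks, can be sketched as a minimal wrapper. All names here are assumed for illustration; any real mesh or API gateway implements far richer versions of both:

```python
# Minimal sketch of data-plane error handling: skip upstreams that fail
# their health check, and retry transient network failures with backoff.

import time

def call_with_retries(call, healthy, max_attempts=3, backoff_s=0.0):
    """Invoke `call`, retrying transient failures, if `healthy()` passes."""
    if not healthy():
        raise RuntimeError("upstream failed its health check")
    last_error = None
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError as err:  # the network is unreliable
            last_error = err
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    raise last_error

# A flaky upstream that succeeds on the second attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 2:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retries(flaky, healthy=lambda: True)
```

This is precisely the kind of capability API Management products have long offered at the edge; the service mesh pattern relocates it into the sidecar next to each service.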
Traditional API Management Solutions Can’t Keep Up
Microservices and containers force you to rethink systems in terms of more lightweight processes, and service mesh as a pattern fills this need by providing a lightweight process, acting as both proxy and reverse proxy, that runs alongside the main microservice. Why won't most traditional API Management solutions allow this new deployment option? Because they were born in a monolithic world. As it turns out, API Management solutions built before the advent of Docker and Kubernetes were monoliths themselves and were not designed to work effectively within the emerging container ecosystem. The heavyweight runtimes and slower performance of traditional API management solutions were acceptable in the traditional API-at-the-edge use case, but not in a microservices architecture, where latency compounds across increased east-west traffic. In essence, traditional API management solutions are ultimately too heavyweight, too hard to automate, and too slow to effectively broker the increased communication inherent in microservices.
Since developers understand this, legacy API Management solutions born before the advent of containers have introduced what they call "microgateways" to deal with E-W traffic and avoid rewriting their existing, bloated, monolithic gateway solutions. The problem is that these microgateways, while more lightweight, still require the legacy solution to run alongside them in order to enforce policy. This doesn't just mean keeping the same old heavy dependency in the stack; it also means added latency on every request. It's understandable, then, why service mesh feels like a whole new category: not because it's new, but because the API Management solutions of yesterday are incapable of supporting it.
When you look at service mesh in the context of its feature-set, it becomes clear that it's not very different from what traditional API Management solutions have been doing for years for N-S traffic. Most of the networking and observability capabilities are useful in both N-S and E-W traffic use-cases. What has changed is the deployment pattern, which enables us to run the gateway/proxy as a lightweight, fast sidecar container; the underlying feature-set has not.
The feature-set that a service mesh provides is a subset of the feature-set that API Management solutions have been offering for many years, in particular when it comes to making the network reliable, service discovery, and observability. The innovation of service mesh is its deployment pattern, which enables that same feature-set to run as a lightweight sidecar process/container. Too often our industry confuses, and sometimes pushes, the idea that a specific pattern equals the underlying technology, as in the case of many conversations around service mesh.