Service Internal Traffic Policy
If two Pods in your cluster want to communicate, and both Pods are running on the same node, you can use Service Internal Traffic Policy to keep network traffic within that node. Avoiding a round trip via the cluster network can help with reliability, performance (network latency and throughput), or cost.
FEATURE STATE: Kubernetes v1.23 [beta]
Service Internal Traffic Policy enables internal traffic restrictions so that internal traffic is routed only to endpoints within the node the traffic originated from. "Internal" traffic here refers to traffic originating from Pods in the current cluster. This can help to reduce costs and improve performance.
Using Service Internal Traffic Policy
The ServiceInternalTrafficPolicy feature gate is a Beta feature and is enabled by default. When the feature is enabled, you can enable the internal-only traffic policy for a Service by setting its .spec.internalTrafficPolicy to Local. This tells kube-proxy to only use node-local endpoints for cluster-internal traffic.
Note: For Pods on nodes with no endpoints for a given Service, the Service behaves as if it has zero endpoints (for Pods on that node) even if the Service does have endpoints on other nodes.
The following example shows what a Service looks like when you set .spec.internalTrafficPolicy to Local:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  internalTrafficPolicy: Local
```
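For comparison, a Service that omits the field keeps the default Cluster policy, so traffic may be routed to endpoints on any node. The sketch below sets the default explicitly; the Service name and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  # Equivalent to leaving the field unset: endpoints on all nodes are used.
  internalTrafficPolicy: Cluster
```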
How it works
kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it is set to Local, only node-local endpoints are considered. When it is Cluster (or missing), all endpoints are considered. When the ServiceInternalTrafficPolicy feature gate is enabled, spec.internalTrafficPolicy defaults to Cluster.
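kube-proxy's actual filtering is implemented in Go inside the proxier; the following is a minimal Python sketch of the selection rule only, with made-up endpoint addresses and node names. It also shows the zero-endpoints behavior from the note above: with Local and no endpoints on the originating node, the result is empty and traffic is dropped.

```python
def filter_endpoints(endpoints, policy, local_node):
    """Return the endpoints considered for cluster-internal traffic.

    endpoints  -- list of (address, node_name) tuples
    policy     -- "Local", or "Cluster"/None (the default)
    local_node -- name of the node the traffic originates from
    """
    if policy == "Local":
        # Only endpoints on the originating node are used. If there are
        # none, the Service behaves as if it had zero endpoints, even if
        # other nodes do have endpoints.
        return [ep for ep in endpoints if ep[1] == local_node]
    # "Cluster" or unset: endpoints on all nodes are considered.
    return list(endpoints)


eps = [("10.0.1.5:9376", "node-a"), ("10.0.2.7:9376", "node-b")]

print(filter_endpoints(eps, "Local", "node-a"))    # only node-a's endpoint
print(filter_endpoints(eps, "Local", "node-c"))    # [] -> traffic is dropped
print(filter_endpoints(eps, "Cluster", "node-a"))  # all endpoints
```

The function names and endpoint data here are illustrative, not kube-proxy's API; the point is that the policy narrows the candidate set before any load-balancing decision is made.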
What’s next
- Read about Topology Aware Hints
- Read about Service External Traffic Policy
- Read Connecting Applications with Services