TCP Traffic Shifting
This task shows you how to gradually migrate TCP traffic from one version of a microservice to another. For example, you might migrate TCP traffic from an older version to a new version.
In Istio, you accomplish this goal by configuring a sequence of rules that route a percentage of TCP traffic to one service or another. In this task, you will send 100% of the TCP traffic to tcp-echo:v1. Then, you will route 20% of the TCP traffic to tcp-echo:v2 using Istio’s weighted routing feature.
Before you begin
Set up Istio by following the instructions in the Installation guide.
Review the Traffic Management concepts doc.
Set up the test environment
To get started, create a namespace for testing TCP traffic shifting and label it to enable automatic sidecar injection.
$ kubectl create namespace istio-io-tcp-traffic-shifting
$ kubectl label namespace istio-io-tcp-traffic-shifting istio-injection=enabled
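Optionally, confirm that the injection label took effect before proceeding (a quick sanity check added here, not part of the original task; the exact label listing varies by cluster):
$ kubectl get namespace istio-io-tcp-traffic-shifting --show-labels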
Deploy the sleep sample app to use as a test source for sending requests.
$ kubectl apply -f @samples/sleep/sleep.yaml@ -n istio-io-tcp-traffic-shifting
Deploy the v1 and v2 versions of the tcp-echo microservice.
$ kubectl apply -f @samples/tcp-echo/tcp-echo-services.yaml@ -n istio-io-tcp-traffic-shifting
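Before sending any traffic, you can wait for all the pods to become ready; with sidecar injection enabled, each pod should report two containers (this readiness check is an addition shown here as a convenience):
$ kubectl wait --for=condition=Ready pods --all -n istio-io-tcp-traffic-shifting --timeout=120s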
Follow the instructions in Determining the ingress IP and ports to define the TCP_INGRESS_PORT and INGRESS_HOST environment variables.
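For example, on a cluster without an external load balancer you would typically derive these from a node address and the node port that the istio-ingressgateway service exposes. The commands below are a sketch of that case; the exact commands depend on your platform and are covered in the linked guide, and the port name tcp is an assumption based on the sample gateway configuration:
$ export INGRESS_HOST=$(kubectl get pod -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
$ export TCP_INGRESS_PORT=$(kubectl get service istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}')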
Apply weight-based TCP routing
Route all TCP traffic to the v1 version of the tcp-echo microservice.
$ kubectl apply -f @samples/tcp-echo/tcp-echo-all-v1.yaml@ -n istio-io-tcp-traffic-shifting
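The applied manifest routes every TCP connection arriving on the gateway port to the v1 subset. Conceptually, the rule looks roughly like the following simplified sketch (not the full sample file, which also defines the Gateway and the DestinationRule subsets; the gateway name tcp-echo-gateway is assumed from the sample's naming):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1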
Confirm that the tcp-echo service is up and running by sending some TCP traffic from the sleep client.
$ for i in {1..20}; do \
    kubectl exec "$(kubectl get pod -l app=sleep -n istio-io-tcp-traffic-shifting -o jsonpath={.items..metadata.name})" \
    -c sleep -n istio-io-tcp-traffic-shifting -- sh -c "(date; sleep 1) | nc $INGRESS_HOST $TCP_INGRESS_PORT"; \
done
one Mon Nov 12 23:24:57 UTC 2018
one Mon Nov 12 23:25:00 UTC 2018
one Mon Nov 12 23:25:02 UTC 2018
one Mon Nov 12 23:25:05 UTC 2018
one Mon Nov 12 23:25:07 UTC 2018
one Mon Nov 12 23:25:10 UTC 2018
one Mon Nov 12 23:25:12 UTC 2018
one Mon Nov 12 23:25:15 UTC 2018
one Mon Nov 12 23:25:17 UTC 2018
one Mon Nov 12 23:25:19 UTC 2018
...
You should notice that all the timestamps have a prefix of one, which means that all traffic was routed to the v1 version of the tcp-echo service.
Transfer 20% of the traffic from tcp-echo:v1 to tcp-echo:v2 with the following command:
$ kubectl apply -f @samples/tcp-echo/tcp-echo-20-v2.yaml@ -n istio-io-tcp-traffic-shifting
Wait a few seconds for the new rules to propagate.
Confirm that the rule was replaced:
$ kubectl get virtualservice tcp-echo -o yaml -n istio-io-tcp-traffic-shifting
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
  ...
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20
Send some more TCP traffic to the tcp-echo microservice.
$ for i in {1..20}; do \
    kubectl exec "$(kubectl get pod -l app=sleep -n istio-io-tcp-traffic-shifting -o jsonpath={.items..metadata.name})" \
    -c sleep -n istio-io-tcp-traffic-shifting -- sh -c "(date; sleep 1) | nc $INGRESS_HOST $TCP_INGRESS_PORT"; \
done
one Mon Nov 12 23:38:45 UTC 2018
two Mon Nov 12 23:38:47 UTC 2018
one Mon Nov 12 23:38:50 UTC 2018
one Mon Nov 12 23:38:52 UTC 2018
one Mon Nov 12 23:38:55 UTC 2018
two Mon Nov 12 23:38:57 UTC 2018
one Mon Nov 12 23:39:00 UTC 2018
one Mon Nov 12 23:39:02 UTC 2018
one Mon Nov 12 23:39:05 UTC 2018
one Mon Nov 12 23:39:07 UTC 2018
...
You should now notice that about 20% of the timestamps have a prefix of two, which means that 80% of the TCP traffic was routed to the v1 version of the tcp-echo service, while 20% was routed to v2.
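If eyeballing twenty lines of output is tedious, you can tally the split directly by counting the prefixes (a convenience one-liner added here, not part of the original task):
$ for i in {1..20}; do \
    kubectl exec "$(kubectl get pod -l app=sleep -n istio-io-tcp-traffic-shifting -o jsonpath={.items..metadata.name})" \
    -c sleep -n istio-io-tcp-traffic-shifting -- sh -c "(date; sleep 1) | nc $INGRESS_HOST $TCP_INGRESS_PORT"; \
done | awk '{print $1}' | sort | uniq -c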
Understanding what happened
In this task you partially migrated TCP traffic from an old version of the tcp-echo service to a new one using Istio’s weighted routing feature. Note that this is very different from doing version migration using the deployment features of container orchestration platforms, which use instance scaling to manage the traffic.
With Istio, you can allow the two versions of the tcp-echo service to scale up and down independently, without affecting the traffic distribution between them.
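For example, assuming the sample names its deployments tcp-echo-v1 and tcp-echo-v2 (an assumption based on the version labels; verify with kubectl get deployments), you could scale v2 up without changing the 80/20 traffic split defined in the VirtualService:
$ kubectl scale deployment tcp-echo-v2 --replicas=3 -n istio-io-tcp-traffic-shifting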
For more information about version routing with autoscaling, check out the blog article Canary Deployments using Istio.
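To finish the migration you would eventually route 100% of the TCP traffic to v2. The sample repo does not ship a 100%-v2 manifest, but as an illustration you could edit the weights in place with a JSON patch against the paths shown in the VirtualService above:
$ kubectl patch virtualservice tcp-echo -n istio-io-tcp-traffic-shifting --type json \
    -p '[{"op": "replace", "path": "/spec/tcp/0/route/0/weight", "value": 0}, {"op": "replace", "path": "/spec/tcp/0/route/1/weight", "value": 100}]'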
Cleanup
Remove the sleep sample, tcp-echo application, and routing rules:
$ kubectl delete -f @samples/tcp-echo/tcp-echo-all-v1.yaml@ -n istio-io-tcp-traffic-shifting
$ kubectl delete -f @samples/tcp-echo/tcp-echo-services.yaml@ -n istio-io-tcp-traffic-shifting
$ kubectl delete -f @samples/sleep/sleep.yaml@ -n istio-io-tcp-traffic-shifting
$ kubectl delete namespace istio-io-tcp-traffic-shifting
See also
Direct encrypted traffic from IBM Cloud Kubernetes Service Ingress to Istio Ingress Gateway
Configure the IBM Cloud Kubernetes Service Application Load Balancer to direct traffic to the Istio Ingress gateway with mutual TLS.
Multicluster Istio configuration and service discovery using Admiral
Automating Istio configuration for Istio deployments (clusters) that work as a single mesh.
Istio as a Proxy for External Services
Configure Istio ingress gateway to act as a proxy for external services.
Multi-Mesh Deployments for Isolation and Boundary Protection
Deploy environments that require isolation into separate meshes and enable inter-mesh communication by mesh federation.
Secure Control of Egress Traffic in Istio, part 3
Comparison of alternative solutions to control egress traffic including performance considerations.
Secure Control of Egress Traffic in Istio, part 2
Use Istio Egress Traffic Control to prevent attacks involving egress traffic.