Install Istio in Dual-Stack mode
This feature is targeted at developers / expert users and is considered Alpha.
Prerequisites
- Istio 1.17 or later.
- Kubernetes 1.23 or later configured for dual-stack operations.
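If you are not sure which versions you are running, you can check them before starting:
$ istioctl version
$ kubectl version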
Installation steps
If you want to use kind for your test, you can set up a dual-stack cluster with the following command:
$ kind create cluster --name istio-ds --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: dual
EOF
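Regardless of how the cluster was created, one quick way to confirm that it is dual-stack capable is to check that each node has both an IPv4 and an IPv6 pod CIDR assigned:
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'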
To enable dual-stack for Istio, you will need to modify your IstioOperator
or Helm values with the following configuration.
Using an IstioOperator resource:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ISTIO_DUAL_STACK: "true"
  values:
    pilot:
      env:
        ISTIO_DUAL_STACK: "true"
    # The below values are optional and can be used based on your requirements
    gateways:
      istio-ingressgateway:
        ipFamilyPolicy: RequireDualStack
      istio-egressgateway:
        ipFamilyPolicy: RequireDualStack
Or, the equivalent Helm values:
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_DUAL_STACK: "true"
values:
  pilot:
    env:
      ISTIO_DUAL_STACK: "true"
  # The below values are optional and can be used based on your requirements
  gateways:
    istio-ingressgateway:
      ipFamilyPolicy: RequireDualStack
    istio-egressgateway:
      ipFamilyPolicy: RequireDualStack
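For example, if you save the IstioOperator configuration above to a file (dual-stack.yaml is just an example name), you can apply it with istioctl; if you install with Helm instead, the same values can be supplied to your charts with -f:
$ istioctl install -f dual-stack.yaml -y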
Verification
Create three namespaces:
- dual-stack: tcp-echo will listen on both an IPv4 and IPv6 address.
- ipv4: tcp-echo will listen on only an IPv4 address.
- ipv6: tcp-echo will listen on only an IPv6 address.
$ kubectl create namespace dual-stack
$ kubectl create namespace ipv4
$ kubectl create namespace ipv6
Enable sidecar injection on all of those namespaces as well as the default namespace:
$ kubectl label --overwrite namespace default istio-injection=enabled
$ kubectl label --overwrite namespace dual-stack istio-injection=enabled
$ kubectl label --overwrite namespace ipv4 istio-injection=enabled
$ kubectl label --overwrite namespace ipv6 istio-injection=enabled
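If you want to double-check the labels before deploying anything, kubectl can print the injection label as a column:
$ kubectl get namespace default dual-stack ipv4 ipv6 -L istio-injection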
Create tcp-echo deployments in the namespaces:
$ kubectl apply --namespace dual-stack -f @samples/tcp-echo/tcp-echo-dual-stack.yaml@
$ kubectl apply --namespace ipv4 -f @samples/tcp-echo/tcp-echo-ipv4.yaml@
$ kubectl apply --namespace ipv6 -f @samples/tcp-echo/tcp-echo-ipv6.yaml@
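Optionally, before sending any traffic you can confirm that only the tcp-echo service in the dual-stack namespace was assigned two cluster IPs. The command below is one way to do it, using the standard spec.ipFamilies and spec.clusterIPs Service fields:
$ for ns in dual-stack ipv4 ipv6; do kubectl get service tcp-echo -n "$ns" -o jsonpath='{.metadata.namespace}{"\t"}{.spec.ipFamilies}{"\t"}{.spec.clusterIPs}{"\n"}'; done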
Deploy the curl sample app to use as a test source for sending requests.
$ kubectl apply -f @samples/curl/curl.yaml@
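The checks below assume the pods have finished starting; if you run them immediately, one way to wait for the curl and tcp-echo pods (and their sidecars) to become ready is:
$ kubectl wait --for=condition=Ready pod -l app=curl --timeout=120s
$ for ns in dual-stack ipv4 ipv6; do kubectl wait --for=condition=Ready pod -l app=tcp-echo -n "$ns" --timeout=120s; done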
Verify the traffic reaches the dual-stack pods:
$ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo dualstack | nc tcp-echo.dual-stack 9000"
hello dualstack
Verify the traffic reaches the IPv4 pods:
$ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo ipv4 | nc tcp-echo.ipv4 9000"
hello ipv4
Verify the traffic reaches the IPv6 pods:
$ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo ipv6 | nc tcp-echo.ipv6 9000"
hello ipv6
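You can also send traffic directly to the IPv6 cluster IP of the dual-stack service. The sketch below assumes IPv4 is the cluster's primary address family, so the second entry in spec.clusterIPs is the IPv6 address; the service should again echo the message back with a hello prefix (hello dualstack-v6).
# Assumes clusterIPs[1] is the IPv6 address; TCP_ECHO_V6 is just an example variable name
$ TCP_ECHO_V6="$(kubectl get service tcp-echo -n dual-stack -o jsonpath='{.spec.clusterIPs[1]}')"
$ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo dualstack-v6 | nc $TCP_ECHO_V6 9000"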
Verify the envoy listeners:
$ istioctl proxy-config listeners "$(kubectl get pod -n dual-stack -l app=tcp-echo -o jsonpath='{.items[0].metadata.name}')" -n dual-stack --port 9000 -ojson | jq '.[] | {name: .name, address: .address, additionalAddresses: .additionalAddresses}'
You will see that listeners are now bound to multiple addresses, but only for dual-stack services. Other services will only be listening on a single IP address.
"name": "fd00:10:96::f9fc_9000",
"address": {
"socketAddress": {
"address": "fd00:10:96::f9fc",
"portValue": 9000
}
},
"additionalAddresses": [
{
"address": {
"socketAddress": {
"address": "10.96.106.11",
"portValue": 9000
}
}
}
],
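For comparison, you can run the same query against the single-stack deployment in the ipv4 namespace; its listener should show a single socketAddress and no additionalAddresses:
$ istioctl proxy-config listeners "$(kubectl get pod -n ipv4 -l app=tcp-echo -o jsonpath='{.items[0].metadata.name}')" -n ipv4 --port 9000 -o json | jq '.[] | {name: .name, address: .address, additionalAddresses: .additionalAddresses}'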
Verify the virtual inbound addresses are configured to listen on both 0.0.0.0 and [::].
$ istioctl proxy-config listeners "$(kubectl get pod -n dual-stack -l app=tcp-echo -o jsonpath='{.items[0].metadata.name}')" -n dual-stack -o json | jq '.[] | select(.name=="virtualInbound") | {name: .name, address: .address, additionalAddresses: .additionalAddresses}'
"name": "virtualInbound",
"address": {
"socketAddress": {
"address": "0.0.0.0",
"portValue": 15006
}
},
"additionalAddresses": [
{
"address": {
"socketAddress": {
"address": "::",
"portValue": 15006
}
}
}
],
Verify envoy endpoints are configured to route to both IPv4 and IPv6:
$ istioctl proxy-config endpoints "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" --port 9000
ENDPOINT                  STATUS      OUTLIER CHECK     CLUSTER
10.244.0.19:9000          HEALTHY     OK                outbound|9000||tcp-echo.ipv4.svc.cluster.local
10.244.0.26:9000          HEALTHY     OK                outbound|9000||tcp-echo.dual-stack.svc.cluster.local
fd00:10:244::1a:9000      HEALTHY     OK                outbound|9000||tcp-echo.dual-stack.svc.cluster.local
fd00:10:244::18:9000      HEALTHY     OK                outbound|9000||tcp-echo.ipv6.svc.cluster.local
Now you can experiment with dual-stack services in your environment!
Cleanup
Clean up the application namespaces and deployments:
$ kubectl delete -f @samples/curl/curl.yaml@
$ kubectl delete ns dual-stack ipv4 ipv6
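If you also want to remove the injection label from the default namespace, uninstall the Istio control plane, and delete the kind test cluster created earlier, one way is:
$ kubectl label namespace default istio-injection-
$ istioctl uninstall --purge -y
$ kind delete cluster --name istio-ds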