SPIRE

SPIRE is a production-ready implementation of the SPIFFE specification that performs node and workload attestation in order to securely issue cryptographic identities to workloads running in heterogeneous environments. SPIRE can be configured as a source of cryptographic identities for Istio workloads through an integration with Envoy’s SDS API. Istio can detect the existence of a UNIX Domain Socket that implements the Envoy SDS API on a defined socket path, allowing Envoy to communicate and fetch identities directly from it.

This integration with SPIRE provides flexible attestation options not available with the default Istio identity management while harnessing Istio’s powerful service management. For example, SPIRE’s plugin architecture enables diverse workload attestation options beyond the Kubernetes namespace and service account attestation offered by Istio. SPIRE’s node attestation extends attestation to the physical or virtual hardware on which workloads run.

For a quick demo of how this SPIRE integration with Istio works, see Integrating SPIRE as a CA through Envoy’s SDS API.

Install SPIRE

We recommend you follow SPIRE’s installation instructions and best practices for installing SPIRE, and for deploying SPIRE in production environments.

For the examples in this guide, the SPIRE Helm charts will be used with upstream defaults, to focus on just the configuration necessary to integrate SPIRE and Istio.

  $ helm upgrade --install -n spire-server spire-crds spire-crds --repo https://spiffe.github.io/helm-charts-hardened/ --create-namespace
  $ helm upgrade --install -n spire-server spire spire --repo https://spiffe.github.io/helm-charts-hardened/ --wait --set global.spire.trustDomain="example.org"

See the SPIRE Helm chart documentation for other values you can configure for your installation.

It is important that SPIRE and Istio are configured with the exact same trust domain, to prevent authentication and authorization errors, and that the SPIFFE CSI driver is enabled and installed.
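
For reference, the trust domain is set in two places in this guide: the global.spire.trustDomain value passed to the SPIRE Helm chart above, and the meshConfig.trustDomain field of the IstioOperator configuration shown in the Install Istio section below. Both use example.org:

  # SPIRE Helm chart (see the install command above)
  --set global.spire.trustDomain="example.org"

  # IstioOperator (see the Install Istio section below)
  meshConfig:
    trustDomain: example.org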

By default, the above will also install:

  • The SPIFFE CSI driver, which is used to mount an Envoy-compatible SDS socket into proxies. Using the SPIFFE CSI driver to mount SDS sockets is strongly recommended by both Istio and SPIRE, as hostPath mounts are a larger security risk and introduce operational hurdles. This guide assumes the use of the SPIFFE CSI driver.

  • The SPIRE Controller Manager, which eases the creation of SPIFFE registrations for workloads.
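
To confirm that these components were installed by the chart defaults, you can check for the registered CSI driver and the SPIRE pods. The commands below assume the upstream chart defaults used above, which place the components in the spire-server release namespace; adjust the namespace if your values differ:

  $ kubectl get csidrivers.storage.k8s.io csi.spiffe.io
  $ kubectl get pods -n spire-server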

Register workloads

By design, SPIRE only grants identities to workloads that have been registered with the SPIRE server; this includes user workloads as well as Istio components. Once configured for SPIRE integration, Istio sidecars and gateways cannot obtain identities, and therefore cannot reach the READY state, unless a matching SPIRE registration has been created for them ahead of time.

See the SPIRE docs on registering workloads for more information on using multiple selectors to strengthen attestation criteria, and the selectors available.

This section describes the options available for registering Istio workloads in a SPIRE Server and provides some example workload registrations.

Istio currently requires a specific SPIFFE ID format for workloads. All registrations must follow the Istio SPIFFE ID pattern: spiffe://<trust.domain>/ns/<namespace>/sa/<service-account>
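
For example, with the example.org trust domain used throughout this guide, a workload running under the sleep service account in the default namespace must be registered as:

  spiffe://example.org/ns/default/sa/sleep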

Option 1: Auto-registration using the SPIRE Controller Manager

New entries will be automatically registered for each new pod that matches the selector defined in a ClusterSPIFFEID custom resource.

Both Istio sidecars and Istio gateways need to be registered with SPIRE, so that they can request identities.

Istio Gateway ClusterSPIFFEID

The following will create a ClusterSPIFFEID, which will auto-register any Istio Ingress gateway pod with SPIRE if it is scheduled into the istio-system namespace, and has a service account named istio-ingressgateway-service-account. These selectors are used as a simple example; consult the SPIRE Controller Manager documentation for more details.

  $ kubectl apply -f - <<EOF
  apiVersion: spire.spiffe.io/v1alpha1
  kind: ClusterSPIFFEID
  metadata:
    name: istio-ingressgateway-reg
  spec:
    spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
    workloadSelectorTemplates:
      - "k8s:ns:istio-system"
      - "k8s:sa:istio-ingressgateway-service-account"
  EOF

Istio Sidecar ClusterSPIFFEID

The following will create a ClusterSPIFFEID which will auto-register with SPIRE any pod that carries the spiffe.io/spire-managed-identity: "true" label and is deployed into the default namespace. These selectors are used as a simple example; consult the SPIRE Controller Manager documentation for more details.

  $ kubectl apply -f - <<EOF
  apiVersion: spire.spiffe.io/v1alpha1
  kind: ClusterSPIFFEID
  metadata:
    name: istio-sidecar-reg
  spec:
    spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
    podSelector:
      matchLabels:
        spiffe.io/spire-managed-identity: "true"
    workloadSelectorTemplates:
      - "k8s:ns:default"
  EOF
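
After applying these resources, you can list the ClusterSPIFFEIDs to confirm that both were created (any ClusterSPIFFEID created by the SPIRE Helm chart itself may also appear):

  $ kubectl get clusterspiffeids.spire.spiffe.io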

Option 2: Manual Registration

If you wish to manually create your SPIRE registrations, rather than use the SPIRE Controller Manager mentioned in the recommended option, refer to the SPIRE documentation on manual registration.

Below are the equivalent manual registrations based on the automatic registrations in Option 1. The following steps assume you have already followed the SPIRE documentation to manually register your SPIRE agent via node attestation, and that your SPIRE agent was registered with the SPIFFE identity spiffe://example.org/ns/spire/sa/spire-agent.

  1. Get the spire-server pod:

     $ SPIRE_SERVER_POD=$(kubectl get pod -l statefulset.kubernetes.io/pod-name=spire-server-0 -n spire-server -o jsonpath="{.items[0].metadata.name}")
  2. Register an entry for the Istio Ingress gateway pod:

     $ kubectl exec -n spire-server "$SPIRE_SERVER_POD" -- \
         /opt/spire/bin/spire-server entry create \
         -spiffeID spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account \
         -parentID spiffe://example.org/ns/spire/sa/spire-agent \
         -selector k8s:sa:istio-ingressgateway-service-account \
         -selector k8s:ns:istio-system \
         -socketPath /run/spire/sockets/server.sock
     Entry ID         : 6f2fe370-5261-4361-ac36-10aae8d91ff7
     SPIFFE ID        : spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account
     Parent ID        : spiffe://example.org/ns/spire/sa/spire-agent
     Revision         : 0
     TTL              : default
     Selector         : k8s:ns:istio-system
     Selector         : k8s:sa:istio-ingressgateway-service-account
  3. Register an entry for workloads injected with an Istio sidecar:

     $ kubectl exec -n spire-server "$SPIRE_SERVER_POD" -- \
         /opt/spire/bin/spire-server entry create \
         -spiffeID spiffe://example.org/ns/default/sa/sleep \
         -parentID spiffe://example.org/ns/spire/sa/spire-agent \
         -selector k8s:ns:default \
         -selector k8s:pod-label:spiffe.io/spire-managed-identity:true \
         -socketPath /run/spire/sockets/server.sock

Install Istio

  1. Download the Istio release.

  2. Create the Istio configuration with custom patches for the Ingress Gateway and istio-proxy. The custom spire sidecar injection template adds the spiffe.io/spire-managed-identity: "true" label and the SPIFFE CSI driver mount to injected pods, and the gateway overlay adds the same CSI driver mount plus an init container that waits for the SPIRE agent socket.

     $ cat <<'EOF' > ./istio.yaml
     apiVersion: install.istio.io/v1alpha1
     kind: IstioOperator
     metadata:
       namespace: istio-system
     spec:
       profile: default
       meshConfig:
         trustDomain: example.org
       values:
         # This is used to customize the sidecar template.
         # It adds both the label to indicate that SPIRE should manage the
         # identity of this pod, as well as the CSI driver mounts.
         sidecarInjectorWebhook:
           templates:
             spire: |
               labels:
                 spiffe.io/spire-managed-identity: "true"
               spec:
                 containers:
                 - name: istio-proxy
                   volumeMounts:
                   - name: workload-socket
                     mountPath: /run/secrets/workload-spiffe-uds
                     readOnly: true
                 volumes:
                 - name: workload-socket
                   csi:
                     driver: "csi.spiffe.io"
                     readOnly: true
       components:
         ingressGateways:
         - name: istio-ingressgateway
           enabled: true
           label:
             istio: ingressgateway
           k8s:
             overlays:
               # This is used to customize the ingress gateway template.
               # It adds the CSI driver mounts, as well as an init container
               # to stall gateway startup until the CSI driver mounts the socket.
               - apiVersion: apps/v1
                 kind: Deployment
                 name: istio-ingressgateway
                 patches:
                   - path: spec.template.spec.volumes.[name:workload-socket]
                     value:
                       name: workload-socket
                       csi:
                         driver: "csi.spiffe.io"
                         readOnly: true
                   - path: spec.template.spec.containers.[name:istio-proxy].volumeMounts.[name:workload-socket]
                     value:
                       name: workload-socket
                       mountPath: "/run/secrets/workload-spiffe-uds"
                       readOnly: true
                   - path: spec.template.spec.initContainers
                     value:
                       - name: wait-for-spire-socket
                         image: busybox:1.36
                         volumeMounts:
                           - name: workload-socket
                             mountPath: /run/secrets/workload-spiffe-uds
                             readOnly: true
                         env:
                           - name: CHECK_FILE
                             value: /run/secrets/workload-spiffe-uds/socket
                         command:
                           - sh
                           - "-c"
                           - |-
                             echo "$(date -Iseconds)" Waiting for: ${CHECK_FILE}
                             while [[ ! -e ${CHECK_FILE} ]] ; do
                               echo "$(date -Iseconds)" File does not exist: ${CHECK_FILE}
                               sleep 15
                             done
                             ls -l ${CHECK_FILE}
     EOF
  3. Apply the configuration:

     $ istioctl install --skip-confirmation -f ./istio.yaml
  4. Check Ingress Gateway pod state:

     $ kubectl get pods -n istio-system
     NAME                                    READY   STATUS    RESTARTS   AGE
     istio-ingressgateway-5b45864fd4-lgrxs   1/1     Running   0          17s
     istiod-989f54d9c-sg7sn                  1/1     Running   0          23s

    The Ingress Gateway pod is Ready since the corresponding registration entry is automatically created for it on the SPIRE Server. Envoy is able to fetch cryptographic identities from SPIRE.

    This configuration also adds an initContainer to the gateway that will wait for SPIRE to create the UNIX Domain Socket before starting the istio-proxy. If the SPIRE agent is not ready, or has not been properly configured with the same socket path, the Ingress Gateway initContainer will wait forever.
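
     If the gateway pod remains stuck in the Init phase, you can inspect the logs of the wait-for-spire-socket init container defined in the overlay above to see whether the socket has appeared. The deployment name below assumes the default istio-ingressgateway from this configuration:

     $ kubectl logs -n istio-system deploy/istio-ingressgateway -c wait-for-spire-socket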

  5. Deploy an example workload:


     $ istioctl kube-inject --filename samples/security/spire/sleep-spire.yaml | kubectl apply -f -

     In addition to needing the spiffe.io/spire-managed-identity label, the workload needs the SPIFFE CSI Driver volume in order to access the SPIRE Agent socket. To accomplish this, you can either leverage the spire pod annotation template from the Install Istio section or add the CSI volume to the deployment spec of your workload. Both of these alternatives are highlighted in the example snippet below:

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: sleep
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: sleep
       template:
         metadata:
           labels:
             app: sleep
           # Injects custom sidecar template
           annotations:
             inject.istio.io/templates: "sidecar,spire"
         spec:
           terminationGracePeriodSeconds: 0
           serviceAccountName: sleep
           containers:
           - name: sleep
             image: curlimages/curl
             command: ["/bin/sleep", "3650d"]
             imagePullPolicy: IfNotPresent
             volumeMounts:
             - name: tmp
               mountPath: /tmp
             securityContext:
               runAsUser: 1000
           volumes:
           - name: tmp
             emptyDir: {}
           # CSI volume
           - name: workload-socket
             csi:
               driver: "csi.spiffe.io"
               readOnly: true

The Istio configuration mounts the SPIFFE CSI driver volume into both the Ingress Gateway and the sidecars injected into workload pods, granting them access to the SPIRE Agent's UNIX Domain Socket.
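
Before verifying identities, you can check that the example workload started with its sidecar; once the proxy has obtained its identity from SPIRE, the sleep pod should report 2/2 containers ready. The label below matches the sleep deployment above:

  $ kubectl get pods -l app=sleep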

See Verifying that identities were created for workloads to check issued identities.

Verifying that identities were created for workloads

Use the following command to confirm that identities were created for the workloads:

  $ kubectl exec -t "$SPIRE_SERVER_POD" -n spire-server -c spire-server -- ./bin/spire-server entry show
  Found 2 entries
  Entry ID         : c8dfccdc-9762-4762-80d3-5434e5388ae7
  SPIFFE ID        : spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account
  Parent ID        : spiffe://example.org/spire/agent/k8s_psat/demo-cluster/bea19580-ae04-4679-a22e-472e18ca4687
  Revision         : 0
  X509-SVID TTL    : default
  JWT-SVID TTL     : default
  Selector         : k8s:pod-uid:88b71387-4641-4d9c-9a89-989c88f7509d

  Entry ID         : af7b53dc-4cc9-40d3-aaeb-08abbddd8e54
  SPIFFE ID        : spiffe://example.org/ns/default/sa/sleep
  Parent ID        : spiffe://example.org/spire/agent/k8s_psat/demo-cluster/bea19580-ae04-4679-a22e-472e18ca4687
  Revision         : 0
  X509-SVID TTL    : default
  JWT-SVID TTL     : default
  Selector         : k8s:pod-uid:ee490447-e502-46bd-8532-5a746b0871d6

Check the Ingress Gateway pod state:

  $ kubectl get pods -n istio-system
  NAME                                    READY   STATUS    RESTARTS   AGE
  istio-ingressgateway-5b45864fd4-lgrxs   1/1     Running   0          60s
  istiod-989f54d9c-sg7sn                  1/1     Running   0          45s

After a registration entry exists for the Ingress Gateway pod, Envoy receives the identity issued by SPIRE and uses it for all TLS and mTLS communication.
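
If you want to confirm this for the gateway as well, the same istioctl proxy-config secret check used for the sleep workload in the next section can be pointed at the gateway pod. The helper variable and label selector below are illustrative and assume the default istio: ingressgateway label from the configuration above:

  # Hypothetical helper variable for the gateway pod name
  $ INGRESS_POD=$(kubectl get pod -l istio=ingressgateway -n istio-system -o jsonpath="{.items[0].metadata.name}")
  $ istioctl proxy-config secret "$INGRESS_POD" -n istio-system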

Check that the workload identity was issued by SPIRE

  1. Get pod information:

     $ SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath="{.items[0].metadata.name}")
  2. Retrieve sleep’s SVID identity document using the istioctl proxy-config secret command:

     $ istioctl proxy-config secret "$SLEEP_POD" -o json | jq -r \
         '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode > chain.pem
  3. Inspect the certificate and verify that SPIRE was the issuer:

     $ openssl x509 -in chain.pem -text | grep SPIRE
         Subject: C = US, O = SPIRE, CN = sleep-5f4d47c948-njvpk
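
     You can also verify that the certificate carries the workload's SPIFFE ID in its URI SAN; the -ext option assumes OpenSSL 1.1.1 or later, and the output should include the SPIFFE ID registered for sleep:

     $ openssl x509 -in chain.pem -noout -ext subjectAltName
     X509v3 Subject Alternative Name:
         URI:spiffe://example.org/ns/default/sa/sleep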

SPIFFE federation

SPIRE Servers are able to authenticate SPIFFE identities originating from different trust domains. This is known as SPIFFE federation.

The SPIRE Agent can be configured to push federated bundles to Envoy through the Envoy SDS API, allowing Envoy to use a validation context to verify peer certificates and trust workloads from another trust domain. To enable Istio to federate SPIFFE identities through the SPIRE integration, consult the SPIRE Agent SDS configuration documentation and set the following SDS configuration values in your SPIRE Agent configuration file.

  • default_svid_name: the TLS Certificate resource name to use for the default X509-SVID with Envoy SDS. Set it to default.
  • default_bundle_name: the Validation Context resource name to use for the default X.509 bundle with Envoy SDS. Set it to null.
  • default_all_bundles_name: the Validation Context resource name to use for all bundles (including federated) with Envoy SDS. Set it to ROOTCA.

This will allow Envoy to get federated bundles directly from SPIRE.
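
Put together, these settings correspond to an sds block such as the following in the SPIRE Agent configuration file (HCL). This is a minimal sketch showing only the values from the table above, and it assumes you are editing the agent configuration directly rather than through Helm values:

  agent {
    # ... existing agent configuration ...

    # SDS resource names Envoy uses to request certificates and validation contexts
    sds {
      default_svid_name        = "default"
      default_bundle_name      = "null"
      default_all_bundles_name = "ROOTCA"
    }
  }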

Create federated registration entries

  • If using the SPIRE Controller Manager, create federated entries for workloads by setting the federatesWith field of the ClusterSPIFFEID CR to the trust domains you want the pod to federate with:

     apiVersion: spire.spiffe.io/v1alpha1
     kind: ClusterSPIFFEID
     metadata:
       name: federation
     spec:
       spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
       podSelector:
         matchLabels:
           spiffe.io/spire-managed-identity: "true"
       federatesWith: ["example.io", "example.ai"]
  • For manual registration see Create Registration Entries for Federation.
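
For reference, a manually created entry federates by passing -federatesWith to spire-server entry create. The sketch below reuses the sleep registration from Option 2 and the example.io trust domain above, and assumes the same server pod, parent ID, and socket path:

  $ kubectl exec -n spire-server "$SPIRE_SERVER_POD" -- \
      /opt/spire/bin/spire-server entry create \
      -spiffeID spiffe://example.org/ns/default/sa/sleep \
      -parentID spiffe://example.org/ns/spire/sa/spire-agent \
      -selector k8s:ns:default \
      -federatesWith spiffe://example.io \
      -socketPath /run/spire/sockets/server.sock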

Clean up SPIRE

Remove SPIRE by uninstalling its Helm charts:

  1. $ helm delete -n spire-server spire
  1. $ helm delete -n spire-server spire-crds