Installing Consul on Kubernetes
Consul can run directly on Kubernetes in either server or client mode. For pure-Kubernetes workloads, this enables Consul to also exist purely within Kubernetes. For heterogeneous workloads, Consul agents can join a server running inside or outside of Kubernetes.
You can install Consul on Kubernetes using either the Consul K8s CLI or the Consul Helm chart; both methods are described below.
Refer to the architecture section to learn more about the general architecture of Consul on Kubernetes. For a hands-on experience with Consul as a service mesh for Kubernetes, follow the Getting Started with Consul service mesh tutorial.
Consul K8s CLI Installation
We recommend using the Consul K8s CLI to install Consul on Kubernetes for single-cluster deployments. Once the CLI is installed, use it to install Consul on your cluster as described below.
Before beginning the installation process, verify that kubectl is already configured to authenticate to the Kubernetes cluster using a valid kubeconfig file.
The Homebrew package manager is required to complete the following installation instructions.
NOTE: To deploy a previous version of Consul on Kubernetes via the CLI, you will need to first download the specific version of the CLI that matches the version of the control plane that you would like to deploy. Please follow Install a specific version of Consul K8s CLI.
Install the HashiCorp tap, which is a repository of all Homebrew packages for HashiCorp:

$ brew tap hashicorp/tap
Install the Consul K8s CLI with the hashicorp/tap/consul-k8s formula:

$ brew install hashicorp/tap/consul-k8s
Issue the install subcommand to install Consul on Kubernetes. Refer to the Consul K8s CLI reference for details about all commands and available options. Without any additional options passed, the consul-k8s CLI installs Consul on Kubernetes using the Consul Helm chart’s default values. Below is an example that installs Consul on Kubernetes with Service Mesh and CRDs enabled. If you did not set the -auto-approve option to true, you will be prompted to proceed with the installation if the pre-install checks pass.

The pre-install checks may fail if existing PersistentVolumeClaims (PVCs) are detected. Refer to the uninstall instructions for information about removing PVCs.

$ consul-k8s install -set connectInject.enabled=true -set controller.enabled=true
==> Pre-Install Checks
No existing installations found.
✓ No previous persistent volume claims found
✓ No previous secrets found
==> Consul Installation Summary
Installation name: consul
Namespace: consul
Overrides:
connectInject:
enabled: true
controller:
enabled: true
Proceed with installation? (y/N) y
==> Running Installation
✓ Downloaded charts
--> creating 1 resource(s)
--> creating 45 resource(s)
--> beginning wait for 45 resources with timeout of 10m0s
✓ Consul installed into namespace "consul"
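The two -set overrides in the command above correspond one-to-one to Helm values. If you prefer to keep them in a file, the same configuration can be expressed as a values file (the consul-k8s install command also accepts a -config-file flag; verify the flag name against consul-k8s install -help for your CLI version):

```yaml
# Helm values equivalent to the -set flags used above
connectInject:
  enabled: true
controller:
  enabled: true
```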
(Optional) Issue the consul-k8s status command to quickly check the status of the installed Consul cluster.

$ consul-k8s status
==> Consul-K8s Status Summary
NAME | NAMESPACE | STATUS | CHARTVERSION | APPVERSION | REVISION | LAST UPDATED
---------+-----------+----------+--------------+------------+----------+--------------------------
consul | consul | deployed | 0.40.0 | 1.11.2 | 1 | 2022/01/31 16:58:51 PST
==> Config:
connectInject:
enabled: true
controller:
enabled: true
global:
name: consul
✓ Consul servers healthy (3/3)
✓ Consul clients healthy (3/3)
Helm Chart Installation
We recommend using the Consul Helm chart to install Consul on Kubernetes for multi-cluster installations that involve cross-partition or cross-datacenter communication. The Helm chart installs and configures all necessary components to run Consul. The configuration enables you to run a server cluster, a client cluster, or both.
For step-by-step tutorials on deploying Consul to Kubernetes, see our Deploy to Kubernetes collection. This collection includes configuration caveats for single-node deployments.
The Helm chart exposes several useful configuration options and automatically sets up complex resources, but it does not automatically operate Consul. You must still become familiar with how to monitor, back up, and upgrade the Consul cluster.
The Helm chart has no required configuration and will install a Consul cluster with default configurations. We strongly recommend learning about the configuration options prior to going to production.
Security Warning: By default, the chart will install an insecure configuration of Consul. This provides a less complicated out-of-box experience for new users, but is not appropriate for a production setup. We strongly recommend using a properly-secured Kubernetes cluster or making sure that you understand and enable the recommended security features. Currently, some of these features are not supported in the Helm chart and require additional manual configuration.
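As a rough sketch of what enabling the recommended security features looks like, the Helm chart exposes values for gossip encryption, TLS, and ACL bootstrapping. The keys below are taken from the Helm chart reference, and the secret name is an assumption; verify every key against the reference for your chart version:

```yaml
# Hedged sketch of a more secure configuration -- verify against the
# Helm Chart Reference for your chart version before using.
global:
  name: consul
  gossipEncryption:
    secretName: consul-gossip-encryption-key  # assumed name of a pre-created Kubernetes secret
    secretKey: key
  tls:
    enabled: true            # TLS for agent communication; UI served over HTTPS on 8501
  acls:
    manageSystemACLs: true   # chart bootstraps and manages the ACL system
```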
Prerequisites
The Consul Helm chart only supports Helm 3.2+. Install the latest version of the Helm CLI here: Installing Helm.
Installing Consul
Add the HashiCorp Helm Repository:
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
Verify that you have access to the consul chart:
$ helm search repo hashicorp/consul
NAME CHART VERSION APP VERSION DESCRIPTION
hashicorp/consul 0.39.0 1.11.1 Official HashiCorp Consul Chart
Prior to installing via Helm, ensure that the consul Kubernetes namespace does not exist, as installing into a dedicated namespace is recommended.

$ kubectl get namespace
NAME STATUS AGE
default Active 18h
kube-node-lease Active 18h
kube-public Active 18h
kube-system Active 18h
Issue the following command to install Consul with the default configuration using Helm. You can also install Consul into a dedicated namespace of your choosing by modifying the value of the -n flag for the Helm install.

$ helm install consul hashicorp/consul --set global.name=consul --create-namespace --namespace consul
NAME: consul
...
The Helm chart handles everything needed to set up a recommended Consul-on-Kubernetes deployment. After installation, a Consul cluster will be formed, a leader will be elected, and every node will have a running Consul agent.
Customizing Your Installation
If you want to customize your installation, create a config.yaml file to override the default settings. You can learn what settings are available by running helm inspect values hashicorp/consul or by reading the Helm Chart Reference.
Minimal config.yaml for Consul Service Mesh
The minimal settings to enable Consul Service Mesh are captured in the following config.yaml file:
global:
  name: consul
connectInject:
  enabled: true
controller:
  enabled: true
Once you’ve created your config.yaml file, run helm install with the --values flag:
$ helm install consul hashicorp/consul --create-namespace --namespace consul --values config.yaml
NAME: consul
...
Enable Consul Service Mesh on select namespaces
By default, Consul Service Mesh is enabled on almost all namespaces (with the exception of kube-system and local-path-storage) within a Kubernetes cluster. You can restrict this to a subset of namespaces by specifying a namespaceSelector that matches a label attached to each namespace denoting whether to enable Consul service mesh. In order to default to enabling service mesh on select namespaces by label, the connectInject.default value must be set to true.
global:
  name: consul
connectInject:
  enabled: true
  default: true
  namespaceSelector: |
    matchLabels:
      connect-inject: enabled
controller:
  enabled: true
Label the namespace(s) where you would like to enable Consul Service Mesh.
$ kubectl create ns foo
$ kubectl label namespace foo connect-inject=enabled
Next, run helm install with the --values flag:
$ helm install consul hashicorp/consul --create-namespace --namespace consul --values config.yaml
NAME: consul
...
Updating your Consul on Kubernetes configuration
If you’ve already installed Consul and want to make changes, you’ll need to run helm upgrade. See Upgrading for more details.
Viewing the Consul UI
The Consul UI is enabled by default when using the Helm chart. For security reasons, it isn’t exposed via a LoadBalancer service by default, so you must use kubectl port-forward to visit the UI.
TLS Disabled
If running with TLS disabled, the Consul UI will be accessible via http on port 8500:
$ kubectl port-forward service/consul-server --namespace consul 8500:8500
...
Once the port is forwarded, navigate to http://localhost:8500.
TLS Enabled
If running with TLS enabled, the Consul UI will be accessible via https on port 8501:
$ kubectl port-forward service/consul-server --namespace consul 8501:8501
...
Once the port is forwarded, navigate to https://localhost:8501.
You’ll need to click through an SSL warning from your browser because the Consul certificate authority is self-signed and not in the browser’s trust store.
ACLs Enabled
If ACLs are enabled, you will need to input an ACL token into the UI in order to see all resources and make modifications.
To retrieve the bootstrap token that has full permissions, run:
$ kubectl get secrets/consul-bootstrap-acl-token --template='{{.data.token | base64decode }}'
e7924dd1-dc3f-f644-da54-81a73ba0a178%
Then paste the token into the UI under the ACLs tab (without the trailing %, which is only the shell’s marker for output lacking a final newline).
NOTE: If using multi-cluster federation, your kubectl context must be in the primary datacenter to retrieve the bootstrap token, since secondary datacenters use a separate token with fewer permissions.
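The base64decode in the --template above is what turns the secret’s stored value into the plaintext token. As a local illustration of that decoding step (the encoded string below is sample data, not a real token), the standard base64 tool does the same job:

```shell
# Decode a base64 value the same way the kubectl template's
# base64decode function does. 'aGVsbG8=' is sample data only.
encoded='aGVsbG8='
token=$(printf '%s' "$encoded" | base64 -d)
printf '%s\n' "$token"   # prints: hello
```

In practice you would capture the real token directly into the environment variable that the consul CLI and HTTP API clients read by default, for example: export CONSUL_HTTP_TOKEN=$(kubectl get secrets/consul-bootstrap-acl-token --template='{{.data.token | base64decode }}').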
Exposing the UI via a service
If you want to expose the UI via a Kubernetes Service, configure the ui.service chart values. This service allows requests to the Consul servers, so it should not be open to the world.
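For example, a values file along these lines would publish the UI through a LoadBalancer service; the ui.service keys are taken from the Helm chart reference and should be confirmed for your chart version:

```yaml
# Sketch: expose the Consul UI via a LoadBalancer service.
# This service can reach the Consul servers, so restrict access
# (security groups, firewall rules) rather than leaving it open
# to the world.
ui:
  enabled: true
  service:
    type: LoadBalancer
```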
Accessing the Consul HTTP API
The Consul HTTP API should be accessed by communicating with the local agent running on the same node. While technically any listening agent (client or server) can respond to the HTTP API, communicating with the local agent provides important caching behavior and allows you to use the simpler /agent endpoints for services and checks.
For Consul installed via the Helm chart, a client agent is installed on each Kubernetes node. This is explained in the architecture section. To access the agent, you may use the downward API.
An example pod specification is shown below. In addition to pods, anything with a pod template can also access the downward API and can therefore also access Consul: StatefulSets, Deployments, Jobs, etc.
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
    - name: example
      image: 'consul:latest'
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command:
        - '/bin/sh'
        - '-ec'
        - |
          export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
          consul kv put hello world
  restartPolicy: Never
An example Deployment is also shown below to demonstrate how the host IP can be accessed from nested pod specifications:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul-example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul-example
  template:
    metadata:
      labels:
        app: consul-example
    spec:
      containers:
        - name: example
          image: 'consul:latest'
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          command:
            - '/bin/sh'
            - '-ec'
            - |
              export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
              consul kv put hello world
Next Steps
If you are still considering a move to Kubernetes, or to Consul on Kubernetes specifically, our Migrate to Microservices with Consul Service Mesh on Kubernetes collection uses an example application written by a fictional company to illustrate why and how organizations can migrate from monolith to microservices using Consul service mesh on Kubernetes. The case study in this collection provides valuable context for developing services that leverage Consul at any stage of your microservices journey.