In this tutorial you will deploy a Consul datacenter to Elastic Kubernetes Service (EKS) on Amazon Web Services (AWS) with HashiCorp's official Helm chart or the Consul K8S CLI. You do not need to override any values in the Helm chart for a basic installation; however, in this guide you will create a config file with custom values to allow access to the Consul UI.
Security Warning: This tutorial is not for production use. By default, the chart installs an insecure configuration of Consul. Refer to the Kubernetes deployment guide to learn how to secure Consul on Kubernetes for production. Additionally, it is highly recommended to use a properly secured Kubernetes cluster, or to make sure that you understand and enable the recommended security features.
Prerequisites
Installing aws-cli, kubectl, and helm CLI tools
To follow this tutorial, you will need the aws-cli binary installed, as well as kubectl and helm.
Reference the AWS documentation for instructions on setting up aws-cli.
Reference the Kubernetes and Helm documentation for instructions on downloading kubectl and helm.
Installing helm and kubectl with Homebrew
Homebrew allows you to quickly install both helm and kubectl on macOS and Linux.
Install kubectl with Homebrew.
$ brew install kubernetes-cli
Install helm with Homebrew.
$ brew install helm
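If you want to confirm the tools installed correctly, you can check their versions (the exact output will vary with your environment):
$ kubectl version --client
$ helm version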
VPC and security group creation
The AWS documentation for creating an EKS cluster assumes that you have a VPC and a dedicated security group created. Refer to the Amazon VPC and EKS documentation for instructions on how to create these.
You will need the SecurityGroups, VpcId, and SubnetId values for the EKS cluster creation step.
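For example, if you created the VPC with the CloudFormation template referenced in the EKS getting started guide, one way to retrieve these values is from the stack outputs (my-eks-vpc is a placeholder stack name):
$ aws cloudformation describe-stacks --stack-name my-eks-vpc --query "Stacks[0].Outputs"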
Create an EKS cluster
At least a three-node EKS cluster is required to deploy Consul using the official Consul Helm chart. Create a three-node cluster on EKS by following the EKS AWS documentation.
Note: If using eksctl, you can create a three-node cluster with the following command:
$ eksctl create cluster --name=<YOUR CLUSTER NAME> --region=<YOUR REGION> --nodes=3
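Cluster creation can take several minutes. If you want to check on its progress from another terminal, one option is to query the cluster state with the AWS CLI, which reports "ACTIVE" once the control plane is ready:
$ aws eks describe-cluster --name <YOUR CLUSTER NAME> --query "cluster.status"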
Configure kubectl to talk to your cluster
Setting up kubectl to talk to your EKS cluster should be as simple as running the following:
$ aws eks update-kubeconfig --region <region where you deployed your cluster> --name <your cluster name>
You can then run the command kubectl cluster-info to verify you are connected to your Kubernetes cluster:
$ kubectl cluster-info
Kubernetes master is running at https://<your K8s master location>.eks.amazonaws.com
CoreDNS is running at https://<your CoreDNS location>.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
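You can also confirm that all three worker nodes joined the cluster:
$ kubectl get nodes
All three nodes should report a STATUS of Ready.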
You can also review the AWS documentation for configuring kubectl to communicate with your EKS cluster.
Deploy Consul
You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. By default, these methods will install a total of three Consul servers as well as one client per Kubernetes node into your EKS cluster. You can review the Consul Kubernetes installation documentation to learn more about these installation options.
Create a values file
To customize your deployment, you can pass a YAML file to be used during the deployment; it will override the Helm chart's default values. Create a file named helm-consul-values.yaml with the following contents. These values change your datacenter name and enable the Consul UI via a service.
global:
  name: consul
  datacenter: hashidc1
ui:
  enabled: true
  service:
    type: LoadBalancer
Install Consul in your cluster
You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
$ helm install --values helm-consul-values.yaml consul hashicorp/consul --version "0.40.0"
Note: You can review the official Helm chart values to learn more about the default settings.
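If you prefer the Consul K8S CLI instead of Helm, a roughly equivalent installation (a sketch, assuming the consul-k8s binary is installed locally) passes the same values file:
$ consul-k8s install -config-file=helm-consul-values.yaml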
Run the command kubectl get pods to verify that three servers and three clients were successfully created.
$ kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
consul-5fkt7      1/1     Running   0          69s
consul-8zkjc      1/1     Running   0          69s
consul-lnr74      1/1     Running   0          69s
consul-server-0   1/1     Running   0          69s
consul-server-1   1/1     Running   0          69s
consul-server-2   1/1     Running   0          69s
Accessing the Consul UI
Since you enabled the Consul UI in your values file, you can run the command kubectl get services to find the load balancer DNS name or external IP of your UI service.
$ kubectl get services
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                                                                    AGE
consul-dns      ClusterIP      172.20.39.92     <none>                                                                    53/TCP,53/UDP                                                              8m17s
consul-server   ClusterIP      None             <none>                                                                    8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   8m17s
consul-ui       LoadBalancer   172.20.223.228   aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com   80:32026/TCP                                                               8m17s
kubernetes      ClusterIP      172.20.0.1       <none>                                                                    443/TCP                                                                    21m
You can verify that, in this case, the UI is exposed at http://aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com over port 80. Navigate to the load balancer DNS name or external IP in your browser to interact with the Consul UI.
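If you want to capture the load balancer hostname directly, for example for scripting, one option is a kubectl jsonpath query (a sketch; on AWS the ingress entry exposes a DNS hostname rather than an IP):
$ kubectl get service consul-ui -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'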
Click the Nodes tab and you can observe several Consul servers and agents running.
Accessing Consul with the CLI and API
In addition to accessing Consul with the UI, you can manage Consul by directly connecting to the pod with kubectl.
You can also use the Consul HTTP API by communicating to the local agent running on the Kubernetes node. Feel free to explore the Consul API documentation if you are interested in learning more about using the Consul HTTP API with Kubernetes.
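Since the consul-ui service forwards to Consul's HTTP port, one quick way to try the HTTP API is to query the catalog through the load balancer (substitute your own DNS name from the output above):
$ curl http://aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com/v1/catalog/nodes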
Kubectl
To access the pod and data directory, you can use kubectl exec to start a shell session on the pod.
$ kubectl exec --stdin --tty consul-server-0 -- /bin/sh
This will allow you to navigate the file system and run Consul CLI commands on the pod. For example, you can view the Consul members.
$ consul members
Node                        Address          Status  Type    Build   Protocol  DC        Segment
consul-server-0             10.0.3.70:8301   alive   server  1.10.3  2         hashidc1  <all>
consul-server-1             10.0.2.253:8301  alive   server  1.10.3  2         hashidc1  <all>
consul-server-2             10.0.1.39:8301   alive   server  1.10.3  2         hashidc1  <all>
ip-10-0-1-139.ec2.internal  10.0.1.148:8301  alive   client  1.10.3  2         hashidc1  <default>
ip-10-0-2-47.ec2.internal   10.0.2.59:8301   alive   client  1.10.3  2         hashidc1  <default>
ip-10-0-3-94.ec2.internal   10.0.3.225:8301  alive   client  1.10.3  2         hashidc1  <default>
When you have finished interacting with the pod, exit the shell.
$ exit
Using Consul environment variables
You can also access the Consul datacenter with your local Consul binary by setting environment variables. Refer to the Consul documentation to learn more about Consul environment variables.
In this case, since you are exposing HTTP via the load balancer/UI service, you can export the CONSUL_HTTP_ADDR variable to point to the load balancer DNS name (or external IP) of your Consul UI service:
$ export CONSUL_HTTP_ADDR=http://aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com:80
You can now use your local installation of the Consul binary to run Consul commands:
$ consul members
Node                        Address          Status  Type    Build   Protocol  DC        Partition  Segment
consul-server-0             10.0.3.70:8301   alive   server  1.10.3  2         hashidc1  default    <all>
consul-server-1             10.0.2.253:8301  alive   server  1.10.3  2         hashidc1  default    <all>
consul-server-2             10.0.1.39:8301   alive   server  1.10.3  2         hashidc1  default    <all>
ip-10-0-1-139.ec2.internal  10.0.1.148:8301  alive   client  1.10.3  2         hashidc1  default    <default>
ip-10-0-2-47.ec2.internal   10.0.2.59:8301   alive   client  1.10.3  2         hashidc1  default    <default>
ip-10-0-3-94.ec2.internal   10.0.3.225:8301  alive   client  1.10.3  2         hashidc1  default    <default>
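With CONSUL_HTTP_ADDR set, other Consul CLI commands work against the datacenter the same way. For example, you can list the nodes registered in the catalog:
$ consul catalog nodes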
Next steps
In this tutorial, you deployed a Consul datacenter to AWS Elastic Kubernetes Service using the official Helm chart or Consul K8S CLI. You also configured access to the Consul UI. To learn more about deployment best practices, review the Kubernetes Reference Architecture tutorial.