Installing Helm
There are two parts to Helm: the Helm client (helm) and the Helm server (Tiller). This guide shows how to install the client, and then proceeds to show two ways to install the server.
IMPORTANT: If you are responsible for ensuring your cluster is a controlled environment, especially when resources are shared, it is strongly recommended that you install Tiller using a secured configuration. For guidance, see Securing your Helm Installation.
Installing the Helm Client
The Helm client can be installed either from source, or from pre-built binary releases.
From the Binary Releases
Every release of Helm provides binary releases for a variety of OSes. These binary versions can be manually downloaded and installed.
- Download your desired version
- Unpack it (tar -zxvf helm-v2.0.0-linux-amd64.tgz)
- Find the helm binary in the unpacked directory, and move it to its desired destination (mv linux-amd64/helm /usr/local/bin/helm)

From there, you should be able to run the client: helm help.
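As a concrete sketch of steps 2 and 3, assuming the v2.0.0 Linux amd64 archive from step 1 is in the current directory (sudo is used here only because /usr/local/bin is usually root-owned):

$ tar -zxvf helm-v2.0.0-linux-amd64.tgz
$ sudo mv linux-amd64/helm /usr/local/bin/helm
$ helm help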
From Snap (Linux)
The Snap package for Helm is maintained by Snapcrafters.
sudo snap install helm --classic
From Homebrew (macOS)
Members of the Kubernetes community have contributed a Helm formula build to Homebrew. This formula is generally up to date.
brew install kubernetes-helm
(Note: There is also a formula for emacs-helm, which is a different project.)
From Chocolatey or scoop (Windows)
Members of the Kubernetes community have contributed a Helm package build to Chocolatey. This package is generally up to date.
choco install kubernetes-helm
The binary can also be installed via the scoop command-line installer.
scoop install helm
From Script
Helm now has an installer script that will automatically grab the latest version of the Helm client and install it locally.
You can fetch that script, and then execute it locally. It's well documented so that you can read through it and understand what it is doing before you run it.
$ curl -LO https://git.io/get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
Yes, you can run curl -L https://git.io/get_helm.sh | bash if you want to live on the edge.
From Canary Builds
“Canary” builds are versions of the Helm software that are built from the latest master branch. They are not official releases, and may not be stable. However, they offer the opportunity to test the cutting edge features.
Canary Helm binaries are stored at get.helm.sh.
From Source (Linux, macOS)
Building Helm from source is slightly more work, but is the best way to go if you want to test the latest (pre-release) Helm version.
You must have a working Go environment with glide installed.
$ cd $GOPATH
$ mkdir -p src/k8s.io
$ cd src/k8s.io
$ git clone https://github.com/helm/helm.git
$ cd helm
$ make bootstrap build
The bootstrap target will attempt to install dependencies, rebuild the vendor/ tree, and validate configuration.
The build target will compile helm and place it in bin/helm. Tiller is also compiled, and is placed in bin/tiller.
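Assuming the build succeeded, a quick sanity check is to ask the freshly built client for its own version; with the --client flag it does not need a cluster:

$ bin/helm version --client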
Installing Tiller
Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster. But for development, it can also be run locally, and configured to talk to a remote Kubernetes cluster.
Special Note for RBAC Users
Most cloud providers enable a feature called Role-Based Access Control - RBAC for short. If your cloud provider enables this feature, you will need to create a service account for Tiller with the right roles and permissions to access resources.
Check the Kubernetes Distribution Guide to see if there are any further points of interest on using Helm with your cloud provider. Also check out the guide on Tiller and Role-Based Access Control for more information on how to run Tiller in an RBAC-enabled Kubernetes cluster.
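As a hedged sketch of such a setup (the account name tiller and the broad cluster-admin role are illustrative choices, not requirements; see the RBAC guide for finer-grained roles):

$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller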
Easy In-Cluster Installation
The easiest way to install tiller into the cluster is simply to run helm init. This will validate that helm's local environment is set up correctly (and set it up if necessary). Then it will connect to whatever cluster kubectl connects to by default (kubectl config view). Once it connects, it will install tiller into the kube-system namespace.
After helm init, you should be able to run kubectl get pods --namespace kube-system and see Tiller running.
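In command form, a minimal install and check might look like this (the label selector is optional; it matches the app and name labels on Tiller's deployment, as shown in the --output example later in this document):

$ helm init
$ kubectl get pods --namespace kube-system -l app=helm,name=tiller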
You can explicitly tell helm init to...

- Install the canary build with the --canary-image flag
- Install a particular image (version) with --tiller-image
- Install to a particular cluster with --kube-context
- Install into a particular namespace with --tiller-namespace
- Install Tiller with a Service Account with --service-account (for RBAC enabled clusters)
- Install Tiller without mounting a service account with --automount-service-account false
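For instance, several of these flags can be combined. The context and namespace names below are only examples, and the image tag follows the gcr.io/kubernetes-helm/tiller pattern shown in the Upgrading Tiller section:

$ helm init --kube-context my-cluster --tiller-namespace tiller-world --tiller-image gcr.io/kubernetes-helm/tiller:v2.14.0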
Once Tiller is installed, running helm version should show you both the client and server version. (If it shows only the client version, helm cannot yet connect to the server. Use kubectl to see if any tiller pods are running.)
Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
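For example, to point the client at a Tiller running in another namespace (the namespace name here is only an illustration):

$ export TILLER_NAMESPACE=tiller-world
$ helm version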
Installing Tiller Canary Builds
Canary images are built from the master branch. They may not be stable, but they offer you the chance to test out the latest features.
The easiest way to install a canary image is to use helm init with the --canary-image flag:
$ helm init --canary-image
This will use the most recently built container image. You can always uninstall Tiller by deleting the Tiller deployment from the kube-system namespace using kubectl.
Running Tiller Locally
For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.
The process of building Tiller is explained above.
Once tiller has been built, simply start it:
$ bin/tiller
Tiller running on :44134
When Tiller is running locally, it will attempt to connect to the Kubernetes cluster that is configured by kubectl. (Run kubectl config view to see which cluster that is.)
You must tell helm to connect to this new local Tiller host instead of connecting to the one in-cluster. There are two ways to do this. The first is to specify the --host option on the command line. The second is to set the $HELM_HOST environment variable.
$ export HELM_HOST=localhost:44134
$ helm version # Should connect to localhost.
Client: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"db...", GitTreeState:"dirty"}
Server: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"a5...", GitTreeState:"dirty"}
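Alternatively, the same can be done per command with the --host flag instead of the environment variable:

$ helm version --host localhost:44134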
Importantly, even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.
Upgrading Tiller
As of Helm 2.2.0, Tiller can be upgraded using helm init --upgrade.
For older versions of Helm, or for manual upgrades, you can use kubectl to modify the Tiller image:
$ export TILLER_TAG=v2.0.0-beta.1 # Or whatever version you want
$ kubectl --namespace=kube-system set image deployments/tiller-deploy tiller=gcr.io/kubernetes-helm/tiller:$TILLER_TAG
deployment "tiller-deploy" image updated
Setting TILLER_TAG=canary will get the latest snapshot of master.
Deleting or Reinstalling Tiller
Because Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data. The recommended way of deleting Tiller is with kubectl delete deployment tiller-deploy --namespace kube-system, or more concisely helm reset.
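In command form:

$ kubectl delete deployment tiller-deploy --namespace kube-system
# or, equivalently:
$ helm reset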
Tiller can then be re-installed from the client with:
$ helm init
Advanced Usage
helm init provides additional flags for modifying Tiller's deployment manifest before it is installed.
Using --node-selectors
The --node-selectors flag allows us to specify the node labels required for scheduling the Tiller pod.
The example below will create the specified label under the nodeSelector property.
helm init --node-selectors "beta.kubernetes.io/os"="linux"
The installed deployment manifest will contain our node selector label.
...
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
...
Using --override
--override allows you to specify properties of Tiller's deployment manifest. Unlike the --set command used elsewhere in Helm, helm init --override manipulates the specified properties of the final manifest (there is no "values" file). Therefore you may specify any valid value for any valid property in the deployment manifest.
Override annotation
In the example below we use --override to add the revision property and set its value to 1.
helm init --override metadata.annotations."deployment\.kubernetes\.io/revision"="1"
Output:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
...
Override affinity
In the example below we set properties for node affinity. Multiple --override commands may be combined to modify different properties of the same list item.
helm init --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight"="1" --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].preference.matchExpressions[0].key"="e2e-az-name"
The specified properties are combined into the "preferredDuringSchedulingIgnoredDuringExecution" property's first list item.
...
spec:
  strategy: {}
  template:
    ...
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: e2e-az-name
                operator: ""
            weight: 1
...
Using --output
The --output flag allows us to skip the installation of Tiller's deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format. The output may then be modified with tools like jq and installed manually with kubectl.
In the example below we execute helm init with the --output json flag.
helm init --output json
The Tiller installation is skipped and the manifest is output to stdout in JSON format.
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "helm",
"name": "tiller"
},
"name": "tiller-deploy",
"namespace": "kube-system"
},
...
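One hedged way to work with this output (the filename is arbitrary) is to write the manifest to a file, adjust it by hand or with a tool such as jq, and then apply it yourself:

$ helm init --output yaml > tiller-deploy.yaml
# ... edit tiller-deploy.yaml as needed ...
$ kubectl apply -f tiller-deploy.yaml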
Storage backends
By default, tiller stores release information in ConfigMaps in the namespace where it is running.
Secret storage backend
As of Helm 2.7.0, there is now a beta storage backend that uses Secrets for storing release information. This was added for additional security in protecting charts in conjunction with the release of Secret encryption in Kubernetes.
To enable the secrets backend, you'll need to init Tiller with the following options:
helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'
Currently, if you want to switch from the default backend to the secrets backend, you'll have to do the migration for this on your own. When this backend graduates from beta, there will be a more official migration path.
SQL storage backend
As of Helm 2.14.0 there is now a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).
Using such a storage backend is particularly useful if your release information weighs more than 1MB (in which case, it can't be stored in ConfigMaps/Secrets because of internal limits in Kubernetes' underlying etcd key-value store).
To enable the SQL backend, you'll need to deploy a SQL database and init Tiller with the following options:
helm init \
--override \
'spec.template.spec.containers[0].args'='{--storage=sql,--sql-dialect=postgres,--sql-connection-string=postgresql://tiller-postgres:5432/helm?user=helm&password=changeme}'
PRODUCTION NOTES: it's recommended to change the username and password of the SQL database in production deployments. Enabling SSL is also a good idea. Last, but not least, perform regular backups/snapshots of your SQL database.
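As an illustration of those notes, the connection string from the example above might become something like the following; the host, credentials, and the sslmode parameter are placeholders following standard PostgreSQL connection-string syntax, not values Helm requires:

--sql-connection-string=postgresql://tiller-postgres:5432/helm?user=helm&password=use-a-strong-password&sslmode=require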
Currently, if you want to switch from the default backend to the SQL backend, you'll have to do the migration for this on your own. When this backend graduates from beta, there will be a more official migration path.
Conclusion
In most cases, installation is as simple as getting a pre-built helm binary and running helm init. This document covers additional cases for those who want to do more sophisticated things with Helm.
Once you have the Helm Client and Tiller successfully installed, you can move on to using Helm to manage charts.