Persistent Volume Snapshots
Overview
Persistent Volume Snapshots are a Technology Preview feature.
Many storage systems provide the ability to create “snapshots” of a persistent volume (PV) to protect against data loss. The external snapshot controller and provisioner provide a means to use this feature in the OKD cluster and to handle volume snapshots through the OKD API.
This document describes the current state of volume snapshot support in OKD. Familiarity with PVs, persistent volume claims (PVCs), and dynamic provisioning is recommended.
Features
- Create a snapshot of a PersistentVolume bound to a PersistentVolumeClaim
- List existing VolumeSnapshots
- Delete an existing VolumeSnapshot
- Create a new PersistentVolume from an existing VolumeSnapshot
- Supported PersistentVolume types:
  - AWS Elastic Block Store (EBS)
  - Google Compute Engine (GCE) Persistent Disk (PD)
Installation and Setup
The external controller and provisioner are the components that provide volume snapshotting. They run in the cluster: the controller creates, deletes, and reports events on volume snapshots, and the provisioner creates new PersistentVolumes from volume snapshots. See Create Snapshot and Restore Snapshot for more information.
Starting the External Controller and Provisioner
The external controller and provisioner services are distributed as container images and can be run in the OKD cluster like any other pod. RPM versions of the controller and provisioner are also available.
To allow the containers to manage the API objects, the administrator must configure the necessary role-based access control (RBAC) rules:
Create a ServiceAccount and ClusterRole:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: snapshot-controller-runner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: snapshot-controller-role
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create", "list", "watch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshots"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshotdatas"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Bind the rules by creating a ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: snapshot-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: snapshot-controller-role
subjects:
- kind: ServiceAccount
  name: snapshot-controller-runner
  namespace: default
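For example, assuming the two manifests above are saved together as snapshot-rbac.yaml (a hypothetical file name), the objects can be created in the default project to match the binding's subject namespace:

$ oc create -f snapshot-rbac.yaml -n default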
If the external controller and provisioner are deployed in Amazon Web Services (AWS), they must be able to authenticate using an AWS access key. To provide the credentials to the pod, the administrator creates a new secret:
apiVersion: v1
kind: Secret
metadata:
  name: awskeys
type: Opaque
data:
  access-key-id: <base64 encoded AWS_ACCESS_KEY_ID>
  secret-access-key: <base64 encoded AWS_SECRET_ACCESS_KEY>
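The base64-encoded values can be generated with the standard base64 utility, for example (the key values shown are placeholders):

$ echo -n "<AWS_ACCESS_KEY_ID>" | base64
$ echo -n "<AWS_SECRET_ACCESS_KEY>" | base64

The -n flag prevents a trailing newline from being encoded into the value.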
Deploy the external controller and provisioner on AWS (note that both containers in the pod use the secret to access the AWS cloud provider API):
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: snapshot-controller
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: snapshot-controller
    spec:
      serviceAccountName: snapshot-controller-runner
      containers:
      - name: snapshot-controller
        image: "registry.access.redhat.com/openshift3/snapshot-controller:latest"
        imagePullPolicy: "IfNotPresent"
        args: ["-cloudprovider", "aws"]
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: awskeys
              key: access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: awskeys
              key: secret-access-key
      - name: snapshot-provisioner
        image: "registry.access.redhat.com/openshift3/snapshot-provisioner:latest"
        imagePullPolicy: "IfNotPresent"
        args: ["-cloudprovider", "aws"]
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: awskeys
              key: access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: awskeys
              key: secret-access-key
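Assuming the deployment manifest above is saved as snapshot-controller.yaml (a hypothetical file name), create it in the same project as the service account:

$ oc create -f snapshot-controller.yaml -n default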
For GCE, there is no need to use secrets to access the GCE cloud provider API. The administrator can proceed with the deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: snapshot-controller
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: snapshot-controller
    spec:
      serviceAccountName: snapshot-controller-runner
      containers:
      - name: snapshot-controller
        image: "registry.access.redhat.com/openshift3/snapshot-controller:latest"
        imagePullPolicy: "IfNotPresent"
        args: ["-cloudprovider", "gce"]
      - name: snapshot-provisioner
        image: "registry.access.redhat.com/openshift3/snapshot-provisioner:latest"
        imagePullPolicy: "IfNotPresent"
        args: ["-cloudprovider", "gce"]
Managing Snapshot Users
Depending on the cluster configuration, it might be necessary to allow non-administrator users to manipulate the VolumeSnapshot objects on the API server. This can be done by creating a ClusterRole bound to a particular user or group.
For example, assume the user ‘alice’ needs to work with snapshots in the cluster. The cluster administrator completes the following steps:
Define a new ClusterRole:

apiVersion: v1
kind: ClusterRole
metadata:
  name: volumesnapshot-admin
rules:
- apiGroups:
  - "volumesnapshot.external-storage.k8s.io"
  attributeRestrictions: null
  resources:
  - volumesnapshots
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
Bind the cluster role to the user ‘alice’ by creating a ClusterRoleBinding object:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: volumesnapshot-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: volumesnapshot-admin
subjects:
- kind: User
  name: alice
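Equivalently, the cluster administrator can grant the role from the command line:

$ oc adm policy add-cluster-role-to-user volumesnapshot-admin alice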
This is only an example of API access configuration.
Lifecycle of a Volume Snapshot and Volume Snapshot Data
Persistent Volume Claim and Persistent Volume
The PersistentVolumeClaim is bound to a PersistentVolume. The PersistentVolume type must be one of the snapshot-supported persistent volume types.
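For example, you can confirm that a claim (here, the ebs-pvc claim used in the examples below) is bound before snapshotting it:

$ oc get pvc ebs-pvc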
Snapshot Promoter
To create a StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: snapshot-promoter
provisioner: volumesnapshot.external-storage.k8s.io/snapshot-promoter
This StorageClass is necessary to restore a PersistentVolume from a previously created VolumeSnapshot.
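Assuming the class definition above is saved as snapshot-promoter.yaml (a hypothetical file name), create it and verify that it is registered:

$ oc create -f snapshot-promoter.yaml
$ oc get storageclass snapshot-promoter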
Create Snapshot
To take a snapshot of a PV, create a new VolumeSnapshot object:
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-demo
spec:
  persistentVolumeClaimName: ebs-pvc
persistentVolumeClaimName is the name of the PersistentVolumeClaim bound to a PersistentVolume. This particular PV is snapshotted.
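For example, assuming the manifest above is saved as snapshot.yaml (a hypothetical file name), create the snapshot with:

$ oc create -f snapshot.yaml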
A VolumeSnapshotData object is then automatically created based on the VolumeSnapshot. The relationship between VolumeSnapshot and VolumeSnapshotData is similar to the relationship between PersistentVolumeClaim and PersistentVolume.
Depending on the PV type, the operation might go through several phases, which are reflected by the VolumeSnapshot status:

1. The new VolumeSnapshot object is created.
2. The controller starts the snapshot operation. The snapshotted PersistentVolume might need to be frozen and the applications paused.
3. The storage system finishes creating the snapshot (the snapshot is “cut”) and the snapshotted PersistentVolume might return to normal operation. The snapshot itself is not yet ready. The last status condition is of Pending type with status value True. A new VolumeSnapshotData object is created to represent the actual snapshot.
4. The newly created snapshot is complete and ready to use. The last status condition is of Ready type with status value True.
It is the user’s responsibility to ensure data consistency (stop the pod/application, flush caches, freeze the file system, and so on).
In case of error, the VolumeSnapshot status is appended with an Error condition.
To display the VolumeSnapshot status:
$ oc get volumesnapshot -o yaml
The status is displayed.
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  clusterName: ""
  creationTimestamp: 2017-09-19T13:58:28Z
  generation: 0
  labels:
    Timestamp: "1505829508178510973"
  name: snapshot-demo
  namespace: default
  resourceVersion: "780"
  selfLink: /apis/volumesnapshot.external-storage.k8s.io/v1/namespaces/default/volumesnapshots/snapshot-demo
  uid: 9cc5da57-9d42-11e7-9b25-90b11c132b3f
spec:
  persistentVolumeClaimName: ebs-pvc
  snapshotDataName: k8s-volume-snapshot-9cc8813e-9d42-11e7-8bed-90b11c132b3f
status:
  conditions:
  - lastTransitionTime: null
    message: Snapshot created successfully
    reason: ""
    status: "True"
    type: Ready
  creationTimestamp: null
Restore Snapshot
To restore a PV from a VolumeSnapshot, create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: snapshot-pv-provisioning-demo
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: snapshot-demo
spec:
  storageClassName: snapshot-promoter
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi # assumed example size; request at least the size of the snapshotted volume
- annotations: snapshot.alpha.kubernetes.io/snapshot is the name of the VolumeSnapshot to be restored.
- storageClassName: the StorageClass created by the administrator for restoring VolumeSnapshots.
- accessModes and resources: required PVC fields; the values shown are examples, and the requested size must cover the snapshotted data.
A new PersistentVolume is created and bound to the PersistentVolumeClaim. The process may take several minutes depending on the PV type.
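You can watch the claim until it is bound to the newly provisioned volume:

$ oc get pvc snapshot-pv-provisioning-demo -w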
Delete Snapshot
To delete snapshot-demo:
$ oc delete volumesnapshot/snapshot-demo
The VolumeSnapshotData bound to the VolumeSnapshot is automatically deleted.
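You can confirm the cleanup by listing the remaining snapshot objects:

$ oc get volumesnapshot,volumesnapshotdata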