Importing virtual machine images to block storage with data volumes
You can import an existing virtual machine image into your OKD cluster. OKD Virtualization uses data volumes to automate the import of data and the creation of an underlying persistent volume claim (PVC).
When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file systems in the virtual machine might need to be expanded. The resizing procedure varies based on the operating system that is installed on the virtual machine. See the operating system documentation for details.
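For example, on a Linux guest whose root file system is ext4 on the first partition of /dev/vda, expanding the partition and file system might look like the following sketch; the device name and tools vary by distribution:
$ sudo growpart /dev/vda 1
$ sudo resize2fs /dev/vda1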
Prerequisites
- If you require scratch space according to the CDI supported operations matrix, you must first define a storage class or prepare CDI scratch space for this operation to complete successfully.
About data volumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OKD Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
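You can check whether a data volume has finished preparing its underlying PVC. For example, the following quick check lists data volumes in the current namespace (dv is the short name that CDI registers for DataVolume; output columns vary by CDI version):
$ oc get dv
The PHASE column reports Succeeded once the PVC is populated and the virtual machine can be started.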
About block persistent volumes
A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.
Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification.
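For illustration, a minimal PVC that requests a raw block volume might look like the following sketch; the name and size are placeholders:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block-pvc>
spec:
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi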
Creating a local block persistent volume
Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image.
Procedure
Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (twenty 100 MB blocks):
$ dd if=/dev/zero of=<loop10> bs=100M count=20
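Optionally, confirm that the file has the expected size before mounting it (a quick check; loop10 is the file created above):
$ ls -lh <loop10>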
Mount the loop10 file as a loop device.
$ losetup </dev/loop10> <loop10> (1) (2)
1 File path where the loop device is mounted.
2 The file created in the previous step to be mounted as the loop device.
Create a PersistentVolume manifest that references the mounted loop device.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <local-block-pv10>
  annotations:
spec:
  local:
    path: </dev/loop10> (1)
  capacity:
    storage: <2Gi>
  volumeMode: Block (2)
  storageClassName: local (3)
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <node01> (4)
1 The path of the loop device on the node.
2 Specifies it is a block PV.
3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used.
4 The node on which the block device was mounted.
Create the block PV.
# oc create -f <local-block-pv10.yaml> (1)
1 The file name of the persistent volume created in the previous step.
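Optionally, verify that the new PV exists and is available before using it; the PV name matches the manifest above:
$ oc get pv <local-block-pv10>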
Importing a virtual machine image to a block persistent volume using data volumes
You can import an existing virtual machine image into your OKD cluster. OKD Virtualization uses data volumes to automate the import of data and the creation of an underlying persistent volume claim (PVC). You can then reference the data volume in a virtual machine manifest, as shown in the sketch that follows.
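A minimal sketch of such a virtual machine manifest, assuming the data volume name used later in this procedure; the memory request and disk bus are illustrative values:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm-fedora>
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: datavolumedisk1
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: datavolumedisk1
          dataVolume:
            name: <import-pv-datavolume>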
Prerequisites
- A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
- An HTTP or s3 endpoint where the image is hosted, along with any authentication credentials needed to access the data source.
- At least one available block PV.
Procedure
If your data source requires authentication credentials, edit the endpoint-secret.yaml file, and apply the updated configuration to the cluster.
Edit the endpoint-secret.yaml file with your preferred text editor:
apiVersion: v1
kind: Secret
metadata:
  name: <endpoint-secret>
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "" (1)
  secretKey: "" (2)
1 Optional: your key or user name, base64 encoded.
2 Optional: your secret or password, base64 encoded.
Update the secret by running the following command:
$ oc apply -f endpoint-secret.yaml
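The values in the data stanza must be base64 encoded. For example, to encode an access key before pasting it into endpoint-secret.yaml (the -n flag keeps a trailing newline out of the encoded value; <access_key_id> is a placeholder):
$ echo -n "<access_key_id>" | base64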
Create a DataVolume manifest that specifies the data source for the image you want to import and volumeMode: Block so that an available block PV is used.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <import-pv-datavolume> (1)
spec:
  storageClassName: local (2)
  source:
    http:
      url: <http://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2> (3)
      secretRef: <endpoint-secret> (4)
  pvc:
    volumeMode: Block (5)
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi>
1 The name of the data volume.
2 Optional: Set the storage class or omit it to accept the cluster default.
3 The HTTP source of the image to import.
4 Only required if the data source requires authentication.
5 Required for importing to a block PV.
Create the data volume to import the virtual machine image by running the following command:
$ oc create -f <import-pv-datavolume.yaml> (1)
1 The file name of the data volume that you created in the previous step.
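The import runs asynchronously. You can watch the data volume until the import finishes; the PHASE column reports Succeeded on completion:
$ oc get datavolume <import-pv-datavolume> -w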
CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW** | ✓ RAW | ✓ RAW* | ✓ RAW*
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
CDI now uses the OKD cluster-wide proxy configuration.
Additional resources
- Configure preallocation mode to improve write performance for data volume operations.
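As an illustration, preallocation is enabled per data volume through the spec.preallocation field. A minimal sketch, with a placeholder name and source URL:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <preallocated-datavolume>
spec:
  preallocation: true
  source:
    http:
      url: <http://example.com/image.qcow2>
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi>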