Accessing Virtual Machines

Graphical and Serial Console Access

Once a virtual machine is started, you can connect to the consoles it exposes. Usually there are two types of consoles:

  • Serial Console
  • Graphical Console (VNC)

Note: You need to have virtctl installed to gain access to the VirtualMachineInstance.
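If virtctl is not installed yet, one way to obtain it is to download the binary from the KubeVirt GitHub releases. The sketch below is only an example: it assumes a linux/amd64 client and uses a placeholder version, so adjust both to match your cluster.

# Sketch: download virtctl (placeholder version and linux/amd64 client assumed)
VERSION=v1.0.0   # placeholder, use the KubeVirt release running in your cluster
curl -L -o virtctl \
  https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64
chmod +x virtctl
sudo install virtctl /usr/local/bin/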

Accessing the serial console

The serial console of a virtual machine can be accessed by using the console command:

$ virtctl console --kubeconfig=$KUBECONFIG testvmi

Accessing the graphical console (VNC)

Accessing the graphical console of a virtual machine is usually done through VNC, which requires remote-viewer. Once the tool is installed you can access the graphical console using:

$ virtctl vnc --kubeconfig=$KUBECONFIG testvmi

If you only need to open a VNC proxy, without launching the remote-viewer command, you can do so using:

$ virtctl vnc --kubeconfig=$KUBECONFIG --proxy-only testvmi

This prints the port number on your local machine to which you can then connect manually using any VNC viewer.
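For example, if the proxy reported port 37541 (an illustrative value; the actual port differs per invocation), you could connect with remote-viewer:

# Connect to the locally proxied VNC port printed by virtctl
# (37541 is only an example value)
remote-viewer vnc://127.0.0.1:37541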

Debugging console access

Should the connection fail, you can use the -v flag to get more verbose output from both virtctl and the remote-viewer tool in order to troubleshoot the problem.

$ virtctl vnc --kubeconfig=$KUBECONFIG testvmi -v 4

Note: If you are using virtctl via ssh on a remote machine, you need to forward the X session to your machine (look up the -X and -Y flags of ssh if you are not familiar with them). Alternatively, you can proxy the apiserver port to your machine with ssh, either directly or in combination with kubectl proxy.
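For example, both alternatives can be sketched as follows (host names, user names, and ports are illustrative):

# Option 1: forward the X session so remote-viewer opens its window on your machine
ssh -X user@remote-machine

# Option 2: tunnel the Kubernetes API port to your machine and run virtctl locally,
# pointing your local kubeconfig's server at https://127.0.0.1:6443
# (you may need to adjust TLS verification for the changed hostname)
ssh -L 6443:api.my-cluster.example:6443 user@remote-machine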

RBAC Permissions for Console/VNC Access

Using Default RBAC ClusterRoles

Every KubeVirt installation after version v0.5.1 comes with a set of default RBAC ClusterRoles that can be used to grant users access to VirtualMachineInstances.

The kubevirt.io:admin and kubevirt.io:edit ClusterRoles have console and VNC access permissions built into them. Binding either of these roles to a user gives them the ability to use virtctl to access console and VNC.
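For example, a user can be granted these permissions in a single namespace by binding the kubevirt.io:edit ClusterRole (the user and namespace names below are placeholders):

# Grant the user "alice" the kubevirt.io:edit role (which includes console/VNC
# access) in the "demo" namespace
kubectl create rolebinding alice-vm-edit \
  --clusterrole=kubevirt.io:edit \
  --user=alice \
  --namespace=demo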

With Custom RBAC ClusterRole

The default KubeVirt ClusterRoles grant access to more than just console and VNC. In the event that an admin would like to craft a custom role that targets only console and VNC, the ClusterRole below demonstrates how that can be done.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: allow-vnc-console-access
rules:
  - apiGroups:
      - subresources.kubevirt.io
    resources:
      - virtualmachineinstances/console
      - virtualmachineinstances/vnc
    verbs:
      - get

The ClusterRole above provides access to virtual machines across all namespaces.

To reduce the scope to a single namespace, bind this ClusterRole using a RoleBinding that targets that namespace.
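For example, a RoleBinding for the custom ClusterRole above could look like this (the user and namespace names are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-vnc-console-access
  namespace: demo              # placeholder namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: allow-vnc-console-access
subjects:
- kind: User
  name: alice                  # placeholder user
  apiGroup: rbac.authorization.k8s.io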

SSH Access

A common operational pattern used when managing virtual machines is to inject public ssh keys into the virtual machines at boot. This allows automation tools (like Ansible) to provision the virtual machine. It also gives operators a way of gaining secure, passwordless access to a virtual machine.

KubeVirt provides multiple ways to inject ssh public keys into a virtual machine. In general, these methods fall into two categories: static key injection, which places keys on the virtual machine the first time it is booted, and dynamic injection, which allows keys to be updated both at boot and at runtime.

Static SSH Key Injection via Cloud Init

Users creating virtual machines can provide startup scripts to their virtual machines, allowing any number of custom operations to take place. Placing public ssh keys into a cloud-init startup script is one option for getting public keys into the virtual machine; however, there are other options that grant more flexibility.
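For reference, a minimal cloud-config sketch of that first option looks like this (the key below is a placeholder):

#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza...placeholder... user@example.com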

The VM's access credentials api allows statically injecting ssh public keys at creation time, independently of the cloud-init user data, by placing the ssh public key in a Kubernetes secret. This is useful because it allows people creating virtual machines to keep the application data in their cloud-init user data separate from the credentials used to access the virtual machine.

For example, someone can put their ssh key into a Kubernetes secret like this:

# Place ssh key into a secret
kubectl create secret generic my-pub-key --from-file=key1=/id_rsa.pub

Then assign that key to the virtual machine with the access credentials api using the configDrive propagation method. Note that the cloud-init user data is not touched: KubeVirt injects the ssh key into the virtual machine using the machine-generated cloud-init metadata, not the user data. This keeps the application user data separate from the credentials.

# Create a vm yaml that references the secret using the access credentials api.
cat << END > my-vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: my-vm
  name: my-vm
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: fedora-dv
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: local
      source:
        registry:
          url: docker://quay.io/kubevirt/fedora-cloud-container-disk-demo
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: my-vm
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: disk0
          - disk:
              bus: virtio
            name: disk1
        machine:
          type: ""
        resources:
          requests:
            cpu: 1000m
            memory: 1G
      terminationGracePeriodSeconds: 0
      accessCredentials:
      - sshPublicKey:
          source:
            secret:
              secretName: my-pub-key
          propagationMethod:
            configDrive: {}
      volumes:
      - dataVolume:
          name: fedora-dv
        name: disk0
      - cloudInitConfigDrive:
          userData: |
            #!/bin/bash
            echo "Application setup goes here"
        name: disk1
END
kubectl create -f my-vm.yaml
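Once the VM is started and has booted, you can verify the injected key with a plain ssh client. The sketch below assumes the VMI's pod-network address is reachable from the machine you connect from; in many environments you would instead go through a Service or another form of forwarding.

# Start the VM and look up the address reported on the VirtualMachineInstance
virtctl start my-vm --kubeconfig=$KUBECONFIG
kubectl get vmi my-vm -o jsonpath='{.status.interfaces[0].ipAddress}'
# Connect as the image's default user (fedora for this container disk)
ssh fedora@<ip-printed-above>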

Dynamic SSH Key Injection via Qemu Guest Agent

KubeVirt supports dynamically injecting public ssh keys at run time through the use of the qemu guest agent. This is achieved through the access credentials api by using the qemuGuestAgent propagation method.

Note: This requires the qemu guest agent to be installed within the guest.

Note: When using qemuGuestAgent propagation, the /home/$USER/.ssh/authorized_keys file will be owned by the guest agent. Changes to that file that are made outside of the qemu guest agent’s control will get deleted.

Note: More information about the motivation behind the access credentials api can be found in the pull request description that introduced this api.

In the example below, a secret contains an ssh key. When attached to the VM via the access credentials api with the qemuGuestAgent propagation method, the contents of the secret can be updated at any time, and the changes are automatically applied to the running VM. The secret can contain multiple public keys.

# Place ssh key into a secret
kubectl create secret generic my-pub-key --from-file=key1=/id_rsa.pub

Now reference this secret on the VM with the access credentials api using qemuGuestAgent propagation. This example installs and starts the qemu guest agent using a cloud-init script in order to ensure the agent is available.

# Create a vm yaml that references the secret using the access credentials api.
cat << END > my-vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: my-vm
  name: my-vm
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: fedora-dv
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: local
      source:
        registry:
          url: docker://quay.io/kubevirt/fedora-cloud-container-disk-demo
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: my-vm
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: disk0
          - disk:
              bus: virtio
            name: disk1
        machine:
          type: ""
        resources:
          requests:
            cpu: 1000m
            memory: 1G
      terminationGracePeriodSeconds: 0
      accessCredentials:
      - sshPublicKey:
          source:
            secret:
              secretName: my-pub-key
          propagationMethod:
            qemuGuestAgent:
              users:
              - "fedora"
      volumes:
      - dataVolume:
          name: fedora-dv
        name: disk0
      - cloudInitConfigDrive:
          userData: |
            #!/bin/bash
            sudo setenforce Permissive
            sudo yum install -y qemu-guest-agent
            sudo systemctl start qemu-guest-agent
        name: disk1
END
kubectl create -f my-vm.yaml
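Because qemuGuestAgent propagation is dynamic, rotating credentials is just a matter of updating the secret; the guest agent then reconciles the authorized_keys file in the running VM. A sketch (the key file name is illustrative):

# Replace the key material in the existing secret; the change is propagated to
# the running VM by the qemu guest agent without a restart
kubectl create secret generic my-pub-key \
  --from-file=key1=/new_id_rsa.pub \
  --dry-run=client -o yaml | kubectl apply -f -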