Accessing Virtual Machines

Graphical and Serial Console Access

Once a virtual machine is started, you can connect to the consoles it exposes. Usually there are two types of console:

  • Serial Console
  • Graphical Console (VNC)

Note: You need to have virtctl installed to gain access to the VirtualMachineInstance.

Accessing the Serial Console

The serial console of a virtual machine can be accessed by using the console command:

  virtctl console testvm

Accessing the Graphical Console (VNC)

To access the graphical console of a virtual machine the VNC protocol is typically used. This requires remote-viewer to be installed. Once the tool is installed, you can access the graphical console using:

  virtctl vnc testvm

If you only want to open a VNC proxy without executing the remote-viewer command, this can be accomplished with:

  virtctl vnc --proxy-only testvm

This prints a port number on your machine, to which you can then manually connect using any VNC viewer.

Debugging console access

If the connection fails, you can use the -v flag to get more verbose output from both virtctl and the remote-viewer tool to troubleshoot the problem.

  virtctl vnc testvm -v 4

Note: If you are using virtctl via SSH on a remote machine, you need to forward the X session to your machine. Look up the -X and -Y flags of ssh if you are not familiar with that. As an alternative, you can proxy the API server port with SSH to your machine (either directly or in combination with kubectl proxy).

SSH Access

A common operational pattern used when managing virtual machines is to inject SSH public keys into the virtual machines at boot. This allows automation tools (like Ansible) to provision the virtual machine. It also gives operators a way of gaining secure and passwordless access to a virtual machine.

KubeVirt provides multiple ways to inject SSH public keys into a virtual machine.

In general, these methods fall into two categories:

  • Static key injection, which places keys on the virtual machine the first time it is booted.
  • Dynamic key injection, which allows keys to be dynamically updated both at boot and during runtime.

Once an SSH public key is injected into the virtual machine, it can be accessed via virtctl.

Static SSH public key injection via cloud-init

Users creating virtual machines can provide startup scripts to their virtual machines, allowing multiple customization operations.

One option for injecting public SSH keys into a VM is via a cloud-init startup script. However, there are more flexible options available.

The virtual machine’s access credential API allows statically injecting SSH public keys at startup time independently of the cloud-init user data by placing the SSH public key into a Kubernetes Secret. This allows keeping the application data in the cloud-init user data separate from the credentials used to access the virtual machine.

A Kubernetes Secret can be created from an SSH public key like this:

  # Place SSH public key into a Secret
  kubectl create secret generic my-pub-key --from-file=key1=id_rsa.pub

The Secret containing the public key is then assigned to a virtual machine using the access credentials API with the configDrive propagation method.

KubeVirt injects the SSH public key into the virtual machine by using the generated cloud-init metadata instead of the user data. This separates the application user data and user credentials.

Note: The cloud-init userData is not touched.

  # Create a VM referencing the Secret using propagation method configDrive
  kubectl create -f - <<EOF
  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: testvm
  spec:
    running: true
    template:
      spec:
        domain:
          devices:
            disks:
            - disk:
                bus: virtio
              name: containerdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
            rng: {}
          resources:
            requests:
              memory: 1024M
        terminationGracePeriodSeconds: 0
        accessCredentials:
        - sshPublicKey:
            source:
              secret:
                secretName: my-pub-key
            propagationMethod:
              configDrive: {}
        volumes:
        - containerDisk:
            image: quay.io/containerdisks/fedora:latest
          name: containerdisk
        - cloudInitConfigDrive:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
          name: cloudinitdisk
  EOF

Dynamic SSH public key injection via qemu-guest-agent

KubeVirt supports dynamic injection of SSH public keys at runtime by using the qemu-guest-agent. This is configured by using the access credentials API with the qemuGuestAgent propagation method.

Note: This requires the qemu-guest-agent to be installed within the guest.

Note: When using qemuGuestAgent propagation, the /home/$USER/.ssh/authorized_keys file will be owned by the guest agent. Changes to the file not made by the guest agent will be lost.

Note: More information about the motivation behind the access credentials API can be found in the pull request description that introduced the API.

In the example below the Secret containing the SSH public key is attached to the virtual machine via the access credentials API with the qemuGuestAgent propagation method. This allows updating the contents of the Secret at any time, which will result in the changes getting applied to the running virtual machine immediately. The Secret may also contain multiple SSH public keys.

  # Place SSH public key into a Secret
  kubectl create secret generic my-pub-key --from-file=key1=id_rsa.pub

Now reference this secret in the VirtualMachine spec with the access credentials API using qemuGuestAgent propagation.

  # Create a VM referencing the Secret using propagation method qemuGuestAgent
  kubectl create -f - <<EOF
  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: testvm
  spec:
    running: true
    template:
      spec:
        domain:
          devices:
            disks:
            - disk:
                bus: virtio
              name: containerdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
            rng: {}
          resources:
            requests:
              memory: 1024M
        terminationGracePeriodSeconds: 0
        accessCredentials:
        - sshPublicKey:
            source:
              secret:
                secretName: my-pub-key
            propagationMethod:
              qemuGuestAgent:
                users:
                - fedora
        volumes:
        - containerDisk:
            image: quay.io/containerdisks/fedora:latest
          name: containerdisk
        - cloudInitConfigDrive:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
              # Disable SELinux for now, so qemu-guest-agent can write the authorized_keys file
              # The selinux-policy is too restrictive currently, see open bugs:
              # - https://bugzilla.redhat.com/show_bug.cgi?id=1917024
              # - https://bugzilla.redhat.com/show_bug.cgi?id=2028762
              # - https://bugzilla.redhat.com/show_bug.cgi?id=2057310
              bootcmd:
              - setenforce 0
          name: cloudinitdisk
  EOF

Accessing the VMI using virtctl

The user can create a websocket backed network tunnel to a port inside the instance by using the virtualmachineinstances/portforward subresource of the VirtualMachineInstance.

One use-case for this subresource is to forward SSH traffic into the VirtualMachineInstance either from the CLI or a web-UI.

To connect to a VirtualMachineInstance from your local machine, virtctl provides a lightweight SSH client with the ssh command, which uses port forwarding. Refer to the command’s help for more details.

  virtctl ssh

To transfer files from or to a VirtualMachineInstance virtctl also provides a lightweight SCP client with the scp command. Its usage is similar to the ssh command. Refer to the command’s help for more details.

  virtctl scp

Using virtctl as proxy

If you prefer to use your local OpenSSH client, there are two ways of doing that in combination with virtctl.

Note: Most of this applies to the virtctl scp command too.

  1. The virtctl ssh command has a --local-ssh option. With this option virtctl wraps the local OpenSSH client transparently to the user. The executed SSH command can be viewed by increasing the verbosity (-v 3).

     virtctl ssh --local-ssh -v 3 testvm

  2. The virtctl port-forward command provides an option to tunnel a single port to your local stdout/stdin. This allows the command to be used in combination with the OpenSSH client’s ProxyCommand option.

     ssh -o 'ProxyCommand=virtctl port-forward --stdio=true vmi/testvm.mynamespace 22' fedora@testvm.mynamespace

To provide easier access to arbitrary virtual machines you can add the following lines to your SSH config:

  Host vmi/*
    ProxyCommand virtctl port-forward --stdio=true %h %p
  Host vm/*
    ProxyCommand virtctl port-forward --stdio=true %h %p

This allows you to simply call ssh user@vmi/testvm.mynamespace, and your SSH config and virtctl will do the rest. Using this method it becomes easy to set up different identities for different namespaces inside your SSH config.
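For instance, a dedicated identity for virtual machines in one namespace can live in the same SSH config file. The namespace and key path below are illustrative; adjust them to your environment:

```
# Hypothetical: use a dedicated key for VMs in the "production" namespace
Host vmi/*.production vm/*.production
  IdentityFile ~/.ssh/id_rsa_production
  ProxyCommand virtctl port-forward --stdio=true %h %p
```

Because more specific Host blocks are matched first, the generic vmi/* and vm/* entries still cover all other namespaces.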

This feature can also be used with Ansible to automate configuration of virtual machines running on KubeVirt. You can put the snippet above into its own file (e.g. ~/.ssh/virtctl-proxy-config) and add the following lines to your .ansible.cfg:

  [ssh_connection]
  ssh_args = -F ~/.ssh/virtctl-proxy-config

Note that all port forwarding traffic will be sent over the Kubernetes control plane. A high amount of connections and traffic can increase pressure on the API server. If you regularly need a high amount of connections and traffic consider using a dedicated Kubernetes Service instead.
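As a sketch of that alternative, SSH to a single virtual machine could be exposed with a regular Kubernetes Service so traffic bypasses the API server. The selector below assumes the kubevirt.io/domain label that KubeVirt places on the VM's virt-launcher pod; verify the labels in your cluster and adjust the Service type as needed:

```yaml
# Sketch: expose SSH of testvm via a NodePort Service instead of port-forwarding
apiVersion: v1
kind: Service
metadata:
  name: testvm-ssh
spec:
  type: NodePort
  selector:
    kubevirt.io/domain: testvm  # assumed label; check your VM's pod labels
  ports:
  - name: ssh
    protocol: TCP
    port: 22
    targetPort: 22
```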

Example

  1. Create a virtual machine and inject an SSH public key as explained above.

  2. SSH into the virtual machine:

     # Add --local-ssh to transparently use the local OpenSSH client
     virtctl ssh -i id_rsa fedora@testvm

     or

     ssh -o 'ProxyCommand=virtctl port-forward --stdio=true vmi/testvm.mynamespace 22' -i id_rsa fedora@vmi/testvm.mynamespace

  3. SCP a file to the virtual machine:

     # Add --local-ssh to transparently use the local OpenSSH client
     virtctl scp -i id_rsa testfile fedora@testvm:/tmp

     or

     scp -o 'ProxyCommand=virtctl port-forward --stdio=true vmi/testvm.mynamespace 22' -i id_rsa testfile fedora@testvm.mynamespace:/tmp

RBAC permissions for Console/VNC/SSH access

Using default RBAC cluster roles

Every KubeVirt installation starting with version v0.5.1 ships a set of default RBAC cluster roles that can be used to grant users access to VirtualMachineInstances.

The kubevirt.io:admin and kubevirt.io:edit cluster roles have console, VNC, and SSH (port-forwarding) access permissions built into them. By binding either of these roles to a user, they gain the ability to use virtctl to access the console, VNC, and SSH.

Using custom RBAC cluster role

The default KubeVirt cluster roles grant access to more than just the console, VNC, and port-forwarding. The ClusterRole below demonstrates how to craft a custom role that only allows access to the console, VNC, and port-forwarding.

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: allow-console-vnc-port-forward-access
  rules:
  - apiGroups:
    - subresources.kubevirt.io
    resources:
    - virtualmachineinstances/console
    - virtualmachineinstances/vnc
    verbs:
    - get
  - apiGroups:
    - subresources.kubevirt.io
    resources:
    - virtualmachineinstances/portforward
    verbs:
    - update

When bound with a ClusterRoleBinding the ClusterRole above grants access to virtual machines across all namespaces.

In order to reduce the scope to a single namespace, bind this ClusterRole using a RoleBinding that targets a single namespace.
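Such a RoleBinding could look like the following; the user name and namespace are illustrative:

```yaml
# Grant console/VNC/port-forward access only within "mynamespace"
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-vm-access
  namespace: mynamespace  # access is limited to this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: allow-console-vnc-port-forward-access
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: alice  # illustrative user name
```

Binding a ClusterRole with a RoleBinding is a standard Kubernetes pattern: the role definition stays cluster-wide, while the grant itself is namespaced.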