Description of problem:
A VM should be rejected when cloudInitNoCloud is set without a matching cloudinit volume disk defined in spec.domain.devices.disks.

Version-Release number of selected component (if applicable): CNV 4.8

How reproducible: Always

Steps to Reproduce:
1. Create a VM with a cloudinit volume defined but no cloudinit disk:

---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: fedora
  name: fedora-1619580640-1816523
spec:
  template:
    metadata:
      labels:
        kubevirt.io/vm: fedora-1619580640-1816523
        kubevirt.io/domain: fedora-1619580640-1816523
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          interfaces:
          - masquerade: {}
            name: default
          rng: {}
        machine:
          type: ''
        resources:
          requests:
            memory: 1024Mi
      networks:
      - name: default
        pod: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - containerDisk:
          image: quay.io/openshift-cnv/qe-cnv-tests-fedora:33
        name: containerdisk
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
  running: true

Actual results:
The VM runs without complaint.

Expected results:
The VM creation should be rejected because there is no cloudinit disk defined.

Additional info:
Yan, is there a workflow where this BZ becomes impossible to avoid? The obvious workaround is to ensure a disk is included, but is there a case where it can't be? Deferring this BZ to a future release, as it's not immediately clear to me that this represents a danger of data corruption: the VM simply won't boot successfully. However, this is a likely candidate to be picked up sooner rather than later, as the fix should be straightforward.
Hi Stu,
Yes, I think you mentioned a good way to avoid the issue: ensure the cloudinit disk is included. If there is no cloudinit disk in the spec, the cloudinit data won't take effect, and it also causes a failure when trying to hotplug a disk. Alternatively, maybe we need a disk validation check on the YAML when creating the VM.
@sgott As Yan said, she found this while trying to hotplug. In hotplug I explicitly verify that the number of disks equals the number of volumes, and if not I reject the request outright. Since the disk and volume counts don't match with the cloudinit defined as described above, the hotplug fails.
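The validation described above can be sketched roughly as follows. This is a minimal illustration, not the actual KubeVirt webhook code: the Disk and Volume structs are simplified stand-ins for the real API types in kubevirt.io/api, and the function name is hypothetical.

```go
package main

import "fmt"

// Simplified stand-ins for the KubeVirt API types (hypothetical;
// the real types carry many more fields).
type Disk struct{ Name string }
type Volume struct{ Name string }

// validateVolumes rejects a spec in which a volume has no disk with a
// matching name, producing an error similar to the one the admission
// webhook returns on VM creation.
func validateVolumes(disks []Disk, volumes []Volume) error {
	diskNames := make(map[string]bool, len(disks))
	for _, d := range disks {
		diskNames[d.Name] = true
	}
	for i, v := range volumes {
		if !diskNames[v.Name] {
			return fmt.Errorf(
				"spec.template.spec.domain.volumes[%d].name '%s' not found",
				i, v.Name)
		}
	}
	return nil
}

func main() {
	// Mirrors the reproducer: a containerdisk disk, but a cloudinitdisk
	// volume with no matching disk entry.
	disks := []Disk{{Name: "containerdisk"}}
	volumes := []Volume{{Name: "containerdisk"}, {Name: "cloudinitdisk"}}
	if err := validateVolumes(disks, volumes); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

With the spec from the reproducer, this check rejects the request instead of letting the mismatched VM be created.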
To verify, follow the steps to reproduce in the description.
[kbidarka@localhost secureboot]$ cat test-nocloudinitdisk-dv-rhel84.yaml
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm2-rhel84
  name: vm2-rhel84
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: rhel84-dv2
    spec:
      pvc:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 25Gi
        storageClassName: ocs-storagecluster-ceph-rbd
        volumeMode: Block
      source:
        http:
          url: http://127.0.0.1/rhel-images/rhel-84.qcow2
    status: {}
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm2-rhel84
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumedisk1
        machine:
          type: ""
        resources:
          requests:
            memory: "1Gi"
      terminationGracePeriodSeconds: 0
      volumes:
      - dataVolume:
          name: rhel84-dv2
        name: datavolumedisk1
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: 123@321
            chpasswd: { expire: False }
        name: cloudinitdisk

(cnv-tests) [kbidarka@localhost secureboot]$ oc apply -f test-nocloudinitdisk-dv-rhel84.yaml
The request is invalid: spec.template.spec.domain.volumes[1].name: spec.template.spec.domain.volumes[1].name 'cloudinitdisk' not found.

As seen above, the request is invalid and VM creation is rejected when there is no cloudinitdisk defined in the disks list.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.8.0 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2920