Created attachment 1687555 [details]
VM overview page

Description of problem:
A running VM that uses the runStrategy attribute (instead of the 'running' attribute) has a running VMI, but the VM status in the UI is 'Stopping'.

Version-Release number of selected component (if applicable):
OCP 4.4, CNV 2.3

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with runStrategy: Always (as an example; Manual/Halted with a started VM also reproduces it)
2. Check the VM status in the UI

Actual results:
While the VMI has 'phase: Running', the UI shows the VM status as 'Stopping'; see attached screenshot.

Expected results:
The correct VM status should be displayed (Running).

Additional info:

========= VM yaml ===========
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-cirros
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        machine:
          type: ""
        resources:
          requests:
            memory: 64M
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: kubevirt/cirros-container-disk-demo:latest
        name: containerdisk
      - cloudInitNoCloud:
          userData: |
            #!/bin/sh
            echo 'printed from cloud-init userdata'
        name: cloudinitdisk

=== VM spec ===
$ oc get vm vm-cirros -oyaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
  creationTimestamp: "2020-05-06T11:27:36Z"
  generation: 49
  labels:
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
  namespace: default
  resourceVersion: "6312417"
  selfLink: /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachines/vm-cirros
  uid: 1b72b78b-4e22-4268-9715-96f0fa84268c
spec:
  runStrategy: Always
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-cirros
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        machine:
          type: q35
        resources:
          requests:
            memory: 64M
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: kubevirt/cirros-container-disk-demo:latest
        name: containerdisk
      - cloudInitNoCloud:
          userData: |
            #!/bin/sh
            echo 'printed from cloud-init userdata'
        name: cloudinitdisk
status:
  created: true
  ready: true

=== VMI spec ===
$ oc get vmi vm-cirros -oyaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
  creationTimestamp: "2020-05-12T07:08:14Z"
  finalizers:
  - foregroundDeleteVirtualMachine
  generateName: vm-cirros
  generation: 8
  labels:
    kubevirt.io/nodeName: host-172-16-0-22
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
  namespace: default
  ownerReferences:
  - apiVersion: kubevirt.io/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachine
    name: vm-cirros
    uid: 1b72b78b-4e22-4268-9715-96f0fa84268c
  resourceVersion: "6312414"
  selfLink: /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/vm-cirros
  uid: 068eb3f7-c75b-454d-bb86-d35efaeb7531
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
      interfaces:
      - bridge: {}
        name: default
    features:
      acpi:
        enabled: true
    firmware:
      uuid: 0d2a2043-41c0-59c3-9b17-025022203668
    machine:
      type: q35
    resources:
      requests:
        cpu: 100m
        memory: 64M
  networks:
  - name: default
    pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
  - containerDisk:
      image: kubevirt/cirros-container-disk-demo:latest
      imagePullPolicy: Always
    name: containerdisk
  - cloudInitNoCloud:
      userData: |
        #!/bin/sh
        echo 'printed from cloud-init userdata'
    name: cloudinitdisk
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    message: cannot migrate VMI which does not use masquerade to connect to the pod network
    reason: InterfaceNotLiveMigratable
    status: "False"
    type: LiveMigratable
  - lastProbeTime: null
    lastTransitionTime: "2020-05-12T07:08:21Z"
    status: "True"
    type: Ready
  guestOSInfo: {}
  interfaces:
  - ipAddress: 10.131.1.79
    mac: 0a:58:0a:83:01:4f
    name: default
  migrationMethod: BlockMigration
  nodeName: host-172-16-0-22
  phase: Running
  qosClass: Burstable

(attached vm and vmi spec when running: true)
Separated the UI issue from https://bugzilla.redhat.com/show_bug.cgi?id=1832179
Will clone a new BZ targeted to 4.4, because it happens in 4.4 as well.

notes: https://github.com/openshift/console/blob/master/frontend/packages/kubevirt-plugin/src/selectors/vm/selectors.ts#L73
notes ii: https://github.com/kubevirt/kubevirt/blob/master/pkg/virt-controller/watch/vm.go#L1268
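For reference, the suspected selector logic can be sketched as below. This is a minimal, hypothetical model (the names `getVMStatus`, `VMKind`, `VMIKind` are illustrative, not the console's actual API): a selector that derives the status only from `spec.running` will mis-report a VM that sets `spec.runStrategy` instead, because `spec.running` is undefined on such a VM.

```typescript
// Hypothetical, simplified model of the console's VM status derivation.
// A VM declares its desired state either via spec.running (boolean)
// or via spec.runStrategy ('Always' | 'RerunOnFailure' | 'Manual' | 'Halted');
// the two fields are mutually exclusive.
type RunStrategy = 'Always' | 'RerunOnFailure' | 'Manual' | 'Halted';

interface VMKind {
  spec: { running?: boolean; runStrategy?: RunStrategy };
}
interface VMIKind {
  status?: { phase?: string };
}

// Buggy version: only spec.running is consulted, so a runStrategy-only VM
// (spec.running === undefined) falls through to 'Stopping' while its VMI runs.
const getStatusBuggy = (vm: VMKind, vmi?: VMIKind): string => {
  if (vm.spec.running && vmi?.status?.phase === 'Running') {
    return 'Running';
  }
  return vmi ? 'Stopping' : 'Off';
};

// Fixed version: the VM is "expected running" if running === true OR
// runStrategy is set to anything other than 'Halted'.
const getStatusFixed = (vm: VMKind, vmi?: VMIKind): string => {
  const expectedRunning =
    vm.spec.running === true ||
    (vm.spec.runStrategy !== undefined && vm.spec.runStrategy !== 'Halted');
  if (expectedRunning && vmi?.status?.phase === 'Running') {
    return 'Running';
  }
  return vmi ? 'Stopping' : 'Off';
};

const vm: VMKind = { spec: { runStrategy: 'Always' } };
const vmi: VMIKind = { status: { phase: 'Running' } };
console.log(getStatusBuggy(vm, vmi)); // 'Stopping' — the reported bug
console.log(getStatusFixed(vm, vmi)); // 'Running'
```

The real selector has many more states (Importing, Migrating, etc.); the point is only that the "expected running" check must treat `runStrategy` as a first-class alternative to `running`.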
Verified in console release-4.5 branch commit: 5994c64ee529b650bae348ef78ebc23dca8db5c5
Re-opening: With a stopped VM that has runStrategy: Manual:
1. The VM cannot be started from the UI.
2. The VM status is either 'Unknown' or 'VM error' (see screenshot).
Created attachment 1692230 [details]
Stopped VM with runStrategy: Manual
According to https://kubevirt.io/user-guide/#/creation/run-strategies?id=run-strategies it should be impossible to Start a VM that has runStrategy: RerunOnFailure. Only stop and restart should be possible.
Can you please clarify? If you create a VM with runStrategy: RerunOnFailure, it goes into Running immediately. When you stop the VM, the runStrategy changes to Halted, which is a valid strategy from which to start the VM; starting it switches the VM to the Always strategy. When you restart a VM with RerunOnFailure, it stays in RerunOnFailure and runs again. Did I forget about some state? Or what is wrongly handled by the UI?
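The transitions described above can be sketched as a tiny state function. This is an illustrative model of the described behavior only, not KubeVirt's actual implementation: stop moves any strategy to 'Halted', start moves 'Halted' to 'Always' (other strategies are already expected to be running), and restart keeps the current strategy, so RerunOnFailure survives a restart.

```typescript
// Illustrative model of the runStrategy transitions described above
// (not KubeVirt's actual controller code).
type RunStrategy = 'Always' | 'RerunOnFailure' | 'Manual' | 'Halted';
type Action = 'start' | 'stop' | 'restart';

const nextStrategy = (current: RunStrategy, action: Action): RunStrategy => {
  switch (action) {
    case 'stop':
      // Stopping any VM rewrites its strategy to Halted.
      return 'Halted';
    case 'start':
      // Starting a Halted VM switches it to Always; other strategies
      // are already expected to be running, so nothing changes.
      return current === 'Halted' ? 'Always' : current;
    case 'restart':
      // Restart preserves the strategy: RerunOnFailure stays RerunOnFailure.
      return current;
  }
};

console.log(nextStrategy('RerunOnFailure', 'stop'));    // 'Halted'
console.log(nextStrategy('Halted', 'start'));           // 'Always'
console.log(nextStrategy('RerunOnFailure', 'restart')); // 'RerunOnFailure'
```

This also shows why the UI question is subtle: after stop-then-start, a RerunOnFailure VM silently becomes an Always VM, so the UI cannot assume the strategy is stable across lifecycle actions.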
Bookkeeping note: the target should be 4.6 at this point; see Samuel Padgett's comment above. Reverting back to 4.6.
@Filip apologies, I thought that it was possible to start the VM while in RerunOnFailure runStrategy, but it is indeed not possible. Moving to verified, I believe we can change the target release to 4.5.0 again.
> I believe we can change the target release to 4.5.0 again. Thanks +1
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409