Bug 1893790

| Field | Value |
|---|---|
| Summary | VM stuck in pending state |
| Product | Container Native Virtualization (CNV) |
| Component | Virtualization |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | high |
| Version | 2.5.0 |
| Target Release | 4.8.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | hco-bundle-registry-container-v4.8.0-347, virt-operator-container-v4.8.0-58 |
| Reporter | Israel Pinto <ipinto> |
| Assignee | lpivarc |
| QA Contact | Israel Pinto <ipinto> |
| CC | cnv-qe-bugs, danken, gouyang, kbidarka, sgott |
| Type | Bug |
| Last Closed | 2021-07-27 14:20:49 UTC |
Description

Israel Pinto, 2020-11-02 15:59:08 UTC

Created attachment 1725890 [details]
vm spec

Could not reproduce the issue by using the provided YAML to create a VM on the latest CNV 2.5. Israel, is this still an issue? Does this still occur on the current release?

(In reply to sgott from comment #6)
> Does this still occur on the current release?

Reproduced with:

# oc get clusterversions.config.openshift.io
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-02-25-151800   True        False         10h     Cluster version is 4.8.0-0.nightly-2021-02-25-151800

# oc get csv -n openshift-cnv
NAME                                      DISPLAY                    VERSION   REPLACES                                  PHASE
kubevirt-hyperconverged-operator.v4.8.0   OpenShift Virtualization   4.8.0     kubevirt-hyperconverged-operator.v2.5.3   Succeeded

Steps: create a VM with the attached VM spec. The VMI stays in the Pending phase because the import fails. I can't stop the VMI with virtctl, only delete it. See outputs:

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get vmi
NAME     AGE     PHASE     IP   NODENAME
f31-v2   2m12s   Pending

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get pods
NAME                    READY   STATUS             RESTARTS   AGE
importer-f31-rootdisk   0/1     CrashLoopBackOff   2          67s

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get pods
NAME                    READY   STATUS             RESTARTS   AGE
importer-f31-rootdisk   0/1     CrashLoopBackOff   2          71s

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get pods
NAME                    READY   STATUS             RESTARTS   AGE
importer-f31-rootdisk   1/1     Running            3          75s

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get pods
NAME                    READY   STATUS             RESTARTS   AGE
importer-f31-rootdisk   0/1     Error              3          82s

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get pods
NAME                    READY   STATUS             RESTARTS   AGE
importer-f31-rootdisk   0/1     Error              3          86s

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get pods
NAME                    READY   STATUS             RESTARTS   AGE
importer-f31-rootdisk   0/1     CrashLoopBackOff   3          92s

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get vmi
NAME     AGE     PHASE     IP   NODENAME
f31-v2   2m49s   Pending

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ virtctl stop f31-v2
VM f31-v2 was scheduled to stop

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get vmi
NAME     AGE     PHASE     IP   NODENAME
f31-v2   3m10s   Pending

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get vmi
NAME     AGE     PHASE     IP   NODENAME
f31-v2   3m13s   Pending

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc delete vm f31-v2
virtualmachine.kubevirt.io "f31-v2" deleted

[cnv-qe-jenkins@virt01-4vvbc-executor ~]$ oc get vmi
No resources found in default namespace.

https://github.com/kubevirt/kubevirt/pull/5349 should fix this.

PR https://github.com/kubevirt/kubevirt/pull/5349 was merged to master, moving to MODIFIED.

Verified with the VM spec below (the image URL does not exist):
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    description: Fedora 31 image to run kubevirtci k8s-1.19 in it.
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    name.os.template.kubevirt.io/silverblue32: Fedora 31 or higher
  name: f31-v2
  labels:
    app: f31-v2
spec:
  dataVolumeTemplates:
  - metadata:
      name: f31-rootdisk
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        volumeMode: Filesystem
      source:
        http:
          url: "https://download.fedoraproject.org/pub/fedora/linux/releases/32/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.6.x86_64.qcow2"
    status: {}
  running: true
  template:
    metadata:
      labels:
        flavor.template.kubevirt.io/medium: 'true'
        kubevirt.io/domain: f31
        kubevirt.io/size: medium
        os.template.kubevirt.io/silverblue32: 'true'
        vm.kubevirt.io/name: f31
        workload.template.kubevirt.io/server: 'true'
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
          - bootOrder: 1
            disk:
              bus: virtio
            name: rootdisk
          interfaces:
          - masquerade: {}
            model: virtio
            name: nic-0
          networkInterfaceMultiqueue: true
          rng: {}
        machine:
          type: pc-q35-rhel8.2.0
        resources:
          requests:
            memory: 4Gi
      evictionStrategy: LiveMigrate
      hostname: f31
      networks:
      - name: nic-0
        pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
      - dataVolume:
          name: f31-rootdisk
        name: rootdisk
status: {}
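The failure mode reproduced here (VMI stuck in Pending while the importer pod crash-loops) can also be detected mechanically in a QA script. A minimal sketch, assuming dicts parsed from `oc get vmi -o json` and `oc get pods -o json`; the function name and the restart threshold are hypothetical, not part of KubeVirt, and the field paths follow the standard Kubernetes Pod/VMI status shapes:

```python
# Sketch: flag a VMI as "stuck" when its phase is Pending and the
# importer pod for its DataVolume (e.g. importer-f31-rootdisk) is in
# CrashLoopBackOff with several restarts.

def is_stuck_pending(vmi: dict, importer_pod: dict, min_restarts: int = 3) -> bool:
    # VMI phase lives at .status.phase, as in `oc get vmi` output.
    if vmi.get("status", {}).get("phase") != "Pending":
        return False
    # Pod crash-loop state lives at .status.containerStatuses[].state.waiting.
    for cs in importer_pod.get("status", {}).get("containerStatuses", []):
        waiting = cs.get("state", {}).get("waiting") or {}
        if (waiting.get("reason") == "CrashLoopBackOff"
                and cs.get("restartCount", 0) >= min_restarts):
            return True
    return False
```

A real check would likely also inspect the DataVolume phase (ImportInProgress with a rising restart count, as in the outputs above) before declaring the VM stuck.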
Flow:
# oc apply -f ~/cnv_yamls/pending_bug.yaml -n israel-bugs (9:49)(:|✔)
virtualmachine.kubevirt.io/f31-v2 created
# oc get vm,vmi,dv -o wide -n israel-bugs (9:50)(:|✔)
NAME AGE VOLUME CREATED
virtualmachine.kubevirt.io/f31-v2 3s true
NAME AGE PHASE IP NODENAME LIVE-MIGRATABLE PAUSED
virtualmachineinstance.kubevirt.io/f31-v2 3s Pending
NAME PHASE PROGRESS RESTARTS AGE
datavolume.cdi.kubevirt.io/f31-rootdisk PVCBound N/A 4s
# oc get vm,vmi,dv -o wide -n israel-bugs (9:50)(:|✔)
NAME AGE VOLUME CREATED
virtualmachine.kubevirt.io/f31-v2 10s true
NAME AGE PHASE IP NODENAME LIVE-MIGRATABLE PAUSED
virtualmachineinstance.kubevirt.io/f31-v2 10s Pending
NAME PHASE PROGRESS RESTARTS AGE
datavolume.cdi.kubevirt.io/f31-rootdisk ImportInProgress N/A 1 11s
# oc get vm,vmi,dv,pods -o wide -n israel-bugs (9:50)(:|✔)
NAME AGE VOLUME CREATED
virtualmachine.kubevirt.io/f31-v2 17s true
NAME AGE PHASE IP NODENAME LIVE-MIGRATABLE PAUSED
virtualmachineinstance.kubevirt.io/f31-v2 17s Pending
NAME PHASE PROGRESS RESTARTS AGE
datavolume.cdi.kubevirt.io/f31-rootdisk ImportInProgress N/A 1 18s
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/importer-f31-rootdisk 0/1 CrashLoopBackOff 1 12s 10.129.3.153 cnv-qe-13.cnvqe.lab.eng.rdu2.redhat.com <none> <none>
# oc get vm,vmi,dv,pods -o wide -n israel-bugs (9:50)(:|✔)
NAME AGE VOLUME CREATED
virtualmachine.kubevirt.io/f31-v2 22s true
NAME AGE PHASE IP NODENAME LIVE-MIGRATABLE PAUSED
virtualmachineinstance.kubevirt.io/f31-v2 22s Pending
NAME PHASE PROGRESS RESTARTS AGE
datavolume.cdi.kubevirt.io/f31-rootdisk ImportInProgress N/A 1 23s
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/importer-f31-rootdisk 0/1 CrashLoopBackOff 1 17s 10.129.3.153 cnv-qe-13.cnvqe.lab.eng.rdu2.redhat.com <none> <none>
# virtctl stop f31-v2 -n israel-bugs (9:50)(:|✔)
VM f31-v2 was scheduled to stop
# oc get vm,vmi,dv -o wide -n israel-bugs (9:50)(:|✔)
NAME AGE VOLUME CREATED
virtualmachine.kubevirt.io/f31-v2 39s
NAME PHASE PROGRESS RESTARTS AGE
datavolume.cdi.kubevirt.io/f31-rootdisk ImportInProgress N/A 2 40s
# oc delete vm f31-v2 -n israel-bugs (9:51)(:|✔)
virtualmachine.kubevirt.io "f31-v2" deleted
The VM was stopped with virtctl, so we can get out of the Pending state.
Moving to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.8.0 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2920