Bug 2209321 - Automatic workload update failed after upgrade from 4.11.4->4.12.3 [NEEDINFO]
Summary: Automatic workload update failed after upgrade from 4.11.4->4.12.3
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 4.12.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.14.0
Assignee: lpivarc
QA Contact: Kedar Bidarkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-05-23 13:36 UTC by Debarati Basu-Nag
Modified: 2023-08-14 13:09 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-14 13:09:27 UTC
Target Upstream Version:
Embargoed:
lpivarc: needinfo? (dbasunag)




Links
System ID: Red Hat Issue Tracker CNV-28990
Last Updated: 2023-05-23 13:38:16 UTC

Comment 1 Debarati Basu-Nag 2023-05-23 22:31:12 UTC
This issue does not seem like a blocker. I have only seen it in 1 out of 4 attempts (2 against PSI large clusters and 2 against BM clusters). 4 VMs on the clusters were not migratable (vm-upgrade-b-*, vm-upgrade-a-*, vma-macspoof-*, vmb-macspoof-*; matching targets do not exist). Both migratable and non-migratable VMs were picked up for migration over roughly 3 hours and reported failure.
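
For reference, one way to list which VMIs report the LiveMigratable condition (a sketch: the namespace test-upgrade-namespace is taken from the log in comment 2, and the condition/field paths are assumed from the KubeVirt VMI API):

# Print each VMI name together with its LiveMigratable condition status
oc get vmi -n test-upgrade-namespace -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="LiveMigratable")].status}{"\n"}{end}'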

Comment 2 lpivarc 2023-06-27 07:56:14 UTC
Hi Debarati,

From the provided information I cannot conclude why the upgrade took an unexpectedly long time. The only relevant warning and facts that I would like to investigate are:

1. {"component":"virt-controller","kind":"","level":"warning","msg":"Migration target pod for VMI [test-upgrade-namespace/windows-vm-1684797613-5818446] is currently unschedulable.","name":"kubevirt-workload-update-8g2vb","namespace":"test-upgrade-namespace","pos":"migration.go:1092","timestamp":"2023-05-23T00:55:41.091782Z","uid":"d19b62e4-99b0-4fd9-b379-e4f7e8a0bc21"}

The log suggests the target pod cannot be scheduled. It would be good to check why, as this might be the root cause of the long duration.
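
For example, scheduling failures can usually be confirmed like this (a sketch: <pod-name> is a placeholder, and the namespace is taken from the log above):

# Find pending pods in the affected namespace
oc get pods -n test-upgrade-namespace --field-selector=status.phase=Pending

# Inspect the scheduling events of a pending migration target pod
oc describe pod <pod-name> -n test-upgrade-namespace

# Or list FailedScheduling events directly
oc get events -n test-upgrade-namespace --field-selector reason=FailedScheduling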

2. Migrations in flight: how long did they take, and did they succeed?
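
One way to check this (a sketch: vmim is the KubeVirt short name for VirtualMachineInstanceMigration, <vmi-name> is a placeholder, and the migrationState field path is assumed from the KubeVirt VMI API):

# List migration objects and their phases across all namespaces
oc get vmim -A

# Inspect the migration state (start/end timestamps, completed/failed) recorded on a specific VMI
oc get vmi <vmi-name> -n test-upgrade-namespace -o jsonpath='{.status.migrationState}'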

It would be great to reproduce this issue and provide cluster access so that the points above can be investigated.

