This issue does not seem like a blocker. I have only seen it in 1 out of 4 attempts (2 attempts against PSI - Large clusters and 2 against BM clusters). Four VMs on the clusters were not migratable (vm-upgrade-b-*, vm-upgrade-a-*, vma-macspoof-*, vmb-macspoof-*; matching targets do not exist). I saw both migratable and non-migratable VMs being picked up for migration over 3 hours and reporting failure.
Hi Debarati,

From the provided information I cannot conclude why the upgrade took an unexpectedly long time. The only relevant warning and facts I would like to investigate are:

1. {"component":"virt-controller","kind":"","level":"warning","msg":"Migration target pod for VMI [test-upgrade-namespace/windows-vm-1684797613-5818446] is currently unschedulable.","name":"kubevirt-workload-update-8g2vb","namespace":"test-upgrade-namespace","pos":"migration.go:1092","timestamp":"2023-05-23T00:55:41.091782Z","uid":"d19b62e4-99b0-4fd9-b379-e4f7e8a0bc21"}

   This log suggests the target pod cannot be scheduled. It would be good to check why, as this might be the root cause of the long duration.

2. Migrations in flight: how long did they take, and did they succeed?

It would be great to reproduce this issue and provide cluster access so we can investigate the points above.
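As a starting point for both checks, here is a minimal sketch, assuming the "kubernetes" Python client, a kubeconfig with access to the affected cluster, and the test-upgrade-namespace namespace from the warning above. It lists FailedScheduling events (which carry the scheduler's reason for the target pod being unschedulable) and the phase of each VirtualMachineInstanceMigration, so the durations and outcomes of in-flight migrations become visible.

from kubernetes import client, config

config.load_kube_config()
namespace = "test-upgrade-namespace"  # namespace from the warning above

# 1. Why is the migration target pod unschedulable?
#    FailedScheduling events carry the scheduler's reason (resources, taints, affinity).
core = client.CoreV1Api()
events = core.list_namespaced_event(
    namespace,
    field_selector="involvedObject.kind=Pod,reason=FailedScheduling",
)
for e in events.items:
    print(e.involved_object.name, e.last_timestamp, e.message)

# 2. Migrations in flight: phase and creation time of each
#    VirtualMachineInstanceMigration object in the namespace.
custom = client.CustomObjectsApi()
migrations = custom.list_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace=namespace,
    plural="virtualmachineinstancemigrations",
)
for m in migrations["items"]:
    status = m.get("status", {})
    print(m["metadata"]["name"], status.get("phase"),
          m["metadata"].get("creationTimestamp"))

This is only meant as a convenience for collecting the data; direct cluster access would still be needed to dig into the scheduler's reasoning and node state.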