Description of problem:
The customer created a Windows 2019 VM from the template via the wizard, and the NIC was defined as virtio by default, which left the VM without networking on boot.

Version-Release number of selected component (if applicable):
CNV 2.3

How reproducible:
100%

Steps to Reproduce:
1. Install CNV 2.3
2. Create a Windows 2019 VM via the wizard
3. Boot the VM
4. Via the console, open Device Manager inside the Windows 2019 VM; the NIC shows a "?" because no driver is available.

Actual results:
The Windows 2019 template creates a NIC with the virtio model.

Expected results:
By default, a Windows 2019 VM should be created with a SATA disk and an e1000e NIC.

Additional info:
Let's understand if this is a UI or template issue.
I can see e1000e is already used in our templates: https://github.com/kubevirt/common-templates/blob/master/templates/windows.tpl.yaml#L125 Maybe an older version of the template was used? Can you please provide the VM spec?
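For reference, a minimal sketch of how the linked common template's interface definition looks (an assumption based on the current master branch; field names follow the KubeVirt VirtualMachine API, and the interface name `default` is illustrative):

```yaml
# Sketch of the relevant part of windows.tpl.yaml (hypothetical excerpt);
# the key point is that the interface model is explicitly e1000e, for
# which Windows ships an in-box driver.
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              model: e1000e
              masquerade: {}
      networks:
        - name: default
          pod: {}
```

If the VM spec the customer ends up with shows `model: virtio` (or no `model` at all, which defaults to virtio), the template was not the source of the interface definition.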
I can still see this behavior on a 4.5.7 OCP cluster with CNV 2.4. If I go to the UI -> Create VM Wizard -> select Windows 2019 as the OS type, the Networking screen still gives me a nic-0 with VirtIO.
Tomas, can this be a modification of the UI?
Indeed, it is a bug in the UI; taking. @Benjamin: when you pick a Windows OS, the Windows guest tools are automatically attached, so you should find the drivers inside your VM on a CD and be able to install them from there. Does this work for you?
Both the customer and I saw the guest tools mounted during testing. However, the customer preferred my workaround of editing the NIC model to rtl8139 before deploying the VM, which allowed Windows to come up without installing any additional guest tools. Why can't the UI have e1000 as the default when Windows is selected as the OS?
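For the record, the workaround amounts to overriding the interface model on the VM before it is started. A minimal sketch against the KubeVirt VirtualMachine API (assuming the wizard's default interface name `nic-0` and a pod network; only the `model` field is the actual change):

```yaml
# Hypothetical VM spec fragment showing the workaround: set the NIC
# model to rtl8139, which Windows 2019 can drive without extra drivers.
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: nic-0
              model: rtl8139   # e1000e would also work with in-box drivers
              masquerade: {}
      networks:
        - name: nic-0
          pod: {}
```

The same edit can be made in the wizard's YAML view before the VM is created, so no guest-tools installation step is needed.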
As far as the UI is concerned, this is either about:
1. Adding a new feature and accepting NICs/disks from common templates. This is currently not implemented.
2. Adding interface-model validations into common templates and marking the right interface model as recommended (the justWarning attribute). The UI could then act on this.
Either way, this is not a simple bug fix.
I would go with option 1 - similarly to https://github.com/openshift/console/pull/6543
OK, so what are we going to do in this bug fix? Copy all networks from common templates to the final VM/Template? And disable the current automatic pod-network creation by the UI? Are we going to include disks as well?
(In reply to Filip Krepinsky from comment #10)
> ok, so what are we going to do in this bug fix?
>
> Copy all networks from common templates to the final VM/Template?

Yes, we cannot ignore them.

> And disable the current automatic pod network creation by the UI? Are we
> going to also include disks?

That is less important now since there are none, but in general we should. As part of this fix I'd worry only about the templates.
severity is low, moving out of blocker list
works for me on current master, moving to modified
Created attachment 1746183 [details] win 2019 flow with sata and e1000 working
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5633