Bug 1581709 - Move the vfio-mdev vGPU hook to a VDSM code base
Summary: Move the vfio-mdev vGPU hook to a VDSM code base
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.2.6
Target Release: ---
Assignee: Milan Zamazal
QA Contact: Nisim Simsolo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-23 13:02 UTC by Martin Polednik
Modified: 2019-04-28 09:25 UTC
CC List: 7 users

Fixed In Version: v4.20.37
Clone Of:
Environment:
Last Closed: 2018-09-03 15:08:59 UTC
oVirt Team: Virt
Embargoed:
rule-engine: ovirt-4.2+


Attachments
engine.log (273.31 KB, application/x-xz)
2018-06-05 14:40 UTC, Nisim Simsolo
vdsm.log (582.96 KB, application/x-xz)
2018-06-05 14:40 UTC, Nisim Simsolo


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 91541 0 'None' MERGED vm: update DomainDescriptor after executing before_vm_start 2020-06-16 14:04:39 UTC
oVirt gerrit 91589 0 'None' MERGED vm: update DomainDescriptor after executing before_vm_start 2020-06-16 14:04:39 UTC
oVirt gerrit 91965 0 'None' MERGED hooking: update DomainDescriptor after each before_vm_start hook 2020-06-16 14:04:39 UTC
oVirt gerrit 92057 0 'None' MERGED hooking: update DomainDescriptor after each before_vm_start hook 2020-06-16 14:04:39 UTC
oVirt gerrit 92092 0 'None' MERGED vfio-mdev: move hook to codebase 2020-06-16 14:04:38 UTC
oVirt gerrit 92182 0 'None' MERGED core: set vgpu device in libvirt xml 2020-06-16 14:04:38 UTC
oVirt gerrit 93478 0 'None' MERGED vfio-mdev: move hook to codebase 2020-06-16 14:04:38 UTC
oVirt gerrit 93585 0 'None' MERGED core: set vgpu device in libvirt xml 2020-06-16 14:04:38 UTC
oVirt gerrit 98779 0 'None' MERGED cleaning: drop vdsm-hook-vfio-mdev dependency 2020-06-16 14:04:38 UTC
oVirt gerrit 98796 0 'None' MERGED cleaning: drop vdsm-hook-vfio-mdev dependency 2020-06-16 14:04:38 UTC

Description Martin Polednik 2018-05-23 13:02:45 UTC
Description of problem:
If the start process of a VM fails, the vGPU devices created by vdsm-hook-vfio-mdev may remain in the system. In that case, trying to start the same VM again will fail and the only way to resolve that is to clean the devices up manually.
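
For reference, the manual cleanup mentioned above amounts to removing the stale mdev instance through the kernel's vfio-mdev sysfs interface. A minimal sketch, assuming the standard sysfs layout (run as root on the affected host; the uuid to remove is a placeholder):

    import os

    MDEV_DEVICES = '/sys/bus/mdev/devices'  # instances also appear under /sys/class/mdev_bus/<parent device>/

    def list_mdev_devices():
        """Return the uuids of all mdev instances currently present on the host."""
        try:
            return os.listdir(MDEV_DEVICES)
        except OSError:
            return []

    def remove_mdev_device(uuid):
        """Ask the kernel to tear down a leftover mdev instance."""
        with open(os.path.join(MDEV_DEVICES, uuid, 'remove'), 'w') as f:
            f.write('1')

    for uuid in list_mdev_devices():
        print('mdev instance present: %s' % uuid)
    # remove_mdev_device('<uuid-of-the-stale-instance>')  # placeholder uuid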

Version-Release number of selected component (if applicable):
4.2

How reproducible:
100%

Steps to Reproduce:
1. Create a VM that fails when being started,
2. assign mdev type to that VM,
3. start the VM.

Actual results:
/usr/libexec/vdsm/hooks/after_vm_destroy/50_vfio_mdev: rc=1 err=vgpu: No mdev found.

can be seen in the log file, and the mdev instance is still seen in the system.

Expected results:
/usr/libexec/vdsm/hooks/after_vm_destroy/50_vfio_mdev: rc=0 err=

Additional info:

Comment 1 Nisim Simsolo 2018-06-05 14:37:46 UTC
Reassigned:
First scenario:
 When reproducing a VM start failure in the vfio_mdev hook itself (by enabling Nvidia ECC), the VM failed to run; after disabling Nvidia ECC the VM runs properly and the vGPU device is removed from /sys/class/mdev_bus.
Second scenario:
 However, when the VM start failure is caused by a hook that runs after the mdev hook, the vGPU device is not cleared from /sys/class/mdev_bus.

Scenario:
1. vfio_mdev hook failure:
a. Enable Nvidia ECC support, add the hook to the VM and run the VM.
b. The VM fails to run.
c. Disable Nvidia ECC support, reboot the host.
d. Verify the vGPU device is removed from /sys/class/mdev_bus/.
e. Run the VM and verify it is running properly with the vGPU device.

2. Hook failure after the mdev hook (in my case I used the spicestream hook; a minimal always-failing hook is sketched after this scenario)
a. Rename /usr/libexec/vdsm/hooks/before_vm_start/50_spicestream to 60_spicestream so it runs after 50_vfio_mdev, edit the file and uncomment the sys.exit call at the end of the file so the hook fails.
b. Add the mdev_type hook to the VM and run the VM.
c. The VM fails to run.
d. Comment the sys.exit call out again and run the VM.
e. The VM fails to run with the following ERROR in engine.log:
2018-06-05 16:10:34,944+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-10) [] EVENT_ID: VM_DOWN_ERROR(119), VM vGPU_Win10_C1 is down with error. Exit message: Hook Error: ('',).

and the vGPU device is not removed from /sys/class/mdev_bus/.
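
For reference, the same failure mode can be simulated without touching the spicestream hook at all. A hypothetical always-failing before_vm_start hook (the file name 60_fail_test is made up for illustration):

    #!/usr/bin/python
    # Hypothetical stand-in for the modified spicestream hook, installed e.g. as
    # /usr/libexec/vdsm/hooks/before_vm_start/60_fail_test so that it runs after
    # 50_vfio_mdev (hooks run in lexical order of their file names).
    import sys

    sys.stderr.write('60_fail_test: failing on purpose to simulate a hook error\n')
    sys.exit(1)  # a non-zero exit status is reported by VDSM as a hook error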

Verification version:
rhvm-4.2.4.1-0.1.el7
vdsm-hook-vfio-mdev-4.20.29-1.el7ev.noarch
libvirt-client-3.9.0-14.el7_5.5.x86_64
vdsm-4.20.29-1.el7ev.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.3.x86_64

engine.log and vdsm.log attached

Comment 2 Nisim Simsolo 2018-06-05 14:40:17 UTC
Created attachment 1447877 [details]
engine.log

Comment 3 Nisim Simsolo 2018-06-05 14:40:40 UTC
Created attachment 1447878 [details]
vdsm.log

Comment 4 Michal Skrivanek 2018-06-14 12:11:19 UTC
actually, for 4.2.5 I'd like to see a solution based on https://gerrit.ovirt.org/#/c/92092/

Comment 5 Michal Skrivanek 2018-07-16 14:36:04 UTC
the cleanup now works fine, but the hook->code move needs a bit more time

Comment 6 Milan Zamazal 2018-08-03 15:19:38 UTC
Notes on verifying this somewhat complex change (once the patch is merged):

- It must be tested at least with Engine 4.1 (or older), Engine 4.2.5 (or older 4.2.Z) and Engine containing https://gerrit.ovirt.org/92182.
- Vdsm upgrade from 4.2.5 (or older) must be tested -- the former, no longer available vdsm-hook-vfio-mdev package must be removed smoothly and automatically on Vdsm upgrade.
- The vGPU VM should be run successfully at least three times in each particular environment.
- It should always be checked that multiple mdev devices of the same type are never present in the VM device list, even e.g. when the mdev_type custom property is removed and added again between VM restarts (see the sketch after this list).
- A vGPU VM successfully started with an older Vdsm version (via the hook) must also be able to start successfully after the Vdsm upgrade, repeatedly and without disturbing its device list.
- Erroneous situations, such as failed device initialization or VM start failure after successful device initialization, should be tested for proper vGPU cleanup.
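
For the duplicate-device check above, a minimal sketch that inspects the libvirt side only (pipe the output of `virsh -r dumpxml <vm>` into it; the Webadmin device list has to be checked separately):

    import sys
    import xml.etree.ElementTree as ET

    domxml = ET.fromstring(sys.stdin.read())

    # Collect the mdev instance uuids of all mdev hostdevs attached to the domain.
    uuids = [addr.get('uuid')
             for hostdev in domxml.findall("./devices/hostdev[@type='mdev']")
             for addr in hostdev.findall('./source/address')]

    print('mdev hostdevs: %s' % uuids)
    if len(uuids) != len(set(uuids)) or len(uuids) > 1:
        print('WARNING: more than one mdev hostdev attached; check for duplicates of the same type')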

Comment 7 Milan Zamazal 2018-08-07 15:32:43 UTC
Let's wait for the Engine patch.

Comment 8 Nisim Simsolo 2018-08-26 13:39:10 UTC
Verified, rhvm-4.2.6.3-0.1.el7ev

Scenario A - Update vdsm from 4.2.4 to 4.2.6
Builds used:
rhvm-4.2.4.7-0.1.el7ev -> rhvm-4.2.6.3-0.1.el7ev
vdsm-4.20.32-1.el7ev.x86_64 -> vdsm-4.20.37-1.el7ev.x86_64
vdsm-hook-vfio-mdev-4.20.32-1.el7ev -> vdsm code base

1. RHV 4.2.4 - Run 3 different OS type VMs (RHEL7, Windows 10 and Fedora 27) with different vGPU nvidia instances.
     Verify VMs are running, GPU is available and Nvidia drivers are installed correctly on the VMs.
     Observe VM XML and verify mdevType property and hostdev uuid are correct.
     For example:
   <ovirt-vm:device devtype="hostdev" uuid="3e55a8e3-7360-464f-b4ae-81b63d0b8a37">
        <ovirt-vm:mdevType>nvidia-22</ovirt-vm:mdevType>
2. Reboot VMs 3 times.
     Verify VMs are running properly after each reboot.
3. Upgrade the RHV engine and host to RHV 4.2.6.
     Verify the vdsm-hook-vfio-mdev package is removed after the upgrade.
4. Repeat verification steps 1-2
5. Check vdsm.log or virsh dumpxml to verify the vGPU host device is listed in the VM XML (a short extraction sketch follows after this scenario).
For example:
    <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
      <source>
        <address uuid='4fc4f0f3-0cda-3c2a-aec3-0cdc1341efc8'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </hostdev>
6. Power off VMs, remove Nvidia mdev_type and Run VMs again.
     Verify VMs are running properly
     Verify mdev type is not listed in VM XML and also cleared from Webadmin -> Compute -> VM ->  VM devices tab
7. Power off VMs and add different mdev_type nvidia instance to VMs.
     Run VMs and verify VMs are running with correct GPU type.
     Verify that mdevType property and hostdev uuid replaced accordingly in VM XML and in Webadmin VM device tab.
8. Reboot VMs 3 times.
     After each reboot, verify VMs are running with the vGPU device and that the mdevType/hostdev uuid are listed correctly in the VM XML.
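
For steps 1 and 5 above, the relevant parts of the VM XML can be extracted with a short script instead of reading through the whole dump. A sketch, assuming the usual oVirt metadata namespace http://ovirt.org/vm/1.0 (pipe `virsh -r dumpxml <vm>` into it):

    import sys
    import xml.etree.ElementTree as ET

    NS = {'ovirt-vm': 'http://ovirt.org/vm/1.0'}
    domxml = ET.fromstring(sys.stdin.read())

    # mdevType values recorded in the oVirt metadata section (step 1).
    # Note: the metadata device uuid is the oVirt device id, not the mdev instance uuid.
    for dev in domxml.findall('./metadata/ovirt-vm:vm/ovirt-vm:device', NS):
        mdev_type = dev.findtext('ovirt-vm:mdevType', namespaces=NS)
        if mdev_type:
            print('metadata: device uuid=%s mdevType=%s' % (dev.get('uuid'), mdev_type))

    # mdev hostdevs actually attached to the domain (step 5).
    for hostdev in domxml.findall("./devices/hostdev[@type='mdev']"):
        for addr in hostdev.findall('./source/address'):
            print('attached: mdev instance uuid=%s' % addr.get('uuid'))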

---------------------------------------------------

Scenario B - Cluster level 4.1 to cluster level 4.2
Verification builds:
rhvm-4.2.6.3-0.1.el7ev
vdsm-4.20.37-1.el7ev.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.6.x86_64
sanlock-3.6.0-1.el7.x86_64
libvirt-client-3.9.0-14.el7_5.7.x86_64
NVIDIA-vGPU-rhel-7.5-390.56.x86_64

1. Change DC and Cluster level compatibility version to 4.1.
2. Run 3 different OS type VMs (RHEL7, Windows 10 and Fedora 27) with different vGPU nvidia instances.
     Verify VMs are running, vGPU is available and Nvidia drivers are installed correctly on the VMs.
     Observe VM XML and verify mdevType property and hostdev uuid are correct.
3. Reboot VMs 3 times.
     Verify VMs are running properly after each reboot.
4. While VMs are running, Change Cluster compatibility version to 4.2
     After compatibility changed, Verify VMs are still running properly.
     In the Webadmin VMs tab, verify a delta icon appears next to each VM, indicating that a VM restart is required.
5. While VMs are running, change VMs custom property mdev_type to a different Nvidia instance and run VMs.
     Verify popup message appears stating that VMs restart is required.
6. Restart VMs
     Verify VMs are running properly and mdevType/hostdev uuid changed accordingly in VM XML and Webadmin -> VM devices.
7. Reboot VMs 3 times.
     Verify VMs are running properly after each reboot.
8. While VMs are running, change the DC compatibility version to 4.2.
      Verify VMs continue to run.
9. Power off -> Run VMs 3 times.
      Verify VMs are running properly after each restart.
10. Power off VMs, remove custom property mdev_type and run VMs.
     Observe VMs XML and Webadmin -> VM devices and verify mdevType/hostdev uuid removed from VM
11. Run 3 VMs with same mdev_type nvidia instance type
     Verify VMs are running properly and mdevType/hostdev uuid changed accordingly in VM XML and Webadmin -> VM devices.
12. Reboot VMs
      Verify VMs are running after each reboot with mdev device.

----------------------------------------------

Scenario C - Enable Nvidia ECC (so that mdev_type is added to the VM but the VM fails to start)
1. Enable Nvidia ECC support, and run VM with custom property mdev_type
2. VM failed to run.
3. Disable Nvidia ECC support, reboot host.
4. Verify vGPU device removed from /sys/class/mdev_bus/
5. Run VM and Verify VM is running properly with mdev_type device.
    Observe VM XML and Webadmin VM devices tab and verify correct mdev uuid is listed.
6. Reboot the VM 3 times.
    Verify the VM is running properly after each reboot with the correct hostdev uuid and mdevType property.
7. Remove mdev_type from the VM and run the VM.
    Verify the mdev_type device is removed from the VM XML and from VM devices in Webadmin.

----------------------------------------------

Scenario D - hook failure
1. Rename /usr/libexec/vdsm/hooks/before_vm_start/50_spicestream to 60_spicestream, edit the file and uncomment the sys.exit call at the end of the file so the hook fails (as in comment 1).
2. Add the mdev_type custom property to the VM and run the VM.
3. The VM fails to run.
4. Comment the sys.exit call out again and run the VM.
     Verify the VM is running properly with the correct mdevType property and hostdev uuid.
5. Reboot the VM 3 times.
     Verify the VM is running properly after each reboot with the correct hostdev uuid and mdevType property.
6. Remove mdev_type from the VM and run the VM.
     Verify the mdev_type device is removed from the VM XML and from VM devices in Webadmin.

