Bug 1699274

Summary: RFE: Persist mediated devices on reboot
Product: Red Hat Enterprise Linux Advanced Virtualization
Reporter: Sylvain Bauza <sbauza>
Component: libvirt
Assignee: Jonathon Jongsma <jjongsma>
Status: CLOSED ERRATA
QA Contact: yafu <yafu>
Priority: high
Version: 8.0
CC: alex.williamson, chhu, cohuck, dyuan, eskultet, jdenemar, jsuchane, knoel, lcheng, lmen, rbalakri, xuzhang, yafu, yalzhang, zhetang, zhguo
Target Milestone: pre-dev-freeze
Keywords: FutureFeature, Triaged
Target Release: 8.1
Fixed In Version: libvirt-7.3.0-1.el8
Target Upstream Version: 7.3.0
Type: Feature Request
Last Closed: 2021-11-16 07:49:54 UTC
Bug Depends On: 1746043
Bug Blocks: 1758964

Description Sylvain Bauza 2019-04-12 09:49:04 UTC
Description of problem:

Since libvirt doesn't provide an API for mediated devices, we continue to rely on sysfs for creating and deleting mediated devices.

That said, those devices disappear when the host is rebooted, leaving guest definitions that reference mediated devices which no longer exist.


Version-Release number of selected component (if applicable):
Any.

How reproducible:

Always.

Steps to Reproduce:
1. Create a mediated device using sysfs (see the sketch after these steps)
2. Assign the mdev to a guest
3. Reboot the host
4. Check whether the mdev still exists
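
For reference, step 1 via raw sysfs looks roughly like this (the parent PCI address and mdev type are examples and depend on the hardware):

# UUID=$(uuidgen)
# echo "$UUID" > /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types/nvidia-12/create
# ls /sys/bus/mdev/devices/            # the new mdev shows up here
# echo 1 > /sys/bus/mdev/devices/"$UUID"/remove    # deletion works the same way

Nothing in this flow survives a reboot.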

Actual results:

The mdev disappears, leading to an inconsistent guest definition.

Expected results:
The mdev should be recreated.

Additional info:

Comment 3 Alex Williamson 2019-05-03 22:36:31 UTC
Hi Sylvain, sorry we didn't get to talk about this in person, but I'd like to understand why mdev devices are fundamentally different from SR-IOV VFs in this respect.  AIUI libvirt does not persist SR-IOV configurations across boots either, and like mdev I don't think libvirt actually creates SR-IOV VFs itself.  Please correct me if I'm mistaken in either of these accounts.  SR-IOV VF persistence can be accomplished in a number of ways, NetworkManager can enable VFs for SR-IOV NICs, module options can instantiate VFs, modprobe.d post install scripts can create VFs, custom systemd scripts, etc.  It makes sense to separate VF persistence from libvirt because libvirt is not necessarily the only consumer of VFs.  The same is true of mdev devices.  We're working on proposals upstream where mdev is also used for creating virtual devices used by drivers within the host kernel.  Would libvirt also be responsible for persisting those devices?
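
As a concrete example of that separation, a one-shot systemd unit is all it takes to persist VFs without libvirt's involvement (the unit name and PCI address here are made up):

# /etc/systemd/system/sriov-vfs.service
[Unit]
Description=Instantiate SR-IOV VFs at boot (example)

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs'

[Install]
WantedBy=multi-user.target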

If rather than persistence of mdev devices we want to change the requirement to dynamic instantiation of an mdev device in support of a VM, then it starts to make more sense for this to be managed through libvirt.  There is however a lot of metadata and policy that needs to be worked through to make that possible.  libvirt currently doesn't record what sort of mdev type backs a given uuid.  Presumably libvirt would need to understand the mdev type associated with a given uuid in order to manage the device.  Then come the policy decisions around where a device of a given type is created.  Is locality considered?  Is performance vs power consumption vs breadth of available mdev types considered?  Is libvirt intended to simply record the uuid to mdev type and parent device mapping and recreate it on the next instantiation of the VM?

It would be helpful to understand the usage model driving this request to determine if libvirt is really the correct target for this or if it should be managed elsewhere or if we can contribute to the discussion of your usage model to create a better solution.  Thanks, Alex

Comment 4 Alex Williamson 2019-05-15 19:50:15 UTC
Should we have a utility like driverctl that handles persistent virtual devices and include both mdev and SR-IOV VF use cases?  The SR-IOV side would overlap a bit with NetworkManager support for persistent VF NICs, but NM is specialized for network devices and can therefore do NIC specific things and would not make a good target for generic, non-NIC VF persistence.  libvirt seems not to be the right place for any of this as there are use cases beyond VMs and other management tools for VMs.

What would be reasonable interactions with such a utility?

The driverctl utility creates simple mappings of device to driver; I think this tool would need to index from the parent device to create either some number of VFs or some list of mdev-type:UUID pairs triggered from udev.

For the SR-IOV case the interface is limited to setting or clearing the number of VFs and managing the sriov_drivers_autoprobe attribute for a given PF device.  A listing interface would also be useful to list SR-IOV capable devices with current and possible VF configurations.
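
For reference, that interface is just a few sysfs attributes today (the PCI address is an example):

# cat /sys/bus/pci/devices/0000:01:00.0/sriov_totalvfs
# echo 0 > /sys/bus/pci/devices/0000:01:00.0/sriov_drivers_autoprobe
# echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs
# echo 0 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs    # clear the VFs again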

For mdev, I think a user would want to add and remove mdev types per parent, specifying the mdev type name and UUID.  A listing interface would be useful as well, showing available mdev types in the system (names, description, parent, available instances, etc), as well as existing mdevs with types and UUIDs.  vfio-ap support might get complicated, Cc Connie.
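
Sketching what such a utility's interface might look like (the tool name and exact flags here are hypothetical; compare the mdevctl commands used in the verification comments below):

# mdevtool types                          # list parents and available mdev types
# mdevtool define --parent 0000:84:00.0 --type nvidia-12 --uuid $(uuidgen)
# mdevtool list --defined                 # persisted mdevs
# mdevtool start --uuid <uuid>
# mdevtool stop --uuid <uuid>
# mdevtool undefine --uuid <uuid>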

The non-persistence of PCI addresses for parent devices is problematic, but is potentially a mostly theoretical problem as driverctl has ignored it.

Does anything like this already exist?  Is the NM support mentioned above the only shipping utility that does this sort of thing (with limited scope)?

Comment 5 Cornelia Huck 2019-05-16 09:33:57 UTC
(In reply to Alex Williamson from comment #4)
> Should we have a utility like driverctl that handles persistent virtual
> devices and include both mdev and SR-IOV VF use cases?  The SR-IOV side
> would overlap a bit with NetworkManager support for persistent VF NICs, but
> NM is specialized for network devices and can therefore do NIC specific
> things and would not make a good target for generic, non-NIC VF persistence.
> libvirt seems not to be the right place for any of this as there are use
> cases beyond VMs and other management tools for VMs.

I agree, a standalone utility looks like the best place for this.

> 
> What would be reasonable interactions with such a utility?
> 
> The driverctl utility creates simple mappings of device to driver, I think
> this tool would need to index from the parent device to create either some
> number of VFs or some list of mdev-type:UUID pairs triggered from udev.

We also need udev rules (or something else) to ensure that devices end up being
bound to the correct driver.
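
For the binding half, driverctl as it exists today already does this persistently, e.g. (device address hypothetical):

# driverctl set-override 0000:01:10.0 vfio-pci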

> 
> For the SR-IOV case the interface is limited to setting or clearing the
> number of VFs and managing the sriov_drivers_autoprobe attribute for a given
> PF device.  A listing interface would also be useful to list SR-IOV capable
> devices with current and possible VF configurations.
> 
> For mdev, I think a user would want to add and remove mdev types per parent,
> specifying the mdev type name and UUID.  A listing interface would be useful
> as well, showing available mdev types in the system (names, description,
> parent, available instances, etc), as well as existing mdevs with types and
> UUIDs.  vfio-ap support might get complicated, Cc Connie.

vfio-ap will need more care. The base ap driver also needs to be configured to
make sure that vfio can bind to the correct queues in the first place ('alternate
driver' infrastructure). The matrix device stuff also makes vfio-ap different
from other mdev devices, but I think this is something that can be handled. The
IBM maintainers of vfio-ap need to be involved with this upstream.

> 
> The non-persistence of PCI addresses for parent devices is problematic, but
> is potentially a mostly theoretical problem as driverctl has ignored it.

For vfio-ccw, we rely on the subchannel id, which is not exactly stable (the
device number of the attached devices is, but this is not exposed via sysfs on
the subchannel). This would be a problem on z/VM hosts (and QEMU, which uses
the same approach), as they assign subchannel ids dynamically; on LPAR, this
is usually hardcoded and unlikely to change, and LPAR hosts are our target anyway.

For vfio-ap, I do not expect problems with id persistence.

> 
> Does anything like this already exist?  Is the NM support mentioned above
> the only shipping utility that does this sort of thing (with limited scope)?

I dimly recall ideas being floated at conferences, but not anything else beyond
NM support.

Comment 6 Erik Skultety 2019-05-20 10:05:47 UTC
(In reply to Alex Williamson from comment #4)
> Should we have a utility like driverctl that handles persistent virtual
> devices and include both mdev and SR-IOV VF use cases?  The SR-IOV side
> would overlap a bit with NetworkManager support for persistent VF NICs, but
> NM is specialized for network devices and can therefore do NIC specific
> things and would not make a good target for generic, non-NIC VF persistence.
> libvirt seems not to be the right place for any of this as there are use
> cases beyond VMs and other management tools for VMs.

If handling of VFs is to be combined with mdevs (which makes sense IMHO) in a common scenario, then introducing a utility for such a specific use case seems like the way to go. However, unless this utility is built around a library exporting APIs, libvirt will still have to handle mdevs on its own. So, preferably, there should be an API libvirt could use (something like libudev) to query detailed information about mdevs and physical parent devices, and potentially to handle device creation and removal (libvirt could handle these on its own, but the information gathered might help in that process).

> 
> What would be reasonable interactions with such a utility?
> 
> The driverctl utility creates simple mappings of device to driver, I think
> this tool would need to index from the parent device to create either some
> number of VFs or some list of mdev-type:UUID pairs triggered from udev.
> 
> For the SR-IOV case the interface is limited to setting or clearing the
> number of VFs and managing the sriov_drivers_autoprobe attribute for a given
> PF device.  A listing interface would also be useful to list SR-IOV capable
> devices with current and possible VF configurations.
> 
> For mdev, I think a user would want to add and remove mdev types per parent,
> specifying the mdev type name and UUID.  A listing interface would be useful
> as well, showing available mdev types in the system (names, description,
> parent, available instances, etc), as well as existing mdevs with types and
> UUIDs.  vfio-ap support might get complicated, Cc Connie.
> 
> The non-persistence of PCI addresses for parent devices is problematic, but
> is potentially a mostly theoretical problem as driverctl has ignored it.
> 
> Does anything like this already exist?  Is the NM support mentioned above
> the only shipping utility that does this sort of thing (with limited scope)?

I'm not sure, but when I was hunting down a timing bug in libvirt/libudev which only occurred on M10s, which allow you to spawn up to 128 mdevs, I wrote a bash script that listed the parent devices and let you add/remove a device, specifying the parent, UUID, and the number of devices to be created/removed in a batch (the bash code turned out to be very ugly... and buggy).
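
Something along those lines, as a minimal sketch of the batch-create part only (not the actual script; paths per the kernel's mdev sysfs layout):

#!/bin/bash
# usage: ./mdev-batch.sh <parent-pci-addr> <mdev-type> <count>   (hypothetical script)
parent=$1; type=$2; count=$3
for _ in $(seq "$count"); do
    uuid=$(uuidgen)
    echo "$uuid" > "/sys/class/mdev_bus/$parent/mdev_supported_types/$type/create" &&
        echo "created $uuid"
done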

Comment 16 Jonathon Jongsma 2021-04-09 21:27:19 UTC
This has now been merged upstream and should be picked up when libvirt is rebased to 7.3. Relevant commits so far:

e2f82a3704f680fbb37a733476d870c19232c23e api: Add 'flags' param to virNodeDeviceCreate/Undefine()
e7b7c87a577223347f890aae29150e4e9c23cfe1 nodedev: fix release version in comments for new API
afda589d0574bd30b96f0d43489194f5b4c05355 nodedev: avoid delay when defining a new mdev
9e8e93dc6afbd2d5c9a7606983a24190e6105d77 nodedev: factor out function to add mediated devices
fd90678e3e71d513c1ecbb8856c29822ac36e177 nodedev: add docs about mdev attribute order
f25b13b6e5927da3b7e0546986a8d3f9a9fefe1b nodedev: fix hang when destroying an mdev in use
62a73c525ce514bb2dc1c560cb598da19c17611f nodedev: add ability to specify UUID for new mdevs
07666e292e731910208a86ee1cafab34af71e112 nodedev: add <uuid> element to mdev caps
45741a4a2d354a2bf13a782d2d30a46f0decdad3 virsh: add "nodedev-start" command
c0db1af2f8f35c0a65580169fb6398290c00d655 api: add virNodeDeviceCreate()
5dc935805e7897e795a18f6204b563bbac559533 virsh: add nodedev-undefine command
732a5eecbc26b417d856a28003332726c818eebf virsh: Factor out function to find node device
bb311cede795213f02938f68aaa5504548eccafd api: add virNodeDeviceUndefine()
f98c415f8a9c6f8073f2193e831f3397d57970d0 nodedev: refactor tests to support mdev undefine
725dfb6c36805fd836170a2fc2e7648e9f12b177 virsh: add nodedev-define command
7d5d29a72730182774ea9db02bae54d28df2dec7 virsh: Add --inactive, --all to nodedev-list
7e386cde1f7f6761ac189277e07d74f7d98a8254 api: add virNodeDeviceDefineXML()
a48a2abe601d3efbd97de4215ffa2d44ddf071c6 nodedev: add function to generate mdevctl define command
2c57b28191b93c48eeac4185dde46cc7bc70a370 nodedev: Refresh mdev devices when changes are detected
259ed0ff285f4a57aa2e53d91fcfc08d081e1e6f nodedev: handle mdevs that disappear from mdevctl
00b649d0cfdc0e9170ec2ff1afc43a25c08ebe49 nodedev: add helper functions to remove node devices
aa897d46d5894bf0f7f346588ea11caa4c81d3c8 nodedev: add mdevctl devices to node device list
94187b800472595db9c15c1843b572f327b99ca6 nodedev: add DEFINED/UNDEFINED lifecycle events
d4375403ff80f64eeace3f6b57a50f1f6914a31a nodedev: add persistence to virNodeDeviceObj
066c13de660318552c121c369dc6d811307e2f4f nodedev: add ability to list defined mdevs
58d093a55f8e183b5c14152c16706d19e1a2135a nodedev: add ability to parse mdevs from mdevctl
eb27a233f27380f59905d0a64457777d79f54049 tests: trivial change to mdevctl test macro
8fed1d9636c1d0be058c1ca93c9630e25380e04f nodedev: expose internal helper for naming devices
e3107a18623405adeeece4854cbea28b4a2e64a6 nodedev: fix docs for virConnectListAllNodeDevices()
b1bfe3e5c435fc42efea7e475621102f496032be nodedev: Add ability to filter by active state
b7a823177b836d82798d2ae104152368f097bacc nodedev: introduce concept of 'active' node devices
682a65a322ec9ad7dca5c786e5c6f76fe0c4d2b4 tests: remove extra trailing semicolon
ab1703191b4afca19e7289e3db56fb8d87e50ffa nodedev: capture and report stderror from mdevctl
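
In user-facing terms, these commits add a persistent define/start lifecycle for mdevs, roughly as follows (the verification comments below show complete runs):

# virsh nodedev-define mdev.xml              # persist the definition
# virsh nodedev-list --cap mdev --inactive   # defined but not instantiated
# virsh nodedev-start mdev_<name>            # instantiate it
# virsh nodedev-destroy mdev_<name>          # stop the instance; the definition remains
# virsh nodedev-undefine mdev_<name>         # drop the definition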

Comment 20 yafu 2021-05-26 02:53:36 UTC
Verified with the scenario below (test version noted in comment 21):

Scenario 1: Define/start/list an mdev device, then reboot the OS and check the mdev device after booting:
1.Prepare an mdev device XML:
#cat mdev.xml
<device>
  <name>mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39</name>
  <parent>0000:84:00.0</parent>
  <capability type='mdev'>
    <type id='nvidia-12'/>
    <uuid>341c76dc-ef0e-4d23-8a9d-a223d935ae39</uuid>
    <iommuGroup number='0'/>
  </capability>
</device>

2.Define the mdev device:
# virsh nodedev-define mdev.xml 
Node device 'mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39' defined from 'mdev.xml'

3.List the mdev device:
# virsh nodedev-list --cap mdev --inactive 
mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39

4.Start a guest with the mdev device:
#virsh edit vm1
...
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='off'>
      <source>
        <address uuid='341c76dc-ef0e-4d23-8a9d-a223d935ae39'/>
      </source>
      <alias name='ua-0febf033-9953-40e1-b803-2df42b0bac4e'/>
      <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </hostdev>
...
#virsh start vm1
error: Failed to start domain 'vm1'
error: device not found: mediated device '341c76dc-ef0e-4d23-8a9d-a223d935ae39' not found

5.Start the mdev device:
# virsh nodedev-start mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39
Device mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39 started

6.List the mdev device again:
# virsh nodedev-list --cap mdev
mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39

7.Start the guest again:
# virsh start vm1
Domain 'vm1' started

8.Log in to the guest and check the vGPU in the guest OS:
(guest os)# lspci | grep -i nvidia
09:00.0 VGA compatible controller: NVIDIA Corporation GM204GL [Tesla M60] (rev a1)

9.Try to destroy the mdev device which is used by the guest:
# virsh nodedev-destroy mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39 
error: Failed to destroy node device 'mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39'
error: internal error: Unable to destroy 'mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39': device in use

10.Restart libvirtd and check the mdev device:
# systemctl restart libvirtd; virsh nodedev-list --cap mdev
mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39

11.Reboot the OS and check the mdev device after the host boots:
# virsh nodedev-list --cap mdev --inactive 
mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39

Comment 21 yafu 2021-05-26 02:55:18 UTC
(In reply to yafu from comment #20)
> Verified with
> [...]

Forgot the test version; verified with libvirt-daemon-7.3.0-1.module+el8.5.0+11004+f4810536.x86_64.

Comment 22 yafu 2021-05-26 03:28:05 UTC
Verified with libvirt-daemon-7.3.0-1.module+el8.5.0+11004+f4810536.x86_64.

Scenario 2: Test that 'virsh nodedev-list' reflects changes made via mdevctl, and check nodedev lifecycle events:
1.Monitor nodedev events in a loop:
#virsh nodedev-event --all --loop

2.Open another terminal and define an mdev device with 'mdevctl':
# mdevctl define --uuid=8e21b099-b7a0-4c79-ad9e-6744362e2bee --parent=0000:84:00.0 --type=nvidia-12 -a

3.Check the event in terminal 1:
event 'lifecycle' for node device mdev_8e21b099_b7a0_4c79_ad9e_6744362e2bee: Defined

4.Use 'virsh nodedev-list' to list the mdev device:
# virsh nodedev-list --cap mdev --inactive 
mdev_8e21b099_b7a0_4c79_ad9e_6744362e2bee

5.Start the mdev device with 'mdevctl':
# mdevctl start --uuid=8e21b099-b7a0-4c79-ad9e-6744362e2bee

6.Check the event in terminal 1:
event 'lifecycle' for node device mdev_8e21b099_b7a0_4c79_ad9e_6744362e2bee: Created

7.Use 'virsh nodedev-list' to list the mdev device:
# virsh nodedev-list --cap mdev
mdev_8e21b099_b7a0_4c79_ad9e_6744362e2bee

8.Stop the mdev device with 'mdevctl':
# mdevctl stop --uuid=8e21b099-b7a0-4c79-ad9e-6744362e2bee

9.Check the event in terminal 1:
event 'lifecycle' for node device mdev_8e21b099_b7a0_4c79_ad9e_6744362e2bee: Deleted

10.Use 'virsh nodedev-list' to list the mdev device:
# virsh nodedev-list --inactive 
mdev_8e21b099_b7a0_4c79_ad9e_6744362e2bee

11.Undefine the mdev device with 'mdevctl':
# mdevctl undefine --uuid=8e21b099-b7a0-4c79-ad9e-6744362e2bee

12.Check the event in terminal 1:
event 'lifecycle' for node device mdev_8e21b099_b7a0_4c79_ad9e_6744362e2bee: Undefined

13.List all mdev devices; none are found:
#virsh nodedev-list --cap mdev --all
(no output)

Comment 23 yafu 2021-05-26 03:37:22 UTC
Verified with libvirt-daemon-7.3.0-1.module+el8.5.0+11004+f4810536.x86_64.

Scenario 3: Test 'virsh nodedev-create':
1.Create an mdev device:
#cat mdev.xml
<device>
  <name>mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39</name>
  <parent>pci_0000_84_00_0</parent>
  <capability type='mdev'>
    <type id='nvidia-12'/>
    <uuid>341c76dc-ef0e-4d23-8a9d-a223d935ae39</uuid>
    <iommuGroup number='0'/>
  </capability>
</device>

#virsh nodedev-create mdev.xml
Node device mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39 created from mdev.xml

2.Check the event:
event 'lifecycle' for node device mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39: Created

3.Destroy the mdev device:
# virsh nodedev-destroy mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39
Destroyed node device 'mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39'

4.Check the event:
event 'lifecycle' for node device mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39: Deleted

5.List all mdev devices:
# virsh nodedev-list --cap mdev --all | grep -i 341c
(no output)

Comment 24 yafu 2021-05-26 03:44:52 UTC
Verified with libvirt-daemon-7.3.0-1.module+el8.5.0+11004+f4810536.x86_64.

Scenario 4: Negative tests
1.Define the mdev device with the same uuid twice:
# virsh nodedev-define mdev.xml 
Node device 'mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39' defined from 'mdev.xml'
# virsh nodedev-define mdev.xml 
error: Failed to define node device from 'mdev.xml'
error: internal error: Unable to define mediated device: Cowardly refusing to overwrite existing config for 0000:84:00.0/341c76dc-ef0e-4d23-8a9d-a223d935ae39

2.Create the mdev device with the same uuid twice:
# virsh nodedev-create mdev.xml 
Node device mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39 created from mdev.xml
# virsh nodedev-create mdev.xml 
error: Failed to create node device from mdev.xml
error: internal error: Unable to start mediated device 'new device':

3.Start the same mdev device twice:
# virsh nodedev-start mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39
Device mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39 started
# virsh nodedev-start mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39
error: Failed to start device mdev_341c76dc_ef0e_4d23_8a9d_a223d935ae39
error: Requested operation is not valid: Device is already active

Comment 25 yafu 2021-05-31 03:14:01 UTC
According to comments 22-24, moving the bug to VERIFIED.

Comment 27 errata-xmlrpc 2021-11-16 07:49:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4684