Description of problem:
libvirt is losing paths to a vHBA storage pool.

Version-Release number of selected component (if applicable):
libvirt-daemon-8.0.0-10.module+el8.7.0+16689+53d59bc2.x86_64

How reproducible:
Every time at the customer's end.

Steps to Reproduce:
1. Create 2 vHBA pools.

Actual results:
libvirt does not see some of the paths of the LUNs after some time.

Expected results:
The OS itself does not detect any issues with those paths; it seems to be just a libvirt issue.

Additional info:
Even after a libvirt service restart the storage pools are not visible. If the storage pools are destroyed, undefined, defined and started again, the paths are visible again. libvirt complains about
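For triage, one way to cross-check the OS view against libvirt's view on the affected host could look like the following. This is only a sketch: the pool name and the WWPN filter are taken from the test configuration further down in this bug, not from the customer's environment, and would have to be replaced with their actual values.

# LUN symlinks the kernel currently exposes for the vHBA port
ls -l /dev/disk/by-path/ | grep fc-0x1000000000000001
# SCSI devices the OS sees (requires the lsscsi package)
lsscsi
# vport state as the kernel sees it
grep -H . /sys/class/fc_host/host*/port_state
# what libvirt reports for the same pool
virsh vol-list fc-pool1 --details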
Assigning to Jano for further investigation.
I can't reproduce it now. Due to the limitations of the test environment, I can only test two LUNs on the same port at the moment. Please point out if there is something wrong with my test steps.

Test Version:
libvirt-8.0.0-10.4.module+el8.7.0+18295+4ee500a4.x86_64
qemu-kvm-6.2.0-22.module+el8.7.0+18170+646069c1.2.x86_64

Test Steps:

1. Prepare two vhba pools.

# cat vhba-pool1.xml
<pool type="scsi">
  <name>fc-pool1</name>
  <source>
    <adapter type='fc_host' parent_wwnn="2000f4e9d4eb02c9" parent_wwpn="2001f4e9d4eb02c9" managed='yes' wwnn="2001f4e9d4eb02c9" wwpn="1000000000000001"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>

# cat vhba-pool2.xml
<pool type="scsi">
  <name>fc-pool2</name>
  <source>
    <adapter type='fc_host' parent_wwnn="2000f4e9d4eb02c9" parent_wwpn="2001f4e9d4eb02c9" managed='yes' wwnn="2001f4e9d4eb02c9" wwpn="1000000000000002"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>

2. Define and start the first pool.

# virsh pool-define vhba-pool1.xml
Pool fc-pool1 defined from vhba-pool1.xml

# virsh pool-start fc-pool1
Pool fc-pool1 started

# virsh vol-list fc-pool1 --details
 Name         Path                                                                Type    Capacity    Allocation
-----------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-0   block   10.00 GiB   10.00 GiB
 unit:0:1:1   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB

3. Refresh the pool and check the vols.

# virsh pool-refresh fc-pool1
Pool fc-pool1 refreshed

# virsh pool-list --all
 Name       State    Autostart
--------------------------------
 fc-pool1   active   no
 images     active   yes

# virsh vol-list fc-pool1 --details
 Name         Path                                                                Type    Capacity    Allocation
-----------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-0   block   10.00 GiB   10.00 GiB
 unit:0:1:1   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB

4. Define and start the second pool.

# virsh pool-define vhba-pool2.xml
Pool fc-pool2 defined from vhba-pool2.xml

# virsh pool-start fc-pool2
Pool fc-pool2 started

# virsh vol-list fc-pool2 --details
 Name         Path                                                              Type    Capacity    Allocation
---------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/pci-0000:06:00.1-fc-0x50050768030939b7-lun-0   block   15.00 GiB   15.00 GiB
 unit:0:1:1   /dev/disk/by-path/pci-0000:06:00.1-fc-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB

5. Refresh the pool and check the vols.

# virsh pool-refresh fc-pool2
Pool fc-pool2 refreshed

# virsh vol-list fc-pool1 --details
 Name         Path                                                                Type    Capacity    Allocation
-----------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-0   block   10.00 GiB   10.00 GiB
 unit:0:1:1   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB

We found a previous WONTFIX bug, "Bug 1665458 - When different fc connected block devices having same backend luns, only the latest ones displayed under /dev/disk/by-path", which is related to /dev/disk/by-path. So I'm not sure whether the issue will reappear for the customer if they switch to by-id; a by-id based pool definition is sketched below for reference.
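If the customer wants to test the by-id suggestion, the pool definition could be adjusted roughly like this. This is only a sketch reusing the adapter values from my test above: the file name is made up, the existing fc-pool2 would have to be destroyed and undefined first, it assumes the <target><path> element is still honored for scsi pools in this libvirt version, and whether this actually avoids the missing-path problem still needs to be verified.

# cat vhba-pool2-by-id.xml
<pool type="scsi">
  <name>fc-pool2</name>
  <source>
    <adapter type='fc_host' parent_wwnn="2000f4e9d4eb02c9" parent_wwpn="2001f4e9d4eb02c9" managed='yes' wwnn="2001f4e9d4eb02c9" wwpn="1000000000000002"/>
  </source>
  <target>
    <path>/dev/disk/by-id</path>
  </target>
</pool>

# virsh pool-define vhba-pool2-by-id.xml
# virsh pool-start fc-pool2
# virsh vol-list fc-pool2 --details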