Bug 2221602 - libvirt is losing paths to vhba storage pool [NEEDINFO]
Summary: libvirt is losing paths to vhba storage pool
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: libvirt
Version: 8.7
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
: ---
Assignee: Ján Tomko
QA Contact: Meina Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-10 09:12 UTC by Marian Jankular
Modified: 2023-08-07 07:45 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:
mjankula: needinfo? (jtomko)


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-161847 0 None None None 2023-07-10 09:13:58 UTC

Description Marian Jankular 2023-07-10 09:12:41 UTC
Description of problem:
libvirt is losing paths to vhba storage pool

Version-Release number of selected component (if applicable):
libvirt-daemon-8.0.0-10.module+el8.7.0+16689+53d59bc2.x86_64

How reproducible:
Every time in the customer's environment.

Steps to Reproduce:
1. Create two vHBA pools (a minimal sketch follows).
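
A minimal sketch of such a pool definition and the commands to bring it up (the parent adapter and all WWNN/WWPN values below are illustrative placeholders and must match the local HBA; the XML actually used for verification appears in comment 10):

# cat vhba-pool-example.xml
<pool type="scsi">
  <name>vhba-example</name>
  <source>
    <adapter type='fc_host' parent='scsi_host5' managed='yes' wwnn='20000000c9831b4b' wwpn='10000000c9831b4b'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
# virsh pool-define vhba-pool-example.xml
# virsh pool-start vhba-example

Repeat with a second XML file that uses a different wwpn to get the second vHBA pool.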


Actual results:
After some time, libvirt no longer sees some of the LUN paths.

Expected results:
The OS itself does not detect any issues with those paths; it appears to be purely a libvirt issue.
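
Whether the OS still sees the paths can be checked outside libvirt, for example (illustrative commands; the exact tooling depends on the host setup):

# ls -l /dev/disk/by-path/ | grep fc-
# lsscsi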

Additional info:
Even after restarting the libvirt service, the paths in the storage pools are not visible. If the storage pools are destroyed, undefined, defined, and started again, the paths become visible.
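
The recovery sequence described above corresponds roughly to the following virsh commands (pool name and XML file are illustrative, not taken from the customer environment):

# virsh pool-destroy fc-pool1
# virsh pool-undefine fc-pool1
# virsh pool-define vhba-pool1.xml
# virsh pool-start fc-pool1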
libvirt complains about

Comment 9 Jaroslav Suchanek 2023-07-14 12:38:00 UTC
Assigning to Jano for further investigation.

Comment 10 Meina Li 2023-07-17 09:47:40 UTC
I can't reproduce it now. Due to limitations of the test environment, I can only test two LUNs on the same port at the moment. Please point out if there is anything wrong with my test steps.

Test Version:
libvirt-8.0.0-10.4.module+el8.7.0+18295+4ee500a4.x86_64
qemu-kvm-6.2.0-22.module+el8.7.0+18170+646069c1.2.x86_64

Test Steps:
1. Prepare two vhba pools.
# cat vhba-pool1.xml 
<pool type="scsi">
  <name>fc-pool1</name>
  <source>
    <adapter type='fc_host' parent_wwnn="2000f4e9d4eb02c9" parent_wwpn="2001f4e9d4eb02c9" managed='yes' wwnn="2001f4e9d4eb02c9" wwpn="1000000000000001"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>

# cat vhba-pool2.xml 
<pool type="scsi">
  <name>fc-pool2</name>
  <source>
    <adapter type='fc_host' parent_wwnn="2000f4e9d4eb02c9" parent_wwpn="2001f4e9d4eb02c9" managed='yes' wwnn="2001f4e9d4eb02c9" wwpn="1000000000000002"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
2. Define and start the first pool.
# virsh pool-define vhba-pool1.xml 
Pool fc-pool1 defined from vhba-pool1.xml
# virsh pool-start fc-pool1
Pool fc-pool1 started
# virsh vol-list fc-pool1 --details
 Name         Path                                                               Type    Capacity    Allocation
-----------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-0   block   10.00 GiB   10.00 GiB
 unit:0:1:1   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB
3. Refresh the pool and check the vol.
# virsh pool-refresh fc-pool1
Pool fc-pool1 refreshed

# virsh pool-list --all
 Name       State    Autostart
--------------------------------
 fc-pool1   active   no
 images     active   yes

# virsh vol-list fc-pool1 --details
 Name         Path                                                               Type    Capacity    Allocation
-----------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-0   block   10.00 GiB   10.00 GiB
 unit:0:1:1   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB
4. Define and start the second pool.
# virsh pool-define vhba-pool2.xml 
Pool fc-pool2 defined from vhba-pool2.xml

# virsh pool-start fc-pool2
Pool fc-pool2 started

# virsh vol-list fc-pool2 --details
 Name         Path                                                             Type    Capacity    Allocation
---------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/pci-0000:06:00.1-fc-0x50050768030939b7-lun-0   block   15.00 GiB   15.00 GiB
 unit:0:1:1   /dev/disk/by-path/pci-0000:06:00.1-fc-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB
5. Refresh the pool and check the vol.
# virsh pool-refresh fc-pool2
Pool fc-pool2 refreshed

# virsh vol-list fc-pool1 --details
 Name         Path                                                               Type    Capacity    Allocation
-----------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-0   block   10.00 GiB   10.00 GiB
 unit:0:1:1   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB

We found a previous WONTFIX bug, "Bug 1665458 - When different fc connected block devices having same backend luns, only the latest ones displayed under /dev/disk/by-path", which is related to /dev/disk/by-path. So I'm not sure whether the issue will reappear for the customer if they switch to by-id.
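
If they do try by-id, only the target path in the pool XML would need to change, e.g. (a sketch, not verified against the customer's setup):

<target>
  <path>/dev/disk/by-id</path>
</target>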

