Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED". If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", will have a little "two-footprint" icon next to it, and will direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will also be available in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 2221602

Summary: libvirt is losing paths to vHBA storage pool
Product: Red Hat Enterprise Linux 9
Reporter: Marian Jankular <mjankula>
Component: libvirt
Assignee: Virtualization Maintenance <virt-maint>
libvirt sub component: Storage
QA Contact: Meina Li <meili>
Status: CLOSED MIGRATED
Docs Contact:
Severity: high
Priority: low
CC: hhan, jsuchane, jtomko, lmen, meili, virt-maint, yafu
Version: 9.2
Keywords: MigratedToJIRA, Triaged
Target Milestone: rc
Flags: pm-rhel: mirror+
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-09-22 17:20:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Marian Jankular 2023-07-10 09:12:41 UTC
Description of problem:
libvirt is losing paths to a vHBA storage pool

Version-Release number of selected component (if applicable):
libvirt-daemon-8.0.0-10.module+el8.7.0+16689+53d59bc2.x86_64

How reproducible:
Every time at the customer's end.

Steps to Reproduce:
1. Create two vHBA pools (a sketch of one such pool definition follows below).
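
For reference, a vHBA pool of this kind is a "scsi" pool with an "fc_host" adapter; a minimal sketch of such a definition is shown here (the name and all WWNN/WWPN values are placeholders, not the customer's actual values):

<pool type="scsi">
  <name>vhba-pool-example</name>
  <source>
    <adapter type='fc_host' parent_wwnn="20000000c9831b4b" parent_wwpn="10000000c9831b4b" managed='yes' wwnn="5001a4ace3ee047d" wwpn="5001a4a93526d0a1"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>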


Actual results:
After some time, libvirt no longer sees some of the paths to the LUNs.

Expected results:
The OS itself does not detect any issues with those paths; it appears to be purely a libvirt issue.
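
A quick way to compare what the OS exposes with what libvirt reports for the pool (a sketch only; the pool name fc-pool1 is a placeholder):

# ls -l /dev/disk/by-path/ | grep fc-
# virsh vol-list fc-pool1 --details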

Additional info:
Even after a restart of the libvirt service, the storage pool paths are not visible. If the storage pools are destroyed, undefined, defined, and started again, the paths become visible again (see the command sketch below).
libvirt complains about
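
A minimal sketch of the recreate-the-pool workaround described above (the pool name fc-pool1 and the XML file vhba-pool1.xml are placeholders):

# virsh pool-destroy fc-pool1
# virsh pool-undefine fc-pool1
# virsh pool-define vhba-pool1.xml
# virsh pool-start fc-pool1
# virsh vol-list fc-pool1 --details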

Comment 9 Jaroslav Suchanek 2023-07-14 12:38:00 UTC
Assigning to Jano for further investigation.

Comment 10 Meina Li 2023-07-17 09:47:40 UTC
I can't reproduce it now. Due to the limitations of the test environment, I can only test two LUNs on the same port. Please point out if there is anything wrong with my test steps.

Test Version:
libvirt-8.0.0-10.4.module+el8.7.0+18295+4ee500a4.x86_64
qemu-kvm-6.2.0-22.module+el8.7.0+18170+646069c1.2.x86_64

Test Steps:
1. Prepare two vhba pools.
# cat vhba-pool1.xml 
<pool type="scsi">
<name>fc-pool1</name>
<source>
<adapter type='fc_host' parent_wwnn="2000f4e9d4eb02c9" parent_wwpn="2001f4e9d4eb02c9" managed='yes' wwnn="2001f4e9d4eb02c9" wwpn="1000000000000001"/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>

# cat vhba-pool2.xml 
<pool type="scsi">
<name>fc-pool2</name>
<source>
<adapter type='fc_host' parent_wwnn="2000f4e9d4eb02c9" parent_wwpn="2001f4e9d4eb02c9" managed='yes' wwnn="2001f4e9d4eb02c9" wwpn="1000000000000002"/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
2. Define and start the first pool.
# virsh pool-define vhba-pool1.xml 
Pool fc-pool1 defined from vhba-pool1.xml
# virsh pool-start fc-pool1
Pool fc-pool1 started
# virsh vol-list fc-pool1 --details
 Name         Path                                                               Type    Capacity    Allocation
-----------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-0   block   10.00 GiB   10.00 GiB
 unit:0:1:1   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB
3. Refresh the pool and check the vol.
# virsh pool-refresh fc-pool1
Pool fc-pool1 refreshed

# virsh pool-list --all
 Name       State    Autostart
--------------------------------
 fc-pool1   active   no
 images     active   yes

# virsh vol-list fc-pool1 --details
 Name         Path                                                               Type    Capacity    Allocation
-----------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-0   block   10.00 GiB   10.00 GiB
 unit:0:1:1   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB
4. Define and start the second pool.
# virsh pool-define vhba-pool2.xml 
Pool fc-pool2 defined from vhba-pool2.xml

# virsh pool-start fc-pool2
Pool fc-pool2 started

# virsh vol-list fc-pool2 --details
 Name         Path                                                             Type    Capacity    Allocation
---------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/pci-0000:06:00.1-fc-0x50050768030939b7-lun-0   block   15.00 GiB   15.00 GiB
 unit:0:1:1   /dev/disk/by-path/pci-0000:06:00.1-fc-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB
5. Refresh the pool and check the vol.
# virsh pool-refresh fc-pool2
Pool fc-pool2 refreshed

# virsh vol-list fc-pool1 --details
 Name         Path                                                               Type    Capacity    Allocation
-----------------------------------------------------------------------------------------------------------------
 unit:0:1:0   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-0   block   10.00 GiB   10.00 GiB
 unit:0:1:1   /dev/disk/by-path/fc-0x1000000000000001-0x50050768030939b7-lun-1   block   20.00 GiB   20.00 GiB

We found a previous WONTFIX bug, "Bug 1665458 - When different fc connected block devices having same backend luns, only the latest ones displayed under /dev/disk/by-path", which is related to /dev/disk/by-path. So I'm not sure whether the issue will reappear for the customer if they switch to by-id.
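
If the customer does try by-id, the only change is the pool's <target> path; a sketch of the relevant fragment (whether the by-id names stay stable for these LUNs has not been verified here):

<target>
  <path>/dev/disk/by-id</path>
</target>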

Comment 18 RHEL Program Management 2023-09-22 17:18:47 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 19 RHEL Program Management 2023-09-22 17:20:32 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.