Bug 1687715 - Different behaviors when creating a storage pool via NPIV in RHEL 7.5 and RHEL 7.6 [rhel-7.6.z]
Summary: Different behaviors when creating a storage pool via NPIV in RHEL 7.5 and RHEL 7.6 ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: John Ferlan
QA Contact: yisun
URL:
Whiteboard:
Depends On: 1657468
Blocks:
 
Reported: 2019-03-12 08:25 UTC by RAD team bot copy to z-stream
Modified: 2019-04-23 14:29 UTC
CC List: 10 users

Fixed In Version: libvirt-4.5.0-10.el7_6.7
Doc Type: Bug Fix
Doc Text:
Cause: The algorithm used to manage volumes changed from a linked list to one based on a hash table that uses keys to look up volumes. NPIV LUNs share the same serial value, which is used as one of the unique keys.
Consequence: Adding an NPIV LUN's volume to the hash table would fail, since the unique key was already in use.
Fix: Alter the name used to generate the unique key for an NPIV LUN to include the LUN's port value.
Result: All NPIV LUNs can have a unique key for the hash table and will be listed in the storage pool. (See the illustrative serial check after this field list.)
Clone Of: 1657468
Environment:
Last Closed: 2019-04-23 14:29:07 UTC
Target Upstream Version:
Embargoed:
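
As an illustration of the Doc Text above (a hedged sketch; the device names are hypothetical and libvirt's actual serial lookup may differ in detail), both paths to the same NPIV backend volume report a single serial, so a key derived from the serial alone collides:

# Serial taken from the multipath output in comment 5; both paths
# report the same value, hence the hash-table key clash before the fix.
udevadm info --query=property --name=/dev/sdf | grep ID_SERIAL=
udevadm info --query=property --name=/dev/sdg | grep ID_SERIAL=
# Expected on both: ID_SERIAL=360050763008084e6e0000000000001b4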




Links
System ID | Private | Priority | Status | Summary | Last Updated
Red Hat Knowledge Base (Solution) 3990621 | 0 | None | None | None | 2019-03-15 12:45:03 UTC
Red Hat Product Errata RHBA-2019:0821 | 0 | None | None | None | 2019-04-23 14:29:09 UTC

Description RAD team bot copy to z-stream 2019-03-12 08:25:13 UTC
This bug has been copied from bug #1657468 and proposed for backport to the 7.6 z-stream (EUS).

Comment 5 yisun 2019-03-28 06:21:18 UTC
We do not have exactly the same environment as the reporter (4 LUNs pointing to the same backend storage); our lab is limited to 2 LUNs pointing to the same backend storage. But the scenario should still be sufficient to verify this bug.


1. Reproduced on libvirt-4.5.0-10.el7.x86_64
[root@ibm-x3250m5-04 ~]# virsh pool-undefine vp
Pool vp has been undefined

[root@ibm-x3250m5-04 ~]# cat pool
<pool type='scsi'>
<name>vp</name>
<source>
<adapter type='fc_host' wwnn='20000000c99e2b80' wwpn='1000000000000009' parent='scsi_host6'/>
</source>
<target>
<path>/dev/disk/by-path</path>
<permissions>
<mode>0700</mode>
<owner>0</owner>
<group>0</group>
</permissions>
</target>
</pool>
[root@ibm-x3250m5-04 ~]# virsh pool-define pool
Pool vp defined from pool

[root@ibm-x3250m5-04 ~]# virsh pool-start vp
Pool vp started
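
(Side note, a hedged check that was not part of the original run: starting a pool whose adapter is type='fc_host' with a parent makes libvirt create the NPIV vHBA on that parent HBA. The new vHBA should expose the pool's wwpn as its port_name in sysfs:)

# the vHBA created for the pool should carry the wwpn from the pool XML
grep -i 1000000000000009 /sys/class/fc_host/host*/port_name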

[root@ibm-x3250m5-04 ~]# virsh vol-list vp
Name Path
------------------------------------------------------------------------------
unit:0:0:0 /dev/disk/by-path/pci-0000:20:00.0-vport-0x1000000000000009-fc-0x50050768030939b6-lun-0
<=== only one LUN displayed in the pool
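
(For contrast, a hedged check: the kernel itself sees both LUNs behind the vHBA at this point, so the missing volume is a libvirt pool-refresh failure rather than a device problem; the vport value comes from the pool XML above:)

ls /dev/disk/by-path/ | grep vport-0x1000000000000009
# expected: two ...-lun-0 entries, one per fc target port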

2. Verified on libvirt-4.5.0-10.el7_6.7.x86_64
[root@ibm-x3250m5-04 ~]# cat pool
<pool type='scsi'>
<name>vp</name>
<source>
<adapter type='fc_host' wwnn='20000000c99e2b80' wwpn='1000000000000009' parent='scsi_host6'/>
</source>
<target>
<path>/dev/disk/by-path</path>
<permissions>
<mode>0700</mode>
<owner>0</owner>
<group>0</group>
</permissions>
</target>
</pool>
[root@ibm-x3250m5-04 ~]# virsh pool-define pool
Pool vp defined from pool

[root@ibm-x3250m5-04 ~]# virsh pool-start vp
Pool vp started

[root@ibm-x3250m5-04 ~]# multipath -ll
mpathd (360050763008084e6e0000000000001b4) dm-8 IBM ,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 17:0:1:0 sdg 8:96 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 17:0:0:0 sdf 8:80 active ready running

[root@ibm-x3250m5-04 ~]# virsh vol-list vp
Name Path
------------------------------------------------------------------------------
unit:0:0:0 /dev/disk/by-path/pci-0000:20:00.0-vport-0x1000000000000009-fc-0x50050768030939b7-lun-0
unit:0:1:0 /dev/disk/by-path/pci-0000:20:00.0-vport-0x1000000000000009-fc-0x50050768030939b6-lun-0
<==== all of the LUNs displayed as vols in the pool (note the two paths differ in the fc target port, 0x50050768030939b7 vs 0x50050768030939b6, matching the per-port key described in the Doc Text)



The result is PASSED.
I'll cover the risk area with the automated test cases for NPIV and storage pools. This bug will be set to VERIFIED when those jobs have finished and no regression issues are found. Thx

Comment 6 yisun 2019-03-29 05:38:55 UTC
Checked the NPIV and storage automation jobs: no regression failures, so marking this bz as VERIFIED.

Comment 8 errata-xmlrpc 2019-04-23 14:29:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0821

