Do you know if this continues to happen in rhel-8.2? The fixes for the sg3_utils bzs #1746414 and #1785062 added udev rules that should set SCSI_IDENT_*, so this will hopefully work correctly now.
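One way to verify whether the new udev rules take effect: multipath's "property" blacklist exception keeps a path only if at least one of its udev properties matches the regex, so the SCSI_IDENT_*/ID_WWN properties should now show up in udevadm output. A minimal sketch (the device name and property values below are illustrative samples, not from this system):

```shell
# Sketch: check whether a path device exports udev properties matched by
# the blacklist_exceptions regex "(SCSI_IDENT_|ID_WWN)".
# On a live system you would query the real device, e.g.:
#   udevadm info --query=property --name=/dev/sdb
# Here we filter a made-up sample of that output instead.
sample='DEVNAME=/dev/sdb
ID_WWN=0x204e7317153404e2
SCSI_IDENT_LUN_NAA_REG=204e7317153404e246c9ce900d54f5505'
matched=$(printf '%s\n' "$sample" | grep -E '^(SCSI_IDENT_|ID_WWN)')
if [ -n "$matched" ]; then
    echo "path kept (matching properties present)"
else
    echo "path would be blacklisted (no matching properties)"
fi
```

If the grep comes back empty on a real path device, the property exception will blacklist it, which matches the faulty-path symptom in this bug.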
Great. I'm closing this as a duplicate of 1785062, which adds /usr/lib/udev/rules.d/61-scsi-sg3_id.rules, and fixes this problem.
*** This bug has been marked as a duplicate of bug 1785062 ***
Description of problem:
On RHEL 8.x (8.0 and 8.1) systems with LUNs mapped from Nimble Storage controllers, the blacklist_exceptions entry property "(SCSI_IDENT_|ID_WWN)" in multipath.conf causes the paths to be displayed as faulty in multipath -ll output. See the output below:

[root@rtp-hpe-ops07 ~]# multipath -ll
mpatha (204e7317153404e246c9ce900d54f5505) dm-0 ##,##
size=250G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- #:#:#:# sdb 8:16  active faulty running
| |- #:#:#:# sdc 8:32  active faulty running
| |- #:#:#:# sdg 8:96  active faulty running
| `- #:#:#:# sdh 8:112 active faulty running
`-+- policy='service-time 0' prio=0 status=enabled
  |- #:#:#:# sdd 8:48  active faulty running
  |- #:#:#:# sde 8:64  active faulty running
  |- #:#:#:# sdf 8:80  active faulty running
  `- #:#:#:# sdi 8:128 active faulty running

multipath.conf looks like below:

defaults {
    user_friendly_names yes
}
blacklist {
}
blacklist_exceptions {
    property "(SCSI_IDENT_|ID_WWN)"
}
devices {
    device {
        product              "Server"
        failback             immediate
        path_grouping_policy group_by_prio
        no_path_retry        30
        dev_loss_tmo         infinity
        hardware_handler     "1 alua"
        fast_io_fail_tmo     5
        rr_min_io_rq         1
        vendor               "Nimble"
        rr_weight            uniform
        path_checker         tur
        prio                 "alua"
        path_selector        "service-time 0"
    }
}

Once the blacklist exception entry is commented out, the multipath -ll output reports the path states correctly.
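For reference, the workaround mentioned above is just commenting out the property line (fragment of the multipath.conf shown; a workaround, not a fix):

```
blacklist_exceptions {
    # property "(SCSI_IDENT_|ID_WWN)"
}
```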
See below:

[root@rtp-hpe-ops07 ~]# multipath -ll
mpatha (204e7317153404e246c9ce900d54f5505) dm-0 Nimble,Server
size=250G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 3:0:0:0 sdb 8:16  active ready running
| |- 3:0:1:0 sdc 8:32  active ready running
| |- 5:0:1:0 sdg 8:96  active ready running
| `- 5:0:2:0 sdh 8:112 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  |- 3:0:2:0 sdd 8:48  active ghost running
  |- 3:0:3:0 sde 8:64  active ghost running
  |- 5:0:0:0 sdf 8:80  active ghost running
  `- 5:0:3:0 sdi 8:128 active ghost running

Version-Release number of selected component (if applicable):
device-mapper-event-1.02.163-5.el8.x86_64
device-mapper-libs-1.02.163-5.el8.x86_64
device-mapper-persistent-data-0.8.5-2.el8.x86_64
device-mapper-event-libs-1.02.163-5.el8.x86_64
device-mapper-multipath-libs-0.8.0-5.el8.x86_64
device-mapper-1.02.163-5.el8.x86_64
device-mapper-multipath-0.8.0-5.el8.x86_64

How reproducible:

Steps to Reproduce:
1. Install RHEL 8.x and enable multipath.
2. Create and map a few LUNs from a Nimble Storage controller to the server.
3. Add the below entry to the multipath.conf file:
   blacklist_exceptions {
       property "(SCSI_IDENT_|ID_WWN)"
   }
4. Reload multipath.
5. Run the command multipath -ll.

Actual results:
Paths are listed as faulty:

[root@rtp-hpe-ops07 ~]# multipath -ll
mpatha (204e7317153404e246c9ce900d54f5505) dm-0 ##,##
size=250G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- #:#:#:# sdb 8:16  active faulty running
| |- #:#:#:# sdc 8:32  active faulty running
| |- #:#:#:# sdg 8:96  active faulty running
| `- #:#:#:# sdh 8:112 active faulty running
`-+- policy='service-time 0' prio=0 status=enabled
  |- #:#:#:# sdd 8:48  active faulty running
  |- #:#:#:# sde 8:64  active faulty running
  |- #:#:#:# sdf 8:80  active faulty running
  `- #:#:#:# sdi 8:128 active faulty running

Expected results:
Paths should be displayed as below:

[root@rtp-hpe-ops07 ~]# multipath -ll
mpatha (204e7317153404e246c9ce900d54f5505) dm-0 Nimble,Server
size=250G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 3:0:0:0 sdb 8:16  active ready running
| |- 3:0:1:0 sdc 8:32  active ready running
| |- 5:0:1:0 sdg 8:96  active ready running
| `- 5:0:2:0 sdh 8:112 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  |- 3:0:2:0 sdd 8:48  active ghost running
  |- 3:0:3:0 sde 8:64  active ghost running
  |- 5:0:0:0 sdf 8:80  active ghost running
  `- 5:0:3:0 sdi 8:128 active ghost running

Additional info:
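The faulty-path symptom can also be checked mechanically. A small sketch (the sample lines below mirror two lines of the faulty output; on a live system you would pipe multipath -ll into the grep instead):

```shell
# Sketch: count paths reported as "faulty" in multipath -ll output.
# On a live system:  multipath -ll | grep -c 'faulty'
# Here we reuse a two-line sample of the faulty output from this report.
sample='| |- #:#:#:# sdb 8:16 active faulty running
`- #:#:#:# sdi 8:128 active faulty running'
faulty=$(printf '%s\n' "$sample" | grep -c 'faulty')
echo "faulty paths: $faulty"
```

A nonzero count while the storage is healthy points at the path checker failing, as with the property blacklist exception here.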