Bug 1690515 - dasdb: failed to get udev uid: Invalid argument
Summary: dasdb: failed to get udev uid: Invalid argument
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: device-mapper-multipath
Version: 8.0
Hardware: s390x
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Ben Marzinski
QA Contact: Lin Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-19 15:46 UTC by Jakub Krysl
Modified: 2021-09-06 15:24 UTC
CC List: 8 users

Fixed In Version: device-mapper-multipath-0.8.3-1.el8
Doc Type: Bug Fix
Doc Text:
Cause: Multipath wasn't properly blacklisting paths before storing them and gathering information about them.
Consequence: Multipath was doing a lot of unnecessary work on blacklisted paths, potentially causing error messages to be printed.
Fix: Multipath now properly blacklists paths before storing them and gathering information about them.
Result: Multipath no longer prints unnecessary error messages from attempting to gather information about blacklisted paths.
Clone Of:
Environment:
Last Closed: 2020-04-28 16:57:53 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-28049 0 None None None 2021-09-06 15:24:44 UTC
Red Hat Product Errata RHBA-2020:1868 0 None None None 2020-04-28 16:58:09 UTC

Comment 7 Ben Marzinski 2019-05-01 23:10:28 UTC
Could you try reproducing this with 

verbosity 3

set in the defaults section of multipath.conf?

Comment 8 Jakub Krysl 2019-05-02 08:29:27 UTC
(In reply to Ben Marzinski from comment #7)
> Could you try reproducing this with 
> 
> verbosity 3
> 
> set in the defaults section of multipath.conf?
# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
dasda                      94:0    0 68.8G  0 disk 
├─dasda1                   94:1    0    1G  0 part /boot
└─dasda2                   94:2    0 67.8G  0 part 
  ├─rhel_ibm--z--134-root 253:0    0   50G  0 lvm  /
  ├─rhel_ibm--z--134-swap 253:1    0  2.1G  0 lvm  [SWAP]
  └─rhel_ibm--z--134-home 253:2    0 84.5G  0 lvm  /home
dasdb                      94:4    0 68.8G  0 disk 
└─dasdb1                   94:5    0 68.8G  0 part 
  └─rhel_ibm--z--134-home 253:2    0 84.5G  0 lvm  /home

# vi /etc/multipath.conf
defaults {
    user_friendly_names     yes
    verbosity 3
}
devices {
--snip--

# systemctl restart multipathd

# udevadm info -e
P: /devices/css0/0.0.000a/0.0.0121/block/dasdb
N: dasdb
S: disk/by-path/ccw-0.0.0121
E: DEVLINKS=/dev/disk/by-path/ccw-0.0.0121
E: DEVNAME=/dev/dasdb
E: DEVPATH=/devices/css0/0.0.000a/0.0.0121/block/dasdb
E: DEVTYPE=disk
E: DM_MULTIPATH_DEVICE_PATH=0
E: ID_PATH=ccw-0.0.0121
E: ID_PATH_TAG=ccw-0_0_0121
E: MAJOR=94
E: MINOR=4
E: MPATH_SBIN_PATH=/sbin
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: USEC_INITIALIZED=6318390

# udevadm trigger

# udevadm info -e
P: /devices/css0/0.0.000a/0.0.0121/block/dasdb
N: dasdb
S: disk/by-id/ccw-0X0121
S: disk/by-id/ccw-IBM.750000000FRB71.0261.40
S: disk/by-id/ccw-IBM.750000000FRB71.0261.40.00000000000187490000000000000000
S: disk/by-path/ccw-0.0.0121
E: DEVLINKS=/dev/disk/by-path/ccw-0.0.0121 /dev/disk/by-id/ccw-0X0121 /dev/disk/by-id/ccw-IBM.750000000FRB71.0261.40 /dev/disk/by-id/ccw-IBM.750000000FRB71.0261.40.00000000000187490000000000000000
E: DEVNAME=/dev/dasdb
E: DEVPATH=/devices/css0/0.0.000a/0.0.0121/block/dasdb
E: DEVTYPE=disk
E: DM_MULTIPATH_DEVICE_PATH=0
E: ID_BUS=ccw
E: ID_PATH=ccw-0.0.0121
E: ID_PATH_TAG=ccw-0_0_0121
E: ID_SERIAL=0X0121
E: ID_TYPE=disk
E: ID_UID=IBM.750000000FRB71.0261.40
E: ID_XUID=IBM.750000000FRB71.0261.40.00000000000187490000000000000000
E: MAJOR=94
E: MINOR=4
E: MPATH_SBIN_PATH=/sbin
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: USEC_INITIALIZED=6318390 

# less /var/log/messages
May  2 04:20:59 ibm-z-134 multipathd[1002]: --------shut down-------  
May  2 04:20:59 ibm-z-134 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
May  2 04:20:59 ibm-z-134 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
May  2 04:20:59 ibm-z-134 systemd[1]: Starting Device-Mapper Multipath Device Controller...
May  2 04:20:59 ibm-z-134 multipathd[7271]: --------start up--------
May  2 04:20:59 ibm-z-134 multipathd[7271]: read /etc/multipath.conf
May  2 04:20:59 ibm-z-134 multipathd[7271]: loading /lib64/multipath/libchecktur.so checker
May  2 04:20:59 ibm-z-134 multipathd[7271]: loading /lib64/multipath/libprioconst.so prioritizer   
May  2 04:20:59 ibm-z-134 multipathd[7271]: foreign library "nvme" loaded successfully
May  2 04:20:59 ibm-z-134 multipathd[7271]: set open fds limit to 1048576/1048576
May  2 04:20:59 ibm-z-134 multipathd[7271]: using fd 3 from sd_listen_fds
May  2 04:20:59 ibm-z-134 multipathd[7271]: uxsock: startup listener
May  2 04:20:59 ibm-z-134 multipathd[7271]: path checkers start up
May  2 04:20:59 ibm-z-134 multipathd[7271]: factorize_hwtable: duplicate device section for COMPELNT:Compellent Vol:(null) in /etc/multipath.conf
May  2 04:20:59 ibm-z-134 multipathd[7271]: No configuration dir '/etc/multipath/conf.d'
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: mask = 0x1f
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: dev_t = 94:0
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: size = 144244800
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: vendor = IBM
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: product = S/390 DASD ECKD
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: h:b:t:l = 0:0:288:0
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: 65534 cyl, 15 heads, 12 sectors/track, start at 0
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: get_state
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: detect_checker = yes (setting: multipath internal)
May  2 04:20:59 ibm-z-134 multipathd[7271]: loading /lib64/multipath/libcheckdirectio.so checker
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: path_checker = directio (setting: storage device configuration)
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: checker timeout = 30 s (setting: multipath internal)
May  2 04:20:59 ibm-z-134 multipathd[7271]: directio: starting new request
May  2 04:20:59 ibm-z-134 multipathd[7271]: directio: io finished 4096/0
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: directio state = up
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: uid_attribute = ID_UID (setting: storage device configuration)
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: no ID_UID attribute
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: failed to get udev uid: Invalid argument
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: mask = 0x1f
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: dev_t = 94:4
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: size = 144244800
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: vendor = IBM
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: product = S/390 DASD ECKD
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: h:b:t:l = 0:0:289:0
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: 65534 cyl, 15 heads, 12 sectors/track, start at 0
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: get_state
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: detect_checker = yes (setting: multipath internal)
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: path_checker = directio (setting: storage device configuration)
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: checker timeout = 30 s (setting: multipath internal)
May  2 04:20:59 ibm-z-134 multipathd[7271]: directio: starting new request
May  2 04:20:59 ibm-z-134 multipathd[7271]: directio: io finished 4096/0
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: directio state = up
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: uid_attribute = ID_UID (setting: storage device configuration)
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: no ID_UID attribute
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: failed to get udev uid: Invalid argument
May  2 04:20:59 ibm-z-134 multipathd[7271]: dm-0: device node name blacklisted
May  2 04:20:59 ibm-z-134 multipathd[7271]: dm-1: device node name blacklisted
May  2 04:20:59 ibm-z-134 multipathd[7271]: dm-2: device node name blacklisted
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasda: (IBM:S/390 DASD ECKD) vendor/product blacklisted
May  2 04:20:59 ibm-z-134 multipathd[7271]: directio checker refcount 2
May  2 04:20:59 ibm-z-134 multipathd[7271]: dasdb: (IBM:S/390 DASD ECKD) vendor/product blacklisted
May  2 04:20:59 ibm-z-134 multipathd[7271]: directio checker refcount 1  
May  2 04:20:59 ibm-z-134 multipathd[7271]: libdevmapper version 1.02.155-RHEL8 (2019-01-04)
May  2 04:20:59 ibm-z-134 multipathd[7271]: DM multipath kernel driver v1.13.0
May  2 04:20:59 ibm-z-134 systemd[1]: Started Device-Mapper Multipath Device Controller.
May  2 04:22:32 ibm-z-134 restraintd[5609]: *** Current Time: Thu May 02 04:22:32 2019 Localwatchdog at:  * Disabled! *
May  2 04:22:57 ibm-z-134 multipath[7360]: set open fds limit to 1048576/1048576
May  2 04:22:57 ibm-z-134 multipath[7360]: loading /lib64/multipath/libchecktur.so checker
May  2 04:22:57 ibm-z-134 multipath[7360]: loading /lib64/multipath/libprioconst.so prioritizer
May  2 04:22:57 ibm-z-134 multipath[7360]: foreign library "nvme" loaded successfully
May  2 04:22:57 ibm-z-134 multipath[7360]: dasdb: mask = 0x31
May  2 04:22:57 ibm-z-134 multipath[7360]: dasdb: dev_t = 94:4
May  2 04:22:57 ibm-z-134 multipath[7360]: dasdb: size = 144244800
May  2 04:22:57 ibm-z-134 multipath[7360]: dasdb: vendor = IBM
May  2 04:22:57 ibm-z-134 multipath[7360]: dasdb: product = S/390 DASD ECKD
May  2 04:22:57 ibm-z-134 multipath[7360]: dasdb: h:b:t:l = 0:0:289:0
May  2 04:22:57 ibm-z-134 multipath[7360]: dasdb: (IBM:S/390 DASD ECKD) vendor/product blacklisted
May  2 04:22:57 ibm-z-134 multipath[7360]: unloading const prioritizer
May  2 04:22:57 ibm-z-134 multipath[7360]: unloading tur checker
May  2 04:22:57 ibm-z-134 multipathd[7271]: uevent 'change' from '/devices/css0/0.0.000a/0.0.0121/block/dasdb'
May  2 04:22:57 ibm-z-134 multipathd[7271]: Forwarding 1 uevents
May  2 04:22:57 ibm-z-134 multipathd[7271]: dasdb: mask = 0x31
May  2 04:22:57 ibm-z-134 multipathd[7271]: dasdb: dev_t = 94:4
May  2 04:22:57 ibm-z-134 multipathd[7271]: dasdb: size = 144244800
May  2 04:22:57 ibm-z-134 multipathd[7271]: dasdb: vendor = IBM
May  2 04:22:57 ibm-z-134 multipathd[7271]: dasdb: product = S/390 DASD ECKD
May  2 04:22:57 ibm-z-134 multipathd[7271]: dasdb: h:b:t:l = 0:0:289:0
May  2 04:22:57 ibm-z-134 multipathd[7271]: dasdb: (IBM:S/390 DASD ECKD) vendor/product blacklisted
May  2 04:22:57 ibm-z-134 multipathd[7271]: dasdb: spurious uevent, path is blacklisted
May  2 04:22:57 ibm-z-134 multipath[7355]: set open fds limit to 1048576/1048576
May  2 04:22:57 ibm-z-134 multipath[7355]: loading /lib64/multipath/libchecktur.so checker
May  2 04:22:57 ibm-z-134 multipath[7355]: loading /lib64/multipath/libprioconst.so prioritizer
May  2 04:22:57 ibm-z-134 multipath[7355]: foreign library "nvme" loaded successfully
May  2 04:22:57 ibm-z-134 multipath[7355]: dasda: mask = 0x31
May  2 04:22:57 ibm-z-134 multipath[7355]: dasda: dev_t = 94:0
May  2 04:22:57 ibm-z-134 multipath[7355]: dasda: size = 144244800
May  2 04:22:57 ibm-z-134 multipath[7355]: dasda: vendor = IBM
May  2 04:22:57 ibm-z-134 multipath[7355]: dasda: product = S/390 DASD ECKD
May  2 04:22:57 ibm-z-134 multipath[7355]: dasda: h:b:t:l = 0:0:288:0
May  2 04:22:57 ibm-z-134 multipath[7355]: dasda: (IBM:S/390 DASD ECKD) vendor/product blacklisted
May  2 04:22:57 ibm-z-134 multipath[7355]: unloading const prioritizer
May  2 04:22:57 ibm-z-134 multipath[7355]: unloading tur checker
May  2 04:22:57 ibm-z-134 multipathd[7271]: uevent 'change' from '/devices/css0/0.0.0009/0.0.0120/block/dasda'
May  2 04:22:57 ibm-z-134 multipathd[7271]: uevent 'change' from '/devices/virtual/block/dm-0'
May  2 04:22:57 ibm-z-134 multipathd[7271]: uevent 'change' from '/devices/virtual/block/dm-1'
May  2 04:22:57 ibm-z-134 systemd-udevd[7335]: Process 'ccw_init' failed with exit code 1.
May  2 04:22:57 ibm-z-134 systemd-udevd[7337]: Process 'ccw_init' failed with exit code 1.
May  2 04:22:57 ibm-z-134 multipathd[7271]: uevent 'change' from '/devices/virtual/block/dm-2'
May  2 04:22:57 ibm-z-134 systemd-udevd[7341]: Process 'ccw_init' failed with exit code 1.
May  2 04:22:57 ibm-z-134 systemd-udevd[7342]: Process 'ccw_init' failed with exit code 1.
May  2 04:22:57 ibm-z-134 systemd-udevd[7338]: Process 'ccw_init' failed with exit code 1.
May  2 04:22:58 ibm-z-134 multipathd[7271]: Forwarding 4 uevents
May  2 04:22:58 ibm-z-134 multipathd[7271]: dasda: mask = 0x31
May  2 04:22:58 ibm-z-134 multipathd[7271]: dasda: dev_t = 94:0
May  2 04:22:58 ibm-z-134 multipathd[7271]: dasda: size = 144244800
May  2 04:22:58 ibm-z-134 multipathd[7271]: dasda: vendor = IBM
May  2 04:22:58 ibm-z-134 multipathd[7271]: dasda: product = S/390 DASD ECKD
May  2 04:22:58 ibm-z-134 multipathd[7271]: dasda: h:b:t:l = 0:0:288:0
May  2 04:22:58 ibm-z-134 multipathd[7271]: dasda: (IBM:S/390 DASD ECKD) vendor/product blacklisted
May  2 04:22:58 ibm-z-134 multipathd[7271]: dasda: spurious uevent, path is blacklisted
May  2 04:23:32 ibm-z-134 restraintd[5609]: *** Current Time: Thu May 02 04:23:32 2019 Localwatchdog at:  * Disabled! *

Comment 9 Ben Marzinski 2019-05-02 20:52:54 UTC
So if you add

blacklist_exceptions {
        device {
                vendor "IBM"
                product "S/390"
        }
}

to /etc/multipath.conf, that should make multipath work on these paths. It turns out that, for some reason, the builtin config for these devices blacklists all of them. By adding that exception, multipathd will no longer blacklist them. When that happens, multipathd will notice that the udev info is missing and automatically trigger a change event. You will still see the "failed to get udev uid" messages, but shortly after that the change event should occur and multipath will be able to use these paths.

I'll see if I can figure out where that builtin config came from, and why it is set to blacklist all devices of this type.
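
For reference, a minimal sketch of how the combined /etc/multipath.conf could look with this exception in place (the defaults section here is simply carried over from comment 8; keep whatever settings you already have), followed by the same restart used earlier in this bug:

# cat /etc/multipath.conf
defaults {
    user_friendly_names     yes
    verbosity 3
}

blacklist_exceptions {
        device {
                vendor "IBM"
                product "S/390"
        }
}

# systemctl restart multipathd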

Comment 10 Ben Marzinski 2019-05-02 21:26:28 UTC
Ah. It turns out that multipath isn't supposed to work with ECKD and FBA dasd devices. It's only supported on FCP dasd devices, so these should be blacklisted. So the answer is that these error messages appear because the devices don't set the necessary udev attribute until a change event occurs, and they aren't multipathed because they aren't supposed to be multipathed (at least using device-mapper-multipath; they apparently do their own multipathing). So, given that, I'm not sure that there actually is an issue here.

Comment 11 Jakub Krysl 2019-05-03 09:10:48 UTC
(In reply to Ben Marzinski from comment #10)
> Ah. It turns out that multipath isn't supposed to work with ECKD and FBA
> dasd devices. It's only supported on FCP dasd devices, so these should be
> blacklisted. So the answer is that these error messages appear because the
> devices don't set the necessary udev attribute until a change event occurs,
> and they aren't multipathed because they aren't supposed to be multipathed
> (at least using device-mapper-multipath; they apparently do their own
> multipathing). So, given that, I'm not sure that there actually is an
> issue here.

The thing is, I am not using them for multipath at all. I use multipath for iSCSI on system with dasd disks. 

So if they are not supported, should multipath even try them? The devices are blacklisted by default, so why is multipath discovering them? The manpage line "blacklist: This section defines which devices should be excluded from the multipath topology discovery." sounds to me like multipath should not touch them at all, or at least not complain about them. So the main issue probably comes down to this question: "Why is multipath complaining about a dasd device when it is blacklisted?"

Comment 12 Ben Marzinski 2019-05-03 17:23:54 UTC
So, looking into this some more, it appears that an upstream multipath commit turned off the early check for whether a path should be blacklisted by device type when you run multipath or start up multipathd. This means that multipath gathers all the information about a path first, and only then decides whether it should be blacklisted. The commit that did this makes no mention of the change, so it appears to be unintentional. I can't think of a good reason why multipath needs to continue gathering information in these cases once it has enough information to see that the device should be blacklisted.

The reason this change went unnoticed is that the default multipath.conf file still causes these devices to be blacklisted early. If you run

# mpathconf --enable

when no /etc/multipath.conf file exists, a default one will be created. It includes this section

blacklist_exceptions {
        property "(SCSI_IDENT_|ID_WWN)"
}

This will cause it to blacklist any devices that don't have those udev properties. Looking at the udev info, these dasd devices don't have either of those properties, even after the change event. The check to blacklist devices missing those udev properties still happens before other information about the path is gathered. If you add those lines to /etc/multipath.conf, do the messages stop?
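
A quick way to check whether a given path would pass that property-based exception is to list its udev properties directly; /dev/dasdb below is just the device from this bug, substitute your own:

# udevadm info --query=property --name=/dev/dasdb | grep -E 'SCSI_IDENT_|ID_WWN'

If this prints nothing, the device has neither property, so the default property blacklist_exceptions rule keeps it blacklisted before any further path information is gathered.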

Comment 13 Jakub Krysl 2019-05-21 10:02:31 UTC
Adding the property exception to multipath.conf works, the messages are gone now:

(right after boot)
# systemctl status multipathd
● multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-05-21 05:53:09 EDT; 7min ago
  Process: 1151 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 1146 ExecStartPre=/sbin/modprobe -a scsi_dh_alua scsi_dh_emc scsi_dh_rdac dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 1153 (multipathd)
   Status: "up"
    Tasks: 7
   Memory: 11.9M
   CGroup: /system.slice/multipathd.service
           └─1153 /sbin/multipathd -d -s

May 21 05:53:09 ibm-z-114.rhts.eng.bos.redhat.com systemd[1]: Starting Device-Mapper Multipath Device Controller...
May 21 05:53:09 ibm-z-114.rhts.eng.bos.redhat.com multipathd[1153]: --------start up--------
May 21 05:53:09 ibm-z-114.rhts.eng.bos.redhat.com multipathd[1153]: read /etc/multipath.conf
May 21 05:53:09 ibm-z-114.rhts.eng.bos.redhat.com multipathd[1153]: path checkers start up
May 21 05:53:09 ibm-z-114.rhts.eng.bos.redhat.com systemd[1]: Started Device-Mapper Multipath Device Controller.

Comment 16 Lin Li 2020-03-04 10:15:05 UTC
Reproduced on device-mapper-multipath-0.7.8-7.el8
[root@storageqe-05 ~]# rpm -qa | grep multipath
device-mapper-multipath-0.7.8-7.el8.x86_64
device-mapper-multipath-libs-0.7.8-7.el8.x86_64

[root@storageqe-05 ~]# modprobe scsi_debug vpd_use_hostno=0 add_host=2 dev_size_mb=1024

[root@storageqe-05 ~]# multipath -ll
360a98000324669436c2b45666c56786b dm-6 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:3 sdi 8:128 active ready running
| `- 4:0:1:3 sdq 65:0  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:3 sde 8:64  active ready running
  `- 4:0:0:3 sdm 8:192 active ready running
mpatha (333333330000007d0) dm-8 Linux,scsi_debug   <-----------------------------------------------------------
size=1.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 6:0:0:0 sds 65:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 5:0:0:0 sdr 65:16 active ready running
360a98000324669436c2b45666c567869 dm-5 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:2 sdh 8:112 active ready running
| `- 4:0:1:2 sdp 8:240 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:2 sdd 8:48  active ready running
  `- 4:0:0:2 sdl 8:176 active ready running
360a98000324669436c2b45666c567867 dm-4 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:1 sdg 8:96  active ready running
| `- 4:0:1:1 sdo 8:224 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:1 sdc 8:32  active ready running
  `- 4:0:0:1 sdk 8:160 active ready running
360a98000324669436c2b45666c567865 dm-3 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:0 sdf 8:80  active ready running
| `- 4:0:1:0 sdn 8:208 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:0 sdb 8:16  active ready running
  `- 4:0:0:0 sdj 8:144 active ready running


Edit /etc/multipath.conf 
[root@storageqe-05 ~]# cat /etc/multipath.conf 
defaults {
	user_friendly_names yes
	find_multipaths yes
}

#blacklist_exceptions {
#        property "(SCSI_IDENT_|ID_WWN)"
#}

blacklist {
    device {
        vendor Linux
        product scsi_debug
    }
}


[root@storageqe-05 ~]# service multipathd reload
Redirecting to /bin/systemctl reload multipathd.service

[root@storageqe-05 ~]# multipath -ll
360a98000324669436c2b45666c56786b dm-6 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:3 sdi 8:128 active ready running
| `- 4:0:1:3 sdq 65:0  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:3 sde 8:64  active ready running
  `- 4:0:0:3 sdm 8:192 active ready running
360a98000324669436c2b45666c567869 dm-5 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:2 sdh 8:112 active ready running
| `- 4:0:1:2 sdp 8:240 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:2 sdd 8:48  active ready running
  `- 4:0:0:2 sdl 8:176 active ready running
360a98000324669436c2b45666c567867 dm-4 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:1 sdg 8:96  active ready running
| `- 4:0:1:1 sdo 8:224 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:1 sdc 8:32  active ready running
  `- 4:0:0:1 sdk 8:160 active ready running
360a98000324669436c2b45666c567865 dm-3 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:0 sdf 8:80  active ready running
| `- 4:0:1:0 sdn 8:208 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:0 sdb 8:16  active ready running
  `- 4:0:0:0 sdj 8:144 active ready running


[root@storageqe-05 ~]# multipathd show paths
hcil    dev dev_t pri dm_st  chk_st dev_st  next_check      
0:1:0:0 sda 8:0   1   undef  undef  unknown orphan          
1:0:0:0 sdb 8:16  10  active ready  running XXXXXXXX.. 16/20
1:0:0:1 sdc 8:32  10  active ready  running XXXXXXXX.. 16/20
1:0:0:2 sdd 8:48  10  active ready  running XXXXXXXX.. 16/20
1:0:0:3 sde 8:64  10  active ready  running XXXXXXXX.. 16/20
1:0:1:0 sdf 8:80  50  active ready  running XXXXXXXX.. 16/20
1:0:1:1 sdg 8:96  50  active ready  running XXXXXXXX.. 16/20
1:0:1:2 sdh 8:112 50  active ready  running XXXXXXXX.. 16/20
1:0:1:3 sdi 8:128 50  active ready  running XXXXXXXX.. 16/20
4:0:0:0 sdj 8:144 10  active ready  running XXXXXXXX.. 16/20
4:0:0:1 sdk 8:160 10  active ready  running XXXXXXXX.. 16/20
4:0:0:2 sdl 8:176 10  active ready  running XXXXXXXX.. 16/20
4:0:0:3 sdm 8:192 10  active ready  running XXXXXXXX.. 16/20
4:0:1:0 sdn 8:208 50  active ready  running XXXXXXXX.. 16/20
4:0:1:1 sdo 8:224 50  active ready  running XXXXXXXX.. 16/20
4:0:1:2 sdp 8:240 50  active ready  running XXXXXXXX.. 16/20
4:0:1:3 sdq 65:0  50  active ready  running XXXXXXXX.. 16/20


[root@storageqe-05 ~]# multipathd add path /dev/sds
ok

[root@storageqe-05 ~]# multipathd show paths
hcil    dev dev_t pri dm_st  chk_st dev_st  next_check      
0:1:0:0 sda 8:0   1   undef  undef  unknown orphan          
1:0:0:0 sdb 8:16  10  active ready  running XXXXXXXX.. 16/20
1:0:0:1 sdc 8:32  10  active ready  running XXXXXXXX.. 16/20
1:0:0:2 sdd 8:48  10  active ready  running XXXXXXXX.. 16/20
1:0:0:3 sde 8:64  10  active ready  running XXXXXXXX.. 16/20
1:0:1:0 sdf 8:80  50  active ready  running XXXXXXXX.. 16/20
1:0:1:1 sdg 8:96  50  active ready  running XXXXXXXX.. 16/20
1:0:1:2 sdh 8:112 50  active ready  running XXXXXXXX.. 16/20
1:0:1:3 sdi 8:128 50  active ready  running XXXXXXXX.. 16/20
4:0:0:0 sdj 8:144 10  active ready  running XXXXXXXX.. 16/20
4:0:0:1 sdk 8:160 10  active ready  running XXXXXXXX.. 16/20
4:0:0:2 sdl 8:176 10  active ready  running XXXXXXXX.. 16/20
4:0:0:3 sdm 8:192 10  active ready  running XXXXXXXX.. 16/20
4:0:1:0 sdn 8:208 50  active ready  running XXXXXXXX.. 16/20
4:0:1:1 sdo 8:224 50  active ready  running XXXXXXXX.. 16/20
4:0:1:2 sdp 8:240 50  active ready  running XXXXXXXX.. 16/20
4:0:1:3 sdq 65:0  50  active ready  running XXXXXXXX.. 16/20
6:0:0:0 sds 65:32 50  active ready  running XXXXXXX... 15/20    <--------------------------------the path was added

[root@storageqe-05 ~]# multipathd add path /dev/sdr
ok

[root@storageqe-05 ~]# multipathd show paths
hcil    dev dev_t pri dm_st  chk_st dev_st  next_check      
0:1:0:0 sda 8:0   1   undef  undef  unknown orphan          
1:0:0:0 sdb 8:16  10  active ready  running XXXXX..... 11/20
1:0:0:1 sdc 8:32  10  active ready  running XXXXX..... 11/20
1:0:0:2 sdd 8:48  10  active ready  running XXXXX..... 11/20
1:0:0:3 sde 8:64  10  active ready  running XXXXX..... 11/20
1:0:1:0 sdf 8:80  50  active ready  running XXXXX..... 11/20
1:0:1:1 sdg 8:96  50  active ready  running XXXXX..... 11/20
1:0:1:2 sdh 8:112 50  active ready  running XXXXX..... 11/20
1:0:1:3 sdi 8:128 50  active ready  running XXXXX..... 11/20
4:0:0:0 sdj 8:144 10  active ready  running XXXXX..... 11/20
4:0:0:1 sdk 8:160 10  active ready  running XXXXX..... 11/20
4:0:0:2 sdl 8:176 10  active ready  running XXXXX..... 11/20
4:0:0:3 sdm 8:192 10  active ready  running XXXXX..... 11/20
4:0:1:0 sdn 8:208 50  active ready  running XXXXX..... 11/20
4:0:1:1 sdo 8:224 50  active ready  running XXXXX..... 11/20
4:0:1:2 sdp 8:240 50  active ready  running XXXXX..... 11/20
4:0:1:3 sdq 65:0  50  active ready  running XXXXX..... 11/20
6:0:0:0 sds 65:32 50  active ready  running XXXXX..... 11/20
5:0:0:0 sdr 65:16 1   active ready  running XXXXXXXXX. 18/20  <-------------------------------------the path was added




Verified on device-mapper-multipath-0.8.3-3.el8
[root@storageqe-05 ~]# rpm -qa | grep multipath
device-mapper-multipath-0.8.3-3.el8.x86_64
device-mapper-multipath-libs-0.8.3-3.el8.x86_64
 
[root@storageqe-05 ~]# modprobe scsi_debug vpd_use_hostno=0 add_host=2 dev_size_mb=1024

[root@storageqe-05 ~]# multipath -ll
360a98000324669436c2b45666c56786b dm-6 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:3 sdi 8:128 active ready running
| `- 4:0:1:3 sdq 65:0  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:3 sde 8:64  active ready running
  `- 4:0:0:3 sdm 8:192 active ready running
mpatha (333333330000007d0) dm-8 Linux,scsi_debug  <--------------------------------------
size=1.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 6:0:0:0 sds 65:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 5:0:0:0 sdr 65:16 active ready running
360a98000324669436c2b45666c567869 dm-5 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:2 sdh 8:112 active ready running
| `- 4:0:1:2 sdp 8:240 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:2 sdd 8:48  active ready running
  `- 4:0:0:2 sdl 8:176 active ready running
360a98000324669436c2b45666c567867 dm-4 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:1 sdg 8:96  active ready running
| `- 4:0:1:1 sdo 8:224 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:1 sdc 8:32  active ready running
  `- 4:0:0:1 sdk 8:160 active ready running
360a98000324669436c2b45666c567865 dm-3 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:0 sdf 8:80  active ready running
| `- 4:0:1:0 sdn 8:208 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:0 sdb 8:16  active ready running
  `- 4:0:0:0 sdj 8:144 active ready running


Edit /etc/multipath.conf 
[root@storageqe-05 ~]# cat /etc/multipath.conf 
defaults {
	user_friendly_names yes
	find_multipaths yes
}

#blacklist_exceptions {
#        property "(SCSI_IDENT_|ID_WWN)"
#}

blacklist {
    device {
        vendor Linux
        product scsi_debug
    }
}


[root@storageqe-05 ~]# service multipathd reload
Redirecting to /bin/systemctl reload multipathd.service

[root@storageqe-05 ~]# multipath -ll
360a98000324669436c2b45666c56786b dm-6 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:3 sdi 8:128 active ready running
| `- 4:0:1:3 sdq 65:0  active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:3 sde 8:64  active ready running
  `- 4:0:0:3 sdm 8:192 active ready running
360a98000324669436c2b45666c567869 dm-5 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:2 sdh 8:112 active ready running
| `- 4:0:1:2 sdp 8:240 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:2 sdd 8:48  active ready running
  `- 4:0:0:2 sdl 8:176 active ready running
360a98000324669436c2b45666c567867 dm-4 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:1 sdg 8:96  active ready running
| `- 4:0:1:1 sdo 8:224 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:1 sdc 8:32  active ready running
  `- 4:0:0:1 sdk 8:160 active ready running
360a98000324669436c2b45666c567865 dm-3 NETAPP,LUN
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:0 sdf 8:80  active ready running
| `- 4:0:1:0 sdn 8:208 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 1:0:0:0 sdb 8:16  active ready running
  `- 4:0:0:0 sdj 8:144 active ready running

[root@storageqe-05 ~]# multipathd show paths
hcil    dev dev_t pri dm_st  chk_st dev_st  next_check     
0:1:0:0 sda 8:0   1   undef  undef  unknown orphan         
1:0:0:0 sdb 8:16  10  active ready  running XX........ 5/20
1:0:0:1 sdc 8:32  10  active ready  running XX........ 5/20
1:0:0:2 sdd 8:48  10  active ready  running XX........ 5/20
1:0:0:3 sde 8:64  10  active ready  running XX........ 5/20
1:0:1:0 sdf 8:80  50  active ready  running XX........ 5/20
1:0:1:1 sdg 8:96  50  active ready  running XX........ 5/20
1:0:1:2 sdh 8:112 50  active ready  running XX........ 5/20
1:0:1:3 sdi 8:128 50  active ready  running XX........ 5/20
4:0:0:0 sdj 8:144 10  active ready  running XX........ 5/20
4:0:0:1 sdk 8:160 10  active ready  running XX........ 5/20
4:0:0:2 sdl 8:176 10  active ready  running XX........ 5/20
4:0:0:3 sdm 8:192 10  active ready  running XX........ 5/20
4:0:1:0 sdn 8:208 50  active ready  running XX........ 5/20
4:0:1:1 sdo 8:224 50  active ready  running XX........ 5/20
4:0:1:2 sdp 8:240 50  active ready  running XX........ 5/20
4:0:1:3 sdq 65:0  50  active ready  running XX........ 5/20


[root@storageqe-05 ~]# multipathd add path /dev/sds
blacklisted  <-------------------------the path was not added 

[root@storageqe-05 ~]# multipathd add path /dev/sdr
blacklisted  <--------------------------the path was not added 

[root@storageqe-05 ~]# multipathd show paths
hcil    dev dev_t pri dm_st  chk_st dev_st  next_check      
0:1:0:0 sda 8:0   1   undef  undef  unknown orphan          
1:0:0:0 sdb 8:16  10  active ready  running XXXXX..... 10/20
1:0:0:1 sdc 8:32  10  active ready  running XXXXX..... 10/20
1:0:0:2 sdd 8:48  10  active ready  running XXXXX..... 10/20
1:0:0:3 sde 8:64  10  active ready  running XXXXX..... 10/20
1:0:1:0 sdf 8:80  50  active ready  running XXXXX..... 10/20
1:0:1:1 sdg 8:96  50  active ready  running XXXXX..... 10/20
1:0:1:2 sdh 8:112 50  active ready  running XXXXX..... 10/20
1:0:1:3 sdi 8:128 50  active ready  running XXXXX..... 10/20
4:0:0:0 sdj 8:144 10  active ready  running XXXXX..... 10/20
4:0:0:1 sdk 8:160 10  active ready  running XXXXX..... 10/20
4:0:0:2 sdl 8:176 10  active ready  running XXXXX..... 10/20
4:0:0:3 sdm 8:192 10  active ready  running XXXXX..... 10/20
4:0:1:0 sdn 8:208 50  active ready  running XXXXX..... 10/20
4:0:1:1 sdo 8:224 50  active ready  running XXXXX..... 10/20
4:0:1:2 sdp 8:240 50  active ready  running XXXXX..... 10/20
4:0:1:3 sdq 65:0  50  active ready  running XXXXX..... 10/20
<------------------------------- the paths were not added, as shown by "multipathd show paths"



Test result: Multipath will no longer print unnecessary error messages from attempting to gather information from blacklisted paths.

Comment 18 errata-xmlrpc 2020-04-28 16:57:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1868

